
Lessons Learned in Implementing and Deploying Crypto Software

Peter Gutmann
University of Auckland
Abstract
Although the basic building blocks for working with strong encryption have become fairly widespread in the last
few years, experience has shown that implementers frequently misuse them in a manner that voids their security
properties. At least some of the blame lies with the tools themselves, which often make it unnecessarily easy to get
things wrong. Just as no chainsaw manufacturer would think of producing a model without a finger-guard and
cutoff mechanism, so security software designers need to consider safety features that will keep users from injuring
themselves or others. This paper examines some of the more common problem areas that exist in crypto security
software, and provides a series of design guidelines that can help minimise damage due to (mis-)use by
inexperienced users. These issues are taken from extensive real-world experience with users of security software,
and represent areas that frequently cause problems when the software is employed in practice.

1. Introduction
In the last five years or so the basic tools for strong encryption have become fairly widespread, gradually displacing the snake oil products that they had shared the environment with until then. As a result, it’s now fairly easy to obtain software that contains well-established, strong algorithms such as triple DES and RSA instead of pseudo one-time-pads. Unfortunately, this hasn’t solved the snake oil problem, but has merely relocated it elsewhere.

The determined programmer can produce snake oil using any crypto tools.
What makes the new generation of dubious crypto products more problematic than their predecessors is that the
obvious danger signs that allowed bad crypto to be quickly weeded out are no longer present. A proprietary, patent-
pending, military-strength, million-bit-key, one-time pad built from encrypted prime cycle wheels is a sure warning
sign to stay well clear, but a file encryptor that uses Blowfish with a 128-bit key seems perfectly safe until further
analysis reveals that the key is obtained from an MD5 hash of an uppercase-only 8-character ASCII password. This
type of second-generation snake oil crypto, which looks like the real thing but isn’t, could be referred to as naugahyde crypto, with an appropriately similar type of relationship to the real thing.
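To make the weakness concrete, the sketch below (a hypothetical illustration, not code taken from any actual product) derives a 128-bit key in the manner described above and bounds the entropy that actually reaches the hash:

```python
import hashlib
import math

# Hypothetical weak derivation: a "128-bit" key produced as the MD5 hash
# of an uppercase-only 8-character ASCII password.
def weak_derive_key(password: str) -> bytes:
    # Upper-casing discards information before hashing, so distinct
    # passwords collapse onto the same key.
    return hashlib.md5(password.upper().encode("ascii")).digest()

# Even a generous 36-character alphabet (A-Z plus digits) over 8 positions
# yields only about 41 bits of keyspace; the 128-bit hash output is moot.
keyspace_bits = 8 * math.log2(36)

assert len(weak_derive_key("Passw0rd")) == 16   # 128-bit output
assert weak_derive_key("secret12") == weak_derive_key("SECRET12")
```

An attacker searching those 2^41-odd candidates faces a workstation-scale effort, which is what makes such a product naugahyde rather than the 128-bit cipher it appears to be.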
Most crypto software is written with the assumption that the user knows what they’re doing, and will choose the most appropriate algorithm and mode of operation, carefully manage key generation and secure key storage, employ the crypto in a suitably safe manner, and do a great many other things that require fairly detailed crypto knowledge. However, since most implementers are everyday programmers whose motivation for working with crypto is defined by “the boss said do it”, the inevitable result is the creation of products with genuine naugahyde crypto. Sometimes this is discovered (for example when encryption keys are generated from the process ID and time, or when the RC4 keystream is re-used multiple times so the plaintext can be recovered with a simple XOR), but more frequently it isn’t, so that products providing only illusory security may be deployed and used for years without anyone being any the wiser.
This paper looks at some of the ways in which crypto software developers and providers can work to avoid creating
and deploying software that can be used to create naugahyde crypto. Much of the experience presented here comes
from developing and supporting the open-source cryptlib toolkit [1][2], which has provided the author with a wealth
of information on the ways in which crypto software is typically misused, and the principal areas in which users
experience problems. Additional feedback was provided from users and developers involved with other open-source
crypto efforts.
All of the events reported here are from real experiences with users, although the details have been obscured and anonymised, particularly where the users in question have more lawyers than the author’s University has staff. In addition a few of the more interesting stories were excluded, but are referred to indirectly in the text (although no-one would have been able to identify the organisations involved, it was felt that having the event officially documented rather than existing only in the memory of a few implementers was too much of a legal liability). Although there are fewer references to sources than the author usually includes in his work, the reader should rest assured that all of the events mentioned here are real, and it’s almost certain that they have either used, or been a part of the use of, one or more of the products that are not quite referred to.

2. Existing Work
There exists very little published research on the topic of proactively ensuring that crypto is used in a secure manner,
as opposed to patching programs up after a problem is found. Most authors are content to present the algorithms and
mechanisms and leave the rest to the implementer. An earlier work on why cryptosystems fail concentrated mostly
on banking security [3][4], but did make the prophetic prediction that as implementers working with crypto products
“lack skills at security integration and management, they will go on to build systems with holes”.
Another paper examined user interface problems in encryption software [5], an area that badly needs further work
by HCI researchers. There also exists a small amount of research into the usability of security mechanisms,
although this doesn’t directly address crypto software [6]. Finally, the author of a widely-used book on crypto went on to write a followup work designed to address the problem that “the world was full of bad security systems designed by people who read [his first book]” [7]. Like the author of this paper, he found that “the weak points had nothing to do with mathematics [...] Beautiful pieces of mathematics were made irrelevant through bad programming”. The followup work examines security in a very general-purpose manner as a process rather than a product, while this paper limits itself to trying to address the most commonly-made errors that occur when non-cryptographers (mis-)apply crypto.
In addition to these works there exist a number of general-purpose references covering security issues that can occur
during application design and implementation [8][9][10]. These are targeted at application developers and are
intended to cover (and hopefully eliminate) common application security problems such as buffer overflows, race
conditions, elevation of privileges, access control issues, and so on. This work in contrast looks specifically at
problems that occur when end users (mis-)use security software, and suggests design guidelines that can help
combat such misuse.

3. Crypto Software Problems and Solutions


There are many ways in which crypto and security software can be misused. The main body of this paper covers
some of the more common problem areas, providing examples of misuse and suggesting (if possible) solutions that
may be adopted by developers of the security software to help minimise the potential for problems. While there is
no universal fix for all problems (and indeed some of them have a social or economic basis that can’t be easily
solved through the application of technology), it is hoped that the guidelines presented here will both alert
developers to the existence of certain problem areas and provide some assistance in combating them.

3.1 Private Keys Aren’t
One of the principal design features of cryptlib is that it never exposes private keys to outside access. The single
most frequently-asked cryptlib question is therefore “How do I export private keys in plaintext form?”. The reasons given for this are many and varied, and range from the logical (“I want to generate a test key for use with XYZ”) to the dubious (“We want to share the same private key across all of our servers”) through to the bizarre (“I don’t know, I just want to do it”).
In some cases the need to spread private keys around is motivated by financial concerns. If a company has spent
$495 on a Verisign certificate that was downloaded to a Windows machine then they won’t spend that much again
hatmu cha gain
for exactly the same thing in a different format. As a result, the private key is exported from the Windows key store
(from which any Windows application can utilise it) into Netscape. And OpenSSL. And BSAFE. And cryptlib
(although cryptlib deliberately makes it rather difficult to poke keys of unknown provenance into it). Eventually,
every encryption-enabled application on the system has a copy of the key, and for good measure it may be spread
across a number of systems for use by different developers or sysadmins. Saving CA fees by re-using a single
private key for everything seems to be very popular, particularly among Windows users.
The amount of sharing of private keys across applications and machines is truly frightening. Mostly this appears to occur because users don’t understand the value of the private key data, treating it as just another piece of information that can be copied across to wherever it’s convenient. For example a few years ago a company had developed a PGP-based encrypted file transfer system for a large customer. The system used a 2048-bit private key that was stored on disk in plaintext form, since the software was run as a batch process and couldn’t halt waiting for a password to be entered. One day the customer called to say that they’d lost the private key file, and could the company’s programmers please reconstruct it for them. This caused some consternation at the company, until one of the developers pointed out that there were copies of the private key stored on a file server along with the source
code, and in other locations with the application binaries. Further investigation revealed that the developers had also
copied it to their own machines during the development process for testing purposes. Some of these machines had
later been passed on to new employees, with their original contents intact. The file server on which the development
work was stored had had its hard drives upgraded some time earlier, and the old drives (with the key on them) had
been put on a nearby shelf in case they were needed later. The server was backed up regularly, with three staff
members taking it in turns to take the day’s tapes home with them for off-site storage (the standard practice was to drop them in the back seat of the car until they were re-used later on). In short, the only way to securely delete the encryption key being used to protect large amounts of long-term sensitive data would have been to carpet-bomb the city, and even then it’s not certain that copies wouldn’t have survived somewhere. While this represents a marvellous backup strategy, it’s probably not what’s required for protecting private keys.

If your product allows the export of private keys in plaintext form or some other widely-
readable format, you should assume that your keys will end up in every other application
on the system, and occasionally spread across other systems as well.
At least some of the problem arises from the fact that much current software makes it unnecessarily easy to move
private keys around (see also section 3.2 for a variation of this problem). For example CAs frequently use PKCS
#12 files to send a “certificate” to a new user because it makes things simpler than going through the multi-stage process in which the browser generates the private key itself. These files are invariably sent in plain text email, often with the password included. Alternatively, when the password is sent by out-of-band means, the PKCS #12 decryption key is generated directly from a hash of the uppercase-only ASCII password, despite warnings about the insecurity of this approach being well publicised several years ago [21]. One such file, provided as a sample to the
author, would have authorised access to third-party financial records in a European country. This method of key
handling was standard practice for the CA involved.
Another CA took this process a step further when they attempted to solve the problem of not having their root
certificate trusted by various browsers and mail programs by distributing a PKCS #12 file containing the CA root
key and certificate to all relying parties. The thinking was that once the CA’s private key was installed on their system, the user’s PKI software would regard the corresponding certificate as being trusted (it still didn’t quite fix the problem, but it was a start). This “solution” is in fact so common that the OpenSSL FAQ contains an entry specifically warning against it [67]. Incredibly, despite the strong warning in the FAQ that “this command will give away your CA’s private key and reduces its security to zero”, security books have appeared that give clear, step-by-step instructions on how to distribute the CA’s private key “to all your user’s web browsers” [11].
Making it more difficult to do this sort of thing might help alleviate some of the problems. Certainly in the case of cryptlib when users are informed that what they’re asking for isn’t possible, they find a means of working within those constraints (or maybe they quietly switch to CryptoAPI, which allows private keys to be sprayed around freely). However the real problem is a social and financial one: The single biggest reason for the re-use of a single key wherever possible is the cost of the associated certificate. A secondary reason is the complexity involved in obtaining the certificate, even if it is otherwise free. Examples of the latter include no-assurance email certificates, sometimes known as “clown-suit certificates” because of the level of identity assurance they provide [12]. Generating a new key rather than re-using the current one is therefore expensive enough and cumbersome enough that users are given the incentive to put up with considerable inconvenience in order to re-use private keys. Users have even tried to construct ways of sharing smart cards across multiple machines in order to solve the annoying problem that they can’t export the private key from the card. Another approach, which only works with some cards, is to generate the key externally and load it onto the card, leaving a copy of the original in software to be used from various applications and/or machines (the fact that people were doing this was discovered because some cards or card drivers handle external key loads in a peculiar manner, leading to requests for help from users).
PGP on the other hand, with its easily-generated, self-signed keys and certificates, suffers from no such problem,
and real-world experience indicates that users are quite happy to switch to new keys and discard their old ones
whenever they feel the need.
In order to solve this problem, it is necessary to remove the strong incentive provided by current X.509-style
certificate management to re-use private keys. One solution to this problem would be for users to be issued key-
signing certificates that they could use to create their own certificates when and as needed. This represents a
somewhat awkward workaround for the fact that X.509 doesn’t allow multiple signatures binding an identity to a certificate, so that it’s not possible to generate a self-signed certificate which is then endorsed through further, external signatures. In any case since this solution would deprive CAs of revenue, it’s unlikely to ever be
implemented. As a result, even if private key sharing is made as difficult as possible, sufficiently motivated users
will still find ways to spread them around. It is, unfortunately, very difficult to fix social/economic issues using
technology.

3.2 Everything is a Certificate


In 1996 Microsoft introduced a new storage format for private keys and certificates to replace the collection of ad
hoc (and insecure) formats that had been in use before then [13][14]. Initially called PFX (Personal Information
Exchange) [15][16][17][18], it was later re-released in a cleaned-up form as PKCS #12 [19]. One of the main
motivations for its introduction was for use in Internet kiosks in which users carried their personal data around on a
floppy disk for use wherever they needed it. In practice this would have been a bad idea since Internet Explorer
retains copies of the key data so that the next user who came along could obtain the previous user’s keys by
exporting them back onto a floppy. Internet kiosks never eventuated, but the PKCS #12 format has remained with
us.
Since PKCS #12 stores both keys and certificates, and (at least under Windows) the resulting files behave exactly
like certificates, many users are unable to distinguish certificates from PKCS #12 objects. In the same way that “I’m sending you a document” typically heralds the arrival of a Microsoft Word file, so “I’m sending you my certificate” is frequently accompanied by a PKCS #12 file. This problem isn’t helped by the fact that the Windows “Certificate Export Wizard” actually creates PKCS #12 files as output, defaulting to exporting the private key alongside the certificate. The situation is further confused by some of the accompanying documentation, which refers to the PKCS #12 data as a “digital ID” (rather than “certificate” or “private key”), with the implication that it’s just a certificate that happens to require a password when exported. The practice of mixing public and private keys in this manner, and of making the behaviour of the result identical to the behaviour of a plain certificate, is akin to pouring weedkiller into a fruit juice bottle and storing it on an easily accessible shelf in the kitchen cupboard.
The author, being a known open-source crypto developer, is occasionally asked for help with certificate-
management code, and has over the years accumulated a small collection of users’ private keys and certificates,
ranging from disposable email certificates through to relatively expensive higher-assurance certificates (the users
were notified and the keys deleted where requested). The current record for a key obtained in this manner (reported
by another open-source crypto developer in a similar situation) is the key for a Verisign Class 3 code-signing
certificate, the highest-level certificate provided by Verisign which requires notarisation, background investigations,
and fairly extensive background checking [20].
Once the PKCS #12 file is obtained, the contents can generally be recovered, either by recovering the password
[21][22][23] or by taking advantage of the fact that the Certificate Export Wizard will export keys without any
password if the user just keeps clicking ‘Next’ in standard Wizard fashion (they are in fact encrypted with a
password consisting of two null characters, a Microsoft implementation bug that was reverse-engineered back into
PKCS #12).
In contrast, PGP has no such problems. PGP physically separates the public and private portion of the key into two
files, and makes it quite clear that the private-key file should never be distributed to anyone: “keep your secret key file to yourself [...] Never give your secret key to anyone else [...] Always keep physical control of your secret key, and don’t risk exposing it by storing it on a remote timesharing computer. Keep it on your own personal computer” [24]. When distributing keys to other users, PGP only extracts the public components, even if the user explicitly forces PGP to read from the private key file (the default is to use the public key file). Even if the user never bothers to read the documentation which warns about private key security, PGP’s safe-by-default key handling ensures that they can’t accidentally compromise the key.

Make very clear to users the difference between public and private keys, either in the
documentation/user interface or, better, by physically separating the two.

3.3 Making Key Management Easy


One popular solution for key management, which has been around since the technology was still referred to as
dinosaur oil, is the use of fixed, shared keys. Despite the availability of public-key encryption technology, the use of
this type of key management is still popular, particularly in sectors such as banking which have a great deal of
experience in working with confidential information. Portions of the process have now been overtaken by
technology, with the fax machine replacing trusted couriers for key exchange.
Another solution which is popular in EDI applications is to transmit the key in another message segment in the
transaction. If XML is being used, the encryption key is placed in a field carefully tagged as <password> or <key>.
Yet another solution, popularised in WEP, is to use a single fixed key throughout an organisation [25].
Even when public-key encryption is being used, users often design their own key-management schemes to go with
it. One (geographically distributed) organisation solved the key management problem by using the same private key
on all of their systems. This allowed them to deploy public-key encryption throughout the organisation while at the
same time eliminating any key management problems, since it was no longer necessary to track a confusing
collection of individual keys.

Straight Diffie-Hellman requires no key management. This is always better than other
no-key-management alternatives that users will create.
Obviously this method of (non-)key management is still vulnerable to a man-in-the-middle (MITM) attack, however
this requires an active attack at the time the connection is established. This type of attack is considerably more
difficult than a passive attack performed an arbitrary amount of time later, as is possible with unprotected, widely-
known, or poorly-chosen shared keys, or, worse yet, no protection at all because a general solution to the problem
isn’t available [26]. In situations like this the engineering approach (within ±10% of the target with reasonable effort) is often better than the mathematician’s approach (100% accuracy with unreasonable effort, so that in practice nothing gets done).
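As a sketch of why straight Diffie-Hellman needs no key management, the following exchange derives a shared secret from nothing but exchanged public values. The group shown is the 768-bit Oakley group from RFC 2409, used purely to keep the example concrete (a modern deployment would use a larger standardised group), and the code is an illustration rather than a hardened implementation:

```python
import secrets

# 768-bit Oakley/MODP prime (RFC 2409 group 1), for illustration only.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF",
    16,
)
G = 2

def dh_keypair():
    # The private exponent never leaves the machine; only the public
    # value is transmitted.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

alice_priv, alice_pub = dh_keypair()
bob_priv, bob_pub = dh_keypair()

# Both sides arrive at the same secret with no pre-shared keys at all;
# the residual risk is an active man-in-the-middle at setup time.
assert pow(bob_pub, alice_priv, P) == pow(alice_pub, bob_priv, P)
```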

3.4 What Time is it Anyway?


Many security protocols, and in particular almost all PKI protocols that deal with validity intervals and time periods, assume that they’re operating in the presence of precisely-synchronised clocks on all systems. The fact that this frequently isn’t the case was recognised a decade ago both by security researchers (mostly as a result of Kerberos V4’s use of timestamps) [27][28][29] and by implementers of post-Kerberos V4 protocols such as IBM’s KryptoKnight, which replaced the timestamps with nonces [30][31][32][33], Bell-Atlantic’s Yaksha [34][35], and to some extent Kerberos V5, which allows for (but doesn’t require) nonces [36]. More recently, one of the few published papers on PKI implementation experience pointed out the problems inherent in using timestamps for synchronisation in the CMP PKI protocol [37]. Research into the problem of time synchronisation in distributed systems goes back over a quarter of a century [38].
The author has seen Windows machines whose time was out by tens of minutes (incorrect settings or general clock
drift), one or more hours (incorrect settings or incorrect time zone/daylight savings time adjustment), one or more
days (incorrect settings or incorrect time zone, for example a machine in New Zealand set to GMT), and various
larger units (weeks or months). In the most extreme case the time was out by several decades but wasn’t noticed by
the user until cryptlib complained about a time problem while processing certificates with a known validity period.
In addition to the basic incorrect time problems, combinations such as an offset of one day + one hour + 15 minutes
have also been spotted [39]. Another researcher has reported “workstations that are set with the wrong time zone, clocks off by a whole year, or similar nonsense” [40]. Other problems included systems set to the US Pacific time
cloc kso ffbyawh oley ear,ors imi larnon sense ”[40]. Other problems included systems set to the US Pacific time
zone (the default for Windows) because the users had just accepted the default when they installed their systems,
and an instance where it took two weeks to convince Cisco mail server administrators that their clocks were off.
One of the few good indicators of the true scale of this problem was provided by the Welchia worm, which included
a built-in self-destruct that terminated the worm on 1 January 2004. On that date, Welchia infections dropped to
about 30% of their previous value, reaching 10% by the end of the week, with an indefinite tail that carried on well
into 2004, at which point monitoring of the worm stopped [41]. This indicates that around 10% of the infected
systems had their clocks off by up to a week, and a smaller percentage were out by months (at least), with their
owners none the wiser.
Another example of the more or less random nature of real-world system clocks was indicated by one of the
numerous instances of NTP server abuse that have occurred when vendors hard-coded incorrect or inappropriate
server addresses into applications or devices. In order to discourage users from using one particular inappropriate
server, the administrators configured it to always return a bogus time of 23:59:59 on 31 December 1999. Two and a
half years later (at which point logs were discontinued), the server was still getting between five and ten thousand
machines an hour setting their system clocks to this bogus date [42].
In addition to problems due to incorrect settings, there are also potential implementation problems. One PKI pilot
ran into difficulties because of differences in the calculation of offsets from GMT in different software packages
[43]. Time zone issues are extremely problematic because some operating systems handle them in a haphazard
manner or can be trivially misconfigured to get the offset wrong. Even when everything is set up correctly it can
prove almost impossible to determine the time offset from a program in one time zone with daylight savings time
adjustment and a second program in a different time zone without daylight savings time adjustment.
A further problem with a reliance on timestamps is the fact that it extends the security baseline to something which is not normally regarded as being security-relevant, and that therefore won’t be handled as carefully as obviously security-related items such as passwords and crypto tokens: “If timestamps are used as freshness guarantees by reference to absolute time, then the difference between local clocks at various machines must be much less than the allowable age of a message deemed to be valid. Furthermore, the time maintenance mechanism everywhere becomes part of the trusted computing base” [44].
An even bigger problem with the implicit inclusion of an external time source into the TCB is that the owners of the external source generally aren’t aware of the fact that they’ve just been made a critical security component. As a result, this external component is given nowhere near the level of protection that the rest of the system is, because it’s not regarded as an at-risk component. After all, who’s going to bother breaking into a time service just so they can change the clock?
As it turns out, this has occurred on a number of occasions. For example Ian Murphy, a.k.a. “Captain Zap”, supposedly the inspiration for the film “Sneakers”, set the clock on AT&T’s phone billing system back by 12 hours to allow daytime callers (calling at the peak billing rate) to obtain off-peak nighttime rates. In this case there was a financial incentive involved, but in an even more serious case that occurred in Brazil it appears to have been done purely for the hack value. In January 2004, unknown intruders set the Brazilian National Observatory time service’s clock back by 24 hours. Compromising the nationwide reference time source, the equivalent of NIST in the US, would have compromised every PKI that took its time from it for the 36 hours that it took until it was detected. In particular, any certificate revocations issued during that time would have been rolled back, giving an attacker a full one-and-a-half days to do whatever they liked with compromised keys.
To complicate things further, times are often deliberately set incorrectly to allow expired certificates to continue to
be used without paying for a new one, a trick that shareware authors countered many years ago to prevent users from
running trial versions of software indefinitely. For example Netscape’s code signing software will blindly trust the date incorporated into a JAR file by the signer, allowing expired certificates to be rejuvenated by backdating the signature generation time. It would also be possible to resuscitate a revoked certificate using this trick, except that the software doesn’t perform revocation checking so it’s possible to use it anyway. Other unexpected tricks such as setting the clock forward in time or stopping it entirely are also likely to cause problems for applications that assume that time is monotonically increasing [45].

Don’t incorporate the system clock (or the other parties’ system clocks) in your security baseline. If you need synchronisation, use nonces.
If some sort of timeliness guarantees are required, this can still be achieved even in the presence of completely
desynchronised clocks by using the clock as a means of measuring the passage of time rather than as an absolute
indicator of time. For example a server can indicate to a client that the next update will take place 15 minutes after
the current request was received, a quantity that can be measured accurately by both sides even if one side thinks it’s currently September 1986. To perform this operation, the client would submit a request with a nonce, and the server would respond with a (signed or otherwise integrity-protected) reply containing a relative time to the next update. If the client doesn’t receive the response within a given time, or the response doesn’t contain the nonce they sent, then there’s something suspicious going on. If everything is OK, they know the exact time (relative to their local clock) of the next update, or expiry, or revalidation. Although this measure is simple and obvious, the number of security standards that define mechanisms that assume the existence of perfectly synchronised clocks for all parties is somewhat worrying.
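The request/response described above can be sketched as follows. The message layout and the use of an HMAC for integrity protection are assumptions made for the example, not a defined protocol:

```python
import hashlib
import hmac
import secrets
import time

SHARED_KEY = b"demo-integrity-key"  # stand-in for a real signing key

def server_reply(nonce: bytes, seconds_until_update: int) -> dict:
    # The server states only a relative time, bound to the client's nonce.
    msg = nonce + seconds_until_update.to_bytes(4, "big")
    return {"nonce": nonce, "delta": seconds_until_update,
            "mac": hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()}

def client_check(sent_nonce: bytes, reply: dict) -> float:
    msg = reply["nonce"] + reply["delta"].to_bytes(4, "big")
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    # A missing or wrong nonce, or a bad MAC, means something suspicious.
    if reply["nonce"] != sent_nonce or not hmac.compare_digest(expected, reply["mac"]):
        raise ValueError("suspicious reply")
    # The next-update deadline is measured purely against the local clock.
    return time.monotonic() + reply["delta"]

nonce = secrets.token_bytes(16)
deadline = client_check(nonce, server_reply(nonce, 15 * 60))
```

Neither side ever compares absolute dates, so the exchange works unchanged however wrong either clock's notion of the current date is.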

In the presence of arbitrary end user systems, relative time measures work. Absolute time measures don’t.
For non-interactive protocols that can’t use nonces the solution becomes slightly more complex, but can generally be implemented using techniques such as a one-off online query, or time-stamping [46]. Alternatively, if the use of timestamps is unavoidable but certain assumptions can be made about the quality of the time information, it’s possible to manage the risk involved in an appropriate manner. The weakest assumption is made in some of the protocols used in telecommunications network management (TMN) [47][48], which must assume that clocks can behave in arbitrary and unreliable ways. For example, a clock may be running too fast, or have stopped, or been reset to a time in the past (TMN operators have a lot of practical experience with odd behaviour in various pieces of equipment).
To resolve these issues, the protocols distinguish between four different types of time, GMT (external,
astronomically correct time), the system clock, virtual time (a monotonically increasing value), and external time
(which appears in an incoming message), and have a variety of mechanisms to handle the problem situations
mentioned above [49].
A slightly stronger assumption, used in SNMPv2, is that clocks are monotonically increasing (the equivalent of the TMN virtual time value). In this situation time is represented as a pair of values, the number of seconds since the time counter was initialised (for example, since the equipment was rebooted), and the number of reboots. Both sides of a communication session track both their own time and the other party's time. If clocks drift, they are resynchronised when a new message is received from the other party. This view of time treats it partly as a timer and partly as a form of (predictable) nonce.
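A minimal sketch of this reboots-plus-seconds representation follows. The `EngineTime` name and the resynchronisation rule are illustrative assumptions, not the SNMPv2 wire format:

```python
from dataclasses import dataclass

@dataclass(order=True)
class EngineTime:
    # Ordering compares reboots first, then seconds since that reboot, so the
    # value stays monotonic even when the seconds counter resets on restart.
    reboots: int
    seconds: int

def resync(local: EngineTime, received: EngineTime) -> EngineTime:
    # On receiving a peer's view of our time, adopt it if it is ahead of ours:
    # time may jump forward on resynchronisation but never runs backwards.
    return max(local, received)

t = resync(EngineTime(reboots=3, seconds=120), EngineTime(reboots=3, seconds=150))
assert t == EngineTime(3, 150)
assert EngineTime(4, 0) > EngineTime(3, 99999)  # a reboot always advances time
```

The comparison rule is what makes the pair usable as a predictable nonce: a replayed message carries an `EngineTime` that is visibly in the past.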
The strongest assumption, popular in the PKI world, is to assume perfectly synchronised, perfectly secure clocks among all parties (the equivalent of TMN's GMT).
Another option is to leverage the experience gained from distributed transaction processing, which acknowledges that, in general, a natural event ordering mechanism isn't possible, but that in most cases it isn't necessary since all that's required is a partial ordering (referred to as "happens-before") that intuitively captures the relations between distributed events ["Optimistic Replication", Yasushi Saito and Marc Shapiro, ACM Computing Surveys, Vol.37, No.1 (March 2005), p.42]. For example in practice when verifying a signature we don't really care precisely when the certificate used to create it was rendered invalid, all we need to know is whether it was still valid at the time the signature was generated. This is much like a sporting event in which the ordering (first, second, third) is of primary importance, but the actual amount (three hundredths of a second) is of only peripheral interest. There are a number of well-established concurrency-relation mechanisms such as Lamport clocks [50] that can be used to implement this.
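A Lamport clock of the kind cited above takes only a few lines; this is the textbook construction [50], with illustrative event names:

```python
class LamportClock:
    """Minimal Lamport logical clock: counters, not wall time."""

    def __init__(self):
        self.time = 0

    def local_event(self) -> int:
        self.time += 1
        return self.time

    def send(self) -> int:
        # A message carries the sender's current logical time.
        return self.local_event()

    def receive(self, msg_time: int) -> int:
        # Receiving advances past both our own clock and the message's
        # timestamp, so happens-before relations are preserved without
        # any synchronised physical clocks.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t_sign = a.send()           # e.g. "signature generated" on system A
t_seen = b.receive(t_sign)  # system B learns the signing happened before this
assert t_sign < t_seen
```

This gives exactly the partial ordering the text describes: which event came first, with no claim about how many wall-clock seconds separated them.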
3.5 RSA in CBC Mode
When the RSA algorithm is used for encryption, the operation is usually presented as "encrypting with RSA". The obvious consequence of this is that people try to perform bulk data encryption using pure RSA rather than using it purely as a key exchange mechanism for a fast symmetric cipher. In most cases this misunderstanding is quickly cleared up because the crypto toolkit API makes it obvious that RSA can't be used that way, however the JCE API, which attempts to provide a highly orthogonal interface to all ciphers even if the resulting operations don't make much sense, allows for bizarre combinations such as RSA in CBC mode with PKCS #5 padding alongside the more sensible DES alternative with the same mode and padding (CBC and PKCS #5 are mechanisms designed for use with block ciphers, not public-key algorithms). As a result, when a programmer is asked to implement RSA encryption of data, they implement the operation exactly as the API allows it. One of the most frequently-asked questions for one open-source Java crypto toolkit covers assorted variations on the use of bulk data encryption with RSA, usually relating to which (block cipher) padding or chaining mode to use, but eventually gravitating towards "Why is it so slow?" once the code nears completion and testing commences.
This can lead to a variety of interesting debates. Typically a customer asks for "RSA encryption of data", and the implementers deliver exactly that. The customer claims that no-one with an ounce of crypto knowledge1 would ever perform bulk data encryption with RSA and the implementers should have known better, and the implementers claim that they're delivering exactly what the customer asked for. Eventually the customer threatens to withhold payment until the code is fixed, and the implementers sneak the changes in under "Misc. Exp." at five times the original price.

1 Equivalent to 31 grams of crypto knowledge, being worth its weight in gold.
Don't include insecure or illogical security mechanisms in your crypto tools.
3.6 Left as an Exercise for the User
Crypto toolkits sometimes leave problems that the toolkit developers couldn't solve themselves as an exercise for the user. For example the gathering of entropy data for key generation is often expected to be performed by user-supplied code outside the toolkit. Experience with users has shown that they will typically go to any lengths to avoid having to provide useful entropy to a random number generator that relies on this type of user seeding. The first widely-known case where this occurred was with the Netscape generator, whose functioning with inadequate input required the disabling of safety checks that were designed to prevent this problem from occurring [51]. A more recent example of this phenomenon was provided by an update to the SSLeay/OpenSSL generator, which in version 0.9.5 had a simple check added to the code to test whether any entropy had been added to the generator (earlier versions would run the pseudo-random number generator (PRNG) with little or no real entropy). This change led to a flood of error reports to OpenSSL developers, as well as helpful suggestions on how to solve the problem, including seeding the generator with a constant text string [52][53][54], seeding it with DSA public key components (whose components look random enough to fool entropy checks) before using it to generate the corresponding private key [55], seeding it with consecutive output bytes from rand() [56], using the executable image [57], using /etc/passwd [58], using /var/log/syslog [59], using a hash of the files in the current directory [60], creating a dummy random data file and using it to fool the generator [61], downgrading to an older version such as 0.9.4 which doesn't check for correct seeding [62], using the output of the unseeded generator to seed the generator (by the same person who had originally solved the problem by downgrading to 0.9.4, after it was pointed out that this was a bad idea) [63], and using the string "0123456789ABCDEF0" [64]. Another alternative, suggested in a Usenet news posting, was to patch the code to disable the entropy check and allow the generator to run on empty (this magical fix has since been independently rediscovered by others [65]). In later versions of the code which used /dev/random if it was present on the system, another possible fix was to open a random disk file and let the code read from that thinking it was reading the randomness device [66]. It is likely that considerably more effort and ingenuity has been expended towards seeding the generator incorrectly than ever went into doing it right.
The problem of inadequate seeding of the generator became so common that a special entry was added to the OpenSSL frequently-asked-questions (FAQ) list telling users what to do when their previously-fine application stopped working when they upgraded to version 0.9.5 [67], and since this still didn't appear to be sufficient, later versions of the code were changed to display the FAQ's URL in the error message that was printed when the PRNG wasn't seeded. Based on comments on the OpenSSL developers list, quite a number of third-party applications that used the code were experiencing problems with the improved random number handling code in the new release, indicating that they were working with low-security cryptovariables and probably had been doing so for years. Because of this problem, a good basis for an attack on an application based on a version of SSLeay/OpenSSL before 0.9.5 is to assume the PRNG was never seeded, and for versions after 0.9.5 to assume it was seeded with the string "string to make the random number generator think it has entropy", a value that appeared in one of the test programs included with the code and which appears to be a favourite of users trying to make the generator "work".
The fact that this section has concentrated on SSLeay/OpenSSL seeding is not meant as a criticism of the software; the change in 0.9.5 merely served to provide a useful indication of how widespread the problem of inadequate initialisation really is. Helpful advice on bypassing the seeding of other generators (for example the one in the Java JCE) has appeared on other mailing lists. The practical experience provided by cases such as the ones given above shows how dangerous it is to rely on users to correctly initialise a generator — not only will they not perform it correctly, they'll go out of their way to do it wrong. Although there is nothing much wrong with the SSLeay/OpenSSL generator itself, the fact that its design assumes that users will initialise it correctly means that it (and many other user-seeded generators) will in many cases not function as required.
If a security-related problem is difficult for a crypto developer to solve, there is no way a non-crypto user can be expected to solve it. Don't leave hard problems as an exercise for the user.
In the above case the generator should handle not only the PRNG step but also the entropy-gathering step itself, while still providing a means of accepting optional user-supplied entropy data for those users who do bother to initialise the generator correctly. As a generalisation, crypto software should not leave difficult problems to the user in the hope that they can somehow miraculously come up with a solution where the crypto developer has failed.
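A generator along these lines, where the toolkit seeds itself and user entropy is an optional extra rather than a prerequisite, might look like the following sketch. The pool construction here is illustrative only, not a vetted DRBG design:

```python
import hashlib
import os

class SelfSeedingPRNG:
    """Sketch of a generator that seeds itself from the OS entropy source
    instead of trusting the caller to do it, while still accepting extra
    user-supplied entropy. Illustrative construction, not a vetted DRBG."""

    def __init__(self):
        # The toolkit gathers its own entropy: the user cannot forget this
        # step, and cannot substitute a constant string for it.
        self.pool = hashlib.sha256(os.urandom(32)).digest()

    def add_entropy(self, data: bytes) -> None:
        # Optional extra seeding can only add to the pool, never replace it,
        # so even a bad contribution cannot reduce the existing entropy.
        self.pool = hashlib.sha256(self.pool + data).digest()

    def read(self, n: int) -> bytes:
        # Ratchet the pool forward and emit output derived from it.
        out = b""
        while len(out) < n:
            self.pool = hashlib.sha256(self.pool + b"next").digest()
            out += hashlib.sha256(self.pool + b"out").digest()
        return out[:n]
```

The key design choice is that `add_entropy` mixes into, rather than initialises, the pool: the "seed it with a constant string" workarounds described above become harmless no-ops on top of real OS entropy.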

3.7 This Function can Never Fail
A few years ago a product was developed which employed the standard technique of using RSA to wrap a
symmetric encryption key such as a triple DES key that was then used to encrypt the messages being exchanged
(compare this with the RSA usage described in section 3.5). The output was examined during the pre-release testing
and was found to be in the correct format, with the data payload appropriately encrypted. Then one day one of the
testers noticed that a few bytes of the RSA-wrapped key data were the same in each message. A bit of digging
revealed that the key parameters being passed to the RSA encryption code were slightly wrong, and the function was
failing with an error code indicating what the problem was. Since this was a function that couldn't fail, the programmer hadn't checked the return code but had simply passed the (random-looking but unencrypted) result on
to the next piece of code. At the receiving end, the same thing occurred, with the unencrypted symmetric key being
left untouched by the RSA decryption code. Everything appeared to work fine, the data was encrypted and
decrypted by the sender and receiver, and it was only the eagle eyes of the tester which noticed that the key being
used to perform the encryption was sitting in plain sight near the start of each message.
Another example of this problem occurred in Microsoft Internet Information Server (IIS), which tends to fail in odd
ways under load, a problem shared with MS Outlook, which will quietly disable virus scanning when the load
becomes high enough so that as much as 90% of incoming mail is never scanned for viruses [68]. In this case the
failure was caused by a race condition in which one thread received and decrypted data from the user while a second
thread, which used the same buffer for its data, took the decrypted data and sent it back to the user. As a result,
when under load IIS was sending user data submitted over an SSL connection back to the user unencrypted [69][70].
The fix was to use two buffers, one for plaintext and one for ciphertext, and zero out the ciphertext buffer between
calls. As a result, when the problem occurred, the worst that could happen was that the other side was sent an all-
zero buffer [10].
To avoid problems of this kind, implementations should be designed to fail safe even if the caller ignores return
codes. A straightforward way to do this is to set output data to a non-value (for example fill buffers with zeroes and
set numeric or boolean values to –1) as the first operation in the function being called, and to move the result data to
the output as the last operation before returning to the caller on successful completion. In this way if the function
returns at any earlier point with an error status, no sensitive data will leak back to the caller, and the fact that a
failure has taken place will be obvious even if the function return code is ignored.
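The zero-on-entry, copy-on-success pattern looks like this in outline; `do_rsa_encrypt` is a hypothetical stand-in for the real encryption call:

```python
def do_rsa_encrypt(key: bytes, params: dict):
    # Hypothetical stand-in for the real RSA operation: it fails with a
    # status code when the key parameters are wrong, as in the anecdote above.
    if "modulus" not in params:
        return False, b""
    return True, bytes(b ^ 0x5A for b in key)  # placeholder "encryption"

def rsa_wrap_key(session_key: bytes, params: dict) -> bytes:
    # Fail safe: set the output to an obvious non-value *first*, so that an
    # early error return can never leak the raw key to a caller that
    # ignores the status code.
    output = b"\x00" * len(session_key)
    ok, encrypted = do_rsa_encrypt(session_key, params)
    if not ok:
        return output  # all-zero buffer: obviously not a wrapped key
    return encrypted   # move the real result out only on success
```

With this shape, the failure mode in the anecdote produces a conspicuous block of zeroes on the wire instead of the plaintext session key.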
Make security-critical functions fail obviously even if the user ignores return codes.
Another possible solution is to require that functions use handles to state information (similar to file or BSD sockets
handles) which record error state information and prevent any further operations from occurring until the error
condition is explicitly cleared by the user. This error-state-propagation mechanism helps make the fact that an error
has occurred more obvious to the user, even if they only check the return status at the end of a sequence of function
calls, or at sporadic intervals.
3.8 Careful with that Axe, Eugene
The functionality provided by crypto libraries constitutes a powerful tool. However, like other tools, the potential for misuse in inexperienced hands is always present. Crypto protocol design is a subtle art, and most users who cobble their own implementations together from a collection of RSA and 3DES code will get it wrong. In this case "wrong" doesn't refer to (for example) missing a subtle flaw in Needham-Schroeder key exchange, but to errors such as using ECB mode (which doesn't hide plaintext data patterns) instead of CBC (which does).
The use of ECB mode, which is simple and straightforward and doesn't require handling of initialisation vectors (IVs) and block chaining and synchronisation issues, is depressingly widespread among users of basic collections of encryption routines, despite this being warned against in every crypto textbook. Confusion over block cipher chaining modes is a significant enough problem that several crypto libraries include FAQ entries explaining what to do if the first 8 bytes of decrypted data appear to be corrupted, an indication that the IV wasn't set up properly.
As if the use of ECB itself wasn't bad enough, users often compound the error with further implementation simplifications. For example one vendor chose to implement their VPN using triple DES in ECB mode, which they saw as the simplest to implement since it doesn't require any synchronisation management if packets are lost. Since ECB mode can only encrypt data in multiples of the cipher block size, they didn't encrypt any leftover bytes at the end of the packet. The interaction of this processing mechanism with interactive user logins, which frequently transmit the user name and password one character at a time, can be imagined by the reader.
The issue that needs to be addressed here is that the average user hasn't read any crypto books, or has at best had some brief exposure to portions of a popular text such as Applied Cryptography, and simply isn't able to operate complex (and potentially dangerous) crypto machinery without any real training. The solution to this problem is for developers of libraries to provide crypto functionality at the highest level possible, and to discourage the use of low-level routines by inexperienced users. The job of the crypto library should be to protect users from injuring themselves (and others) through the misuse of basic crypto routines.
Instead of "encrypt a series of data blocks using 3DES with a 192-bit key", users should be able to exercise functionality such as "encrypt a file with a password", which (apart from storing the key in plaintext in the Windows registry) is almost impossible to misuse. Although the function itself may use an iterated HMAC hash to turn the password into a key, compress and MAC the file for space-efficient storage and integrity-protection, and finally encrypt it using (correctly-implemented) 3DES-CBC, the user doesn't have to know (or care) about this.
Provide crypto functionality at the highest level possible in order to prevent users from injuring themselves and others through misuse of low-level crypto functions with properties they aren't aware of.
4. Conclusion
Although snake oil crypto is rapidly becoming a thing of the past, its position is being taken up by a new breed of
snake oil, naugahyde crypto, which misuses good crypto in a manner that makes it little more effective than the
more traditional snake oil. This paper has covered some of the more common ways in which crypto and security
software can be misused by users. Each individual problem area is accompanied (where possible) by guidelines for
measures that can help combat potential misuse, or at least warn developers that this is an area which is likely to
cause problems with users. It is hoped that this work will help reduce the incidence of naugahyde crypto in use
today.
Unfortunately the single largest class of problems, key management, can't be solved as easily as the other ones. Solving this extremely hard problem in a manner practical enough to ensure users won't bypass it for ease-of-use or economic reasons will require a multi-faceted approach involving better key management user interfaces, user-certified or provided keys of the kind used by PGP, application-specific key management such as that used with ssh, and a variety of other approaches [71]. Until the key management task is made much more practical, "solutions" of the kind presented in this paper will continue to be widely employed.
5. References
[1] "The Design of a Cryptographic Security Architecture", Peter Gutmann, Proceedings of the 1999 Usenix Security Symposium, August 1999, p.153.
[2] "cryptlib Security Toolkit", http://www.cs.auckland.ac.nz/~pgut001/cryptlib/.
[3] "Why Cryptosystems Fail", Ross Anderson, Proceedings of the ACM Conference on Computer and Communications Security, 1993, p.215.
[4] "Why Cryptosystems Fail", Ross Anderson, Communications of the ACM, Vol.37, No.11 (November 1994), p.32.
[5] "Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0", Alma Whitten and J.D. Tygar, Proceedings of the 1999 Usenix Security Symposium, August 1999, p.169.
[6] "User-Centered Security", Mary Ellen Zurko and Richard Simon, Proceedings of the 1996 New Security Paradigms Workshop, September 1996, p.27.
[7] "Secrets and Lies", Bruce Schneier, John Wiley and Sons, 2000.
[8] "Building Secure Software", John Viega and Gary McGraw, Addison-Wesley, 2001.
[9] "Security Engineering", Ross Anderson, John Wiley and Sons, 2001.
[10] "Writing Secure Code", Michael Howard and David LeBlanc, Microsoft Press, 2001.
[11] "Linux Security", Ramón Hontañón, Sybex, 2001.
[12] "Re: Purpose of PEM string", Doug Porter, posting to [email protected] mailing list, message-ID [email protected], 16 August 1993.
[13] "How to break Netscape's server key encryption", Peter Gutmann, posting to the cypherpunks mailing list, message-ID [email protected], 25 September 1996.
[14] "How to break Netscape's server key encryption - Followup", Peter Gutmann, posting to the cypherpunks mailing list, message-ID [email protected], 26 September 1996.
[15] "PFX: Personal Information Exchange Syntax and Protocol Standard", version 0.019, Microsoft Corporation, 1 September 1996.
[16] "PFX: Personal Information Exchange APIs", version 0.019, Microsoft Corporation, 1 September 1996.
[17] "PFX: Personal Information Exchange Syntax and Protocol Standard", version 0.020, Microsoft Corporation, 27 January 1997.
[18] "PFX — How Not to Design a Crypto Protocol/Standard", Peter Gutmann, http://www.cs.auckland.ac.nz/~pgut001/pubs/pfx.html.
[19] "Personal Information Exchange Syntax Standard", PKCS #12, RSA Laboratories, 24 June 1999.
[20] "VeriSign Certification Practice Statement (CPS), Version 2.0", Verisign, 31 August 2001.
[21] "How to recover private keys for Microsoft Internet Explorer, Internet Information Server, Outlook Express, and many others — or — Where do your encryption keys want to go today?", Peter Gutmann, http://www.cs.auckland.ac.nz/~pgut001/pubs/breakms.txt, 21 January 1998.
[22] "How to recover private keys for various Microsoft products", Peter Gutmann, posting to the [email protected] mailing list, message-ID [email protected], 21 January 1998.
[23] "An update on MS private key (in)security issues", Peter Gutmann, posting to the [email protected] mailing list, message-ID [email protected], 4 February 1998.
[24] "PGP User's Guide, Volume I: Essential Topics", Philip Zimmermann, 11 October 1994.
[25] "Intercepting Mobile Communications: The Insecurity of 802.11", Nikita Borisov, Ian Goldberg, and David Wagner, Proceedings of the 7th Annual International Conference on Mobile Computing and Networking (Mobicom 2001), 2001.
[26] "ssmail: Opportunistic Encryption in sendmail", Damian Bentley, Greg Rose, and Tara Whalen, Proceedings of the 13th Systems Administration Conference (LISA '99), November 1999, p.1.
[27] "A Security Risk of Depending on Synchronized Clocks", Li Gong, Operating Systems Review, Vol.26, No.1 (January 1992), p.49.
[28] "A Note on the Use of Timestamps as Nonces", B. Clifford Neuman and Stuart Stubblebine, Operating Systems Review, Vol.27, No.2 (April 1993), p.10.
[29] "Limitations of the Kerberos Authentication System", Steven Bellovin and Michael Merritt, Proceedings of the Winter 1991 Usenix Conference, 1991, p.253.
[30] "Systematic Design of Two-Party Authentication Protocols", Ray Bird, Inder Gopal, Amir Herzberg, Philippe Janson, Shay Kutten, Refik Molva, and Moti Yung, Advances in Cryptology (Crypto '91), Springer-Verlag Lecture Notes in Computer Science No.576, August 1991, p.44.
[31] "KryptoKnight Authentication and Key Distribution System", Refik Molva, Gene Tsudik, Els Van Herreweghen, and Stefano Zatti, Proceedings of the 1992 European Symposium on Research in Computer Security (ESORICS '92), Springer-Verlag Lecture Notes in Computer Science No.648, 1992, p.155.
[32] "The KryptoKnight Family of Light-Weight Protocols for Authentication and Key Distribution", Ray Bird, Inder Gopal, Amir Herzberg, Philippe Janson, Shay Kutten, Refik Molva, and Moti Yung, IEEE/ACM Transactions on Networking, Vol.3, No.1 (February 1995), p.31.
[33] "Network Security", Charlie Kaufman, Radia Perlman, and Mike Speciner, Prentice-Hall, 1996.
[34] "Yaksha: Augmenting Kerberos with Public Key Cryptography", Ravi Ganesan, Proceedings of the 1995 Symposium on Network and Distributed System Security (NDSS '95), February 1995, p.132.
[35] "The Yaksha Security System", Ravi Ganesan, Communications of the ACM, Vol.39, No.3 (March 1996), p.55.
[36] "The Kerberos Network Authentication Service (V5)", RFC 1510, John Kohl and B. Clifford Neuman, September 1993.
[37] "Jonah: Experience Implementing PKIX Reference Freeware", Mary Ellen Zurko, John Wray, Ian Morrison, Mike Shanzer, Mike Crane, Pat Booth, Ellen McDermott, Warren Marcek, Ann Graham, Jim Wade, and Tom Sandlin, Proceedings of the 1999 Usenix Security Symposium, 1999, p.185.
[38] "Time, Clocks, and the Ordering of Events in a Distributed System", Leslie Lamport, Communications of the ACM, Vol.21, No.7 (July 1978), p.558.
[39] "The Clock Grows at Midnight", Peter Neumann, Communications of the ACM, Vol.34, No.1 (January 1991), p.170.
[40] "Shoot the Messenger: Some Techniques for Spam Control", Anthony Howe, ;login, Vol.30, No.3 (June 2005), p.12.
[41] "The Art of Computer Virus Research and Defense", Peter Szor, Symantec Press, 2005.
[42] "unwanted HTTP: who has the time", David Malone, ;login, Vol.31, No.2 (April 2006), p.49.
[43] "Phase II Bridge Certification Authority Interoperability Demonstration Final Report", A&N Associates Inc, prepared for the Maryland Procurement Office, 9 November 2001.
[44] "Prudent engineering practice for cryptographic protocols", Martin Abadi and Roger Needham, IEEE Transactions on Software Engineering, Vol.22, No.1 (January 1996), p.2. Also appeared in Proceedings of the 1994 IEEE Symposium on Security and Privacy, May 1994, p.122.
[45] "Practical Cryptography", Niels Ferguson and Bruce Schneier, Wiley Publishing Inc, 2003.
[46] "Internet X.509 Public Key Infrastructure: Time-Stamp Protocol (TSP)", RFC 3161, Carlisle Adams, Pat Cain, Denis Pinkas, and Robert Zuccherato, August 2001.
[47] "Security Transformations Application Service Element for Remote Operations Service Element (STASE-ROSE)", ANSI T1.259-1997, 1997.
[48] "Security Transformations Application Service Element for Remote Operations Service Element (STASE-ROSE)", ITU-T Q.813-1998, June 1998.
[49] "Security for Telecommunications Network Management", Moshe Rozenblit, IEEE Press, 1999.
[50] "Time, clocks, and the ordering of events in a distributed system", Leslie Lamport, Communications of the ACM, Vol.21, No.7 (July 1978), p.558.
[51] "Re: A history of Netscape/MSIE problems", Phillip Hallam-Baker, posting to the cypherpunks mailing list, message-ID [email protected], 12 September 1996.
[52] "Re: Problem Compiling OpenSSL for RSA Support", David Hesprich, posting to the openssl-dev mailing list, 5 March 2000.
[53] "Re: "PRNG not seeded" in Window NT", Pablo Royo, posting to the openssl-dev mailing list, 4 April 2000.
[54] "Re: PRNG not seeded ERROR", Carl Douglas, posting to the openssl-users mailing list, 6 April 2001.
[55] "Bug in 0.9.5 + fix", Elias Papavassilopoulos, posting to the openssl-dev mailing list, 10 March 2000.
[56] "Re: setting random seed generator under Windows NT", Amit Chopra, posting to the openssl-users mailing list, 10 May 2000.
[57] "1 RAND question, and 1 crypto question", Brian Snyder, posting to the openssl-users mailing list, 21 April 2000.
[58] "Re: unable to load 'random state' (OpenSSL 0.9.5 on Solaris)", Theodore Hope, posting to the openssl-users mailing list, 9 March 2000.
[59] "RE: having trouble with RAND_egd()", Miha Wang, posting to the openssl-users mailing list, 22 August 2000.
[60] "Re: How to seed before generating key?", 'jas', posting to the openssl-users mailing list, 19 April 2000.
[61] "Re: "PRNG not seeded" in Windows NT", Ng Pheng Siong, posting to the openssl-dev mailing list, 6 April 2000.
[62] "Re: Bug relating to /dev/urandom and RAND_egd in libcrypto.a", Louis LeBlanc, posting to the openssl-dev mailing list, 30 June 2000.
[63] "Re: Bug relating to /dev/urandom and RAND_egd in libcrypto.a", Louis LeBlanc, posting to the openssl-dev mailing list, 30 June 2000.
[64] "Re: PRNG not seeded ERROR", Carl Douglas, posting to the openssl-users mailing list, 6 April 2001.
[65] "Error message: random number generator: SSLEAY_RAND_BYTES / possible solution", Michael Hynds, posting to the openssl-dev mailing list, 7 May 2000.
[66] "Re: Unable to load 'random state' when running CA.pl", Corrado Derenale, posting to the openssl-users mailing list, 2 November 2000.
[67] "OpenSSL Frequently Asked Questions", http://www.openssl.org/support/faq.html.
[68] "Re: Announcing Public Availability of NoHTML for Outlook 2000/2002", Vesselin Bontchev, posting to the [email protected] mailing list, message-ID [email protected], 17 December 2001.
[69] "IIS 4.0 SSL ISAPI Filter Can Leak Single Buffer of Plaintext (Q244613)", Microsoft Knowledge Base article Q244613, 17 April 2000.
[70] "Patch Available for Windows Multithreaded SSL ISAPI Filter Vulnerability", Microsoft Security Bulletin MS99-053, 2 December 1999.
[71] "PKI: It's Not Dead, Just Resting", Peter Gutmann, to appear.