Crypto Software Assignment Material
Peter Gutmann
University of Auckland
Abstract
Although the basic building blocks for working with strong encryption have become fairly widespread in the last
few years, experience has shown that implementers frequently misuse them in a manner that voids their security
properties. At least some of the blame lies with the tools themselves, which often make it unnecessarily easy to get
things wrong. Just as no chainsaw manufacturer would think of producing a model without a finger-guard and
cutoff mechanism, so security software designers need to consider safety features that will keep users from injuring
themselves or others. This paper examines some of the more common problem areas that exist in crypto security
software, and provides a series of design guidelines that can help minimise damage due to (mis-)use by
inexperienced users. These issues are taken from extensive real-world experience with users of security software,
and represent areas that frequently cause problems when the software is employed in practice.
1. Introduction
In the last five years or so the basic tools for strong encryption have become fairly widespread, gradually displacing
the snake oil products that they had shared the environment with until then. As a result, it’s now fairly easy to
obtain software that contains well-established, strong algorithms such as triple DES and RSA instead of pseudo one-
time-pads. Unfortunately, this hasn’t solved the snake oil problem, but has merely relocated it elsewhere.
The determined programmer can produce snake oil using any crypto tools.
What makes the new generation of dubious crypto products more problematic than their predecessors is that the
obvious danger signs that allowed bad crypto to be quickly weeded out are no longer present. A proprietary, patent-
pending, military-strength, million-bit-key, one-time pad built from encrypted prime cycle wheels is a sure warning
sign to stay well clear, but a file encryptor that uses Blowfish with a 128-bit key seems perfectly safe until further
analysis reveals that the key is obtained from an MD5 hash of an uppercase-only 8-character ASCII password. This
type of second-generation snake oil crypto, which looks like the real thing but isn’t, could be referred to as
naugahyde crypto, with an appropriately similar type of relationship to the real thing.
Most crypto software is written with the assumption that the user knows what they’re doing, and will choose the
most appropriate algorithm and mode of operation, carefully manage key generation and secure key storage, employ
the crypto in a suitably safe manner, and do a great many other things that require fairly detailed crypto knowledge.
However, since most implementers are everyday programmers whose motivation for working with crypto is defined
by “the boss said do it”, the inevitable result is the creation of products with genuine naugahyde crypto. Sometimes
this is discovered (for example when encryption keys are generated from the process ID and time, or when the RC4
keystream is re-used multiple times so the plaintext can be recovered with a simple XOR), but more frequently it
isn’t, so that products providing only illusory security may be deployed and used for years without anyone being any
the wiser.
This paper looks at some of the ways in which crypto software developers and providers can work to avoid creating
and deploying software that can be used to create naugahyde crypto. Much of the experience presented here comes
from developing and supporting the open-source cryptlib toolkit [1][2], which has provided the author with a wealth
of information on the ways in which crypto software is typically misused, and the principal areas in which users
experience problems. Additional feedback was provided from users and developers involved with other open-source
crypto efforts.
All of the events reported here are from real experiences with users, although the details have been obscured and
anonymised, particularly where the users in question have more lawyers than the author’s University has staff. In
addition a few of the more interesting stories were excluded, but are referred to indirectly in the text (although no-
one would have been able to identify the organisations involved, it was felt that having the event officially
documented rather than existing only in the memory of a few implementers was too much of a legal liability).
Although there are fewer references to sources than the author usually includes in his work, the reader should rest
assured that all of the events mentioned here are real, and it’s almost certain that they have either used, or been a
part of the use of, one or more of the products that are not quite referred to.
2. Existing Work
There exists very little published research on the topic of proactively ensuring that crypto is used in a secure manner,
as opposed to patching programs up after a problem is found. Most authors are content to present the algorithms and
mechanisms and leave the rest to the implementer. An earlier work on why cryptosystems fail concentrated mostly
on banking security [3][4], but did make the prophetic prediction that as implementers working with crypto products
“lack skills at security integration and management, they will go on to build systems with holes”.
Another paper examined user interface problems in encryption software [5], an area that badly needs further work
by HCI researchers. There also exists a small amount of research into the usability of security mechanisms,
although this doesn’t directly address crypto software [6]. Finally, the author of a widely-used book on crypto went
on to write a followup work designed to address the problem that “the world was full of bad security systems
designed by people who read [his first book]” [7]. Like the author of this paper, he found that “the weak points had
nothing to do with mathematics [...] Beautiful pieces of mathematics were made irrelevant through bad
programming”. The followup work examines security in a very general-purpose manner as a process rather than a
product, while this paper limits itself to trying to address the most commonly-made errors that occur when non-
cryptographers (mis-)apply crypto.
In addition to these works there exist a number of general-purpose references covering security issues that can occur
during application design and implementation [8][9][10]. These are targeted at application developers and are
intended to cover (and hopefully eliminate) common application security problems such as buffer overflows, race
conditions, elevation of privileges, access control issues, and so on. This work in contrast looks specifically at
problems that occur when end users (mis-)use security software, and suggests design guidelines that can help
combat such misuse.
3.1 Private Keys Aren’t
One of the principal design features of cryptlib is that it never exposes private keys to outside access. The single
most frequently-asked cryptlib question is therefore “How do I export private keys in plaintext form?”. The reasons
given for this are many and varied, and range from the logical (“I want to generate a test key for use with XYZ”) to
the dubious (“We want to share the same private key across all of our servers”) through to the bizarre (“I don’t
know, I just want to do it”).
In some cases the need to spread private keys around is motivated by financial concerns. If a company has spent
$495 on a Verisign certificate that was downloaded to a Windows machine then they won’t spend that much again
hatmu cha gain
for exactly the same thing in a different format. As a result, the private key is exported from the Windows key store
(from which any Windows application can utilise it) into Netscape. And OpenSSL. And BSAFE. And cryptlib
(although cryptlib deliberately makes it rather difficult to poke keys of unknown provenance into it). Eventually,
every encryption-enabled application on the system has a copy of the key, and for good measure it may be spread
across a number of systems for use by different developers or sysadmins. Saving CA fees by re-using a single
private key for everything seems to be very popular, particularly among Windows users.
The amount of sharing of private keys across applications and machines is truly frightening. Mostly this appears to
occur because users don’t understand the value of the private key data, treating it as just another piece of
information that can be copied across to wherever it’s convenient. For example a few years ago a company had
developed a PGP-based encrypted file transfer system for a large customer. The system used a 2048-bit private key
that was stored on disk in plaintext form, since the software was run as a batch process and couldn’t halt waiting for
a password to be entered. One day the customer called to say that they’d lost the private key file, and could the
company’s programmers please reconstruct it for them. This caused some consternation at the company, until one
of the developers pointed out that there were copies of the private key stored on a file server along with the source
code, and in other locations with the application binaries. Further investigation revealed that the developers had also
copied it to their own machines during the development process for testing purposes. Some of these machines had
later been passed on to new employees, with their original contents intact. The file server on which the development
work was stored had had its hard drives upgraded some time earlier, and the old drives (with the key on them) had
been put on a nearby shelf in case they were needed later. The server was backed up regularly, with three staff
members taking it in turns to take the day’s tapes home with them for off-site storage (the standard practice was to
drop them in the back seat of the car until they were re-used later on). In short, the only way to securely delete the
encryption key being used to protect large amounts of long-term sensitive data would have been to carpet-bomb the
city, and even then it’s not certain that copies wouldn’t have survived somewhere. While this represents a
marvellous backup strategy, it’s probably not what’s required for protecting private keys.
If your product allows the export of private keys in plaintext form or some other widely-
readable format, you should assume that your keys will end up in every other application
on the system, and occasionally spread across other systems as well.
At least some of the problem arises from the fact that much current software makes it unnecessarily easy to move
private keys around (see also section 3.2 for a variation of this problem). For example CAs frequently use PKCS
#12 files to send a “certificate” to a new user because it makes things simpler than going through the multi-stage
process in which the browser generates the private key itself. These files are invariably sent in plain text email,
often with the password included. Alternatively, when the password is sent by out-of-band means, the PKCS #12
decryption key is generated directly from a hash of the uppercase-only ASCII password, despite warnings about the
insecurity of this approach being well publicised several years ago [21]. One such file, provided as a sample to the
author, would have authorised access to third-party financial records in a European country. This method of key
handling was standard practice for the CA involved.
Another CA took this process a step further when they attempted to solve the problem of not having their root
certificate trusted by various browsers and mail programs by distributing a PKCS #12 file containing the CA root
key and certificate to all relying parties. The thinking was that once the CA’s private key was installed on their
system, the user’s PKI software would regard the corresponding certificate as being trusted (it still didn’t quite fix
the problem, but it was a start). This “solution” is in fact so common that the OpenSSL FAQ contains an entry
specifically warning against it [67]. Incredibly, despite the strong warning in the FAQ that “this command will give
away your CA’s private key and reduces its security to zero”, security books have appeared that give clear, step-by-
step instructions on how to distribute the CA’s private key “to all your users’ web browsers” [11].
Making it more difficult to do this sort of thing might help alleviate some of the problems. Certainly in the case of
cryptlib when users are informed that what they’re asking for isn’t possible, they find a means of working within
those constraints (or maybe they quietly switch to CryptoAPI, which allows private keys to be sprayed around
freely). However the real problem is a social and financial one: The single biggest reason for the re-use of a single
key wherever possible is the cost of the associated certificate. A secondary reason is the complexity involved in
obtaining the certificate, even if it is otherwise free. Examples of the latter include no-assurance email certificates,
sometimes known as “clown-suit certificates” because of the level of identity assurance they provide [12].
Generating a new key rather than re-using the current one is therefore expensive enough and cumbersome enough
that users are given the incentive to put up with considerable inconvenience in order to re-use private keys. Users
have even tried to construct ways of sharing smart cards across multiple machines in order to solve the annoying
problem that they can’t export the private key from the card. Another approach, which only works with some cards,
is to generate the key externally and load it onto the card, leaving a copy of the original in software to be used from
various applications and/or machines (the fact that people were doing this was discovered because some cards or
card drivers handle external key loads in a peculiar manner, leading to requests for help from users).
PGP on the other hand, with its easily-generated, self-signed keys and certificates, suffers from no such problem,
and real-world experience indicates that users are quite happy to switch to new keys and discard their old ones
whenever they feel the need.
In order to solve this problem, it is necessary to remove the strong incentive provided by current X.509-style
certificate management to re-use private keys. One solution to this problem would be for users to be issued key-
signing certificates that they could use to create their own certificates when and as needed. This represents a
somewhat awkward workaround for the fact that X.509 doesn’t allow multiple signatures binding an identity to a
certificate, so that it’s not possible to generate a self-signed certificate which is then endorsed through further,
external signatures. In any case since this solution would deprive CAs of revenue, it’s unlikely to ever be
implemented. As a result, even if private key sharing is made as difficult as possible, sufficiently motivated users
will still find ways to spread them around. It is, unfortunately, very difficult to fix social/economic issues using
technology.
Make very clear to users the difference between public and private keys, either in the
documentation/user interface or, better, by physically separating the two.
this type of key management is still popular, particularly in sectors such as banking which have a great deal of
experience in working with confidential information. Portions of the process have now been overtaken by
technology, with the fax machine replacing trusted couriers for key exchange.
Another solution which is popular in EDI applications is to transmit the key in another message segment in the
transaction. If XML is being used, the encryption key is placed in a field carefully tagged as <password> or <key>.
Yet another solution, popularised in WEP, is to use a single fixed key throughout an organisation [25].
Even when public-key encryption is being used, users often design their own key-management schemes to go with
it. One (geographically distributed) organisation solved the key management problem by using the same private key
on all of their systems. This allowed them to deploy public-key encryption throughout the organisation while at the
same time eliminating any key management problems, since it was no longer necessary to track a confusing
collection of individual keys.
Straight Diffie-Hellman requires no key management. This is always better than other
no-key-management alternatives that users will create.
Obviously this method of (non-)key management is still vulnerable to a man-in-the-middle (MITM) attack, however
this requires an active attack at the time the connection is established. This type of attack is considerably more
difficult than a passive attack performed an arbitrary amount of time later, as is possible with unprotected, widely-
known, or poorly-chosen shared keys, or, worse yet, no protection at all because a general solution to the problem
isn’t available [26]. In situations like this the engineering approach (within ±10% of the target with reasonable
effort) is often better than the mathematician’s approach (100% accuracy with unreasonable effort, so that in
practice nothing gets done).
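As a concrete illustration of the no-key-management property, straight Diffie-Hellman can be sketched in a few lines. The tiny prime, generator, and helper names below are purely illustrative (a real deployment would use a standard 2048-bit or larger group), and the sketch deliberately omits the authentication that would be needed to stop the MITM attack just described.

```python
import secrets

# Minimal sketch of unauthenticated ("straight") Diffie-Hellman: each side
# needs only the public group parameters (p, g) and its own ephemeral
# secret -- there are no shared keys to distribute or manage beforehand.
p = 0xFFFFFFFB   # largest 32-bit prime; far too small for real use
g = 5

def keypair():
    x = secrets.randbelow(p - 2) + 1    # ephemeral private exponent
    return x, pow(g, x, p)              # (private, public)

a_priv, a_pub = keypair()               # side A
b_priv, b_pub = keypair()               # side B

# Each side combines its own private value with the other's public value.
k_a = pow(b_pub, a_priv, p)
k_b = pow(a_pub, b_priv, p)
assert k_a == k_b                       # both derive the same shared secret
```

Nothing here authenticates either party, which is exactly the residual MITM exposure discussed above; the point is only that no keys need to exist before the exchange starts.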
half years later (at which point logs were discontinued), the server was still getting between five and ten thousand
machines an hour setting their system clocks to this bogus date [42].
In addition to problems due to incorrect settings, there are also potential implementation problems. One PKI pilot
ran into difficulties because of differences in the calculation of offsets from GMT in different software packages
[43]. Time zone issues are extremely problematic because some operating systems handle them in a haphazard
manner or can be trivially misconfigured to get the offset wrong. Even when everything is set up correctly it can
prove almost impossible to determine the time offset from a program in one time zone with daylight savings time
adjustment and a second program in a different time zone without daylight savings time adjustment.
A further problem with a reliance on timestamps is the fact that it extends the security baseline to something which
is not normally regarded as being security-relevant, and that therefore won’t be handled as carefully as obviously
security-related items such as passwords and crypto tokens: “If timestamps are used as freshness guarantees by
reference to absolute time, then the difference between local clocks at various machines must be much less than the
allowable age of a message deemed to be valid. Furthermore, the time maintenance mechanism everywhere
becomes part of the trusted computing base” [44].
An even bigger problem with the implicit inclusion of an external time source into the TCB is that the owners of the
external source generally aren’t aware of the fact that they’ve just been made a critical security component. As a
result, this external component is given nowhere near the level of protection that the rest of the system is, because
it’s not regarded as an at-risk component. After all, who’s going to bother breaking into a time service just so they
can change the clock?
As it turns out, this has occurred on a number of occasions. For example Ian Murphy, a.k.a. “Captain Zap”,
supposedly the inspiration for the film “Sneakers”, set the clock on AT&T’s phone billing system back by 12 hours
to allow daytime callers (calling at the peak billing rate) to obtain off-peak nighttime rates. In this case there was a
financial incentive involved, but in an even more serious case that occurred in Brazil it appears to have been done
purely for the hack value. In January 2004, unknown intruders set the Brazilian National Observatory time service’s
clock back by 24 hours. Compromising the nationwide reference time source, the equivalent of NIST in the US,
would have compromised every PKI that took its time from it for the 36 hours that it took until it was detected. In
particular, any certificate revocations issued during that time would have been rolled back, giving an attacker a full
one-and-a-half days to do whatever they liked with compromised keys.
To complicate things further, times are often deliberately set incorrectly to allow expired certificates to continue to
be used without paying for a new one, a trick that shareware authors countered many years ago to prevent users from
running trial versions of software indefinitely. For example Netscape’s code signing software will blindly trust the
date incorporated into a JAR file by the signer, allowing expired certificates to be rejuvenated by backdating the
signature generation time. It would also be possible to resuscitate a revoked certificate using this trick, except that
the software doesn’t perform revocation checking so it’s possible to use it anyway. Other unexpected tricks such as
setting the clock forward in time or stopping it entirely are also likely to cause problems for applications that assume
that time is monotonically increasing [45].
Don’t incorporate the system clock (or the other parties’ system clocks) in your security
baseline. If you need synchronisation, use nonces.
If some sort of timeliness guarantees are required, this can still be achieved even in the presence of completely
desynchronised clocks by using the clock as a means of measuring the passage of time rather than as an absolute
indicator of time. For example a server can indicate to a client that the next update will take place 15 minutes after
the current request was received, a quantity that can be measured accurately by both sides even if one side thinks it’s
currently September 1986. To perform this operation, the client would submit a request with a nonce, and the server
would respond with a (signed or otherwise integrity-protected) reply containing a relative time to the next update. If
the client doesn’t receive the response within a given time, or the response doesn’t contain the nonce they sent, then
there’s something suspicious going on. If everything is OK, they know the exact time (relative to their local clock)
of the next update, or expiry, or revalidation. Although this measure is simple and obvious, the number of security
standards that define mechanisms that assume the existence of perfectly synchronised clocks for all parties is
somewhat worrying.
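The exchange described above can be sketched as follows; the message format, field names, and functions are hypothetical, and the signing/integrity-protection of the server reply is stubbed out with a comment.

```python
import secrets
import time

def client_request():
    # Fresh random nonce per request, so replies can't be replayed.
    return {"nonce": secrets.token_hex(16)}

def server_reply(request, next_update_in=15 * 60):
    # Echo the nonce and give a *relative* time; a real server would sign
    # or MAC this reply so the client can verify its integrity.
    return {"nonce": request["nonce"], "next_update_in": next_update_in}

def client_process(request, reply, local_clock=time.monotonic):
    if reply["nonce"] != request["nonce"]:
        raise ValueError("nonce mismatch: possible replay or tampering")
    # The deadline is relative to the client's own clock, so its absolute
    # setting (even "September 1986") is irrelevant.
    return local_clock() + reply["next_update_in"]

req = client_request()
deadline = client_process(req, server_reply(req))   # local-clock deadline
```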
In the presence of arbitrary end user systems, relative time measures work. Absolute
time measures don’t.
For non-interactive protocols that can’t use nonces the solution becomes slightly more complex, but can generally be
implemented using techniques such as a one-off online query, or time-stamping [46]. Alternatively, if the use of
timestamps is unavoidable but certain assumptions can be made about the quality of the time information, it’s
possible to manage the risk involved in an appropriate manner. The weakest assumption is made in some of the
protocols used in telecommunications network management (TMN) [47][48], which must assume that clocks can
behave in arbitrary and unreliable ways. For example, a clock may be running too fast, or have stopped, or been
reset to a time in the past (TMN operators have a lot of practical experience with odd behaviour in various pieces of
equipment).
To resolve these issues, the protocols distinguish between four different types of time, GMT (external,
astronomically correct time), the system clock, virtual time (a monotonically increasing value), and external time
(which appears in an incoming message), and have a variety of mechanisms to handle the problem situations
mentioned above [49].
A slightly stronger assumption, used in SNMPv2, is that clocks are monotonically increasing (the equivalent of the
TMN’s virtual time value). In this situation time is represented as a pair of values, the number of seconds since the
time counter was initialised (for example, since the equipment was rebooted), and the number of reboots. Both sides
of a communication session track both their own time and the other party’s time. If clocks drift, they are
resynchronised when a new message is received from the other party. This view of time treats it partly as a timer
and partly as a form of (predictable) nonce.
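This pair-of-values scheme can be sketched as a lexicographic comparison; the names here are illustrative rather than SNMPv2’s own, but the behaviour (newer pairs replace older ones, stale messages are ignored) follows the description above.

```python
from dataclasses import dataclass

@dataclass(order=True, frozen=True)
class EngineTime:
    boots: int     # number of times the time counter was reinitialised
    seconds: int   # seconds since the last (re)initialisation

def resync(stored: EngineTime, received: EngineTime) -> EngineTime:
    # Keep whichever view of the peer's clock is further along; ordering
    # is lexicographic, so a reboot always outranks any seconds value.
    return max(stored, received)

view = EngineTime(boots=3, seconds=500)
assert resync(view, EngineTime(3, 620)) == EngineTime(3, 620)  # clock advanced
assert resync(view, EngineTime(4, 10)) == EngineTime(4, 10)    # peer rebooted
assert resync(view, EngineTime(3, 400)) == view                # stale, ignored
```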
The strongest assumption, popular in the PKI world, is to assume perfectly synchronised, perfectly secure clocks
among all parties (the equivalent of TMN’s GMT).
Another option is to leverage the experience gained from distributed transaction processing, which acknowledges
that, in general, a natural event ordering mechanism isn’t possible, but that in most cases it isn’t necessary since all
that’s required is a partial ordering (referred to as “happens-before”) that intuitively captures the relations between
distributed events [“Optimistic Replication”, Yasushi Saito and Marc Shapiro, ACM Computing Surveys, Vol.37,
No.1 (March 2005), p.42]. For example in practice when verifying a signature we don’t really care precisely when
the certificate used to create it was rendered invalid, all we need to know is whether it was still valid at the time the
signature was generated. This is much like a sporting event in which the ordering (first, second, third) is of primary
importance, but the actual amount (three hundredths of a second) is of only peripheral interest. There are a number
of well-established concurrency-relation mechanisms such as Lamport clocks [50] that can be used to implement
this.
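A minimal version of such a mechanism, here a Lamport clock, shows how little machinery the partial ordering needs; the class is a textbook sketch, not tied to any particular library.

```python
class LamportClock:
    """Counter giving a partial 'happens-before' ordering of events."""

    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1              # every local event ticks the counter
        return self.time

    def send(self):
        return self.local_event()   # timestamp travels with the message

    def receive(self, msg_time):
        # Jump past anything already seen, so receive orders after send.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t_send = a.send()               # event on A
t_recv = b.receive(t_send)      # event on B
assert t_send < t_recv          # send happens-before receive
```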
1 Equivalent to 31 grams of crypto knowledge, being worth its weight in gold.
claim that they’re delivering exactly what the customer asked for. Eventually the customer threatens to withhold
payment until the code is fixed, and the implementers sneak the changes in under “Misc. Exp.” at five times the
original price.
Don’t include insecure or illogical security mechanisms in your crypto tools.
In the above case the generator should handle not only the PRNG step but also the entropy-gathering step itself,
while still providing a means of accepting optional user-supplied entropy data for those users who do bother to
initialise the generator correctly. As a generalisation, crypto software should not leave difficult problems to the user
in the hope that they can somehow miraculously come up with a solution where the crypto developer has failed.
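One way to follow this guideline is sketched below: the generator always seeds itself from the operating system, so doing nothing is safe, and user-supplied entropy is an optional extra that is mixed in rather than relied upon. The class and hashing construction are illustrative only, not a production PRNG.

```python
import hashlib
import os

class SelfSeedingPRNG:
    def __init__(self):
        # Seeded unconditionally from the OS: a user who never calls
        # add_entropy() still gets a properly initialised generator.
        self._pool = hashlib.sha256(os.urandom(32)).digest()

    def add_entropy(self, data: bytes) -> None:
        # Optional: users who do gather extra entropy can stir it in.
        self._pool = hashlib.sha256(self._pool + data).digest()

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self._pool = hashlib.sha256(self._pool + b"step").digest()
            out += hashlib.sha256(self._pool + b"out").digest()
        return out[:n]

rng = SelfSeedingPRNG()
block = rng.random_bytes(32)    # safe even with no explicit initialisation
rng.add_entropy(b"keystroke timings")
```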
Make security-critical functions fail obviously even if the user ignores return codes.
Another possible solution is to require that functions use handles to state information (similar to file or BSD sockets
handles) which record error state information and prevent any further operations from occurring until the error
condition is explicitly cleared by the user. This error-state-propagation mechanism helps make the fact that an error
has occurred more obvious to the user, even if they only check the return status at the end of a sequence of function
calls, or at sporadic intervals.
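Such an error-state-propagating handle might look like the following sketch; the class and method names are hypothetical rather than any real library’s API. The first failure is latched, and every subsequent operation fails loudly until the caller explicitly clears it.

```python
class CryptoHandle:
    def __init__(self):
        self.error = None              # latched error state, None = healthy

    def run(self, name, fn, *args):
        if self.error is not None:     # refuse to continue past a failure
            raise RuntimeError(f"{name} refused: uncleared error ({self.error})")
        try:
            return fn(*args)
        except Exception as exc:
            self.error = f"{name}: {exc}"   # latch the failure
            raise

    def clear_error(self):
        self.error = None              # caller must acknowledge explicitly

def broken_key_load():
    raise ValueError("bad key file")

h = CryptoHandle()
try:
    h.run("load_key", broken_key_load)
except ValueError:
    pass                               # a careless caller might ignore this
try:
    h.run("encrypt", lambda: b"data")  # ...but the next call still fails
except RuntimeError:
    pass
```

Even a caller who checks status only at the end of a long call sequence thus sees that something went wrong partway through.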
As if the use of ECB itself wasn’t bad enough, users often compound the error with further implementation
simplifications. For example one vendor chose to implement their VPN using triple DES in ECB mode, which they
saw as the simplest to implement since it doesn’t require any synchronisation management if packets are lost. Since
ECB mode can only encrypt data in multiples of the cipher block size, they didn’t encrypt any leftover bytes at the
end of the packet. The interaction of this processing mechanism with interactive user logins, which frequently
transmit the user name and password one character at a time, can be imagined by the reader.
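To make it concrete: the flaw is in the framing, not the cipher, so the sketch below stands in for 3DES with a repeating-key XOR. Only whole 8-byte blocks are “encrypted”, and the leftover bytes pass through untouched, so a login typed one keystroke at a time goes out entirely in the clear.

```python
BLOCK = 8

def xor_block(block: bytes, key: bytes) -> bytes:
    # Stand-in for a real block cipher; the framing flaw is what matters.
    return bytes(b ^ k for b, k in zip(block, key))

def flawed_encrypt_packet(data: bytes, key: bytes) -> bytes:
    whole = len(data) - len(data) % BLOCK
    out = b"".join(xor_block(data[i:i + BLOCK], key)
                   for i in range(0, whole, BLOCK))
    return out + data[whole:]          # leftover bytes sent unencrypted!

key = bytes(range(1, BLOCK + 1))
# An interactive login sending one keystroke per packet: every packet is
# shorter than one block, so the "encrypted" traffic IS the password.
packets = [flawed_encrypt_packet(ch.encode(), key) for ch in "hunter2"]
assert b"".join(packets) == b"hunter2"
```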
The issue that needs to be addressed here is that the average user hasn’t read any crypto books, or has at best had
some brief exposure to portions of a popular text such as Applied Cryptography, and simply isn’t able to operate
complex (and potentially dangerous) crypto machinery without any real training. The solution to this problem is for
developers of libraries to provide crypto functionality at the highest level possible, and to discourage the use of low-
level routines by inexperienced users. The job of the crypto library should be to protect users from injuring
themselves (and others) through the misuse of basic crypto routines.
Instead of “encrypt a series of data blocks using 3DES with a 192-bit key”, users should be able to exercise
functionality such as “encrypt a file with a password”, which (apart from storing the key in plaintext in the Windows
registry) is almost impossible to misuse. Although the function itself may use an iterated HMAC hash to turn the
password into a key, compress and MAC the file for space-efficient storage and integrity-protection, and finally
encrypt it using (correctly-implemented) 3DES-CBC, the user doesn’t have to know (or care) about this.
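The shape of such a high-level interface can be sketched as below. Since the Python standard library has no block cipher, the 3DES-CBC step is replaced here by an HMAC-SHA256 keystream; this is a sketch of the API shape under that substitution, not the exact construction described above, and the function names are illustrative.

```python
import hashlib, hmac, os, zlib

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Counter-mode keystream built from HMAC-SHA256 (stand-in cipher).
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))

def encrypt_with_password(data: bytes, password: str) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(16)
    # Iterated hashing (PBKDF2) turns the password into key material.
    keys = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                               100_000, dklen=64)
    enc_key, mac_key = keys[:32], keys[32:]
    ct = _keystream_xor(enc_key, nonce, zlib.compress(data))  # compress+encrypt
    tag = hmac.new(mac_key, salt + nonce + ct, hashlib.sha256).digest()
    return salt + nonce + ct + tag

def decrypt_with_password(blob: bytes, password: str) -> bytes:
    salt, nonce, ct, tag = blob[:16], blob[16:32], blob[32:-32], blob[-32:]
    keys = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                               100_000, dklen=64)
    enc_key, mac_key = keys[:32], keys[32:]
    expect = hmac.new(mac_key, salt + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):    # integrity check first
        raise ValueError("wrong password or corrupted data")
    return zlib.decompress(_keystream_xor(enc_key, nonce, ct))

blob = encrypt_with_password(b"attack at dawn", "correct horse")
assert decrypt_with_password(blob, "correct horse") == b"attack at dawn"
```

The caller sees only “encrypt a file with a password”; the key derivation, compression, integrity protection, and cipher choice all stay inside the function where they can be implemented once, correctly.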
Provide crypto functionality at the highest level possible in order to prevent users from
injuring themselves and others through misuse of low-level crypto functions with
properties they aren’t aware of.
4. Conclusion
Although snake oil crypto is rapidly becoming a thing of the past, its position is being taken up by a new breed of
snake oil, naugahyde crypto, which misuses good crypto in a manner that makes it little more effective than the
more traditional snake oil. This paper has covered some of the more common ways in which crypto and security
software can be misused by users. Each individual problem area is accompanied (where possible) by guidelines for
measures that can help combat potential misuse, or at least warn developers that this is an area which is likely to
cause problems with users. It is hoped that this work will help reduce the incidence of naugahyde crypto in use
today.
Unfortunately the single largest class of problems, key management, can’t be solved as easily as the other ones.
Solving this extremely hard problem in a manner practical enough to ensure users won’t bypass it for ease-of-use or
economic reasons will require a multi-faceted approach involving better key management user interfaces, user-
certified or provided keys of the kind used by PGP, application-specific key management such as that used with ssh,
and a variety of other approaches [71]. Until the key management task is made much more practical, “solutions” of
the kind presented in this paper will continue to be widely employed.
5. References
[1] “The Design of a Cryptographic Security Architecture”, Peter Gutmann, Proceedings of the 1999 Usenix
Security Symposium, August 1999, p.153.
[2] “cryptlib Security Toolkit”, http://www.cs.auckland.ac.nz/~pgut001/cryptlib/.
[3] “Why Cryptosystems Fail”, Ross Anderson, Proceedings of the ACM Conference on Computer and
Communications Security, 1993, p.215.
[4] “Why Cryptosystems Fail”, Ross Anderson, Communications of the ACM, Vol.37, No.11 (November 1994),
p.32.
[5] “Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0”, Alma Whitten and J.D. Tygar, Proceedings
of the 1999 Usenix Security Symposium, August 1999, p.169.
[6] “User-Centered Security”, Mary Ellen Zurko and Richard Simon, Proceedings of the 1996 New Security
Paradigms Workshop, September 1996, p.27.
[7] “Secrets and Lies”, Bruce Schneier, John Wiley and Sons, 2000.
[8] “Building Secure Software”, John Viega and Gary McGraw, Addison-Wesley, 2001.
[9] “Security Engineering”, Ross Anderson, John Wiley and Sons, 2001.
[10] “Writing Secure Code”, Michael Howard and David LeBlanc, Microsoft Press, 2001.
[11] “Linux Security”, Ramón Hontañón, Sybex, 2001.
[12] “Re: Purpose of PEM string”, Doug Porter, posting to [email protected] mailing list, message-ID
[email protected], 16 August 1993.
[13] ”Howt ob re a
kNe t
scape’sserve
rk eyenc r
y ption ”,Pe t
e rGutma nn
,pos t
ingtot hec y p her
pu nksma ili
ngli
st
,
message-ID [email protected], 25 September 1996.
[14] ”Howt ob re a
kNe t
scape’sserve
rk eyenc r
y ption- Fo l
lowup”,Pe t
e rGutman n,p ostingt oth ecyph e
rpun
ks
mailing list, message-ID [email protected], 26 September 1996.
[15] “PFX:Pe r sonalInfor
ma tionEx ch a
ngeSy ntaxa ndPr otoco lStanda rd” ,versi
on0 .019 ,Mi crosoftCo rp or
ation,
1 September 1996.
[16] “PFX:Pe r sonalInfor
ma tionEx ch a
ngeAPI s”,v ersion0.019 ,Mi cros oftCo rporat
ion ,1Se pt ember19 96.
[17] ““PFX:Pe rsonalInforma tionEx chang eSy ntaxa ndPr otoc olStanda rd” ,vers
ion0 .020 ,Mi cro s
oftCo r por
ation,
27 January 1997.
[18] “PFX— HowNo tt
oDe si
g naCr yptoPr otoco l
/Stan dar
d” ,Pe te
rGu tma nn,
http://www.cs.auckland.ac.nz/~pgut001/pubs/pfx.html.
[19] “Pe rson alIn formati
onEx changeSy ntaxSt andard” ,PKCS #12, RSA Laboratories, 24 June 1999.
[20] “Ve r
iSi gnCe rti
fi
cati
onPr acticeStateme nt(CPS) ,Ve r
sion2. 0”,Ve r i
s i
g n,31Au gu st200 1.
[21] “Howt or ec overprivat
ek eysfo rMi c
r osoftInternetEx plorer,Interne tInforma t
ionSe r v
e r,Ou tl
ookEx p
res s
,
and many others — or — Wh e
red oy oure ncryptionk eyswa nttog ot oday?”,PeterGu t
ma nn ,
http://www.cs.auckland.ac.nz/~pgut001/pubs/breakms.txt, 21 January 1998.
[22] “Howt or ec overprivat
ek eysfo rvari
ou sMi crosof tprodu cts”,PeterGu tma nn,postingtot h e
[email protected] mailing list, message-ID [email protected], 21
January 1998.
[23] “Anu pda teonMSp rivatek ey( i
n )
securityissues”, PeterGu tma nn,p osti
ngt othec ryptog r
a phy@c 2.n et
mailing list, message-ID [email protected], 4 February 1998.
[24] “PGPUs er’sGu i
de,Vol umeI :Es se
ntialTo pics”,Ph il
ipZi mme rma n n,11Oc to
b er19 94.
[25] “In t
erc eptin gMo bil
eCommun i
cations:Th eIn s
ec urit
yof8 02 .
11” ,Ni kitaBor i
sov, IanGol d berg,an dDa vid
Wagner, Proceedings of the 7th Annual International Conference on Mobile Computing and Networking
(Mobicom 2001), 2001.
[26] “ss ma il
:Op p ort
unist
icEn crypti
oni ns endma i
l”,Da mianBe ntley,Gr e gRos e,andTa r
aWh alen,Proceedings
of the 13th Sy st
emsAdmi nistrati
onCon ference(LI SA’ 99), November 1999, p.1.
[27] “ASe curityRi skofDe pe ndingo nSy nch r
on i
zedCl ocks”, LiGon g,Operating Systems Review, Vol.26, No.1
(January 1992), p.49.
[28] “ANot eont heUseo fTi me stamp sasNo nces”,B. Cliff
ordNe uma na ndSt uartStubb l
e bi
n e, Operating
Systems Review, Vol.27, No.2 (April 1993), p.10.
[29] “Li mit at
ion soft heKe rbe r
osAu thent
icationSy stem” ,St evenBe llov ina ndMi chae lMe rrit
t,Proceedings of
the Winter 1991 Usenix Conference, 1991, p.253.
[30] “Sy stema ticDe si
gno fTwo-Pa rt
yAu th e
n ti
cationPr o
tocols”,Ray Bird, Inder Gopal, Amir Herzberg, Philippe
Janson, Shay Kutten, Refik Molva, and Moti Yung, Ad vanc esinCr y p t
o l
ogy( Crypto’91), Springer-Verlag
Lecture Notes in Computer Science No.576, August 1991, p.44.
[31] “Kr ypt oKn ightAu t
hentic a
tiona ndKe yDi st
ributionSy stem” ,Re fikMo l
va ,
Ge neTs udik,El sVa n
Herreweghen and Stefano Zatti, Proceedings of the 1992 European Symposium on Research in Computer
Se curity( ESORI CS’92), Springer-Verlag Lecture Notes in Computer Science No.648, 1992, p.155.
[32] “The KryptoKnight Family of Light-Weight Protocols for Authentication and Key Distribution”, Ray Bird, Inder Gopal, Amir Herzberg, Philippe Janson, Shay Kutten, Refik Molva, and Moti Yung, IEEE/ACM Transactions on Networking, Vol.3, No.1 (February 1995), p.31.
[33] “Network Security”, Charlie Kaufman, Radia Perlman, and Mike Speciner, Prentice-Hall, 1996.
[34] “Yaksha: Augmenting Kerberos with Public Key Cryptography”, Ravi Ganesan, Proceedings of the 1995 Symposium on Network and Distributed System Security (NDSS ’95), February 1995, p.132.
[35] “The Yaksha Security System”, Ravi Ganesan, Communications of the ACM, Vol.39, No.3 (March 1996), p.55.
[36] “The Kerberos Network Authentication Service (V5)”, RFC 1510, John Kohl and B. Clifford Neuman, September 1993.
[37] “Jonah: Experience Implementing PKIX Reference Freeware”, Mary Ellen Zurko, John Wray, Ian Morrison, Mike Shanzer, Mike Crane, Pat Booth, Ellen McDermott, Warren Marcek, Ann Graham, Jim Wade, and Tom Sandlin, Proceedings of the 1999 Usenix Security Symposium, 1999, p.185.
[38] “Time, Clocks, and the Ordering of Events in a Distributed System”, Leslie Lamport, Communications of the ACM, Vol.21, No.7 (July 1978), p.558.
[39] “The Clock Grows at Midnight”, Peter Neumann, Communications of the ACM, Vol.34, No.1 (January 1991), p.170.
[40] “Shoot the Messenger: Some Techniques for Spam Control”, Anthony Howe, ;login, Vol.30, No.3 (June 2005), p.12.
[41] “The Art of Computer Virus Research and Defense”, Peter Szor, Symantec Press, 2005.
[42] “unwanted HTTP: who has the time”, David Malone, ;login, Vol.31, No.2 (April 2006), p.49.
[43] “Phase II Bridge Certification Authority Interoperability Demonstration Final Report”, A&N Associates Inc, prepared for the Maryland Procurement Office, 9 November 2001.
[44] “Prudent engineering practice for cryptographic protocols”, Martin Abadi and Roger Needham, IEEE Transactions on Software Engineering, Vol.22, No.1 (January 1996), p.2. Also appeared in Proceedings of the 1994 IEEE Symposium on Security and Privacy, May 1994, p.122.
[45] “Practical Cryptography”, Niels Ferguson and Bruce Schneier, Wiley Publishing Inc, 2003.
[46] “Internet X.509 Public Key Infrastructure: Time-Stamp Protocol (TSP)”, RFC 3161, Carlisle Adams, Pat Cain, Denis Pinkas, and Robert Zuccherato, August 2001.
[47] “Security Transformations Application Service Element for Remote Operations Service Element (STASE-ROSE)”, ANSI T1.259-1997, 1997.
[48] “Security Transformations Application Service Element for Remote Operations Service Element (STASE-ROSE)”, ITU-T Q.813-1998, June 1998.
[49] “Security for Telecommunications Network Management”, Moshe Rozenblit, IEEE Press, 1999.
[50] “Time, clocks, and the ordering of events in a distributed system”, Leslie Lamport, Communications of the ACM, Vol.21, No.7 (July 1978), p.558.
[51] “Re: A history of Netscape/MSIE problems”, Phillip Hallam-Baker, posting to the cypherpunks mailing list, message-ID [email protected], 12 September 1996.
[52] “Re: Problem Compiling OpenSSL for RSA Support”, David Hesprich, posting to the openssl-dev mailing list, 5 March 2000.
[53] “Re: “PRNG not seeded” in Window NT”, Pablo Royo, posting to the openssl-dev mailing list, 4 April 2000.
[54] “Re: PRNG not seeded ERROR”, Carl Douglas, posting to the openssl-users mailing list, 6 April 2001.
[55] “Bug in 0.9.5+ fix”, Elias Papavassilopoulos, posting to the openssl-dev mailing list, 10 March 2000.
[56] “Re: setting random seed generator under Windows NT”, Amit Chopra, posting to the openssl-users mailing list, 10 May 2000.
[57] “1 RAND question, and 1 crypto question”, Brian Snyder, posting to the openssl-users mailing list, 21 April 2000.
[58] “Re: unable to load ‘random state’ (OpenSSL 0.9.5 on Solaris)”, Theodore Hope, posting to the openssl-users mailing list, 9 March 2000.
[59] “RE: having trouble with RAND_egd()”, Miha Wang, posting to the openssl-users mailing list, 22 August 2000.
[60] “Re: How to seed before generating key?”, ‘jas’, posting to the openssl-users mailing list, 19 April 2000.
[61] “Re: “PRNG not seeded” in Windows NT”, Ng Pheng Siong, posting to the openssl-dev mailing list, 6 April 2000.
[62] “Re: Bug relating to /dev/urandom and RAND_egd in libcrypto.a”, Louis LeBlanc, posting to the openssl-dev mailing list, 30 June 2000.
[63] “Re: Bug relating to /dev/urandom and RAND_egd in libcrypto.a”, Louis LeBlanc, posting to the openssl-dev mailing list, 30 June 2000.
[64] “Re: PRNG not seeded ERROR”, Carl Douglas, posting to the openssl-users mailing list, 6 April 2001.
[65] “Error message: random number generator: SSLEAY_RAND_BYTES / possible solution”, Michael Hynds, posting to the openssl-dev mailing list, 7 May 2000.
[66] “Re: Unable to load 'random state' when running CA.pl”, Corrado Derenale, posting to the openssl-users mailing list, 2 November 2000.
[67] “OpenSSL Frequently Asked Questions”, http://www.openssl.org/support/faq.html.
[68] “Re: Announcing Public Availability of NoHTML for Outlook 2000/2002”, Vesselin Bontchev, posting to the [email protected] mailing list, message-ID [email protected], 17 December 2001.
[69] “IIS 4.0 SSL ISAPI Filter Can Leak Single Buffer of Plaintext (Q244613)”, Microsoft Knowledge Base article Q244613, 17 April 2000.
[70] “Patch Available for Windows Multithreaded SSL ISAPI Filter Vulnerability”, Microsoft Security Bulletin MS99-053, 2 December 1999.
[71] “PKI: It’s Not Dead, Just Resting”, Peter Gutmann, to appear.