
Conference Paper · April 2021 · DOI: 10.5220/0010484904050412


Improving Web Application Vulnerability Detection Leveraging
Ensemble Fuzzing

João Caseirito, Ibéria Medeiros


LASIGE, Faculdade de Ciências, Universidade de Lisboa, Portugal
[email protected], [email protected]

Keywords: Fuzzing, Web Applications, Vulnerability Discovery

Abstract: The vast majority of the online services we use nowadays provide a web application to their users. The correctness of the source code of these applications is crucial to prevent attackers from exploiting their vulnerabilities, which can lead to severe consequences such as the disclosure of sensitive information or the degradation of the application's availability. Multiple existing solutions analyse source code and detect vulnerabilities in it. Attackers, however, do not usually have access to the source code and must work with the information that is made public. Their goal is clear – to exploit vulnerabilities without accessing the code – and they resort to black-box fuzzing tools to achieve it. In this paper, we propose an ensemble fuzzing approach to check the correctness of web applications from the point of view of an attacker and, in a later phase, to analyse the source code and correlate it with the collected information. The approach focuses first on the quality of the fuzzers' crawlers and afterwards on the fuzzers' capability to exploit the results of all crawlers, in order to provide better coverage and precision in the detection of web vulnerabilities. Our preliminary results show that the ensemble performs better than the fuzzers individually.

1 INTRODUCTION

The Internet is currently part of our daily life, and web applications are built upon it. There are over 1.8 billion active websites, and every month this number increases by hundreds of thousands as new web services and applications emerge online (https://www.internetlivestats.com/total-number-of-websites/).

Companies are constantly faced with the decision to choose between functionality and security for their web services, as the time for security assessment may exceed the planned service launch time, and often companies choose to release applications with incomplete security testing. Sohoel et al. (Sohoel et al., 2018) studied how startups consider software security in their applications. The applications tested had significant security breaches, and those with the lowest awareness of good practices and well-known vulnerabilities had the most critical security holes. Also, none of the companies had done prior security testing, relying instead on secure code from third parties.

Web application development is getting easier, even for those who lack programming knowledge. WordPress (WP, https://wordpress.com/) is a content management system (CMS) used to create blogs and web applications easily and intuitively, making it very appealing to inexperienced users. It is written in PHP, the server-side language mainly used to manage back-end data in web applications. Almost 80% of websites use PHP as their server-side language, and almost 40% of them use WP to manage their systems (https://w3techs.com/). Despite this usage, PHP does not have a specification language, making it poor and limited for data validation and requiring programmers to use the right functions and to follow a stable set of good practices. Pairing this lack of knowledge and bad practices with the existing vulnerabilities and limitations of the language, web applications are left with security bugs in the wild, vulnerable and open to anyone willing to explore their flaws.

Cross-Site Scripting (XSS, https://owasp.org/www-community/attacks/xss/) and SQL injection (SQLi, https://owasp.org/www-community/attacks/SQL_Injection) are injection web vulnerabilities that continue to be very prevalent, especially in applications with legacy code. In fact, they are in first place in the OWASP (Open Web Application Security Project, https://owasp.org/) Top 10 2017, a report on the top vulnerabilities found in the wild (Williams and Wichers, 2017).
Their impact is huge and can result in information disclosure, data loss and denial of service, among others. XSS vulnerabilities, despite having a more moderate impact, were found in around two thirds of the applications in the OWASP study, revealing their high prevalence. Although these vulnerabilities are well known, there are recent cases of some of them in applications used by millions (Ryan, 2020).

Static analysis examines the source code and identifies vulnerabilities without executing the code (Chess and McGraw, 2004; Jovanovic et al., 2006; Medeiros et al., 2014). Although it finds a large percentage of the vulnerabilities, it produces a considerable amount of false positives. Fuzzing, on the other hand, is a (semi-)automatic software testing technique to find vulnerabilities without having access to the source code, which involves providing invalid and unexpected inputs to the application and monitoring for exceptions (Sutton et al., 2007). It has a lower false positive rate and is able to detect issues caused by the code's interactions with other components, as well as vulnerabilities that may be too complicated for static analysis to find (Duchene et al., 2014). However, one of the biggest challenges for black-box fuzzing tools is to determine the interactions that can change the application's state. For example, sending the same requests in a different order or with different inputs can result in different paths being explored. Without taking this into account, a large portion of the application code can be missed and multiple vulnerabilities overlooked (Doupé et al., 2012).

This paper presents an ensemble fuzzing approach for discovering vulnerabilities in web applications written in PHP and, in a later phase, for identifying the code that contains the exploited vulnerabilities. To the best of our knowledge, it is the first ensemble fuzzing approach for web applications. The approach focuses first on the quality of the fuzzers' crawlers and then on the fuzzers' capability to exploit the results of all crawlers, in order to provide better code coverage and precision in detecting vulnerabilities. The ensemble fuzzing is composed of three open-source fuzzers, namely OWASP ZAP (https://www.zaproxy.org/), w3af (http://w3af.org/) and Wapiti (https://wapiti.sourceforge.io/), and aims to address the issue stated above. In a second phase, the approach will be extended to identify the vulnerabilities in the applications' code by correlating information on how to characterize and exploit vulnerabilities with the data provided by the fuzzers' attacks and the monitoring of the target applications. Our preliminary results show that the ensemble performs better than the fuzzers individually and that the fuzzers' performance varies with the complexity of the applications.

The contributions of the paper are: (1) an ensemble fuzzing approach to find flaws in web applications, with better code coverage and accuracy; (2) an implementation of the approach with three fuzzers; (3) an experimental evaluation providing distinct results for the fuzzers in the ensemble and individually.

2 WEB APPLICATION VULNERABILITIES

The difficulty of protecting the code of web applications lies in the undue treatment of user inputs (e.g., $_GET), leaving applications vulnerable and an easy target for attackers. Attackers inject malicious data through the application's attack surface and check whether it exploits some bug in the target application. Vulnerabilities associated with user inputs are called input validation vulnerabilities, because user inputs are improperly validated or sanitized, or surface vulnerabilities, because they are exploited through the attack surface of the application.

SQLi and reflected XSS are two vulnerability classes of this kind. SQLi is associated with malcrafted user inputs (e.g., ', OR) used in SQL statements without any sanitization and then executed over the database through an appropriate function (e.g., mysqli_query in PHP). XSS injects malicious scripts (e.g., JavaScript) that are used in output functions (e.g., echo), allowing the exploitation of vulnerabilities that reflect data from the victim's browser to the attacker. Another type of XSS is stored XSS, which works in two steps: first, the malicious script is stored somewhere in the target application (e.g., a database or blog file); later, it is loaded and used in an output function.
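To make these two sinks concrete, the fragment below is a minimal, deliberately vulnerable sketch in the spirit of the description above. It is not taken from any of the applications discussed in this paper, and the database credentials and parameter names are made up: the input $_GET["user"] reaches mysqli_query unsanitized (SQLi) and is also echoed back unencoded (reflected XSS).

<?php
// Illustrative vulnerable code only (hypothetical credentials and parameters).
$conn = mysqli_connect("localhost", "app", "secret", "appdb");

// SQLi sink: the input is concatenated directly into the SQL statement.
// A payload such as  ' OR '1'='1  changes the query logic.
$user = $_GET["user"];
$result = mysqli_query($conn, "SELECT * FROM users WHERE name = '" . $user . "'");

// Reflected XSS sink: the same input is echoed back without encoding.
// A payload such as  <script>alert(1)</script>  runs in the victim's browser.
echo "Results for " . $user;

// A hardened version would use prepared statements and htmlspecialchars($user).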
3 RELATED WORK

Black-box fuzzing is a software testing technique that, without accessing the source code of the program, injects random inputs into it to find bugs (Miller et al., 1990), and it involves monitoring for exceptions such as crashes, memory leaks and information disclosure (Sutton et al., 2007). The injected inputs are generated following two approaches: generation and mutation. In the former, the fuzzer generates inputs without relying on previous inputs or existing seeds, usually learning input models and generating new inputs from the learned models. In the latter, the fuzzer modifies inputs based on defined patterns and existing seeds (Sutton et al., 2007).
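As an illustration of the mutation strategy (our sketch, not code from any of the fuzzers cited here), new test inputs can be derived by randomly altering an existing seed:

<?php
// Illustrative mutation-based input generation: derive new inputs from a seed.
function mutate(string $seed): string {
    $tokens = ["'", "\"", "<script>alert(1)</script>", " OR 1=1--"];
    switch (random_int(0, 2)) {
        case 0: // replace one byte of the seed with a random printable character
            $i = random_int(0, max(0, strlen($seed) - 1));
            $seed[$i] = chr(random_int(32, 126));
            return $seed;
        case 1: // insert a known-dangerous token at a random position
            $i = random_int(0, strlen($seed));
            return substr($seed, 0, $i) . $tokens[array_rand($tokens)] . substr($seed, $i);
        default: // duplicate the seed to stress length handling
            return $seed . $seed;
    }
}

// Example: generate ten mutants of a benign seed value.
for ($n = 0; $n < 10; $n++) {
    echo mutate("john") . "\n";
}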
Chen et al. (Chen et al., 2019) proposed an ensemble fuzzing approach to increase the performance of bug discovery. First, they selected fuzzers based on their input generation strategies, seed selection and mutation processes, and the diversity of the granularity of their coverage information. Then, they implemented a globally asynchronous, locally synchronous system that periodically synchronizes the fuzzers attacking the same application, sharing among these fuzzers interesting seeds that cover new paths or generate new crashes. WAF-A-MoLE (Demetrio et al., 2020) is a tool that models the presence of an adversary based on a guided mutation fuzzing approach. The tool explores failing tests, which are repeatedly mutated with mutation operators. Sargsyan et al. (Sargsyan et al., 2019) and Araújo et al. (Araújo et al., 2020) presented directed fuzzing approaches with the goal of executing interesting code fragments as fast as possible. All these approaches target C/C++ code and are not applicable to web applications. Our approach aims to explore the latter's flaws and uses the ensemble fuzzing concept by sharing all the different requests within the ensemble and then using each fuzzer's capability to explore those requests.
KameleonFuzz (Duchene et al., 2014) is an extension of LigRE (Doupé et al., 2012; Duchène et al., 2013), a fuzzer that performs control flow analysis to detect reflected and stored XSS vulnerabilities. LangFuzz (Holler et al., 2012) is a black-box fuzzing tool for script interpreters. Vimpari et al. (Vimpari, 2015) presented a study of open-source fuzzers, analysing the differences between them from the perspective of someone with basic knowledge of software testing, as well as their usability. They ended up selecting six fuzzers: Radamsa, MiniFuzz, Burp Suite, JBroFuzz, w3af and OWASP ZAP, the last four being web application fuzzers. However, they did not explore the fuzzers' ability to exercise the results of the different crawlers among them, which is what we do.

4 ENSEMBLE FUZZING APPROACH

The ensemble fuzzing aims to improve the detection of web vulnerabilities and to increase code coverage. It also intends to minimize the problem of interactions that can change the application's state, presented previously (Doupé et al., 2012), by increasing the probability of exploiting a vulnerability through several fuzzers that exercise the same requests.

Web application fuzzers, before attacking the application under test, need to inspect its attack surface to extract the URLs (web requests) they need to perform the attacks. These fuzzers contain two components – crawler and scanner (we say scanner instead of fuzzer to distinguish it from the fuzzer as a whole) – which perform, respectively, the inspection of the attack surface and the attacks, and the latter depends on the results of the former. However, there is no guarantee that the crawler extracts all entry points from the attack surface and composes with them all the valid URLs the application contains. On the other hand, there is also no guarantee that the scanner is able to exercise the URLs with the correct injected code, capable of exploiting the vulnerabilities existing in the application through these URLs.

Therefore, we propose an approach that combines different fuzzers and takes the best each of them has to offer, in order to mitigate these issues. The approach comprises two phases – Crawling and Scanning – whose interactions are illustrated in Figure 1 by the orange and the black arrows, respectively.

Figure 1: Overview of the ensemble fuzzing approach.

Crawling phase. This phase starts with the initial URL of the target application. Based on this URL, the crawlers recursively explore the attack surface of the web application in order to discover all requests that the application receives. The Request extractor module extracts the requests they made, analyses the content of their responses and collects the entry points they contain, i.e., the application points that receive user inputs. Next, the Request uniformizer module compares the requests in order to remove duplicates, and then converts the distinct ones into a uniform format that allows them to be used by the different fuzzers in the attack phase.
At the end of this phase, a list of distinct (uniformized) requests is obtained, indicating for each one which fuzzers found it. Although the duplicated requests are eliminated, the list keeps all the fuzzers that found them. Also, in order to get the best results from each crawler and to ensure that the results obtained by one crawler are not influenced by another crawler's execution, the application's data is reset after each execution.

Attack phase. This phase aims to attack the web application to exploit existing vulnerabilities in its source code, using the requests resulting from the previous phase. The Distributor module receives the list of requests and distributes it to the fuzzers. However, as each fuzzer has a different format to store the requests found by its crawler in its database, the distributor, before distributing the requests, must prepare each one according to the format of each fuzzer. Next, each fuzzer delivers the requests to its scanner, which exercises them with malcrafted inputs to exploit some vulnerability. At the end, for each fuzzer, a list of the vulnerabilities found is provided, as well as the requests that exploited them.
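The distributor logic can be pictured as in the sketch below. This is only an illustration of the idea, not the actual implementation: the uniform request shape, the file names and the per-fuzzer output formats are hypothetical placeholders.

<?php
// Hypothetical sketch of the Distributor: take the uniformized request list
// and emit it once per fuzzer, in a per-fuzzer format, before its scan starts.
$requests = [
    ["method" => "GET",  "url" => "http://target/app/page.php?id=1", "data" => "",
     "found_by" => ["zap", "wapiti"]],
    ["method" => "POST", "url" => "http://target/app/login.php", "data" => "user=a&pass=b",
     "found_by" => ["w3af"]],
];

// One writer per fuzzer; the formats below are illustrative, not the tools' real ones.
$writers = [
    "wapiti" => function (array $r) { return $r["method"] . " " . $r["url"] . " " . $r["data"]; },
    "w3af"   => function (array $r) { return json_encode($r); },
    "zap"    => function (array $r) { return $r["url"] . "|" . $r["method"] . "|" . $r["data"]; },
];

foreach ($writers as $fuzzer => $format) {
    $lines = array_map($format, $requests);
    // Each fuzzer's scanner would then be launched on its own formatted list.
    file_put_contents("requests_for_{$fuzzer}.txt", implode("\n", $lines) . "\n");
}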
Assessment. We evaluated the fuzzers based on the coverage and precision of their crawler and scanner. We compared the crawlers through the number of URLs (requests) and the type of endpoints explored. The scanners were assessed by their ability to exploit vulnerabilities, given the same list of requests for all fuzzers, and by their unique vulnerability findings. Note that, since all scanners use the same requests, we are able to detect limitations in the capability of exploiting vulnerabilities by comparing the results of each fuzzer, as the results are not crawling dependent. Also, we manually inspected the results found by each scanner to verify which vulnerability each scanner (fuzzer) found (see Section 6).

5 IMPLEMENTATION AND CONFIGURATION

The proposed approach was implemented with three fuzzers, namely Wapiti 3.0.3, with the Web Spider plugin for crawling, w3af 2019.1.2, with the Web Spider plugin for crawling, and OWASP ZAP 2.9.0 (ZAP for short), with the Spider tool for crawling and the active-scan tool for attacking. For attacking, all tools were configured for sqli and xss. ZAP was also configured with the xss persistent and xss reflected modules. The xss plugin of the first two fuzzers also allows the discovery of persistent XSS.

Fuzzers must be correctly configured to achieve better performance and vulnerability exploitation. One important configuration, besides the URL of the target application, is the way users authenticate to the application, as nowadays several web applications have a user authentication process to prevent unauthorized users from gaining access to sensitive information. Hence, we describe next the authentication forms each fuzzer offers, which one we opted for and how we configured it. In addition, to avoid biased conclusions, the crawlers' depth level was set to the same value and they were run only once.

Wapiti authentication is based on session cookies. The utility wapiti-getcookie, given the login URL, can fetch the session cookies from the application, which are later imported by the Wapiti scanner. However, in some cases this utility does not detect hidden values that are sent in the authentication process. To circumvent this issue, we created a script that generates the cookie with the mandatory values.
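A minimal sketch of such a helper is shown below. It is not the authors' script: the login URL, the form fields (including the hidden one) and the output file format are hypothetical and merely illustrate the idea of logging in with all mandatory values and saving the resulting session cookies.

<?php
// Hypothetical helper: log in with the mandatory (including hidden) fields
// and dump the session cookies set by the application.
$loginUrl = "http://target/app/login.php";          // hypothetical login endpoint
$fields = [
    "username" => "bee",
    "password" => "bug",
    "security" => "0",                               // hypothetical hidden field
    "form"     => "submit",
];

$ch = curl_init($loginUrl);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);              // keep headers to read Set-Cookie
$response = curl_exec($ch);
curl_close($ch);

// Collect every cookie the application sets during authentication.
preg_match_all('/^Set-Cookie:\s*([^;\r\n]+)/mi', $response, $matches);
$cookies = $matches[1];

// Illustrative output only; the real script would write the scanner's own format.
file_put_contents("cookies.txt", implode("\n", $cookies) . "\n");
echo "Saved " . count($cookies) . " cookie(s)\n";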
w3af offers two authentication modes: autocomplete and cookie-based. For the former, the user provides the login URL, the authentication parameters and some information about a successful login. It works well most of the time; when it does not, we use the second mode, whose configuration is very similar to Wapiti's and where the cookie needs to be converted to the w3af-readable format.

ZAP has multiple ways of dealing with authentication. We explored two of them: form-based and script-based. In both, we must supply some application context, such as the login and logout regexes, and the username and password. The form-based mode automatically detects the data and format required for authentication, given the login page, and handles anti-CSRF tokens if the name of the tokens used by the website is in the ZAP Anti-CSRF list. Although this automation seems useful, in some cases it is unable to find the correct data format, resulting in failed authentication attempts. The script-based mode resolves this issue: the authentication data is provided in a script that, when executed, performs the actions required for authentication. This mode achieves better results than the first, so we opted for it.

6 EXPERIMENTS AND EVALUATION

The objective of the experimental evaluation was to answer the following questions: (1) Can the ensemble fuzzing lead to the discovery of vulnerabilities that would be missed if the fuzzers used only the requests found by their own crawler? (2) Can the ensemble fuzzing improve the overall coverage and precision of the vulnerability detection?
6.1 Tested Web Applications

The ensemble fuzzing was evaluated with three known vulnerable open-source web applications.

DVWA. Damn Vulnerable Web App (DVWA, http://www.dvwa.co.uk/) is a PHP/MySQL web application designed to be vulnerable to SQL injection, XSS and other vulnerability classes. The application requires user authentication, few pages can be accessed without an active session, and the different pages can be accessed via simple hyperlink tags inside list items. It also offers the option of changing the security level used. For our testing purposes we used the level "low" (no security).

Mutillidae. Mutillidae (https://github.com/webpwnized/mutillidae) is a deliberately vulnerable web application that contains at least one vulnerability type belonging to the OWASP Top 10 (Williams and Wichers, 2017). Its different pages are accessed through nested HTML unordered lists that change with "onmouseover" events. There is a base URL, and most of the pages are accessible by changing the value of a variable in the query string.

bWAPP. Buggy Web Application (bWAPP, http://www.itsecgames.com/) is a vulnerable web application with over one hundred vulnerabilities, including those described in the OWASP Top 10. Like DVWA, it also requires authentication, and the pages accessible without it are limited and vulnerability-free.

6.2 Crawlers Evaluation

The goal of this section is to assess the capabilities of the crawlers and to understand their discrepancies.

Before crawling the web applications, a manual analysis was made to identify how the authentication process is performed and which entry points would modify the application's state in an unwanted way, such as the logout endpoint. Although this last identification requires some manual work, it leads to more consistent results, and thus to a fairer and more complete crawler comparison.

The evaluation of the crawlers was based on the results provided by the Request uniformizer module, where we compared the requests executed by each crawler according to the following criteria: (i) requests with the same method (e.g., GET); (ii) requests with the same base URL; (iii) requests with the same variables and the same values in the query string, regardless of order (initially we considered as similar those requests that had the same variables in the query string regardless of their values, but although most requests with the same variables lead to the same page, in some cases the value determined the page to be presented, so, to avoid missing these endpoints, we compared the values as well); (iv) requests with the same POST data (this criterion was defined for simplicity, as comparing the data parameters against all the types that can be sent is not trivial).
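A minimal sketch of how these four criteria can be combined into a single comparison key is given below (our illustration, not the actual module's code):

<?php
// Build a canonical key for a request so that duplicates across crawlers collapse:
// same method (i), same base URL (ii), same query variables and values regardless
// of order (iii), and same POST data (iv).
function requestKey(string $method, string $url, string $postData = ""): string {
    $parts = parse_url($url);
    $base  = ($parts["scheme"] ?? "http") . "://" . ($parts["host"] ?? "") . ($parts["path"] ?? "/");

    parse_str($parts["query"] ?? "", $query);
    ksort($query);                                    // the order of variables is irrelevant

    return strtoupper($method) . " " . $base . "?" . http_build_query($query) . " " . $postData;
}

// Two requests that differ only in parameter order map to the same key.
var_dump(requestKey("GET", "http://target/app/page.php?b=2&a=1")
      === requestKey("get", "http://target/app/page.php?a=1&b=2"));   // bool(true)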
Based on these four criteria, we were able to automatically compare the requests made by all crawlers and verify which ones are unique and which are shared between crawlers. Figure 2 presents the results of this comparison for each web application.

As we can observe, no crawler is the best; all crawlers had different results. The rate of common requests discovered by all crawlers was 28% (29) in DVWA, 20% (90) in Mutillidae, and 13% (52) in bWAPP. However, the number of equal requests outputted by two crawlers was greater than the previous rate: for example, for Mutillidae and bWAPP it was, respectively, 35% (161 out of 461) and 45% (182 out of 405). Almost or even more than 50% of the requests found by each crawler were discovered only by it (the unique requests).

Some interesting unique requests were found in DVWA and Mutillidae. For DVWA, ZAP's unique URLs are related to the login page and pages accessible from there, which w3af and Wapiti excluded from their results. Of the few unique requests found by Wapiti, one has an entry point only accessible through a JavaScript redirect function; ZAP misses it completely and w3af reaches the function but excludes the URL due to an erroneous redirect. As we will see in Section 6.3, this request allows the exploitation of vulnerabilities. w3af's unique requests are mainly images that are missed or ignored by the other fuzzers. For Mutillidae, the unique URLs of ZAP and Wapiti are related to a public phpMyAdmin service that they were able to detect, while those of w3af are related to folders with common names, such as "/javascript" or "/webservices".

These results denote the existing discrepancy in what crawlers can discover. They also suggest that there are advantages in crawling applications with an ensemble of fuzzers, because requests missed by some crawlers may contain vulnerabilities that would not be found by the fuzzers whose crawlers missed them.
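The common and unique counts summarized above (and shown in Figure 2) can be derived from the per-crawler sets of uniformized request keys with simple set operations, as in this sketch with hypothetical data:

<?php
// Given the set of uniformized request keys each crawler produced (hypothetical data),
// count how many are common to all three and how many are unique to each one.
$found = [
    "wapiti" => ["GET /a", "GET /b", "GET /c"],
    "w3af"   => ["GET /a", "GET /d"],
    "zap"    => ["GET /a", "GET /b", "GET /e"],
];

$common = array_intersect($found["wapiti"], $found["w3af"], $found["zap"]);
echo "common to all crawlers: " . count($common) . "\n";   // 1 (GET /a)

foreach ($found as $crawler => $requests) {
    // Merge what the other two crawlers found and subtract it from this crawler's set.
    $others = array_merge(...array_values(array_diff_key($found, [$crawler => null])));
    $unique = array_diff($requests, $others);
    echo "unique to {$crawler}: " . count($unique) . "\n";
}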
Figure 2: Number of URLs found by the crawlers of each fuzzer when inspecting each web application: (a) DVWA, (b) Mutillidae, (c) bWAPP.
Table 1: Number of successfully explored vulnerabilities in the tested web applications.

Web App     Vulnerability   Wapiti  Wapiti-EF  W3af  W3af-EF  ZAP  ZAP-EF  False Positives
DVWA        SQLi                 2          2     3        3    3       3                1
DVWA        Reflected XSS        5          5     1        5    7       8                1
DVWA        Stored XSS           0          0     0        2    2       2                0
Mutillidae  SQLi                17         17     4       12    9       9                0
Mutillidae  Reflected XSS       52         54    26       43   22      24                1
Mutillidae  Stored XSS          17         17     1       11    3       3                1
bWAPP       SQLi                 1          1     0        1    0       0                0
bWAPP       Reflected XSS        9         21     1       13   19      20               11
bWAPP       Stored XSS           1          4     0        2    1       1                0

6.3 Scanners Evaluation

In this section we assess the capabilities of the fuzzers' scanners, individually and in the ensemble. The section also intends to find out whether the scanners are able to explore requests found by crawlers other than their own fuzzer's and, hence, exploit vulnerabilities through them.

For the individual evaluation, each scanner was run standalone with the requests that its own crawler discovered. In the ensemble, on the other hand, each scanner ran with the requests found by the ensemble fuzzing and provided by the distributor, after being formatted according to that fuzzer's format. Hence, having a list of requests obtained by the three fuzzers' crawlers, each fuzzer was fed with the list, and each fuzzer's scanner used the requests, exercised them with diverse inputs and carried out attacks on the web applications to try to exploit SQLi and XSS.

Table 1 summarizes the results obtained from the scanners in both evaluations, where columns 3, 5 and 7 regard the individual fuzzers and columns 4, 6 and 8 the fuzzers in the ensemble (EF). The results vary with the complexity of each web application. Again, no scanner is the best. It is visible that the fuzzers within the ensemble improve their precision in discovering vulnerabilities, showing that they are able to explore requests that were found by other crawlers. Also, w3af-EF was the fuzzer whose precision increased the most. ZAP-EF exploited more vulnerabilities in DVWA, while in Mutillidae and bWAPP Wapiti-EF had better results.

In order to assess the quality of the scanners, we compared the reported vulnerabilities with each other, identifying the unique and common findings. Figure 3 displays the vulnerability distribution for each application. The common findings range from 27% to 50% between all scanners, and from 14% to 49% between two scanners. The unique findings are, on average, 39%. Wapiti-EF and ZAP-EF had a greater number of unique findings, varying according to the application tested. w3af-EF, on the other hand, had a low rate of unique findings (1 or 0).

During the analysis we found interesting cases when comparing the results of the scanners with the requests of the crawlers. Also, we tried all outputted results on the tested web applications (i.e., we performed the attacks manually) and found that some of them were false positives. In the next two sections we present some of these cases.

6.3.1 Identifying Interesting Cases

We considered an attack interesting if it fits one of the following cases: (i) the crawler of fuzzer Fx missed URLi and the remaining crawlers found it but were unable to explore it; in the ensemble, only Fx was able to explore URLi and found the vulnerability. (ii) Only the crawler of fuzzer Fx was able to find URLi and explored it; in the ensemble, all fuzzers were able to explore URLi and found the vulnerability. (iii) Both fuzzers Fx and Fy found the same base URL, but Fy was not able to retrieve a valid request from it because it missed a variable or sent an erroneous value; Fx discovered the correct request (URLi) and was able to explore it, and in the ensemble Fy was also able to explore URLi.
Figure 3: Vulnerability distribution by each fuzzer when attacking each web application: (a) DVWA, (b) Mutillidae, (c) bWAPP.

We found 18 interesting cases, namely 6 of the first case, 11 of the second and 1 of the last. Table 2 presents an interesting XSS vulnerability we found for each case. We used a binary representation to indicate whether the vulnerability was explored (1) or not (0). The "-EF" columns represent the attacks performed in the ensemble, and the other columns represent the attacks performed by each fuzzer individually.

Table 2: An example of XSS of each case.

URL                                              Wapiti  Wapiti-EF  W3af  W3af-EF  ZAP  ZAP-EF
/bWAPP/csrf_3.php                                     0          1     0        0    0       0
/DVWA/vulnerabilities/csrf/test_credentials.php       1          1     0        1    0       1
/bWAPP/xss_php_self.php                               0          1     0        1    1       1

The first row represents a vulnerability where the request was missed by Wapiti's crawler but caught by the other crawlers, whose fuzzers were nevertheless unable to exploit it. Observing the application code, there is a submit button that sends some values, including one in a hidden input. Wapiti was able to handle this hidden input, unlike the other fuzzers.

The second row is the case mentioned in Section 6.2. Wapiti's crawler is the only one that found the URL, and Wapiti was able to explore it. When the URL is shared between all fuzzers, all of them exploit the vulnerability too.

In the last row, w3af's crawler missed the URL, unlike the other crawlers. However, ZAP and Wapiti returned different parameters for this URL. Wapiti only found the parameters of a form that is used to update information, missing a parameter that is needed to send the request; hence, its exploration failed. ZAP found the correct URL and explored it successfully. When the correct URL (from ZAP) was explored by all fuzzers, they were all able to exploit the vulnerability. The code that validates this request is presented in Listing 1. The vulnerable code can only be reached if the parameters form, firstname and lastname are all set, making the parameter form, discovered by ZAP, crucial in the vulnerability exploitation.

Listing 1: Source code that validates the requests sent to /bWAPP/xss_php_self.php.

if(isset($_GET["form"]) && isset($_GET["firstname"]) && isset($_GET["lastname"])) {
    $firstname = $_GET["firstname"];
    $lastname = $_GET["lastname"];
    if($firstname == "" or $lastname == "")
        echo "<font color=\"red\">Please enter both fields...</font>";
    else
        echo "Welcome " . xss($firstname) . " " . xss($lastname);
}

6.3.2 Identifying False Positives

By manually performing all the attacks reported by each fuzzer's scanner, we confirmed a total of 15 false positives: 13 from ZAP and 2 from w3af. The last column of Table 1 presents them for the three applications. Most of the ZAP false positives were XSS attacks in the bWAPP application, as the values of the variables instantiated by ZAP were not used, since the application only checked whether the variable was set. One example of a w3af false positive was in DVWA, as it reported a SQLi attack on "/DVWA/instructions.php?doc=", where the attacked parameter was "doc". Listing 2 presents a fragment of the code that deals with this parameter.
Here, the value of the variable is only used in a comparison with the keys of an existing array, and if it does not match any of them, a default one is used. The value is not used anywhere else in the application, so no SQLi attack is possible.

Listing 2: Source code of the page attacked by the w3af fuzzer that resulted in a false positive.

$docs = array(
    'readme' => array('file' => 'README.md'),
    'PDF'    => array('file' => 'docs/pdf.html'),
);
$selectedDocId = isset($_GET['doc']) ? $_GET['doc'] : '';
if(!array_key_exists($selectedDocId, $docs)) {
    $selectedDocId = 'readme';
}
$readFile = $docs[$selectedDocId]['file'];

7 CONCLUSIONS

The paper presented an ensemble fuzzing approach for web applications that improves the detection of vulnerabilities by exploring all requests returned by all fuzzers' crawlers, and that increases the code coverage of such applications. The approach was implemented with three open-source web fuzzers and evaluated with three well-known vulnerable applications. The preliminary results are promising and showed that there are advantages in having such an ensemble, specially in those cases where it is able to detect vulnerabilities that would be missed if the fuzzers ran individually. As a further step, we want to identify in the code of the applications the vulnerabilities exploited, by inspecting the code traces resulting from the fuzzers.

ACKNOWLEDGMENTS. This work was partially supported by national funds through FCT, with reference to the SEAL project (PTDC/CCI-INF/29058/2017) and the LASIGE Research Unit (UIDB/00408/2020 and UIDP/00408/2020).

REFERENCES

Araújo, F., Medeiros, I., and Neves, N. (2020). Generating tests for the discovery of security flaws in product variants. In Proceedings of the IEEE International Conference on Software Testing, Verification and Validation Workshops, pages 133–142.

Chen, Y., Jiang, Y., Ma, F., Liang, J., Wang, M., Zhou, C., Jiao, X., and Su, Z. (2019). EnFuzz: Ensemble fuzzing with seed synchronization among diverse fuzzers. In Proceedings of the 28th USENIX Security Symposium, pages 1967–1983.

Chess, B. and McGraw, G. (2004). Static analysis for security. IEEE Security & Privacy, 2(6):76–79.

Demetrio, L., Valenza, A., Costa, G., and Lagorio, G. (2020). WAF-A-MoLE. In Proceedings of the 35th Annual ACM Symposium on Applied Computing.

Doupé, A., Cavedon, L., Kruegel, C., and Vigna, G. (2012). Enemy of the state: A state-aware black-box web vulnerability scanner. In Proceedings of the USENIX Conference on Security Symposium, pages 26–26.

Duchène, F., Rawat, S., Richier, J., and Groz, R. (2013). LigRE: Reverse-engineering of control and data flow models for black-box XSS detection. In Proceedings of the Working Conference on Reverse Engineering, pages 252–261.

Duchene, F., Rawat, S., Richier, J.-L., and Groz, R. (2014). KameleonFuzz: Evolutionary fuzzing for black-box XSS detection. In Proceedings of the ACM Conference on Data and Application Security and Privacy, pages 37–48.

Holler, C., Herzig, K., and Zeller, A. (2012). Fuzzing with code fragments. In Proceedings of the 21st USENIX Security Symposium, pages 445–458.

Jovanovic, N., Kruegel, C., and Kirda, E. (2006). Precise alias analysis for static detection of web application vulnerabilities. In Proceedings of the Workshop on Programming Languages and Analysis for Security, pages 27–36.

Medeiros, I., Neves, N. F., and Correia, M. (2014). Automatic detection and correction of web application vulnerabilities using data mining to predict false positives. In Proceedings of the International Conference on World Wide Web, pages 63–74.

Miller, B. P., Fredriksen, L., and So, B. (1990). An empirical study of the reliability of UNIX utilities. Communications of the ACM, 33(12):32–44.

Ryan, K. (2020). Patched Zoom exploit: Altering camera settings via remote SQL injection.

Sargsyan, S., Kurmangaleev, S., Hakobyan, J., Mehrabyan, M., Asryan, S., and Movsisyan, H. (2019). Directed fuzzing based on program dynamic instrumentation. In Proceedings of the International Conference on Engineering Technologies and Computer Science, pages 30–33.

Sohoel, H., Jaatun, M., and Boyd, C. (2018). OWASP Top 10 - do startups care? pages 1–8.

Sutton, M., Greene, A., and Amini, P. (2007). Fuzzing: Brute Force Vulnerability Discovery. Addison-Wesley.

Vimpari, M. (2015). An evaluation of free fuzzing tools.

Williams, J. and Wichers, D. (2017). OWASP Top 10 - 2017: The ten most critical web application security risks. Technical report, OWASP Foundation.