Cyber Unit 1 CH 2
Processing reports
Vulnerability scanners generate reports of the vulnerabilities they find, but before
we discuss how these reports are used, we must discuss false positives and false
negatives.
Both false positives and false negatives are errors in the results of a scan.
A false positive is when the scanner says there is a vulnerability, but there
actually isn’t a vulnerability. This can waste the time of security personnel trying
to fix a vulnerability that simply doesn’t exist.
A false negative is when the scanner says there isn’t a vulnerability, but there
actually is. This means that even if a scan says it found 0 vulnerabilities, that
doesn’t mean there are no vulnerabilities present.
There are many tools available that can automate the process but, as with all
tools, it is important to understand their limitations.
Web application scanning tools automatically review a website by crawling
through all of its links and checking each page with an algorithm that matches
responses to signatures.
If a match is found, the tool may perform additional checks to determine, with a
degree of certainty, whether a vulnerability is present.
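The matching step can be sketched as below. This is a minimal illustration, not a real scanner: the signature names and regular expressions are invented for the example, and production tools use far larger, curated signature sets.

```python
import re

# Hypothetical signature database: each entry maps a signature name to a
# regex that matches a tell-tale string in an HTTP response (illustrative
# patterns only, not drawn from any real scanner).
SIGNATURES = {
    "jquery-1.x": re.compile(r"jquery[-.]1\.\d+(\.\d+)?(\.min)?\.js"),
    "php-version-leak": re.compile(r"X-Powered-By: PHP/5\."),
}

def match_signatures(response_text):
    """Return the names of all signatures that match a page's response."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(response_text)]
```

A crawler would call `match_signatures` on every response it fetches and pass any matches on for the additional certainty checks described above.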
Example 1:
Using its database of signatures, the scanner identifies that a version of a library
in use has vulnerabilities. It then reports the vulnerability and the page it was
found on.
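A version lookup of this kind can be sketched as follows. The library name, version numbers, and CVE identifier here are all made up for illustration; real scanners draw on maintained vulnerability feeds such as the NVD.

```python
# Hypothetical vulnerability database keyed by (library, version).
# All entries are invented for this example.
KNOWN_VULNERABLE = {
    ("examplelib", "2.4.1"): ["CVE-0000-0001"],
    ("examplelib", "2.4.2"): [],
}

def report_library(page_url, library, version):
    """Report any known CVEs for a library version found on a page."""
    cves = KNOWN_VULNERABLE.get((library, version), [])
    return [{"page": page_url, "library": library,
             "version": version, "cve": cve} for cve in cves]
```

Note that this reports the library as vulnerable whether or not the application ever calls the affected function, which is exactly how false positives of the kind discussed later can arise.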
Example 2:
The scanner identifies an input field and tests whether a blind injection attack is
possible by inserting input that contains a delay and monitoring the response
time. The response takes longer than normal, so the scanner marks the input
field as vulnerable to a blind injection attack.
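A minimal sketch of such a timing probe is shown below, assuming a `send_request` callable supplied by the caller; the payload string, baseline, and threshold values are illustrative assumptions, not scanner defaults.

```python
import time

def looks_time_injectable(send_request, payload, baseline_s, threshold_s=4.0):
    """Send a payload containing a delay (e.g. one embedding SLEEP(5)) and
    flag the field if the response takes noticeably longer than the
    baseline response time measured for normal input."""
    start = time.monotonic()
    send_request(payload)  # placeholder for the actual HTTP request
    elapsed = time.monotonic() - start
    return (elapsed - baseline_s) >= threshold_s
```

A single slow response is weak evidence; as Example 4 below explains, a slow server or a timeout can produce the same delay, which is why this check alone is prone to false positives.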
Example 3:
The tester attempts to get the web application to run the vulnerable function in
the library; if it does, it is a genuine vulnerability. If the application does not use
the function or allow the tester to trick it into calling the vulnerable function, it is
not a vulnerability.
Example 4:
The tester uses a range of inputs with different delays to see if the response time
changes correspondingly, while examining the output. If the response time
changes according to the delay, it is a genuine vulnerability. If the response time
is constant or the output explains the delay, such as a timeout because the
application didn’t understand the input, then it is a false positive.
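The tester's verification logic can be sketched like this; `send_request` and `make_payload` are placeholders for whatever mechanism delivers the crafted input, and the tolerance value is an assumption for illustration.

```python
import time

def confirms_time_injection(send_request, make_payload, delays_s,
                            tolerance_s=0.5):
    """Verify a suspected time-based injection by checking that the
    response time tracks each requested delay. Returns True only if every
    measured time is within tolerance_s of the injected delay."""
    for delay in delays_s:
        start = time.monotonic()
        send_request(make_payload(delay))
        elapsed = time.monotonic() - start
        if abs(elapsed - delay) > tolerance_s:
            # Response time does not follow the delay: likely a false
            # positive (e.g. a constant timeout, not a real injection).
            return False
    return True
```

Because the delay is varied rather than fixed, a constant-latency server fails the check, which separates genuine injections from the timeout false positive described above.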
It is also important to understand that these tools will not necessarily find all
vulnerabilities. The same goes for human testing.
Type I error – false positive, a result that indicates a vulnerability is present when
it is not. This creates noise and results in unnecessary remediation work.
Type II error – false negative, where a vulnerability is present but is not identified.
The false negative is the more serious error, as it creates a false sense of security.
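Given the set of findings a scanner reported and the set of vulnerabilities actually present (which, in practice, is only ever known approximately), the two error types can be counted as in this sketch:

```python
def scan_error_counts(reported, actual):
    """Count Type I (false positive) and Type II (false negative) errors,
    given the findings a scanner reported and the vulnerabilities that are
    actually present."""
    reported, actual = set(reported), set(actual)
    return {
        "true_positives": len(reported & actual),
        "false_positives": len(reported - actual),   # Type I
        "false_negatives": len(actual - reported),   # Type II
    }
```
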
How to identify false negatives is beyond the scope of this article, but our general
advice is to use multiple tools and techniques for vulnerability identification, and
not to assume a clean result from a tool or tester means you are 100% secure.
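The "multiple tools and techniques" advice can be illustrated with a trivial union of findings; the idea is simply that a vulnerability one tool misses (a false negative for that tool) may still be caught by another. The finding names are placeholders.

```python
def combine_findings(*tool_results):
    """Union the findings from several tools or testers; a vulnerability
    missed by one source may still be reported by another."""
    combined = set()
    for findings in tool_results:
        combined |= set(findings)
    return combined
```

Combining sources reduces false negatives but tends to increase false positives, so the merged list still needs the kind of manual verification shown in Examples 3 and 4.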
One of the misconceptions about penetration testing is that it gives a full view of
the network and that you are safe once a penetration test has been performed.
This is not the case: attackers may still find a vulnerability, for example in the
business processes behind your otherwise secure application.
Figure 1.1: The three methods of assessing the vulnerability of systems and the breadth and depth to
which they are successful
All three testing methodologies often use the terms hack or compromise: "we
will hack your network and show you where your weaknesses are." But does the
client or business owner understand the difference between these terms? How
do we measure it? What are the criteria? And when do we know that the hack or
compromise is complete? All of these questions point to one thing: what is the
purpose of the testing, and what is the primary goal in mind?