Python Libraries for Hacking
CYBER SECURITY
2.0 Introduction
2.1 Learning Outcomes
2.4 How is Python used in Pen Testing?
2.5 Python Libraries for Cyber Security
2.5.1 Faker
2.5.2 Scapy
2.5.3 Beautiful Soup
2.6 Let’s Sum it Up
2.7 Check Your Progress
2.0 Introduction
In this unit, we will discuss Python libraries for cybersecurity. Many cybersecurity development projects require cryptographic functions such as data encryption, decryption, and secret-key generation. The Python Cryptography library can perform all these tasks, since it is loaded with cryptographic algorithms. Hence, we will discuss such libraries in detail in this unit.
2.1 Learning Outcomes
Deterioration: Rapid advances in digital devices, together with the conflicting expectations imposed by personal privacy rules and national security, have damaged businesses' capacity to secure their data.
…………………………………………………………………………
……………………………………………………………………
……………………………………………………………………
…………………………………………………………………….
2.5.1 Faker
Faker is an open-source Python package for constructing your own datasets: random data with random properties such as name, age, and address. It supports all major locales and languages, and can generate data appropriate to a given locale.
Implementation
First install the package and create a Faker instance. Faker can then generate fake data such as a name, address, date of birth, email, text message, and so on:
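A minimal setup sketch, assuming the standard Faker API; the instance name fake is reused in the snippets below:
# Install the package first: pip install Faker
from faker import Faker
fake = Faker()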
print('Address: ', fake.address())
print('DOB: ', fake.date_of_birth())
# Print five fake names
for i in range(5):
    print(fake.name())
Faker can also build sentences from a user-specified word list: given terms of our choosing, it composes fake sentences using those words.
words = ['Hello', 'Abhishek', 'all', 'are', 'where', 'why']
print(fake.sentence(ext_word_list=words))
Faker is also able to generate randomized strings that follow a pattern:
# Use bothify to generate random digits (#) or letters (?); limit the letters used with letters=
print(fake.bothify('PROD-??-##', letters='ABCDE'))
print(fake.bothify('iPhone-#'))
# Create fake True/False values
# Random True/False
print(fake.boolean())
# Specify % True
print(fake.boolean(chance_of_getting_true=25))
You can supply a list of values from which to pick at random for category columns. If you do not want every item to have an equal probability of being picked, you can instead define a weight for each value.
import numpy as np
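A sketch of what such a weighted pick can look like with numpy; the category values and weights here are hypothetical:
categories = ['low', 'medium', 'high']   # hypothetical category values
weights = [0.2, 0.5, 0.3]                # hypothetical weights; must sum to 1
print(np.random.choice(categories))              # each value equally likely
print(np.random.choice(categories, p=weights))   # weighted pick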
You can also create numeric columns whose values cluster around a central value. Dates and datetimes can be constructed in a variety of ways: you can pick a date within this decade, century, year, or month, or within an interval between two dates.
print(fake.year())
print(fake.month())
print(fake.day_of_month())
print(fake.day_of_week())
print(fake.month_name())
print(fake.past_date('-1y'))
print(fake.future_date('+1d'))
2.5.2 Scapy
Scapy is a powerful Python package used in pen testing for activities such as scanning, probing, unit testing, and network discovery. It is designed to sniff, transmit, analyze, and forge network packets. The primary concept is to send packets and receive meaningful answers. This gives Scapy an advantage over comparable programs like Nmap, which often respond with a simple (open/closed/filtered) status. With Scapy, engineers can create packets (requests), record responses (answers) as packet pairs, and receive the results as a list of (request, answer) pairs. Many tools discard packets to which the target network/host does not reply; Scapy, on the other hand, gives users all the information by building an additional list of unmatched (unanswered) packets. Beyond probing, Scapy can also deliver invalid frames to a targeted server, inject 802.11 frames, decode VoIP on WEP-encrypted channels, and so on. Scapy makes these capabilities available simply by importing the module.
The statement from scapy.all import * in the following code imports all of the Scapy module's capabilities. To import only the functions you need, replace the asterisk (*) with their names. Consider the following scenario:
#!/usr/bin/env python3
import sys
from scapy.all import ICMP, IP, ARP
Scapy is a Python library with its own command-line interpreter (CLI) for creating, modifying, sending, and capturing network packets. It can be used interactively or as a library by importing it into Python scripts. It is also compatible with Linux, Mac OS X, and Windows.
Sending packets and receiving replies are Scapy's two main functions. It sends packets, receives answers, matches requests with responses, and returns both a list of packet pairs (request, response) and a list of unmatched packets. This gives it a significant advantage over Nmap or hping: the response is not reduced to a status such as (open), but includes the entire packet.
send(): sends packets at layer 3.
sendp(): sends packets at layer 2.
sr(): sends and receives packets at layer 3.
srp(): sends and receives packets at layer 2.
sr1(): sends packets at layer 3 and returns only the first answer.
srp1(): sends packets at layer 2 and returns only the first answer.
sniff(): sniffs packets.
traceroute(): runs a traceroute.
arping(): sends who-has ARP requests to determine which machines are up on the network.
Example of use through the command-line interpreter (CLI)
The easiest place to start is building an ICMP packet. In this scenario, we create a packet that we keep in a variable (p), composed of an IP layer (with the destination IP), an ICMP layer, and finally a payload ("hello SanExperts"). Then we transmit it with sr1(p):
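A minimal sketch of that exchange, assuming a hypothetical destination address:
# Build an IP/ICMP packet with a payload, send it, and wait for one answer
from scapy.all import IP, ICMP, sr1
p = IP(dst='192.168.1.1') / ICMP() / 'hello SanExperts'   # destination IP is hypothetical
reply = sr1(p, timeout=2)
if reply is not None:
    reply.show()   # dump the full answer packet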
To scan for hosts on our subnet, for example, we just run srp() and display the hosts that have answered (ans):
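A sketch of such a scan, assuming a hypothetical /24 subnet:
# Broadcast who-has ARP requests and print every host that answered
from scapy.all import Ether, ARP, srp
ans, unans = srp(Ether(dst='ff:ff:ff:ff:ff:ff') / ARP(pdst='192.168.1.0/24'),
                 timeout=2, verbose=False)
for sent, received in ans:
    print(received.psrc, received.hwsrc)   # IP and MAC address of each responder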
If we need help on any of the commands, we can display it with the help() function:
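For instance:
help(sr1)   # prints the documentation for sr1()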
In the main() section of the program, we check for correct syntax and start the procedure that performs the scan.
Next comes the class that performs the scan, distinguishing TCP from UDP.
We define the different layers, with the payload and the relevant settings, to assemble our packet. Then we send it with sr1() and save the outcome in a variable (result).
Flags in each answer are then examined to identify the port's status. In this example, we look for the "SYN, ACK" flags in the response; if they are present, we assume a connection has been initiated and the port is open.
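A minimal sketch of that TCP check, with a hypothetical target host and port:
# Send a SYN probe and classify the port from the TCP flags in the answer
from scapy.all import IP, TCP, sr1
target, port = '192.168.1.10', 80   # hypothetical host and port
result = sr1(IP(dst=target) / TCP(dport=port, flags='S'), timeout=2, verbose=False)
if result is not None and result.haslayer(TCP):
    if result[TCP].flags == 0x12:     # SYN, ACK -> port open
        print(port, 'open')
    elif result[TCP].flags == 0x14:   # RST, ACK -> port closed
        print(port, 'closed')
else:
    print(port, 'no answer')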
We use a similar procedure with UDP: because UDP is a connectionless protocol, we transmit the packet and wait a while to see whether any data comes back, then decide whether the port is open or closed based on the type of answer we receive.
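A sketch of the UDP case, again with a hypothetical host and port:
# No answer may mean open|filtered; an ICMP port-unreachable reply means closed
from scapy.all import IP, UDP, ICMP, sr1
target, port = '192.168.1.10', 53   # hypothetical host and port
result = sr1(IP(dst=target) / UDP(dport=port), timeout=3, verbose=False)
if result is None:
    print(port, 'open|filtered')
elif result.haslayer(ICMP) and result[ICMP].type == 3 and result[ICMP].code == 3:
    print(port, 'closed')
else:
    print(port, 'open')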
If we want further information, such as whether the ports are "filtered," we should add that logic to the TCP and UDP checks as well.
2.5.3 Beautiful Soup
Beautiful Soup is a Python library for parsing and extracting data from HTML and XML files. It works with your parser of choice to provide smooth navigation, searching, and modification of the parse tree, and it commonly saves developers hours, if not days, of work.
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
Running the "three sisters" document through Beautiful Soup gives us a BeautifulSoup object, which represents the document as a nested data structure:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.prettify())
# <html>
# <head>
# <title>
# The Dormouse's story
# </title>
# </head>
# <body>
# <p class="title">
# <b>
# The Dormouse's story
# </b>
# </p>
# <p class="story">
# Once upon a time there were three little sisters; and their names were
# <a class="sister" href="http://example.com/elsie" id="link1">
# Elsie
# </a>
# ,
# <a class="sister" href="http://example.com/lacie" id="link2">
# Lacie
# </a>
# and
# <a class="sister" href="http://example.com/tillie" id="link2">
# Tillie
# </a>
# ; and they lived at the bottom of a well.
# </p>
# <p class="story">
# ...
# </p>
# </body>
# </html>
Here are a few quick ways to navigate that data structure:
soup.title
# <title>The Dormouse's story</title>
soup.title.name
# u'title'
soup.title.string
# u'The Dormouse's story'
soup.title.parent.name
# u'head'
soup.p
# <p class="title"><b>The Dormouse's story</b></p>
soup.p['class']
# u'title'
soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
soup.find(id="link3")
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
for link in soup.find_all('a'):
print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie
# http://example.com/tillie
Beautiful Soup 4 is published through PyPi, so if you can't install it with your system packager you can install it with easy_install or pip. The package name is beautifulsoup4, and the same package works on both Python 2 and Python 3. Make sure you use the right version of pip or easy_install for your Python edition (if you're using Python 3, these may be named pip3 and easy_install3, respectively).
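For example:
$ pip install beautifulsoup4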
If you don't have easy_install or pip installed, you can download the Beautiful Soup 4 source tarball and install it with setup.py.
If all else fails, the Beautiful Soup license allows you to package the entire library with your application: you can download the tarball, copy its bs4 directory into your application's codebase, and use Beautiful Soup without installing it at all.
Beautiful Soup is developed on Python 2.7 and Python 3.2, but it should work with more recent versions too.
Installing a parser
Beautiful Soup supports the HTML parser included in Python's standard library as well as several third-party Python parsers, such as lxml and html5lib. Depending on your setup, you might install them with these commands:
$ easy_install lxml
$ easy_install html5lib
To parse a document, pass it into the BeautifulSoup constructor:
soup = BeautifulSoup("<html>data</html>")
BeautifulSoup("Sacré bleu!")
<html><head></head><body>Sacré bleu!</body></html>
Beautiful Soup then parses the document using the best available parser. It will use an HTML parser unless you specifically tell it to use an XML parser. (See Parsing XML.)
Kinds of objects
Beautiful Soup transforms a complex HTML document into a tree of Python objects, but you will mostly deal with four kinds of objects: Tag, NavigableString, BeautifulSoup, and Comment.
Tag
A Tag object corresponds to an HTML or XML tag in the original document:
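For instance, following the library's documentation (the later snippets reuse this tag):
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', 'html.parser')
tag = soup.b
type(tag)
# <class 'bs4.element.Tag'>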
Tags have a lot of attributes and methods, and we will cover most of
them in Navigating the tree and Searching the tree. For now, the
most important features of a tag are its name and attributes.
Name
Every tag has a name, accessible as .name:
tag.name
# u'b'
If you change a tag’s name, the change will be reflected in any HTML
markup generated by Beautiful Soup:
tag.name = "blockquote"
tag
# <blockquote class="boldest">Extremely bold</blockquote>
Attributes
A tag may have any number of attributes. The
tag <b id="boldest"> has an attribute “id” whose value is
“boldest.” You can access a tag’s attributes by treating the tag like
a dictionary:
tag['id']
# u'boldest'
tag.attrs
# {u'id': 'boldest'}
You can add, remove, and modify a tag’s attributes. Again, this is done
by treating the tag as a dictionary:
tag['id'] = 'verybold'
tag['another-attribute'] = 1
tag
# <b another-attribute="1" id="verybold"></b>
del tag['id']
del tag['another-attribute']
tag
# <b></b>
tag['id']
# KeyError: 'id'
print(tag.get('id'))
# None
Multi-valued attributes
HTML 4 defines a few attributes that can have multiple values. HTML
5 removes a couple of them but defines a few more. The most
common multi-valued attribute is class (a tag can have more than
one CSS class). Others
include rel, rev, accept-charset, headers, and accesskey.
Beautiful Soup presents the value(s) of a multi-valued attribute as
a list:
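For instance, class comes back as a list (a short example following the library's documented behaviour):
css_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser')
css_soup.p['class']
# ["body", "strikeout"]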
If an attribute looks like it has more than one value, but it’s not a
multi-valued attribute as defined by any version of the HTML
standard, Beautiful Soup will leave the attribute alone:
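For example, id is not multi-valued, so its value stays a single string:
id_soup = BeautifulSoup('<p id="my id"></p>', 'html.parser')
id_soup.p['id']
# 'my id'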
When you turn a tag back into a string, multiple attribute values are consolidated into a single space-separated string. If you instead need a value that is always a list, whether or not the attribute is multi-valued, use get_attribute_list():
id_soup.p.get_attribute_list('id')
# ["my id"]
NavigableString
A string corresponds to a bit of text within a tag. Beautiful Soup uses the NavigableString class to contain these bits of text:
tag.string
# u'Extremely bold'
type(tag.string)
# <class 'bs4.element.NavigableString'>
unicode_string = unicode(tag.string)
unicode_string
# u'Extremely bold'
type(unicode_string)
# <type 'unicode'>
You can’t edit a string in place, but you can replace one string with
another, using replace_with():
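For example, continuing with the renamed blockquote tag from earlier:
tag.string.replace_with("No longer bold")
tag
# <blockquote class="boldest">No longer bold</blockquote>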
BeautifulSoup
The BeautifulSoup object itself represents the parsed document as a whole, and for most purposes you can treat it like a Tag. For example, you can pass a BeautifulSoup object into one of the methods defined in Modifying the tree, just as you would a Tag. This lets you do things like combine two parsed documents:
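A sketch along the lines of the library's documented example (the "xml" mode requires the lxml parser):
doc = BeautifulSoup("<document><content/>INSERT FOOTER HERE</document>", "xml")
footer = BeautifulSoup("<footer>Here is the footer</footer>", "xml")
doc.find(text="INSERT FOOTER HERE").replace_with(footer)
print(doc)
# <?xml version="1.0" encoding="utf-8"?>
# <document><content/><footer>Here is the footer</footer></document>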
Since the BeautifulSoup object does not correspond to an actual HTML or XML tag, it has no name and no attributes of its own, but it has been given the special .name "[document]":
soup.name
# u'[document]'
Check Your Progress-2:
Note: a) Space is given below for writing your answers.
b) Check your answers with the ones given at the end of this unit.
i) Write a short note on the Scapy library.
……………………………………………………………………
…………………………………………………………………
…………………………………………………………………
…………………………………………………………………
……………………………………………………………….
Answers to Check Your Progress
1) Penetration testing is a broad field in which cybersecurity professionals analyze an organization's security. By uncovering serious security flaws, cybersecurity specialists can alert the organization so that it can prepare its defences. Penetration testing is divided into seven stages:
Stage 1: Pre-Engagement - A cybersecurity team determines the
objectives and logistics of the pen test.
Stage 2: Information Gathering - Pen testers rely on the Python
libraries: NMAP, Twisted, Beautiful Soup, Scapy, Socket,
Mechanize, and Devploit.
Stage 3: Threat Modeling - Pen testers rely on the Python libraries:
Python Framework and Threat-modeling 0.0.1.
Stage 4: Vulnerability Scanning - Pen testers rely on the Python
libraries: Vulners 1.5.13, Safety, and Scapy.
Stage 5: Exploitation - Pen testers rely on the Python libraries: Pymetasploit3 (to work with the Metasploit framework), Scapy, Socket, and BYOB.
Stage 6: Post-Exploitation - Pen testers rely on the Python libraries: Pymetasploit3, BYOB, and RSPET.
Stage 7: Reporting - Pen testers rely on the Python libraries: Sys,
Plotly, Pandas, and NLTK.
2)
ii) Beautiful Soup is a Python library for parsing and extracting data from HTML and XML files. It works with your parser of choice to provide smooth navigation, searching, and modification of the parse tree, and it commonly saves developers hours, if not days, of work.
If you're using a recent version of Debian or Ubuntu Linux, you can install Beautiful Soup with the system package manager:
$ apt-get install python-bs4 (for Python 2)
$ apt-get install python3-bs4 (for Python 3)
You can also pass a BeautifulSoup object into one of the methods
defined in Modifying the tree, just as you would a Tag.