Linux For Hackers
This is one of the most important hacking books to come out in a long
time. A must-read for all beginners looking toward cybersecurity as a
career pathway
STEVE , SENIOR PENETRATION TESTER
The Hacking Essentials series is one of my favorite sequential hacking
book series that I have read in my lifetime
ANONYMOUS HACKER REVIEW
TYE DARWIN
Edited by
DYE GUIND
GVS PUBLICATIONS
IF YOU ARE a beginner, you are probably wondering what Linux is. Let
me help you with the basics. Linux is an operating system just like Windows
and macOS, but with more security, stability and customizability. And you
know what? It is completely free and open-source (which means the source code is
not hidden or encrypted).
Seems awesome, right? But you might wonder why Linux is less popular among
ordinary users. Linux is an operating system that was developed by
professionals for professionals. It offers far less creative software when
compared with Windows and macOS. You can't find a Photoshop version for Linux,
because Adobe does not consider Linux a platform worth targeting for
its products.
Then you may wonder: who uses Linux as a daily work machine?
Linux may not be a creative operating system, but it is definitely the preferred choice
of developers, programmers, ethical and unethical hackers, database
managers and system administrators. Linux is complex and definitely has a
learning curve. Linux enthusiasts should know the commands and, in particular,
have an in-depth understanding of shell programming to achieve what they want with
the Linux kernel.
Are you overwhelmed by the possibilities of Linux? Don't worry, because we are
here to introduce Linux to you with simple explanations and instructions.
Welcome to the first module of this book, where you will get a simple
introduction to the Linux ecosystem and detailed step-by-step installation
instructions. Let us go!
"I have heard of Linux, but after learning the Linux system, what
can you do on it? Or what can the Linux system specifically
do?"
With this question, the book begins with an overview of Linux and its
relationship with open-source software. We will also talk about the application fields
and future development trends of Linux.
Red Hat Linux was the earliest personal version of Linux released by Red Hat. Its
1.0 version was released on November 3, 1994. Although its history is not as
long as that of some other famous Linux distributions, it is much longer than that of many
newer ones.
Since the release of Red Hat 9.0, Red Hat no longer develops a desktop Linux
distribution. Red Hat Linux stopped development and the company concentrated all
its efforts on the server version, Red Hat Enterprise Linux. On April 30, 2004,
Red Hat officially stopped supporting Red Hat 9.0, which marked the official end of desktop Red
Hat Linux. The original desktop Red Hat Linux distribution was merged with
Fedora from the open-source community to become the Fedora Core distribution.
Red Hat is currently divided into two series: Red Hat Enterprise Linux, which is
provided by Red Hat with paid technical support and updates, and the free
Fedora Core developed by the community.
Fedora Core
Fedora Core (FC) is positioned by Red Hat as a testing platform for new
technologies, and many new technologies are tested in FC first. If these new
technologies prove stable, Red Hat considers adding the software to Red Hat
Enterprise Linux.
Fedora Core 1 was released in late 2003, and FC is positioned for desktop users.
FC provides the latest software packages, and its version update cycle is very
short, only about six months. Because of the frequent version updates, performance and stability
cannot be guaranteed, so it is generally not recommended to use Fedora Core on
a server.
For users, Fedora is a free operating system with complete functionality and rapid
updates. For personal use, such as development or trying out new features, it is
a reasonable choice.
Kali Linux
As a hacker you need to be aware of the Linux distro that was built for
hackers. Previously it was called BackTrack, and it is maintained by a number of
volunteers. Where Kali Linux excels is in providing the tools that are essential for both
hackers and forensic specialists in a single operating system.
Nowadays it is true that Kali faces competition from Parrot OS, another famous
Linux distro, but we still recommend that beginners use Kali Linux to begin
their hacking career. Kali Linux is also free to download and fully follows
open-source policies. If you are serious about hacking, it is an obvious
advantage to use Kali as your main operating system.
CentOS
SuSE Linux
SUSE is the most famous Linux distribution in Germany and enjoys a high
reputation, but its fate has been quite turbulent. On November 4, 2003, Novell
announced the acquisition of SUSE. In January 2004 the acquisition was
completed, and Novell officially renamed the product SUSE Linux.
Novell's acquisition accelerated the development of SUSE Linux and
turned the free SUSE Linux into the openSUSE community project.
But in 2010 Attachmate announced the acquisition of Novell. After the acquisition, the development
of SUSE Linux slowed down. And only a few years later, SUSE changed ownership
again: in September 2014, Attachmate was acquired by the listed company
Micro Focus. Fortunately, SUSE officially announced that open source is the
foundation of SUSE's development, that it will continue to contribute to open source,
and that it will fully support openSUSE.
Ubuntu Linux
WHAT NEXT?
With this, we have completed a brief introduction to various Linux distros and
explained which operating system is best suited to each individual purpose. In the
next chapter we will talk about some tips that can help you become a proficient
Linux expert.
INSTALLING LINUX
Before learning the various operations of Linux, you must first install the Linux
system. Compared with installing Windows, installing Linux has many points
to pay attention to, such as choosing the appropriate installation method and
deciding on the partition naming scheme you want to use. This chapter takes the
latest Linux Mint release as an example to explain the installation process of a
Linux system in detail and help you get started.
INSTALLATION REQUIREMENTS
Generally, each Linux distribution publishes a list of minimum requirements and
recommended configurations, and different installation options
(such as a graphical interface or a character interface) place different demands
on the system. While you are downloading the installation disk image from the
official website of the Linux distro, the maintainers of the website will usually
display the minimum requirements. If not, look at the forums or do a
quick Google search for the details.
Linux has very low hardware requirements. Most machines that can run
Windows can be used to install Linux, and they will often run noticeably faster
than under Windows. The absolute minimum hardware configuration for installing Linux is not
discussed here, only some special applications and special installations,
because we do not know which distro you are going to install. For this
particular Linux Mint version, 1GB of RAM is the minimum requirement.
If you want to install a graphical interface (that is, the X Window System in Linux), or run
office software such as OpenOffice, the requirements on the graphics card and memory
are higher, preferably a discrete graphics card, otherwise the
graphical interface will not look ideal. Linux is not
particularly well known as a gaming operating system. If you are into gaming,
you are probably better off sticking with a Windows PC.
Most of the drivers on Linux are written by open-source developers based on the
information provided by the hardware manufacturers, and some of them are
difficult to write because the manufacturer refuses to provide the
information. In recent years, because Linux has become popular, many hardware
manufacturers have changed their attitude and actively assist Linux
developers by providing hardware information, but they are still conservative.
/dev/hda2
// The second partition on the first IDE hard disk
/dev/sdb3
// The third partition on the second SATA/SCSI hard disk
LOCALIZATION SETTINGS
a) Click the DATE & TIME part of the interface to select the system time zone
and time. If you are in America, select North America for Region and New York
for City. After the selection is complete, click the Done button in the upper left
to return to the main interface.
b) In the next interface, for the KEYBOARD option, just keep the default
English (US). Then click the LANGUAGE SUPPORT option and select the
language packs that need to be installed; you can choose the language packs
according to your own needs. Here we choose English and French as an
example, so two additional language packs will be downloaded. After the selection
is complete, click the Done button in the upper left to return to the main interface.
THE PREVIOUS CHAPTER was a good start for Linux enthusiasts, covering
different steps that help you understand what Linux is.
In this chapter we will talk about the system installation settings that are
normally used during a Linux installation. Follow along!
For the meaning of each partition, please refer to the next chapter for more
detailed information.
The root partition contains all the directories of the Linux system. If only the
root partition is allocated when the system is installed, the directories /boot, /usr and
/var mentioned above will all be included in the root partition, that is, they will occupy
the space of the root partition. If you create /boot, /usr and so on as separate partitions, they
will no longer occupy the space of the root partition.
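For example, a simple manual layout for a 100GB disk might look like this (the sizes are only illustrative):
/boot    1GB                ext4
swap     2GB                swap
/        remaining space    ext4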
After understanding some basic partitioning knowledge, start disk partitioning
below.
2) In the interface shown, first select the 100GB sda disk; two partitioning
options will then appear in the lower left corner. The first is Automatically
configure partitioning, which means automatic partitioning. The second is I will
configure partitioning, which means manual partitioning.
If you are not familiar with partitioning, you can simply accept automatic
partitioning. However, for learning purposes it is recommended
to choose manual partitioning; even for a novice, manual partitioning is more
helpful for understanding system partitions. Select manual partitioning here.
After selecting, click the Done button in the upper left corner to enter the next interface.
WHAT NEXT?
While this section may seem complex, it is very important for making your Linux
distro work. If partitioning is not done properly, your hard disk may become prone to errors.
In the next chapter, we will talk about the networking settings that need to be filled in
before installing the system. Follow along!
THIS CHAPTER TALKS about the networking settings that you need to enter
before starting the installation procedure of Linux Mint. As network
settings are of utmost importance for the functioning of a system, it is
mandatory to learn about them. Let us go and configure the network settings to
complete the installation procedure.
root@server : netstat -i
// Look for your Ethernet interface in the output information.
WHAT TO DO NEXT?
Congratulations! You have now installed Linux Mint on your system. After a
successful installation it is highly recommended to update the system so that
there are no lags or bugs in the installed operating system. Repeat the update
every couple of weeks to keep the system current.
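On Linux Mint, which is Debian-based, a typical way to do this from a root shell is:
root@server : apt-get update && apt-get upgrade
// Refresh the package lists and then install all pending updates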
WHAT NEXT?
With this, we have almost completed the first module of this book. Even
though it is only a short introduction to Linux and its installation procedure, this
module opens the much more important roadmap that you are going to follow
to become a Linux enthusiast and a cybersecurity specialist.
In the next module of this book you will learn about important Linux topics such
as the Linux file system, process management and log analysis. There are a lot of
other topics that we are going to discuss. Follow along!
IN THE PREVIOUS module of this book you learned the basics of Linux
that are absolutely necessary to find your way around the Linux world. Learning Linux
is not an easy task, as it has been developed by thousands of developers over more than
twenty years.
A small tidbit:
Do you know how many lines of code it takes to make the Linux kernel
work? The kernel is millions of lines of code and still growing. So, if you want to learn
all the components of Linux and every driver that runs with
the Linux kernel, your approach is wrong.
apt-get update
// Use this command to refresh the list of available software in one step
apt-get upgrade
// Use this command to upgrade all the software packages that are installed on the system
apt-get install --only-upgrade python
// This command checks whether python has any pending updates and, if there are any,
installs them
Now, all you need to do is search for your preferred package in the search field of
the GUI installer so that it can be installed seamlessly along with all of its
dependencies that are required to run the software.
Removing software is also easy with a GUI-based installer. All you need to do is
find the software name and select it to remove it from your system along with all
of its dependencies.
Now, in the next section of this chapter, we will talk about Git, a famous
developer tool that needs to be mastered if you are into open source and working
in teams. You may have already heard of GitHub, which uses Git to
share code with others.
GIT INSTALLATION
If you have never heard of GitHub, it is a Git-based code-sharing platform that offers
both free and professional plans. In the free plan your code repositories can be
made public, which is how most open-source projects share their work.
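Git itself is a small package; on a Debian-based distro such as Mint or Kali you can install it and check the version like this:
root@server : apt-get install git
root@server : git --version
// Prints the installed Git version to confirm the installation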
What does Git do exactly?
Git records changes to the source code of a project as commits, so that new code
can be merged in without breaking the existing code.
It becomes especially handy when you are working in teams and need to check the
compatibility of newly written code. It also makes the testing process easier for
enterprise applications.
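As a minimal sketch of the commit workflow just described (the file name and message are only examples):
root@server : git init
root@server : git add exploit.py
root@server : git commit -m "Add first version of the script"
// Each commit records a snapshot of the tracked files along with a message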
root@server : ps
// This displays all the processes that are running in the Linux system
PID
45637
// The number shown under PID is the identifier of a process
If you want to make any changes to a particular process, you should note
down its process number. The output will also show other information, such as
the process name, the running time and the command it was started with.
If you opened the process from a terminal, bash will also appear in the
output information.
Note :
Finding processes is sometimes an overwhelming task because you need to
search through dozens of running processes. Always make sure that you are
confident before changing the status of a process. If you end system
processes by any means, your Linux system may stop responding until the
next reboot.
root@server : ps aux
// This prints an output that describes a lot of information about a process
The aux options also show which user started each process. This
is essential for forensic investigators, because they can more easily track the origin
of an attack this way.
This will display an output like we have shown before with all the information
that one needs to understand about the process.
root@server : top
// This displays the running processes, sorted so that the ones consuming the most system resources appear first
So, when system resources run short, processes with a low priority (a high nice
value such as +10) are the first to be starved of resources. Low-priority processes
should never be system processes. System administrators usually spend a lot of time
assigning the right priorities to processes. If you are a hacker, you should
also learn to prioritize so that you can manage resources effectively while trying to steal
information.
At any time you can change the priority order of the process using the renice
command.
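For example, to lower the priority of the process with PID 45637 seen earlier (the nice value 5 is just an example):
root@server : renice -n 5 -p 45637
// A higher nice value means a lower priority; only root can assign negative, higher-priority values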
In the next section, we will talk about killing processes. Killing
processes is a much-hyped skill for hackers, as they often need to switch off the
intrusion detection systems that administrators use.
In the last section of this chapter we will discuss scheduling processes.
root@server : at 9:00pm
// This starts the at utility, which can schedule a process for the given time
at> /root/users/startscript
// At the at> prompt, enter the script to run and press Ctrl+D to save the job; at the
given time the script will start as a process
By using the scheduling functionality, hackers can run remote programs even after
they lose access. You can bind programs to Metasploit payload scripts to
make them send information to your mobile or server within seconds.
WHAT NEXT?
With this, we have completed a brief but complete introduction to process
management. In the next chapter, we will talk about file management in Linux
with plenty of examples. Let us go!
Users - Users are individuals who can access a particular area of the system.
Groups - Groups are bundles of users who can access particular resources together.
We will now discuss the different types of users in Linux in more detail.
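The command discussed below is not reproduced in this excerpt; based on the explanation, it would look like this (the user name "sample" comes from the text):
root@server : chown sample /home/pictures
// Make the user "sample" the owner of the /home/pictures directory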
In the above command, "chown" is the command that tells the Linux
kernel to change ownership. "sample" is the name of the user, and
"/home/pictures" is the directory whose ownership is being given to that user.
root@server : ls -l /var/games
This command will display output describing your current status as a user
and whether or not you have access to this directory. You will also see
information about the owner of the file. Another significant item is the
time when it was last modified. If you are a system administrator, you
should constantly check file modification times to stay safe from
attackers who try to gain permissions at any cost to steal sensitive
information.
If you are a hacker trying to gain control over a system, you should know a way
to modify permissions in Linux. This is where the "chmod" command comes into
use.
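A small illustration (the file name and mode are only examples):
root@server : chmod 755 sample.sh
// The owner gets read, write and execute permissions; everyone else gets read and execute
root@server : chmod u+x sample.sh
// Alternatively, add only the execute permission for the owner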
IN THE EARLY '90s it was almost impossible to track hackers who used techniques
like phreaking to make free calls over telephone networks. However, the
rapid growth of the Internet and its potential to earn money made ISPs restrict
and track their users. Even governments nowadays restrict their citizens from
accessing websites that are illegal according to their rules. For example, a lot of
countries, such as India and China, restrict torrent websites. China even censors websites
such as Google and Facebook because of concerns over how they handle user
privacy.
While banning websites is sometimes used as a way to damage the economic growth of rival
countries, nowadays user privacy is also a huge issue that is being taken
seriously. Recently the US government decided to restrict its citizens from
downloading TikTok, a famous video-sharing social networking platform, because
of claims that it misuses user data. While some of these claims
may be debatable, hackers who are trying to attack the systems of enterprises that
have quality intrusion detection systems need to make sure that they are not being
tracked. This chapter will try to pass on some valuable information to you. Let us
go!
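The command being described below is not shown in this excerpt; on most Linux distros it would be something along these lines (the destination is only an example):
root@server : traceroute www.example.com
// Lists every hop between your machine and the destination along with packet timings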
This will display a trace route for the URL you have provided. It will
show the number of hops taken to reach the destination and will also
provide the size of the network packets.
With this information you are now all set to understand the philosophy
of the Onion network. Let us find out more about it.
root@server : proxychains firefox
// Prefix the command you want to run with proxychains so that its traffic is routed through the configured proxies
No matter what task you do, if you want to be anonymous you can put the
proxychains command at the beginning so that the traffic is routed through
the proxies listed in its config file. In the config file of proxychains
you can enter a single proxy address or several of them. Advanced
hackers use random and multiple proxies to improve their security.
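As a sketch, the config file (usually /etc/proxychains.conf) ends with a proxy list; the default entry points at a local Tor service:
[ProxyList]
socks4 127.0.0.1 9050
// Add one proxy per line; enabling random_chain higher up in the file makes proxychains pick proxies at random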
In the next section, we will talk about Virtual Private Networks (VPNs) and
encrypted mail in detail. Let us go!
WHAT NEXT?
With this, we have given a complete introduction to the security concerns that a
beginner Linux user may have. As a hacker you need to constantly delete your
logs and cookies so that you are not tracked by network providers or internet
giants. With this knowledge you are now all set to learn about the logging
system in Linux. This is the final chapter of this module and gives us good
clarity about an important skill that every hacker should master. Let us jump
into it right now.
IN THE PREVIOUS chapter we talked about ways to stay secure and
anonymous on the internet. As a hacker, staying in the dark while stealing sensitive
information is usually your main motto. As a fellow cybersecurity enthusiast, I
suggest you build a good grasp of the different logging systems in
Linux distros so you understand what is actually going on in the system
you are trying to gain access to. Deleting sensitive log files that could reveal
your identity is also an important task for hackers to master. To help you
understand the different logging systems and the daemons that produce the log files,
we have written this chapter with plenty of examples. Follow along!
This command searches all the files in the Linux system. Make sure that
you have obtained root permissions using the 'su' command so that the command
can search every system file. After a couple of seconds the output
will show all the locations where log files are present.
For example :
/etc/home/example.rsyslog
This is the format of the files that you will find with the above-mentioned
command. After selecting a log file to analyze, head over to its location and
open it with your favorite text editor.
Now you will see a bunch of default directives that exist whenever a log file is
handled by the 'rsyslog' daemon. Don't edit anything; head over to the end
of the file, where you will find a section with the title 'RULES', as shown
below.
Here is the format :
#### RULES ####
## Enter rules below this line; each rule has the form:
## facility.priority    action
kern.err    /etc/home/sample.syslog
## End of the rules section
WHAT NEXT?
With this, we have completed the second module of this book, where we
discussed a lot of basic information about the Linux operating system that can
help us improve our skills as hackers. As a hacker, to exploit systems you need
good knowledge of programming languages such as Python. In the
next module of this book we will discuss the different components of
Python in detail, along with dozens of examples. Head over to the next module to
gain further knowledge and become a better hacker. Let us go!
// In a language such as C you must declare the variables first and then assign them:
int a;
int b;
a = 23;
b = 46;
// In Python the same thing can be written in a single line:
a, b = 23, 46
Writing the same logic in fewer lines of code keeps programs concise and readable.
In the next section, we will talk about variables with examples. Follow along!
first = 3
// Here ‘first’ is the variable with a value 3.
Note : Python automatically recognizes 3 as a value of the 'int' data type, unlike
traditional programming languages, which need variables to be declared before
they are used.
first = 3
first = 5
print(first)
Output :
5
"""
This is a multi-line comment
This is an excellent comment
"""
i = 5
print(i)  # This is a single-line comment
IN THE PREVIOUS chapter we introduced data types and variables, which
are the foundational blocks of programming. In this chapter we will
introduce conditionals, loops and advanced data structures with
examples. Let us start!
i = 3
j = 5
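The comparison block that the explanation below refers to is not reproduced here; a minimal reconstruction (the printed messages are placeholders) would be:
if i == j:
    print("The values are equal")
else:
    print("The values are not equal")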
Explanation :
This example provides a simple scenario that explains the importance of control
statements in Python.
i) First, two variables are created with values, and then they are compared
using the if control statement.
ii) Here '==' is the equality operator. If the values in both
variables are equal, the string below it is printed on the screen.
iii) The next block of code, the 'else' block, has its print statement executed if the
condition is false.
In real-world programs we can increase the complexity using nested control
statements and several control blocks in a single program. Control
statements in Python are handy and help us create efficient programs that
interact with user input.
Now, with this done, we will talk about loops, which execute a statement or
block again and again. Let us go!
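As a quick taste, a minimal Python for loop that prints the numbers 0 to 4 looks like this:
for number in range(5):
    print(number)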
WHAT NEXT?
With this, we have given a complete introduction to conditional statements and
loops in Python. While these topics may seem complex, they are very useful for
hackers who write programs in various scripting languages.
Remember that understanding the goal you want to achieve is essential for
hackers. In the next section of this module we will discuss other advanced
topics. Follow along!
WHAT IS PIP?
Pip is a package manager that manages all the important packages that Python
offers in its repository, making them easy to download and upgrade. To install pip on your
system, use the command below.
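The exact command is not shown in this excerpt; on a Debian-based distro such as Mint or Kali, a common way (an assumption) is:
root@server : apt-get install python3-pip
// Installs pip for Python 3 from the distribution's repositories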
This installs pip, a lightweight package manager that can efficiently resolve and
link all the required files while installing.
The command above displays information about the package you
are trying to download. Mostly the information covers the package name,
its owner, the number of times it has been downloaded and its creation date.
Some packages also provide the license agreement details along with the URL of the project page.
This command will decompress the package into a separate folder in the same
directory.
This will install the package/module in the Linux system. After completing the
process you can run the package commands in the terminal to check whether the
package is installed or not.
How to use modules in your own code?
While modules can be run as separate programs, they are most often used
inside your own code. To use a particular module in your own program,
you use the 'import' statement.
Here is the format :
import packagename
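For example, importing the standard math module and using one of its functions (any installed package works the same way):
import math
print(math.sqrt(16))   # prints 4.0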
WHAT NEXT?
With this, we have completed a simple introduction to modules in Python. In the
next chapter we will be discussing advanced topics such as functions and
OOP with detailed examples. Let us go!
f(x) = 2x+3
Where x is your preferred number.
def functionname(x):
    return 2 * x + 3
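Calling it then works just like the mathematical definition above:
print(functionname(5))   # prints 13, because 2 * 5 + 3 = 13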
len(“thisisgreat”)
Output :
11
4) max , min
These functions are usually used to determine the highest and lowest
elements in a Python list.
Here is an example:
nice = [ 3, 5, 8]
// This is a list in Python
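Applying the two functions to that list gives:
print(max(nice))   # prints 8
print(min(nice))   # prints 3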
We will discuss lists and dictionaries in detail in the next chapter. For now,
think of a list as something that holds items.
In the next section, we will talk about object oriented programming topics such
as Classes, objects and inheritance in detail. Follow along!
class Car:
    def driving(self):
        print("The car is driving")
// This is a class in Python with one method
mycar = Car()
mycar.driving()
// This creates an object of the class and calls a method on it
WHAT NEXT?
With this, we have completed a brief introduction to functions and OOP concepts
in Python. As an aspiring hacker, these concepts are usually more than enough to
get you started creating programs that automate workflows and handle
some basic tasks of a Linux system administrator. We will now talk about bash
scripting in detail in the next chapter. Let us go!
#! /bin/bash
echo "This is a sample example"
# This is a comment
Explanation :
The above bash shell program, which can be executed on a Linux computer,
demonstrates three important Bash concepts that every hacker needs to
understand and use in their own programs.
i) First, we will talk about the line that starts with the shebang (#!) symbol. It
tells the system which interpreter should run the script; if a different interpreter
is specified, that one is used instead. For example:
#! /usr/bin/python
// This runs the script using the Python interpreter
ii) In the second line, we used echo to display a message on the computer screen.
Echo is a shell built-in program that can be used to display or print information
according to the user's wishes.
iii) The third line, which starts with '#', is a comment.
Comments are used by programmers to help others understand what
they have written. However, remember that everything that is
written in a comment is ignored by the bash interpreter.
In the next section, we will learn how to run a Bash program on a Linux
machine.
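Assuming the script has been saved as sample.sh, the permission step referred to below is done like this:
root@server : chmod +x sample.sh
// Adds the execute permission so that the script can be run directly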
You can check the permissions again using the ls -l command to confirm that
you have done the procedure right. After checking, head over to the directory
where the file is present and enter the following command to run the bash shell
script.
root@server : ./sample.sh
Here ./ means that the designated file should be looked for only in the current
directory, which is normally not part of the shell's search path. This is good
practice and is recommended by Linux system administrators when running a
Bourne-again shell script from a terminal.
When you run it, the following output will be displayed on the computer screen :
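Based on the echo line in sample.sh, the output is:
This is a sample example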
WHAT NEXT?
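The script that the following explanation walks through line by line is not reproduced in this excerpt; here is a minimal reconstruction based on that explanation (the question wording is only illustrative, the variable names come from the text):
#! /bin/bash
# This program asks the user a few questions and greets them with the answers
echo " What is your name? "
read name
echo " Which city are you from? "
read city
echo " What is your reason to use this program? "
read reason
echo " Hi $name, You are from $city and You are here for $reason "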
Explanation :
The program mentioned may be somewhat overwhelming if you are new to
programming or scripting. But don't worry, because we will
explain each line of the program precisely and in simple terms for your
understanding.
i) As said before, the first line of the program, which starts with the shebang (#!),
tells the Linux machine that it needs to run the code below using
the interpreter located at "/bin/bash". If by any chance bash is
not installed on the system, this location will be empty and the program will
exit with an error. Remember that almost all Linux systems come preinstalled with the
Bourne-again shell, as it is the default shell environment recommended by Linux
enthusiasts. If you are using another shell environment, such as the Korn
shell, make sure that it is installed on the system and that the location of the
interpreter is correct.
ii) The second line of the program is a comment. The comment states the
main purpose of the program, and you can see that a shell script run by the
Bourne-again shell can contain multiple comment lines.
iii) Now the main logic of the script begins. In this line an echo is used to
display a question to the end user. 'Echo' not only displays static information
but can also be used to prompt the user for input.
The user then has the chance to enter data, because in the next line a
variable is created using the 'read' keyword to store that data in this particular
variable.
Let us assume that the user entered his name as
Tom
Now the entered data is read by the shell program and saved in the
variable 'name'. In a similar way the script obtains information from the end user for the
next two questions, about the city and the reason for using this program. Let
us assume the user entered 'New York' and 'Knowledge'.
Now we have three variables, 'name', 'city' and 'reason', whose data we need
to display in a string. We will learn how to do that now.
iv) In the next line, we display all the data stored in the variables by putting '$'
in front of each variable name.
echo “ Hi $name, You are from $city and You are here for $reason “
Here name = Tom, City = New York , reason = knowledge
Hi Tom, You are from New York and You are here for Knowledge
That's it. Getting user input, storing it in variables and using it whenever
you need it is the basic foundation of scripting that interacts with different
users and systems.
WHAT NEXT?
In the next chapter, we will provide an example to help you understand more
complex scripting skills, such as using functions and loops to create efficient
automated programs. Before heading to the next chapter, take a quick look
at the operators and conditionals from the Python lessons we discussed before.
Shell scripting uses the same philosophy, so we need you to be familiar with these
topics so that we can walk you through an example. Let us go!
#! /bin/bash
# This program is a scanner that scans a specified port on a host to check whether it is
open on the local network
nmap -sT 191.123.111.32 -p 2378 -oG resultsfile1
cat resultsfile1 | grep open > resultsfile2
cat resultsfile2
# End of the bash script file
Explanation:
The above program may seem complex for beginners, but believe us, it is simple
and does what it is meant to do.
i) Like every time, we start the script file with the #! line pointing at the bash
interpreter. In the next line we follow with a comment that explains what the script does.
WHAT NEXT?
With this, we have completed a brief introduction to the complexity that the Bash
interface offers and covered most of the essential information about Bash scripting.
Before heading on to shell programming in the next module, we will
explain some of the built-in commands that Bash ships with in Linux in the
bonus chapter of this module. Let us go!
Output:
ii) cd
This is one of the most used Linux commands, and it is built into bash. All it does
is change the current directory of the user in the terminal. You may need to change
directories while performing tasks, because some commands only work in the
current directory.
Here is an example:
cd /bin/home
After you execute the above command in the shell interface, you will be in that
particular directory. You can now use ls to check all the files in the current
directory.
iii) pwd
Also remember that you can find the current directory you are in using
the pwd command.
Here is the command :
root@server : pwd
This will print the absolute path of the directory you are currently in.
iv) Process management commands
We all know that in Linux everything runs as a process. There are background
processes and foreground processes.
root@server : vmware &
// The trailing & starts the program as a background process
root@server : bg %1
// Alternatively, bg resumes a stopped job (here job number 1) in the background
As a hacker you need to be aware of which programs are running in the background,
because most antivirus software runs as background processes. Also, when you plant
an exploit in the target system, you need to make sure that it will not be found
by the system administrator.
B) exec
This is a built-in bash command that runs a program by replacing the current shell
process instead of starting a separate child process. It is occasionally used when
you want the terminal session itself to turn into the program you are launching.
Here is the command :
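For instance (the program name is only an example):
root@server : exec vmware
// The shell process is replaced by vmware, so when vmware exits the terminal session ends too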
You can check the running process using the following command:
root@server : pidof vmware
// Prints the process ID of the running vmware program
v) umask
This built-in command changes the default permissions that newly created files
and directories receive. We usually use chmod to change permissions on existing
files, but umask controls the defaults and can be used for more far-reaching
changes whenever needed.
Here is a command :
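For example (022 is the usual default mask):
root@server : umask 022
// New files are then created with 644 permissions and new directories with 755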
readonly first
// Makes the variable "first" read-only so that its value can no longer be changed
B) export
Whenever you create variables or functions in a shell, they are normally not
visible to other programs or scripts started from that shell. The way to make them
available to those child processes is the export command that is available in
bash.
Here is a command :
export first
// The variable "first" is now passed on to any program or script started from this shell
WHAT NEXT?
With this, we have completed a brief introduction to Bash scripting in Linux. All
the examples should have helped you understand the foundations of scripting.
With that in mind, you are now ready to learn about the shell
programming that Linux runs on in much more detail in the next module. Follow along and
experiment with the more complex topics for interacting with the system resources
that Linux offers. Let us go into another exciting module of this book.
WELCOME to the fourth module of this Linux for Hackers book, which is
designed to help beginners interested in hacking acquire essential skills using
various techniques. In the previous module we already introduced a bit
of bash scripting, which uses a shell interpreter. We hope you will have fun
learning the advanced skills that a shell programmer needs to master.
Also, in the previous modules of this book bundle we coherently discussed
the fundamentals of Linux with various in-depth examples. In this module we
will talk about shell scripting along with a few tips and tricks that can help you
create well-crafted programs.
THIS CHAPTER IS the beginning of this module and introduces
the basics of shell script files along with the various advantages they bring.
We recommend that you read this chapter thoroughly before skipping to the
others. It is also recommended to experiment with the given code samples on
your own Linux machine. There is no better way to learn Linux than doing it
yourself on a Linux system. Do some quick research on the different Linux distros
and settle on the one you feel most comfortable with. From a minimal Arch
Linux system to a full-featured and visually polished Debian-based distro, you
have everything to choose from.
WHAT IS SHELL?
Shell is a very simple yet powerful programming language. Many Linux system
maintainers use shell scripts in their everyday work, but not everyone is good at
writing them. Once you master the rules and skills of writing shell
scripts, your work will become easier and more efficient!
Since 1991, Linux has rapidly grown into the preferred operating system for
enterprise server products, and more and more IT enterprises have adopted
Linux as their server platform operating system to provide customers with high-
performance and high-availability business services. Linux relies heavily on shell
programming, which is therefore a must for enthusiasts who want a
career in Linux server and database administration. Ethical hackers and
application developers also use the shell for their own purposes.
#!/bin/bash
<< COMMENT
Everything between the two COMMENT keywords is treated as a comment
COMMENT
Note that the keyword after the << symbol can be any string, but the same
keyword must be used when ending the comment. If you start a comment with
<< ABC, you must also use ABC (letters are case sensitive) to finish the
comment.
In the next section of this chapter, we will have a brief discussion of the
various execution modes of shell scripts (or, for that matter, any script files).
root@server : ./sample.sh
// This will fail with a permission error if the script does not have the execute permission
root@server : bash sample.sh
root@server : sh sample.sh
// Other main ways to execute the script; these work even without the execute permission
root@server : pstree
From the above command output, we can see that the first process started by the
computer is systemd, and then N subprocesses are started under this process,
such as NetworkManager, atd, chronyd and sshd, all of which are systemd
subprocesses. Under the sshd process there are two sshd sub-processes. Under
those two sub-processes, a bash interpreter sub-process is started, and a pstree
subprocess runs under one of those bash processes.
// sleep.sh
#!/bin/bash
sleep 5000
root@server : chmod +x sleep.sh
root@server : ./sleep.sh
root@server : pstree
It can be seen from the output that a subprocess for the script file is opened under the
bash terminal, and a sleep command is executed through the script file. Back on
the first terminal, use Ctrl+C to terminate the script executed before, and use the
bash command to execute the script again.
Finally, use the pstree command on the second terminal to observe the
experimental result. The result is similar: a bash subprocess is opened under the
bash process, and a sleep command is executed under that bash subprocess.
4) Execution mode without opening subprocesses
Next, let's take a look at the case of execution mode without opening
subprocesses.
root@server : . sleep.sh
root@server : source sleep.sh
// Both . and source run the script inside the current shell, so no new bash subprocess is created
root@server : bash sleep.sh
// For comparison, bash sleep.sh still starts a child bash process to run the script
With this, we have completed a brief introduction to the ways a shell script can
be executed. As a Linux system deals with many users and groups, it is
important not to run a script in a way it is not supposed to be run. With these
different ways of running programs covered, we can now start learning about input and output
statements in shell programming. Let us go!
As can be seen from the above script file, the echo command can output any
string of messages, and multiple echo commands can be used to output multiple lines.
Note:
There must be at least one space between the -e option and the content to be given as
output.
We can also output hello, wrap the line while keeping the cursor at its
original position (that is, just after the letter o), and then output
world. We can also have OK displayed in bold. \033 or \e can be followed by different
codes to set different terminal attributes.
Try it:
Do you want other colors, such as 92m? You can try them yourself!
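As a small sketch of these escape codes (the color numbers follow the standard ANSI scheme):
root@server : echo -e "\033[31mOK\033[0m"
// Prints OK in red; 32m gives green, 92m bright green, 1m bold, and 0m resets the attributes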
In addition to defining the font color, style and background of the terminal, you
can also use the H code to define the position attribute. For example, you can display OK
in the third row and tenth column of the screen with the following command.
Here is the command:
#!/bin/bash
clear
echo -e "\033[3;10HOK"
printf "details"
Note: The parameters of the printf command are, in general, whatever needs to be produced
as output. Commonly used format strings and their descriptions are shown
for your better understanding of what you are dealing with. In the next section,
we will further expand our knowledge using an application case.
Application Case:
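The command under discussion is not reproduced in this excerpt; it would be:
root@server : printf "%8d\n" 64
// Prints the integer 64 right-aligned in a field that is 8 characters wide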
The format %8d of this command sets the print width to 8 and displays the
integer 64 in a right-aligned manner.
Note that there are six spaces in front of the output 64 of this
command: 6 spaces plus 2 digits together are 8 characters wide. If you need left
alignment, you can use %-8d to achieve that effect, as in the following
command.
root@server : printf “%-8d” 64
Note that after the printf command outputs information, it does not add a new line by default.
You can use \n if you need to wrap lines.
In order to better observe the left and right alignment, the following example
prints two | symbols to mark the boundaries.
root@server : printf " | %-11d| \n" 64
This left-aligns the output 64: the output content takes up 11 characters in total,
the number 64 takes up 2 characters, and it is followed by 9 spaces.
The default printf command will not wrap after the output, but it can wrap by
using the \n symbol.
This displays the octal value of 10; converting octal 12 to decimal gives exactly 10.
0x11 represents hexadecimal 11, and the printf command converts hexadecimal 11
into its decimal integer output (17). 011 represents octal 11, and printf converts
octal 11 into its decimal integer output (9).
When using %d to print a very large integer, the system reports that it is out of range;
the largest value that can be printed is the largest 64-bit signed integer
(9223372036854775807).
If you need to print an even larger integer, you can use the %u format, but %u also
has a maximum displayable value (the largest 64-bit unsigned integer,
18446744073709551615), and when the number is greater than that it cannot be printed.
In the next section, we will talk about reading user input with the commonly
used 'read' command.
3) Read the user's input information by using the read command.
Before this, we learned how to output data from a shell script; now we discuss
how to solve the input problem. In shell scripts, the read command provides
the data input function.
Function description:
The read command can read a line of data from standard input.
The read command has the following syntax format.
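The syntax table itself is not reproduced here; in its simplest form the command looks like this (the option list is only a partial sketch):
read [-p prompt] [-s] [-t seconds] variable
// -p prints a prompt, -s hides the typed characters, -t sets a timeout; the input is stored in the named variable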
Application case:
Note that after the password prompt here, when the user enters the password
672, the computer displays the password in plain text on the screen, which is
not what we want to see!
What to do?
The read command supports the -s option, which prevents whatever the user
types from being displayed, while read still receives the data entered by the user.
This script reads the user name and password entered by the user through the
read command, and when reading the password, its content is not
directly displayed on the screen, which is safer.
The user name and password entered by the user are stored in two variables, user
and pass respectively. Then the values in the variables are called with $: the
useradd command creates the system account, and the password mechanism is
used to configure the password for that user.
Here is the script:
#! /bin/bash
# Read the data
read -p " Enter the name " user
read -s -p " Enter the password " pass
useradd "$user"
# Set the password for the new account (chpasswd sets it non-interactively)
echo "$user:$pass" | chpasswd
root@server : who
// Lists the users who are currently logged in to the system
b) ss
For another example, the ss command can view the list of ports that all services in the
Linux system listen on. However, the ss command itself has no flexible filtering
capability, while the grep command has a powerful and flexible filter, so
these two commands can be combined through a pipeline. Obviously, without the
grep filter there is a lot of unfiltered data, which is not clear enough.
After the ss command writes its output into the pipeline, the grep command reads
the data from the pipeline, filters out the lines containing sshd from the
many lines of data, and the final output is only a couple of lines.
root@server : ss -tulnp | grep sshd
// Pipe the list of listening ports into grep so that only the sshd lines remain
The previous echo command will not produce an error message. However, when
using the ls command, the final output is divided into standard output
and error output depending on whether the file exists or not. In this case, if we
only use > or >>, we cannot redirect the error information to a file.
Here we need to use 2> or 2>> to redirect the error output.
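A minimal illustration, assuming the file /something does not exist (the same example the following paragraphs discuss):
root@server : ls /something 2> errors.txt
// The error message goes into errors.txt instead of being shown on the screen
root@server : ls /something 2>> errors.txt
// 2>> appends the error output to the file instead of overwriting it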
The second command originally has no error message, but by redirecting the
correct output to the error output, the final hello is displayed on the screen
through the error output channel.
Under normal circumstances, because the system does not have a /something
file, the ls command will report an error, and the error information will be
transmitted to the display through the error output channel.
However, when we use the 2>&1 redirection, the error message is
redirected to the standard output. Although the screen still ends up
displaying the error message, it is transmitted to the monitor through the standard
output channel.
Under normal circumstances, the echo command displays messages on the
screen through standard output. When we use 1>&2, the system redirects the
correct output to the error output; the screen still displays hello, but it is
transmitted to the monitor through the error output channel. Finally, both the
correct and the error information can be imported into a file.
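The commands being described would look roughly like this (the file names are only examples):
root@server : ls /something > out.txt 2>&1
// Both the normal output and the error output of ls end up in out.txt
root@server : echo hello 1>&2
// hello is printed, but through the error output channel instead of standard output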
No matter how much data is written to this device file, it is swallowed and
discarded by the system. If there is output that we no longer
need, we can use redirection to send that output into the device
file.
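The device file being described is /dev/null; for example:
root@server : ls /var/log > /dev/null 2>&1
// Every line of output, normal or error, is silently discarded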
Note:
Once data is sent into the black hole, it cannot be retrieved. In addition
to redirecting the output, you can also redirect the input.
The default standard input is the keyboard. But the keyboard needs
human interaction to complete the input.
For example, with the following mail command, after executing the command the
program enters a state of waiting for the user to type the mail content. As
long as the user does not type the content and finish with a line containing only a
dot to indicate the end of the message, the mail program stays in this state.
All of the above email text has to be entered manually, which becomes a problem
when, in the future, we need scripts to send emails automatically.
To solve this problem, we can use the < symbol for input redirection. The < symbol
is followed by a file name, so that the program reads its data from the file
instead of reading the input from the keyboard.
The << symbol (also called a Here Document) means that the content you need is
right here. Let's take a look at an example where the data is read through a Here
Document and then exported to a file through output redirection.
Here is the script:
#! /bin/bash
# Write the mail body into fax.txt using a Here Document
cat > fax.txt << EOF
This is where we want to send
EOF
# Then send the mail without any keyboard interaction (the recipient here is only an example)
mail -s error root < fax.txt
In Linux the fdisk command is often used to partition disks, but this
command is interactive, and now we want to write a script that performs automatic
partitioning, automatic formatting, automatic mounting of partitions and so on. To solve
this problem, a Here Document can also be used. Let's write such an automatic
partitioning script.
Warning:
This script will delete all data on the disk, and all data will be lost!
Here is the script:
#! /bin/bash
mkfs.xfs /dev/sdb
# This formats the disk with the XFS file system
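The partitioning part of the script, which the prose describes and which would run before the formatting step above, is not reproduced in this excerpt. A sketch using a Here Document (the answers shown simply accept fdisk's defaults to create one primary partition, and are an assumption):
fdisk /dev/sdb << EOF
n
p
1


w
EOF
// Each line answers one of fdisk's interactive questions: new partition, primary, partition number 1, default start and end sectors, then write the changes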
In order to improve the readability of the code when writing scripts, it is often
necessary to add extra indentation. However, when using << to import
data into a program, any indentation in the content is passed to the
program along with the content.
In this case the Tab key only serves as indentation, and we don't want to pass it
to the program. If necessary, you can use the <<- symbol to redirect the input, so
that the system ignores the Tab characters in front of the data lines and in front of
the closing EOF. Note that only Tab characters can be ignored this way; if the body
of the Here Document is indented with spaces, they are not stripped.
With this, we have completed a brief introduction to the redirection of input and
output in shell programming with the help of Linux system
commands. Before proceeding to variables and other advanced
concepts, we will talk about the behavior of the various quotation marks when dealing
with shell scripts. They are a common source of mistakes and a bit of a learning
curve. Follow along!
root@server : touch x y z
// Without quotation marks this creates three separate files: x, y and z
root@server : touch “ x y z “
// With quotation marks this creates a single file whose name contains spaces
root@server : rm -rf x y z
// This deletes the three separate files created first
root@server : rm x y z
// This displays an error, because no files named x, y or z remain
Also, we should analyse how many files there are here, and what exactly the file
name is.
root@server : rm “ x y z”
// With the quotation marks, the single file named “ x y z” is removed
In the earlier rm command without quotation marks, the system understood that
three files x, y and z should be deleted, but in fact there is only one file left,
named “ x y z”, and the spaces are part of that file name.
In the next section, we will further discuss how files can be deleted in
practical terms.
root@server : echo #
// Without quotes the # starts a comment, so echo prints an empty line
root@server : echo ‘#####’
// Inside single quotes the # characters lose their special meaning and are printed literally
root@server : echo $$
// $$ is expanded by the shell to the process ID of the current shell
root@server : echo ‘$$’
// Inside single quotes $$ is not expanded and is printed literally
2) Command Substitution
Finally, we will explain the ` symbol (the back quotation mark). The back quotation mark
is a command substitution symbol, which replaces the command with the
output result of that command.
Let's take an example below.
Use such a command to back up all the data in the /var/log directory to the
/root directory; without command substitution the file name of the backup is fixed. If the
system needs a backup whose name changes, for example with the current date,
command substitution solves the problem.
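A sketch of such a backup command (the archive name is only an example):
root@server : tar -czf /root/log-`date +%F`.tar.gz /var/log
// The back-quoted date command is replaced by its output, so each day's backup gets a different file name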
CUSTOM VARIABLES
First, let's look at how custom variables are defined and called. In a Linux system,
the definition format of a custom variable is "variable name=variable value";
the variable name is only an identifier used to find the variable value, and
it has no other function.
root@server : test=567
root@server : echo $test
Output :
567
At this point, you need to use curly brackets { } to separate the variable name from other
characters, for example ${test}.
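For instance (the suffix "files" is only an example):
root@server : echo ${test}files
// Prints 567files; without the braces the shell would look for a variable called testfiles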
Although these three commands do not use curly brackets to separate the variable name from
other characters, the final return value is not blank. This is because a shell variable
name can only be composed of letters, numbers and underscores, and cannot
include special symbols (such as hyphens, colons or spaces),
so the system does not treat the special symbol as part of the variable
name; it understands the variable name as test followed by another string
unrelated to the variable name.
Let's look at a simple case of using variables.
In the next section, we will analyze a script to understand in depth the other
possibilities these variables offer.
Script case analysis is as follows:
Before starting the analysis , here is the script that we are going to use. You can
change variable names or other values according to your own choice.
#!/bin/bash
# Note: the exact grep, tr and cut filters depend on the output format of these
# commands on your system and may need small adjustments.
localip=$(ifconfig eth0 | grep "inet " | tr -s " " | cut -d " " -f 3)
mem=$(free -m | grep "Mem:" | tr -s " " | cut -d " " -f 4)
cpu=$(uptime | tr -s " " | cut -d " " -f 12)
echo " The local IP address is $localip"
echo " The free memory is $mem MB"
echo " The 15-minute CPU load average is $cpu"
There are three variables defined in this script, all of which hold the return results
of commands, so the variable values may change every time the script is
executed. However, no matter how the variable values change, the script can
normally output these variable values at the end.
The first variable, localip, stores the IP address of the machine's eth0 network card.
Here it is assumed that there is an eth0 network card in the system and that an IP
address is configured.
The second variable, mem, stores the remaining capacity of the local memory.
The third variable, cpu, stores the average load of the local CPU over the last 15 minutes.
The tr and cut commands are used in the statements that obtain these three variable
values, and a space is quoted after tr -s, which combines several
consecutive spaces in the data coming through the pipeline into one space.
If quotation marks are used after the -s option to refer to other characters, the
effect is the same: multiple consecutive occurrences of that specific character are combined
into one. Using the cut command then helps us get the specific columns we need.
SYSTEM VARIABLES
The user-defined variables were introduced above; next come the system preset
variables. System preset variables, as the name implies, are variables
that have already been set by the system and can be used directly without the user's
own definition.
The system preset variables basically use capital letters or special
symbols as variable names.
Preset variables can be subdivided into environment variables, positional
variables and predefined variables.
When actually writing scripts, we can apply suitable variables in the right places,
so we will not elaborate on them here. Write small script cases that call these system
preset variables to check the execution effect.
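For a quick feel of these preset variables, a few illustrative commands:
root@server : echo $HOME $PATH
// Environment variables set up by the system for every session
root@server : echo $0 $1
// Positional variables: the name of the running script and its first argument
root@server : echo $?
// A predefined variable holding the exit status of the last command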
WHAT NEXT?
With this, we have provided a brief introduction to variables in shell
programming. In the next chapter we will talk about filtering using the grep
command with detailed examples. Follow along!
WHAT IS FILTERING?
In everyday terms, filtering means keeping only the data you are interested in; in a
more conventional sense it is advanced searching according to the user's wishes.
The time filtering takes depends on the amount of data being searched.
FILTERING IN LINUX
In the world of data filtering and regular expressions, it is often necessary to use
scripts to filter data. The Linux system provides a very convenient grep command
that performs exactly this function.
You can easily experiment with different ways of filtering using the grep
command.
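A couple of simple illustrations (the file and pattern are only examples):
root@server : grep "root" /etc/passwd
// Prints only the lines of /etc/passwd that contain the word root
root@server : grep -v "root" /etc/passwd
// The -v option inverts the match and prints the lines that do not contain root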
In the next section we will discuss about the importance and implementation of
regular expressions in Linux.
Next, we will learn about arithmetic operations with the built-in command
known as let. Note that when calculating with the let command, the result of the
operation is not printed by default. Generally, you assign the
result of the operation to a variable and view the result through the
variable.
In addition, when using the let command to calculate with variables, there is no need
to add the $ symbol before the variable name.
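A small sketch of both points:
root@server : x=4
root@server : let result=x+3
// No $ is needed in front of x inside the let expression
root@server : echo $result
// Prints 7; the result only becomes visible through the variable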
Finally, note that when using the ++ or -- operators, the results of x++ and ++x are
different, and the results of x-- and --x are also different. x++ calls x
first and then adds 1 to it, whereas ++x adds 1 to x first and then calls it. x-- calls
x first and then subtracts 1, whereas --x subtracts 1 from x first and
then calls x.
Bash only supports the four basic operations on integers, not decimal operations. If we
need to operate on decimals with arbitrary precision, or even write calculation
functions in a script, we can use the bc calculator.
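For example, piping an expression into bc (scale sets the number of decimal places):
root@server : echo "scale=2; 10/3" | bc
// Prints 3.33, a result that plain bash integer arithmetic cannot produce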
root@server : env
// Lists the environment variables of the current session
root@server : set
// Lists all the variables and functions defined in the current shell
WHAT NEXT?
With this, we have completed a brief introduction to operators in shell
programming. To further improve your skills as a shell programmer, we
recommend trying the different operations available on a Linux system. In the last
two chapters of this book we will discuss various advanced concepts of
shell programming with examples. Head on to the next chapter to learn about them.
TILDE EXPANSION
This is one of the above-mentioned extended functions of the shell; by default the
tilde represents the home directory of the current user in a shell script. We can also
use tilde expansion with a valid account login name to return the home
directory of that specific account.
Note, however, that the account must be a valid account in the system. Tilde
expansion uses ~+ to indicate the current working directory, and ~- indicates the
previous working directory.
A working directory is determined with the following expansions.
Here are some other commands:
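A minimal sketch (the directories shown in the comments depend on where you run it):
echo ~      # expands to the home directory of the current user, e.g. /root
echo ~+     # expands to the current working directory (the value of PWD)
echo ~-     # expands to the previous working directory (OLDPWD), if one has been set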
VARIABLE REPLACEMENT
root@server : boats=""
root@server : echo ${boats:-streamer}
But this only returns the keyword streamer; it does not change the value of boats, so boats is still empty. Let's verify with an example that even when the boats variable is defined, the keyword is still returned as long as its value is empty.
Whether the variable is undefined or its value is empty, the next form returns the keyword and also modifies the value of the variable. When the value of the variable is non-empty, this expansion simply returns the value of the variable itself.
Occasionally, we can use variable substitution to implement error reporting in a script: it checks whether a variable has a value, and if there is no value or the value is empty, it returns a specific error message.
Looking at the opposite behaviour, there is also a form that returns the keyword when the variable has a non-empty value, and returns null when the variable is undefined or empty. The sketch below summarises all four forms.
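A minimal sketch of the four substitution forms, using the hypothetical variable boats:
unset boats
echo ${boats:-streamer}     # boats is unset: returns streamer, boats stays unset
echo ${boats:=streamer}     # returns streamer AND assigns it to boats
echo ${boats:?"no value"}   # boats now has a value, so the value is printed instead of the error
echo ${boats:+harbour}      # boats is non-empty, so the keyword harbour is returned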
In the previous chapter, we wrote several scripts for creating system accounts and configuring passwords. Combined with the variable replacement features learned here, we can continue to optimise those scripts and implement more functionality.
Is that all there is to variable substitution?
Of course not! Variable substitution also has very practical string-cutting functions and can separate the head and tail (prefix and suffix) of a string. We will talk about these functions in the next section.
STRING CUTTING AND PINCH-OFF (CONTINUED)
Note that these substitutions do not change the values of the variables themselves. The following examples demonstrate how they are applied.
Here is an explanation:
First, a variable home is defined. The offset starts at 0 and indicates the position of each character in the variable's value. If a length is specified, that many characters are extracted starting at the offset; if no length is specified, extraction simply continues to the end of the value, as in the sketch below.
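A minimal sketch of the offset form (the value of home is only an assumption):
home="/home/natasha"
echo ${home:1}       # from offset 1 to the end:      home/natasha
echo ${home:1:4}     # four characters from offset 1: home
echo ${home:6}       # from offset 6 to the end:      natasha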
Next, several examples introduce the operation of pinching off the head and removing the tail of a variable.
Use # to pinch off the head and % to remove the tail. Because a single # indicates the shortest match, the command deletes only up to the first occurrence of the given character, removing it and everything on its left.
If you need the longest match instead, that is, keep searching for the last occurrence of the specified character and delete everything before it, use two # symbols. Deleting from the right works the same way: a single % matches from right to left and stops at the first occurrence of the character, while two % symbols keep matching from right to left and do not stop until the last occurrence is reached. A short sketch follows.
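A minimal sketch of the four pinch-off forms, using a hypothetical variable:
url="www.example.com.cn"
echo ${url#*.}      # one #: shortest match from the left  -> example.com.cn
echo ${url##*.}     # two #: longest match from the left   -> cn
echo ${url%.*}      # one %: shortest match from the right -> www.example.com
echo ${url%%.*}     # two %: longest match from the right  -> www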
If the variable is an array, are these expansions still valid? The answer is yes, which again shows the power of shell programming.
You can also rename files or change their extensions in batches by pinching off the head and removing the tail. There are two typical cases of modifying file extensions in batches: one script modifies the extensions of files in the current directory, and the other modifies the extensions of files in a specified directory. A minimal sketch of the first case follows.
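This sketch assumes the current directory contains files ending in .txt and renames them to .log:
for f in *.txt; do
    mv "$f" "${f%.txt}.log"    # ${f%.txt} removes the .txt tail, then .log is appended
done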
Finally, there is statistics and replacement of variable content. With this group of features we can search within a variable, count the number of characters in its value and replace parts of its content, for example:
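A minimal sketch (the variable and the replacement text are assumptions):
msg="hello shell"
echo ${#msg}        # 11 - the number of characters in the value
echo ${msg/l/L}     # heLlo shell - replaces only the first match
echo ${msg//l/L}    # heLLo sheLL - replaces every match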
In the next section, we will discuss about command substitution with various
examples. Follow along!
COMMAND SUBSTITUTION
We can use either $(command) or `command` (backquotes) to perform command substitution. The $(command) form is recommended because it supports nested command substitution.
Here is the command:
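A minimal sketch (the file-name pattern is only an assumption):
today=$(date +%F)                # save the output of date in a variable
echo "backup-$today.tar.gz"      # e.g. backup-2024-01-01.tar.gz
echo "Kernel in use: $(uname -r)"
echo $(basename $(pwd))          # nesting works with the $() form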
ARITHMETIC REPLACEMENT
Arithmetic expansion performs arithmetic calculations and returns the result. Its format is $(( expression )), and the older $[ expression ] form can also be used.
Arithmetic expansion supports nesting. Here its use is demonstrated again with a few simple examples.
Here are the commands:
root@server : k=3
root@server : echo $((k++))
// This prints 3, because k++ returns the old value; afterwards k is 4
root@server : k=3
root@server : echo $((--k))
// This prints 2, because --k decrements first and then returns the value
root@server : echo $((5 != 7))
// This displays 1, because the comparison is true
root@server : who | wc -l
We can also use process substitution to achieve the same result. <(who) saves the output of the who command in a file descriptor such as /dev/fd/63, and that file descriptor is then used as the input of the wc -l command.
The final output of wc -l <(who) shows how many lines the file /dev/fd/63 contains, five in this case. Note that the file descriptor is generated dynamically, so once the process has finished executing, listing it with ls reports that no such file exists.
Using process substitution, we can also pass the output of multiple processes to a single process as its input.
In the following case, we want to extract the account name (the first column) and
home directory (the sixth column) from the /etc/passwd file, and then extract the
password information (the second column) from the /etc/shadow file.
Finally, the data is merged into a single stream with the paste command, which reads multiple files line by line and joins the corresponding lines together.
root@server : paste
// paste merges rows from several inputs into columns of one output
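A minimal sketch combining cut, process substitution and paste (the output file name is an assumption, and it must run as root because /etc/shadow is not world-readable):
paste -d: <(cut -d: -f1,6 /etc/passwd) <(cut -d: -f2 /etc/shadow) > /tmp/accounts.txt
head -3 /tmp/accounts.txt   # account name, home directory and password field, joined by ':'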
In a Linux system, the output of a command can be redirected to a file through a pipeline, but once the output is redirected to a file it is no longer shown on the screen. The tee command solves this: it writes the data to the file and to standard output at the same time.
root@server : tee
// tee copies its standard input both to the screen and to the given file
The following command lists all files ending in conf in the /etc directory and, at the same time, saves the output to the /tmp/conf.log file.
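A minimal sketch of that idea:
ls /etc/*.conf | tee /tmp/conf.log
# the file names appear on the screen and are written to /tmp/conf.log at the same time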
Note:
If the file already exists, the tee command overwrites its original contents (the -a option appends instead of overwriting).
Next, we will demonstrate a case of process substitution together with the tee command. We will create three files with the sh extension and three files with the conf extension as experimental material, then write the output of ls | tee into temporary file descriptors through process substitution.
Finally, we will filter the contents of those file descriptors with grep, redirecting the file names ending in sh to the sh.log file and the file names ending in conf to the conf.log file, as sketched below.
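A minimal sketch of this case (the file names are arbitrary):
touch 1.sh 2.sh 3.sh 1.conf 2.conf 3.conf     # experimental material
ls | tee >(grep "\.sh$" > sh.log) >(grep "\.conf$" > conf.log) > /dev/null
sleep 1                                        # give the background greps a moment to finish
cat sh.log conf.log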
WORD SPLITTING
Word splitting is important to learn because, after the expansions above, Bash still has to break the results into separate words; by default the characters held in the IFS variable (space, tab and newline) decide where those breaks happen, as in the sketch below.
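A minimal sketch, assuming the default PATH of the machine:
IFS=:
for dir in $PATH; do echo "$dir"; done   # the unquoted $PATH is split at every colon
unset IFS                                # restore the default splitting behaviour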
PATH REPLACEMENT
Path replacement (pathname expansion) is another well-known shell feature. Unless set -f is used to disable it, Bash searches each word for the *, ? and [ ] characters; if they are found, the word is treated as a pattern and replaced by the matching path names.
The shell therefore processes the path or file names only after path replacement has been performed. If the nocaseglob option is turned on with the shopt command, Bash becomes case-insensitive when performing this pattern matching; by default it is case-sensitive.
In addition, you can turn on the extglob option with the shopt command, which makes Bash support extended wildcards. The -s option of shopt turns a specific shell property on, and the -u option turns it off again, as sketched below.
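A minimal sketch of these shopt properties (the file patterns are assumptions):
shopt -s nocaseglob     # -s turns the property on: globbing becomes case-insensitive
ls *.CONF               # would now also match names ending in .conf
shopt -u nocaseglob     # -u turns the property off again
shopt -s extglob        # enable extended wildcards
ls !(*.conf)            # with extglob on, list everything except .conf files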
As far as paths and file names are concerned, you can not only rely on Bash's automatic path expansion but also use the two external commands basename and dirname to cut the file name or the directory part out of a path.
When listing files with ls or find, the output usually carries the full path, but sometimes we only need the file name, and basename extracts exactly that.
Here are the commands :
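A few minimal examples (the paths are assumptions):
basename /etc/ssh/sshd_config      # prints sshd_config - the file name only
dirname  /etc/ssh/sshd_config      # prints /etc/ssh - the directory part only
find /etc -name "*.conf" | while read f; do basename "$f"; done
# strips the leading path from every result produced by find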
IN THE PREVIOUS chapter we talked about various shell properties that can enhance shell programs. They make scripts more efficient and help us create programs that raise fewer errors or warnings when run against the targets we need to work on. In this final chapter of the shell module we will look at some advanced details of the shell interpreter that explain how the shell works at the kernel and hardware level.
Understanding how the shell interacts with the Linux kernel is essential if you plan to write programs that share kernel and hardware resources. Follow along, experiment with the given code on your own Linux machine and clear the errors on your own for a better understanding of the subject.
When the account already exists, or the account creation fails for some other reason, the default behaviour is that the script still insists on executing all of its remaining commands. The obvious result is an avalanche of error messages. Many scripts behave like this: scripts that install software, modify configuration files or start services can fail on a large scale because of one small mistake near the beginning. Setting set -e stops the whole script as soon as the first command goes wrong, as in the sketch below.
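A minimal sketch, assuming a hypothetical account name and password:
#!/bin/bash
set -e                                # stop the script as soon as any command fails
useradd natasha                       # if the account already exists, this command fails ...
echo "natasha:Secret123" | chpasswd   # ... and this line is then never executed
echo "account created"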
WHAT IS HASHALL?
"Hashall" lets Bash record the full path of every executed command in an in-memory hash table, so that the next time the same command is run, the path does not have to be searched again through the PATH variable, which usually improves efficiency.
Sometimes, however, the path of a program changes, and the stale hash record then makes the command fail. In that case the record has to be removed:
root@server : hash -d name
// deletes the hash record of a single command (name is a placeholder)
root@server : hash -r
// clears the whole hash table
We can use hash -d to delete a specific record, hash -r to clear the whole hash table, or set +h to disable the hash table entirely. Any of these methods solves this kind of problem. Bash can also recall history with exclamation marks when the histexpand property is set:
root@server : set -o histexpand
// turns history expansion on, so ! can be used to recall earlier commands
For example, typing !yum directly re-runs the most recent command in the history that starts with yum. By default, when we use redirection symbols such as > or >&, the target file is overwritten, which may lead to the loss of existing data.
Setting the noclobber property prevents data from being overwritten. Suppose the scripts we write use tools such as tar, rsync and mysqldump to back up data. Because the backup takes a certain amount of time, the script may end up being executed repeatedly, for example when several terminals run the same script at once or several remote users launch it at the same time, and the backup files end up in a chaotic state.
For scripts like this, we can use the noclobber attribute of Bash to prevent the script from being executed twice at the same time. Similarly, if a script reads the user's input as a variable through read or through positional parameters, and the caller does not actually supply a value, an unexpected error will occur.
Turning on the nounset attribute of Bash effectively prevents errors caused by undefined variables. Because the script above did not assign values to $1 and $2 and had set -u enabled, it quit immediately after reporting that the variables had not been assigned.
If the nounset attribute is not set, the operations of creating the account and changing the password are still executed and a wrong result is returned. The sketch below combines the noclobber and nounset ideas.
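A minimal sketch, with hypothetical lock file, account name and password:
#!/bin/bash
set -C      # noclobber: > refuses to overwrite an existing file
set -u      # nounset: referencing an undefined variable aborts the script
# use noclobber to create a simple lock file so the script cannot run twice at once
echo $$ > /tmp/backup.lock || { echo "script is already running"; exit 1; }
user="$1"                              # with set -u, a missing argument stops the script here
useradd "$user"
echo "$user:Secret123" | chpasswd
rm -f /tmp/backup.lock                 # release the lock when the work is done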
Advanced information about Hash Table:
The function of the hash table was introduced above. Under normal circumstances, the system first looks up the record in the hash table when executing a command and then runs the command according to that record.
If the command cannot be found at the recorded location, however, an error is reported. After the checkhash attribute is turned on through shopt, the system continues to search the normal command path whenever it cannot find the command at the location recorded in the hash table.
With checkhash enabled, when the program cannot be found through the hash record, the path of the program is searched again in the normal way and the command executes correctly. The cmdhist attribute lets Bash save a command that spans several lines as a single history record.
root@server : shopt -s cmdhist
// turns the cmdhist property on
root@server : tput lines
// tput queries and controls terminal capabilities; lines prints the number of terminal rows
As shown, the number of lines of the current terminal is displayed through lines. The clear capability clears the current terminal, with the same effect as running the clear command or pressing Ctrl+L.
With cup, you can move the cursor to a specific row and column. sc saves the current cursor position, and rc restores the cursor to the position last saved by sc.
root@server : tput cup 5 10
// moves the cursor to row 5, column 10 (the numbers are just an example)
civis hides the cursor, while cvvis or cnorm makes it visible again. blink sets the terminal to blinking mode, bold sets it to bold mode, and rev swaps the foreground and background colours of the current terminal.
root@server : tput cvvis
// makes the cursor visible again
With smcup you can save the current screen, and rmcup restores the most recently saved screen state. sgr0 cancels all terminal attributes and restores the terminal to its normal state. The reset command can also reset the current terminal to its initial state.
root@server : tput rmcup
// restores the screen saved earlier with tput smcup
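A minimal sketch that strings a few of these capabilities together:
#!/bin/bash
tput sc               # save the current cursor position
tput cup 5 10         # move the cursor to row 5, column 10
tput bold             # switch to bold mode
echo "hello from row 5"
tput sgr0             # cancel all attributes and return to normal mode
tput rc               # restore the cursor to the saved position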
WHAT NEXT?
BEFORE HEADING over to the next module of this book, we want to share a few pieces of advice that can help you polish your Linux programming skills with scripting languages such as the shell.
WHAT TO DO?
Before writing a script, always be sure what your goal is; you cannot change the goal of a script in the middle of the project. Careful research into your resources, your target and your own strengths is what makes a successful Linux shell programmer.
You can use websites such as GitHub to find plenty of open-source scripts that show the scripting workflows other programmers use. Every programmer works differently, but it does no harm to understand how others work efficiently and to borrow those tricks for your own workflow.
As a hacker, you need to be aware that shell programs have dangerous execution abilities and can destroy systems if that is the intention behind them. So be sure about what you are trying to achieve, and think about the legal consequences before you act; if you proceed without understanding them, you are definitely heading for trouble.
All the best, and we are thrilled to hand you over to the next module of this book. Follow along!
IN THE PREVIOUS modules of this book we learned about Linux and its architecture in detail, along with the usage of the shell and its advanced functionality. These modules helped you understand the importance of Linux and how it manages resources, but learning Linux without any use case is pretty bland. To repeat the point once more: Linux is not a great day-to-day system for ordinary users, but a sophisticated operating system packed with tools for developers and security enthusiasts. Linux is also used by black hat hackers to steal data and sensitive information such as credit card numbers and cookies, using their own tools or scripts.
This final module of the book not only introduces cyber security fundamentals technically but also presents a range of tools so you can understand what is actually going on.
WHAT IS HACKING?
The term may seem vague because of the way hacking is constantly misrepresented in popular culture. Since the 90s, films, books and comics have portrayed hackers as shady guys trying to break systems and bring down the internet.
But is that right? Are hackers really what they are shown to be? There are certainly some bad people who sit on the dark web and sell stolen credit card information cheaply to anyone willing to exploit it.
But on the other side of the coin there are also thousands of ethical hackers working hard to protect complex network systems from the mischief-makers trying to brute force their way in.
Hacking is the practice of probing computer systems and trying to break them. Breaking things may sound bad, but what ethical hackers actually do is help us understand the loopholes that can creep in while developing software, operating systems and applications.
In the early days, hacking was a skill that required both hardware and software access. Now, with the internet and modern network architecture, almost anything can be hacked with just a computer and a few pieces of software.
WHAT NEXT?
To be a hacker you need a good mind for researching your target. In the next chapter we will talk about different networking tools to improve your reconnaissance knowledge. We also suggest you learn about hacking and cybersecurity in much greater detail by reading different blogs, and you can get involved in bug bounty hunting if ethical hacking interests you.
TCPDUMP
Tcpdump is a classic network capture tool. Even today, when capture tools such as Wireshark are easier to use and master, tcpdump remains a necessary tool for network programmers. Tcpdump gives the user a large number of options to filter packets or customise the output format.
Since version 4.0, the default capture length has been 65535 bytes, so we usually do not have to worry about the snapshot length. -S displays TCP sequence numbers as absolute values instead of relative ones. -w directs the output of tcpdump to a file in a special format, and -r reads the packet information back from such a file and displays it.
In addition to options, tcpdump also supports expressions to filter packets further. The operands of tcpdump expressions fall into three kinds: type, direction and protocol, which are introduced in turn in this section. A type explains the meaning of the parameter that immediately follows it; the types supported by tcpdump are host, net, port and portrange.
They specify, respectively, a host name (or IP address), a network address written in CIDR notation, a port number and a port range. For example, to capture the packets of the whole 1.2.7.0 network, you can use a command like the one below.
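A minimal sketch, assuming the network is written as 1.2.7.0/24 and the capture interface is eth0:
tcpdump -i eth0 net 1.2.7.0/24
# captures every packet whose source or destination address falls inside that network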
For example, to capture all Internet Control Message Protocol (ICMP) packets, you can use the following command.
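A minimal sketch:
tcpdump -i eth0 icmp
# captures only ICMP traffic, such as the echo requests and replies generated by ping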
Protocol operands also exist for protocols such as ip, arp, tcp and udp; application protocols such as SMTP or FTP are matched through their port numbers instead.
Of course, we can also combine the operands above with logical operators to build more complex expressions. The logical operators supported by tcpdump are exactly the same as in programming languages: and (&&), or (||) and not (!).
For example, to capture the IP packets exchanged between the host ernest-laptop and every host except source, we can use a command like the one below.
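A minimal sketch, assuming ernest-laptop and source are resolvable host names:
tcpdump ip host ernest-laptop and not source
# 'and' joins the two conditions; 'not' excludes any packet involving the host called source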
In addition, tcpdump allows some protocol header fields to be used directly in the filter expression. Because the flag bits live in the 14th byte of the TCP header, and the synchronization (SYN) flag is the second bit of that byte, we can extract a lot of information from the raw data; a sketch follows at the end of this section. Finally, the exact output format of tcpdump depends not only on the options but also on the protocol. We discussed the tcpdump output format of IP, TCP, ICMP, DNS and other protocols; for the output formats of further protocols, please refer to the tcpdump man page, which this book will not repeat.
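A minimal sketch of the flag-bit filter mentioned above:
tcpdump 'tcp[13] & 2 != 0'
# tcp[13] is the flag byte of the TCP header (its 14th byte, counting from 0);
# the value 2 corresponds to the SYN bit, so only connection-initiating segments are captured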
With this, we have completed a brief introduction to the popular network monitoring tool tcpdump. In the next section we will discuss lsof, another popular monitoring tool in Kali Linux, with command examples. Follow along!
LSOF
lsof (list open files) is a tool that lists the file descriptors opened by the current system. Through it, we can find out which file descriptors a process of interest has opened, or which processes have opened a particular file.
For network sockets, the -i option filters by address; its general form (the host below is a placeholder) is:
root@server : lsof -i @<host>:22
To display all file descriptors opened by a specific process, use the -p option followed by the PID. The -t option prints only the PIDs of the processes that have the target file open. We can also pass a file name directly as an argument to lsof to see which processes have that file open, for example:
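A few minimal examples (PIDs and paths are assumptions):
lsof -i :22            # which processes have TCP/UDP port 22 open (typically sshd)
lsof -p 1              # every file descriptor opened by the process with PID 1
lsof -t /etc/passwd    # only the PIDs of processes that currently hold /etc/passwd open
lsof /etc/passwd       # full details of the processes that have /etc/passwd open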
The listing shows that the program files and dynamic libraries on the test machine are stored on device "8,3": "8" indicates a SCSI hard disk, and "3" indicates the third partition on that disk, namely sda3.
The device corresponding to the standard input, standard output and standard error of the websrv program is "136,3": "136" indicates a pseudo terminal, and "3" indicates the third pseudo terminal, i.e. /dev/pts/3.
For FIFO-type files, such as pipes and sockets, this field shows the address of the kernel object that refers to the file, or its inode number. If the field is displayed as "0t" or "0x", it is an offset value; otherwise it is a file size. Defining a file size for a character device or a FIFO makes no sense, so for those files the field shows an offset value.
For a socket, it shows the protocol type, such as "TCP". NAME is the name of the file. If we use the telnet command to initiate a connection to the websrv server and then run the lsof command again, the following line is added to its output.
NC(NETCAT)
The netcat command is short, capable and powerful, and has earned the reputation of a "Swiss Army Knife". It is mainly used to build network connections quickly. We can make it run as a server that listens on a port and accepts client connections, which makes it useful for debugging client programs. We can also make it run as a client that connects to a server and sends and receives data, which makes it useful for debugging server programs, a bit like the telnet program.
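A minimal sketch (depending on the netcat variant installed, the listen syntax is either nc -l 12345 or nc -l -p 12345):
# terminal 1 - run nc as a simple server listening on port 12345
nc -l 12345
# terminal 2 - run nc as a client, connect to the server and type some text
nc 127.0.0.1 12345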
The transport layer protocol used by nc is TCP by default. -w makes the nc client exit if it detects no input within the specified time. -X specifies the protocol used between the nc client and a proxy server; the proxy protocols currently supported by nc are "4" (SOCKS v.4), "5" (SOCKS v.5) and "connect" (HTTPS proxy), with SOCKS v.5 being the default. -x specifies the IP address and port number of the proxy server itself.
For example, to go through the squid proxy running on the host source and access the web service at www.google.com, you can use the following command.
root@server : nc -x source:1080 -X connect www.google.com 80
-z scans whether one or more services on the target machine are open (port scanning).
For example, to scan the services with port numbers between 20 and 50 on the machine ernest-laptop, you can use the following command.
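A minimal sketch:
nc -z ernest-laptop 20-50
# -z makes nc probe the ports in the range 20-50 and report which ones are open,
# without sending any data over the connections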
The server replies that the requested file was not found. Here we used the -C option, so that every time we press Enter to send a line of data, the nc client appends an extra <CR><LF> to it, which is exactly the line terminator the websrv HTTP server expects.
After sending the third line of data, we get the response from the server, which is exactly what we expected: the server did not find the requested resource file a.html. The nc command is therefore a convenient and quick testing tool through which we can rapidly find logic errors in a server.
In the next section of this chapter, we will discuss the strace tool, which conveniently lets us trace the system calls a running program makes. Follow along!
STRACE
Strace is an important tool for analysing server behaviour and performance. It tracks the system calls made and the signals received while a program runs, and writes the system call names, parameters, return values and signal names to standard output or to a specified file.
The format of its -e filtering option is:
-e [qualifier=][!]value1[,value2]...
The qualifier can be one of trace, abbrev, verbose, raw, signal, read and write, with trace as the default. A value is a symbol or number that further restricts which system calls are traced. Its two special values are all and none, meaning respectively that all system calls of the type named by the qualifier are traced, or that none of them are.
For example, -e read=3,5 means to dump all data read from file descriptors 3 and 5.
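A minimal way to reproduce the discussion that follows; the exact output differs from system to system, but the relevant line looks roughly like this:
strace cat /dev/null
# ... among many other lines:
# open("/dev/null", O_RDONLY|O_LARGEFILE) = 3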
This line of output indicates that the program "cat /dev/null" executed the open system call while running.
The open call opened the file /dev/null in read-only mode and returned a file descriptor with the value 3. Note that the command produces a great deal of output; here we omit most of the secondary information, and in the following examples we only show the content related to the topic at hand.
When a system call fails, the strace command prints the error identifier and its description, as in the following example, where the output ends with ENOENT (No such file or directory).
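A minimal sketch (the missing file name is an assumption; strace writes to stderr, hence the redirection):
strace cat /not-exist 2>&1 | grep ENOENT
# open("/not-exist", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)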
The strace command uses different output conventions for different parameter types. C-style strings, for example, are printed differently depending on the parameter, which helps us watch data dynamically with the strace utility.
The default maximum output length for a string is 32 bytes; anything longer is truncated and marked with "…".
For example, the ls -l command reads the /etc/passwd file.
Note that strace does not treat the file name itself as a C-style string.
For a structure, strace prints the fields of the structure inside "{}", separated by commas. The strace output shows that the first parameter of the lstat64 system call is the file name and the second parameter, a pointer to a stat structure, is an output parameter; strace only shows two of its fields, mode and rdev. Note that when a system call fails, output parameters are displayed with the values they had before the call.
For bit set parameters (such as signal set type sigset), strace will output all bits
set to 1 in the set with "[]", and separate each item with a space.
Suppose a program contains the following code:
sigaddset(&set, SIGUSR1);
sigprocmask(SIG_BLOCK, &set, NULL);
Then strace will print a corresponding line for this call, showing the blocked signal set inside "[]".
For the output formats of other parameter types, please refer to strace's man page; we will not repeat them here.
For the signals a program receives, strace prints the value of the signal and a description of it. For example, we run the command "sleep 100" in one terminal, use strace in another terminal to attach to that process, and then press "Ctrl+C" to terminate "sleep 100" while observing the output of strace.
The specific operation is as follows:
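A minimal sketch (pgrep -n picks the newest matching process):
# terminal 1: start a long-running process
sleep 100
# terminal 2: attach strace to it by PID
strace -p $(pgrep -n sleep)
# now press Ctrl+C in terminal 1; strace reports that sleep received SIGINT and was terminated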
Next, we use telnet, as before, to initiate a connection to the server and send an HTTP request. The epoll event indicates that a new client connection has arrived, so the websrv server makes an accept call on the listening socket, and accept returns a new connection socket with the value 5.
Then, the server clears the errors on the new socket, sets its REUSEADDR
attribute, then registers EPOLLRDHUP and EPOLLONESHOT events on the
socket in the epoll kernel event table, and finally sets the new socket as non-
blocking.
The epoll_wait call then detected an event on that file descriptor, indicating that the first line of data from the client had arrived, so the server executed two recv system calls to receive it. The first recv call read 38 bytes of client data, namely "GET http://localhost/a.html HTTP/1.1\r\n". The second recv call failed with errno set to EAGAIN, which means there was no more client data to read at that moment.
After that, the server called the futex function to unlock a mutex and wake up the thread waiting on it; evidently the pthread_mutex_unlock function in the POSIX thread library calls futex internally. The contents of the third and fourth parts are similar to the second part, so they are not repeated here.
In the fifth part, epoll_wait detects an EPOLLOUT event on file descriptor 5, which indicates that the worker thread has correctly processed the client request and prepared the data to be sent, so the main thread executes the writev system call and writes the HTTP response to the client.
Finally, the server removes all registered events for file descriptor 5 from the epoll kernel event table and closes the descriptor. The strace command therefore lets us see clearly the timing of every system call and the values of the related parameters, which is often more convenient than debugging with gdb.
NETSTAT
Netstat is a powerful statistics tool for network information. It can print all connections, the routing table and the network interface information of the local machine. In this book we mainly use the first of these functions, that is, displaying TCP connections and their state.
After all, to get routing table and interface information we can use the route and ifconfig commands, whose output is richer.
Next, we run the websrv server and execute the telnet command to initiate a connection to it, then check the result with netstat, roughly as follows.
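A minimal sketch; the exact columns vary, but the output described below looks approximately like this (port 13579 is the websrv listening port used in this example):
netstat -nat | grep 13579
# tcp   0   0 127.0.0.1:13579   0.0.0.0:*          LISTEN
# tcp   0   0 127.0.0.1:13579   127.0.0.1:48220    ESTABLISHED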
In the output above, line 1 shows that the local socket address 127.0.0.1:13579 is in the LISTEN state, waiting for any remote socket (represented by 0.0.0.0:*) to initiate a connection to it. Line 2 shows that the server has established a connection with the remote address 127.0.0.1:48220.
VMSTAT
Vmstat is short for virtual memory statistics. It reports the usage of various system resources in real time, such as process information, memory usage, CPU usage and I/O activity.
By default, the output of vmstat is quite rich.
Look at the following example:
root@server : vmstat 5 3
# samples every 5 seconds and prints 3 reports in total
Note that the first line of output is the average since the system was booted, while each following line is the average over the sampling interval.
If these two values (the swap-in and swap-out rates si and so) change frequently, there is not enough memory. The io section reports block device usage in blocks per second: bi is the rate at which blocks are read from block devices and bo is the rate at which blocks are written to them. in is the number of interrupts per second and cs is the number of context switches (process switches) per second.
However, we can use the iostat command to get more detail about disk usage, and mpstat to get more detail about CPU usage.
IFSTAT
Ifstat is short for interface statistics and is a simple network traffic monitoring tool.
For example, we execute the following command on the test machine:
root@server : ifstat -a 2 5
# samples every 2 seconds and prints 5 reports in total
The output of ifstat shows the receive and transmit rate of every network interface in KB/s. Using ifstat we can therefore roughly estimate the total input and output traffic of the server over each period.
In the following section of this chapter, we will talk about mpstat, a popular tool among hackers for analysing per-processor statistics. Follow along!
MPSTAT
Mpstat is short for multi-processor statistics; it monitors the usage of each CPU on a multi-processor system in real time. The mpstat and iostat commands are usually shipped together in the sysstat package, which can be obtained by installing sysstat.
The typical usage of the mpstat command is shown below (mpstat has few options, so they are not introduced one by one here).
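A minimal sketch:
mpstat -P ALL 5 2
# -P ALL reports every CPU separately plus the overall average;
# statistics are sampled every 5 seconds, 2 times in total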
Each sample of output contains three blocks of information on this two-CPU machine, and every block contains the following fields. CPU indicates which processor the line belongs to: "0" is the data of the first CPU, "1" is the data of the second CPU, and "all" is the average of both.
%usr is the proportion of total CPU time that processes spend running in user space at normal priority.
%nice is the proportion of total CPU time spent in user space by processes running at an adjusted (niced) priority.
%sys is the proportion of total CPU time that all processes spend running in kernel space, excluding the CPU time consumed by hardware and software interrupts.
%iowait is the proportion of total CPU time spent waiting for disk operations, and %irq is the proportion spent handling hardware interrupts.
WHAT NEXT?
With this, we have given a solid introduction to a handful of network monitoring tools that help you understand the essence of hacking. To become a hacker, you need to personally try out every command you have learned in this chapter on your own networks; without that practice, nothing you do will really be understood. Linux is not about typing memorised commands, it is about using commands deliberately to change how the system interacts with its hardware resources and with the kernel.
In the next chapter of this module we will discuss different Kali Linux tools in detail. Follow along!
KALI LINUX COMES with more than 350 hacking tools preinstalled. These tools cover different use cases and can help hackers capture and spoof network data, and even monitor and document it as evidence for later use.
In this chapter we will discuss some of the most important tools available in Kali Linux, along with what they do. Grab your seat and have fun exploring the different branches of hacking.
root@sample : nmap
// Running nmap with no arguments displays the help menu
root@sample : nmap www.google.com
// This scans the common ports of this particular address
root@sample : nmap -v 192.32.222.12
// This runs a verbose scan that reports open-port detection for this address
2) EXPLOITATION TOOLS
What is the fun in finding vulnerable systems if you don't know a way to break in and exploit them?
After finding vulnerabilities, hackers usually plant trojans or worms to do the dirty work for them. Many hackers now also deploy bitcoin miners on compromised systems to make some quick money. But how do they do it?
Experienced hackers write their own exploit software, whereas intermediate hackers use Kali Linux tools such as Metasploit.
What is Metasploit?
Metasploit is the most popular exploitation framework. It can be used for tracking targets or for binding trojan payloads into image or PDF files; when the target opens the Metasploit-modified file, we can track and exploit them.
Here are some commands:
root@server : msfconsole
// This starts the MSF console
root@server : msf bind image.jpg
// This binds the image file with metasploit trojan code
You can also use Metasploit against servers and mobile devices. Remember, however, that you generally cannot hack an Apple device without hardware access, whereas Android devices can sometimes be attacked even remotely.
Here w stands for the wordlist that carries the usernames and passwords, whereas proxy.txt holds a list of proxy servers that the brute-force engine can use at random whenever they are live.
B) Airodump
Airodump-ng is a network monitoring tool that does the same job as the tool mentioned above, but with more options and customisation. Combined with social engineering attacks, it can even be paired with fake phishing pages to collect information from target users.
With this, we have completed a brief introduction to the Kali Linux tool set. In the next chapter of this module we will talk about different network monitoring tools that help you analyse networks and attack them. Let us go!
WHAT NEXT?
Congratulations! You have now successfully completed the last module of this book. Across these five modules you have learned a great deal of new information that can help you become an effective hacker.
To make use of the knowledge you have acquired, write your own shell and Python scripts that automate brute forcing. Also, read plenty of open-source code on GitHub so you can challenge yourself with the complexities that arise when dealing with difficult situations as an ethical hacker.
Finally, a friendly suggestion. We are not here to lecture about ethics; do whatever makes you happy. But it genuinely feels better to protect things than to destroy them. Cheers and all the best!
TYE DARWIN is a cyber security specialist and has written dozens of books about hacking. He is meticulous about every tool he has learned and explains them in concise words. His first book, "Hacking for Beginners", became a success with more than a thousand copies sold. He is a programmer by day and a penetration tester by night.