Faculty of Computing
Blekinge Institute of Technology
SE-371 79 Karlskrona Sweden
This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in
partial fulfillment of the requirements for the degree of Master of Science in Computer Science.
The thesis is equivalent to 20 weeks of full time studies.
Contact Information:
Author:
Avinash Kiran Ivvala
E-mail: [email protected]
University advisor:
Emiliano Casalicchio
Department of Computer Science and Engineering (DIDD)
ABSTRACT
Context. Cloud computing (CC) has developed as a human-centered computing model that allows its users to access resources anywhere on the globe. These resources can be shared among cloud users, which raises major questions about security in cloud computing. Attackers generate Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks to challenge the security of CC. Next-Generation Intrusion Prevention Systems (NGIPS), sometimes referred to as Non-Traditional Intrusion Prevention Systems, are used as a measure to protect users against these attacks. This research is concerned with the NGIPS techniques that are implemented in the cloud computing environment and with their evaluation.
Objectives. The main objective of this study is to investigate the existing NGIPS techniques that can be deployed in the cloud environment and to provide an empirical comparison of the source mode and destination mode of the Snort IPS, based on the metrics used for evaluating IPS systems.
Methods. A systematic literature review is used to identify the existing NGIPS techniques. The library databases used to search the literature are Inspec, IEEE Xplore, ACM Digital Library, Wiley, Scopus and Google Scholar. The articles are selected based on inclusion and exclusion criteria. An experiment is selected as the research method for the empirical comparison of the source mode and destination mode of the Snort NGIPS identified through the literature review. The testbed is designed and implemented with the Snort filter techniques deployed in a virtual machine.
Results. Common metrics used for evaluating NGIPS techniques are CPU load, memory usage, bandwidth availability, throughput, true positive rate, false positive rate, true negative rate, false negative rate and accuracy. The experiment showed that destination mode performs better than source mode in Snort when compared on the CPU load, bandwidth, latency, memory utilization and packet loss rate metrics.
Conclusions. It was concluded that many NGIPS for the cloud computing model are related to each other and use similar techniques to prevent DoS and DDoS attacks. The author also concludes that using the source-based and destination-based intrusion detection modes in Snort leads to differences in the performance measures.
ACKNOWLEDGEMENTS
Thanks to everyone who made me what I am today. The best is yet to come.
CONTENTS
ABSTRACT ...........................................................................................................................................I
ACKNOWLEDGEMENTS ................................................................................................................ II
ACRONYMS......................................................................................................................................... 3
1 INTRODUCTION ....................................................................................................................... 4
1.1 PROBLEM DEFINITION ............................................................................................................ 4
1.2 RESEARCH PURPOSE .............................................................................................................. 5
1.3 AIMS AND OBJECTIVES .......................................................................................................... 6
1.4 RESEARCH QUESTIONS .......................................................................................................... 6
1.5 AUDIENCE .............................................................................................................................. 7
1.6 THESIS STRUCTURE ............................................................................................................... 7
2 BACKGROUND .......................................................................................................................... 8
2.1 CLOUD COMPUTING AND ITS SECURITY ................................................................................. 8
2.1.1 Cloud Computing .............................................................................................................. 8
2.1.2 Working models of Cloud Computing ............................................................................... 9
2.1.3 Cloud Computing Security .............................................................................................. 10
2.2 DOS AND DDOS ATTACKS .................................................................................................. 10
2.2.1 Types of DoS and DDoS Attacks ..................................................................................... 10
2.2.2 DoS and DDoS Attack Tools ........................................................................................... 12
2.3 INTRUSION PREVENTION SYSTEMS ...................................................................................... 13
2.3.1 IDS Classification ........................................................................................................... 13
2.3.2 IPS Types ........................................................................................................................ 14
2.4 RESEARCH AREA .................................................................................................................. 14
3 LITERATURE REVIEW ......................................................................................................... 16
3.1 INTRODUCTION .................................................................................................................... 16
3.2 PLANNING LITERATURE REVIEW ......................................................................................... 16
3.2.1 The Need of Literature Review ....................................................................................... 16
3.2.2 Research Question .......................................................................................................... 16
3.2.3 Selection of Keywords ..................................................................................................... 17
3.2.4 Review Protocol .............................................................................................................. 17
3.2.5 Library Database Selection ............................................................................................ 17
3.2.6 Study Selection Criteria and Procedures ........................................................................ 17
3.2.7 Study Quality Assessment ............................................................................................... 18
3.3 CONDUCTING LITERATURE REVIEW .................................................................................... 19
3.3.1 Identification of Existing Research ................................................................................. 19
3.3.2 Database search ............................................................................................................. 19
3.3.3 Study Selection Criteria .................................................................................................. 20
3.3.4 Study Selection Procedure .............................................................................................. 20
3.3.5 Data Extraction and Primary Data Synthesis................................................................. 23
3.4 RESULTS OF LITERATURE REVIEW ....................................................................................... 24
3.4.1 NGIPS Techniques .......................................................................................................... 26
3.4.2 Metrics ............................................................................................................................ 31
4 SNORT (INTRUSION PREVENTION SYSTEM) ................................................................. 32
4.1 INTRODUCTION .................................................................................................................... 32
4.2 NETWORK INTRUSION DETECTION MODE ............................................................................ 32
4.3 SNORT RULE MODIFICATION ................................................................................................ 33
5 EXPERIMENT .......................................................................................................................... 35
5.1 INTRODUCTION .................................................................................................................... 35
5.1.1 Object and Purpose of the Study ..................................................................................... 35
5.1.2 Quality focus ................................................................................................................... 36
5.2 PLANNING ............................................................................................................................ 36
5.3 EXPERIMENT DESIGN ........................................................................................................... 36
5.3.1 Testbed Design................................................................................................................ 37
5.3.2 Malicious and legitimate traffic component design ........................................................ 38
5.3.3 Evaluation Metrics Design ............................................................................................. 38
5.4 OPERATION .......................................................................................................................... 39
5.4.1 Snort Implementation ...................................................................................................... 39
5.4.2 Legitimate and Malicious Traffic Generation ................................................................ 39
5.4.3 Metrics Implementation .................................................................................................. 40
5.5 RESULTS .............................................................................................................................. 41
5.5.1 CPU Load ....................................................................................................................... 41
5.5.2 Memory Utilization ......................................................................................................... 42
5.5.3 Bandwidth ....................................................................................................................... 43
5.5.4 Latency............................................................................................................................ 44
5.5.5 Packet Loss Rate ............................................................................................................. 45
6 ANALYSIS ................................................................................................................................. 47
6.1 PARAMETRIC TEST ............................................................................................................... 47
6.2 MEAN SQUARE ERROR (MSE) ............................................................................................. 47
6.3 WILCOXON TEST.................................................................................................................. 48
7 DISCUSSION ............................................................................................................................. 50
7.1 THREATS TO VALIDITY ......................................................................................................... 50
7.1.1 Internal validity .............................................................................................................. 50
7.1.2 External validity.............................................................................................................. 50
7.2 ANSWERS TO RESEARCH QUESTIONS .................................................................................... 50
8 CONCLUSION AND FUTURE WORK ................................................................................. 52
REFERENCES ................................................................................................................................... 53
APPENDIX A...................................................................................................................................... 57
APPENDIX B ...................................................................................................................................... 58
LIST OF FIGURES
LIST OF TABLES
ACRONYMS
CC : Cloud Computing
PaaS : Platform as a service
SaaS : Software as a Service
IaaS : Infrastructure as a Service
DoS : Denial of Service
DDoS : Distributed Denial of Service
IPS : Intrusion Prevention Systems
IDPS : Intrusion Detection and Prevention Systems
TIPS : Traditional Intrusion Prevention Systems
CSP : Cloud service providers
DC : Data Center
MITM : Man-in-the-Middle Attack
NGIPS : Next-Generation Intrusion Prevention Systems
ICMP : Internet Control Message Protocol
TCP : Transmission Control Protocol
UDP : User Datagram Protocol
HTTP : Hypertext Transfer Protocol
LOIC : Low Orbit Ion Canon
GUI : Graphical User Interface
RTT : Round Trip Time
PRS : Packet Resonance Strategy
RMN : Reflection Mirror Node
TMN : Transparent Mirror Node
SDN : Software Defined Networking
RP : Reverse Proxy
CDAP : Cloud DDoS Attacks Protection
DST : Dempster Shafer Theory
BPA : Basic Probabilities Assignments
SOA : Service Oriented Architecture
CBF : Confidence-Based Filtering
1 INTRODUCTION
Cloud computing (CC) uses a network of remote servers hosted on the internet ("the cloud") to store, manage and process data in third-party data centers, which may be located anywhere from across a town to across the world, instead of using a personal computer or local servers. CC integrates and builds on grid computing and virtualization technology. This enables end users to obtain ubiquitous, configurable, shared computing resources and data applications on demand, from computers and other devices, and these resources can be provisioned with minimal management effort.
There are services and models which make CC more feasible and accessible to its customers [1]. The two working models of the cloud are the Deployment Model and the Service Model. The cloud mainly offers three types of service models to its customers: Platform as a Service (PaaS), Software as a Service (SaaS) and Infrastructure as a Service (IaaS) [2]. There are four types of deployment models in the cloud: public cloud, private cloud, hybrid cloud, and community cloud [2]. These working models make CC attractive to small and individual businesses, which can use highly configurable computing services at low cost.
CC depends heavily on sharing resources with third parties, which is the major source of concern about security threats. One of the major security threats to CC is Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks [3]. In a DoS attack, the attacker sends a huge amount of fake data packets over a single network from a single zombie computer to exhaust the bandwidth or resources of the victim system [4]. A Distributed Denial of Service (DDoS) attack is similar to a DoS attack but is performed using more than one zombie system, known as a botnet, over multiple networks; the zombies may be placed anywhere from the same location to anywhere on the globe, all targeting a single system [4].
To protect the cloud against these attacks, cloud service providers (CSPs) use Intrusion Detection and Prevention Systems (IDPS) along with traditional firewall mechanisms. An IDPS is the combination of an Intrusion Detection System (IDS) and an Intrusion Prevention System (IPS). An IDS generally identifies security threats by detecting probes and attacks but does not prevent them from reaching the Data Center (DC). According to Richard Kemmerer et al., Intrusion Detection Systems are a bit of a misnomer because they do not detect intrusions but show evidence of an intrusion [5].
An IPS not only detects intrusions using a detection algorithm but also prevents the intrusion from reaching the DC, which reduces the impact of the attacks in the cloud environment. According to Piper, IPS can be segregated into two generations based on their intrusion prevention capabilities [6]: Traditional Intrusion Prevention Systems (TIPS) and Non-Traditional Intrusion Prevention Systems. Non-Traditional IPS are also referred to as Next-Generation Intrusion Prevention Systems (NGIPS). The IPS which have been around for years are known as Traditional IPS. The Non-Traditional or Next-Generation IPS are prevention mechanisms that have evolved from Traditional IPS and address more of the weaknesses of Traditional IPS.
In this research, existing NGIPS techniques are identified, along with the metrics used for evaluating NGIPS techniques empirically. An empirical comparison based on the performance metrics is then made between the two modes of the Snort IPS (for preventing DoS and DDoS attacks) to determine the better Non-Traditional IPS mode. Snort was selected based on the availability of resources and its usage popularity.
security, network security, resource availability etc., with respect to the cloud service layers. To compromise these, attackers generate many security attacks such as DoS attacks, Cross-site Scripting attacks, DDoS attacks, and buffer overflow attacks [8]. Among them, DoS and DDoS attacks are ranked among the top nine threats to CC [9] [10]. Cloud servers are designed to increase their computational power according to the number of queries, but there is a limit to the number of queries that can be processed per unit time. When a certain number of queries to the server is reached, it stops entertaining further queries. This is the loophole that draws the attention of attackers towards generating DoS and DDoS attacks.
The biggest DDoS attack in Norwegian history interrupted the online payment systems of five banks, three airlines, two telecommunication companies, and one insurance company [11].
A DDoS attack on Amazon Elastic Compute Cloud (EC2) is also one of the major attacks [12]. Due to a technical error in its control service, it was possible to manipulate an eavesdropped message despite the operation being digitally signed. This allowed hackers to execute arbitrary code and perform a DDoS attack, with the resulting EC2 costs appearing on the users' bills.
These DoS and DDoS attacks can easily be carried out even by a hacker with only novice knowledge of hacking, because of the availability of DoS and DDoS attack generation tools [13]. Attackers perform DoS and DDoS attacks to deplete the bandwidth or resources of the victim's system [9]. DDoS attacks mainly affect the availability of network resources by exhausting their bandwidth, which prevents legitimate users from accessing the cloud resources. So, there is a need to prevent these attacks and provide secure services to the users.
There are many TIPS and NGIPS techniques which can be deployed in virtual networks, both at the server side and the client side, with respect to the cloud service layers, against DoS and DDoS attacks. Empirical comparisons based on metrics provide a better understanding of the performance standards of such systems [11]. An empirical comparison of Traditional IPS exists, which helps CSPs to select a suitable IPS for their cloud; this motivation can be supported by analyzing the literature [7] [11]. According to A. Patel et al. and A. Carlin et al. [14] [11], TIPS are inefficient to deploy in the cloud computing environment because they are not designed to detect and prevent attacks generated with the latest attack generation tools, which have enhanced features to disrupt the network. Therefore, the use of Non-Traditional IPS has become indispensable to protect the DC against these attacks.
When the author of this thesis performed a literature search for empirical evaluations of NGIPS techniques using the search string, no literature was retrieved from the databases. From this we can say that there was limited research available on the empirical comparison of Non-Traditional IPS for preventing DoS and DDoS attacks. So, there is a need for an empirical comparison of Non-Traditional IPS, which is the research problem addressed in this study.
There are many NGIPS techniques to prevent DoS and DDoS attacks, but no single technique can prevent all protocols of DoS and DDoS attacks [15]. In order to gain knowledge about the NGIPS techniques and the evaluation metrics, and to evaluate them, there is a need to identify the different NGIPS techniques for preventing DoS and DDoS attacks in the cloud computing environment.
In recent years, the strengths and weaknesses of Non-Traditional IPS for preventing DoS and DDoS attacks have not been explained empirically. The purpose of this research is to identify the existing Non-Traditional Intrusion Prevention Systems (IPS) for preventing DoS and DDoS attacks in cloud computing and to identify the better mode in the Snort NGIPS through an empirical comparison between them based on the evaluation metrics.
This research will help CSPs to know the existing NGIPS techniques that can be deployed in the cloud environment to protect their DC against DoS and DDoS attacks. It will also help CSPs to choose the appropriate mode in Snort according to their cloud environment.
Objective 1:
Identify the different Non-Traditional IPS which are used in the cloud environment.
Motivation: This helps in identifying the existing NGIPS techniques and will help CSPs to know the different NGIPS techniques.
Objective 2:
Identify the performance and security metrics used to measure the techniques.
Motivation: This helps in answering research question 2 by giving the list of the most commonly used metrics for evaluating NGIPS techniques.
Objective 3:
Evaluate the source mode and destination mode empirically and analyze the performance of the techniques.
Motivation: Analyzing the performance of both modes empirically will help CSPs to choose the appropriate mode in the Snort NGIPS.
RQ 2: What are the different metrics used for evaluating Next-Generation Intrusion Prevention Systems (NGIPS) for cloud computing against DoS and DDoS attacks?
Motivation: This research question will help in identifying the metrics used to evaluate any NGIPS technique for cloud computing.
Motivation: This research question will help the third party user in selecting the
suitable prevention mode while using Snort IPS for their cloud environment.
1.5 Audience
This thesis document is designed for CC security staff and program managers, Computer Security Incident Response Teams (CSIRT), CSPs, and system and network administrators who regulate and monitor IPS for CC. This document assumes that the reader has some knowledge of IDPS technologies and the security challenges of CC.
2 BACKGROUND
This chapter provides the basic concepts of cloud computing, the types of DoS and DDoS attack classifications and attack generation tools, and a brief discussion of IPS, which help in better comprehending the context of this research. The related work on this research is also presented. The structure of this chapter is as follows:
Section 2.1: Discusses the cloud computing architecture and its security issues.
Section 2.2: Epitomizes the DoS and DDoS attack protocols and the tools available for generating these attacks.
Section 2.3: Provides a brief idea about IPS systems and their classification.
The main purpose of cloud computing is to use configurable resources over the network, without actually installing them on a local PC, with minimal management cost.
According to W. Li et al. [17], CC is a computing service providing dynamically virtualized resources which can be availed through the internet and is extendible according to user requirements. The statement accentuates that CC uses virtualization technology in providing its services to users based on their requirements.
According to F. Li et al. [18], CC is a computing model based on parallel computing, distributed computing, and grid computing. CC can be seen as a realization of the above-mentioned computing models, incorporating concepts like virtualization, utility computing, and the three service layers of CC.
Deployment Models: According to [1], [19], there are four deployment models in the cloud environment.
Private Cloud: The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple business units. The cloud may be managed by third parties, by the same organization, or by a combination of them.
Community cloud: The cloud infrastructure is provisioned for exclusive use by a specific community of users from organizations that have shared concerns. The cloud may be managed by third parties, by the organizations in the community, or sometimes by a combination of them.
Public cloud: The cloud infrastructure is provisioned for open use by the general public around the globe. This type of cloud may be managed by government organizations, business organizations, academic organizations, or a combination of some of them.
Hybrid cloud: The hybrid cloud is a composition of two or more distinct cloud deployment infrastructures (private, community, or public cloud). The distinct cloud deployments are bound together by standardized or proprietary technology which supports data and application portability.
When compared to other computing models, cloud computing has some unique characteristics [20].
On-demand self-service: Users can obtain the provision of cloud resources without any long delays whenever they are signed into the cloud.
Broad network access: The cloud resources can be accessed with a wide range of devices anywhere around the globe.
Resource pooling: The cloud service provider is able to pool the cloud resources for cloud consumers who are isolated from each other. The assigning and reassigning of resources is done dynamically according to the demand of the user. Pooling of resources is generally done through multitenancy models, which usually rely on virtualization technology.
Rapid elasticity: CC is designed to adapt to workload changes by allowing users to allocate or de-allocate additional capacity in the cloud, so as to meet on-demand self-service meticulously.
Measured Service: Services of the cloud are monitored and controlled by the cloud service provider for access control, billing, optimizing resources, and also to plan and manage tasks.
To provide the best quality of cloud service to users, the above-mentioned security risks should be mitigated. To address technical errors, deploying an IPS in the cloud environment is one of the solutions [24].
A Distributed Denial of Service (DDoS) attack is similar to a DoS attack but is performed using more than one zombie system, known as a botnet, over multiple networks; the zombies may be placed anywhere from the same location to anywhere on the globe, targeting a single system [4].
According to [9] [15], DDoS attacks are launched using botnets. The botnets are assembled by attackers who snoop the network for machines that are prone to vulnerabilities and use them as agents. These machines are known as zombie machines or bots. Spoofed IP addresses are used by the host and zombie machines, which makes it difficult to trace the attacker and its source. The main aim of generating this attack is to overload the resources; in the context of CC the resources are bandwidth, CPU cycles, memory, file descriptors, buffers, etc. These attacks can sometimes crash the server.
2.2.1 Types of DoS and DDoS Attacks
There are different types of DoS and DDoS attacks. The characteristics of these attacks are explained below in the context of CC [1] [7] [13], [25]–[28].
UDP flood attack: In this attack, the attacker sends User Datagram Protocol (UDP) messages with spoofed return addresses. Random ports of the victim host are flooded with numerous UDP packets, which force the host to listen on those ports continuously for an application. If no application is found, the host replies with an ICMP destination unreachable packet. The continuous listening on the ports saps the resources of the host, which makes it inaccessible to legitimate users.
ICMP flood attack: This is similar to the UDP flood attack. The attacker rapidly sends a large number of echo request packets (pings) to the target resource, without awaiting replies from the target. The principle of an ICMP flood is similar to that of a UDP flood attack, and it can consume both incoming and outgoing bandwidth, as the victim's servers slow down the overall system by responding with ICMP echo reply packets.
Protocol Attacks: Protocol attacks consume actual server resources or the resources of intermediate communication components like firewalls and load balancers. This attack is measured in packets per second and generally includes SYN floods, fragmented packet attacks, ping of death, Smurf DDoS, etc.
Smurf Attacks: In this type of attack, the attacker machine sends a large number of Internet Control Message Protocol (ICMP) ECHO_REQUESTs, carrying the spoofed IP address of the victim, to network devices that support broadcasting. The ICMP ECHO_RESPONSE packets are then sent to the victim by the machines in that broadcast network. This floods the victim with fake ICMP ECHO_REPLY messages.
IP Spoofing attack: The attacker alters the source IP header with a legitimate IP address or a false IP address. This makes the cloud server hold a loop state for a non-completed request, keeping the server busy so that it cannot process any further requests, which affects genuine users.
Teardrop attack: This is a type of DoS attack in which IP packets are fragmented into smaller chunks. Each fragmented IP packet contains the original IP packet's header. The TCP/IP stack overlaps these IP fragments with each other when the destination server tries to reassemble them, which crashes the victim system.
SYN flood attack: In this type of attack, the attacker floods the victim machine with TCP/SYN packets with spoofed IP addresses. In the TCP/IP three-way handshake, the server sends a SYN_ACK back to the spoofed IP address as an acknowledgment of the synchronize message sent by the attacker and waits for the response. But the response message never comes back, as the IP address is fake. These half-open connections exhaust the connections to the server, preventing it from responding to legitimate requests.
PING of Death attack: The attacker sends a mangled ping packet to a computer with a size greater than the limit of the IP protocol (65,535 bytes). The victim's machine is not able to handle the oversized packets, which causes the machine to crash or reboot. According to [7], the variants of this type of attack include jolt, ICMP bug, IceNewk, SPING, and ping o' Death.
LAND attack: This attack is akin to a ping attack. The attacker uses the "land.c" executable program to send TCP/SYN packets carrying the victim's IP address. This results in the victim machine sending requests to itself and, finally, crashing.
HTTP flood: The HTTP flood attack is a type of DDoS attack that does not use malformed packets, spoofing or reflection techniques; instead, the attacker attacks the web server or the application by exploiting legitimate HTTP GET or POST requests. The attack therefore requires less bandwidth than other attacks to bring down the targeted site or server. The attack is most impactful and effective when it makes the server or application reserve the maximum possible resources in response to every single request.
TCP flood: In this attack, the attacker sends Transmission Control Protocol (TCP) messages with spoofed return addresses. Random ports of the victim host are flooded with numerous TCP packets, which force the host to listen on those ports continuously for an application. The continuous listening on the ports saps the resources of the host, which makes it inaccessible to legitimate users.
HX-DoS attack [29]: Combined HTTP and XML attacks are known as HX-DoS attacks. This type of attack mainly disrupts the communication channel in the cloud environment. The attack consists of HTTP and XML type message requests.
Low Orbit Ion Canon (LOIC): This tool is mainly used for targeting web servers by sending TCP packets, UDP packets and HTTP requests to a target server for testing. LOIC is written in C# and contains a design bug that makes it impossible for the user to stop an attack without completely exiting the running process, which prolongs the effect of the attack. The attack is carried out by repeatedly sending large amounts of data to a single IP address.
Commview visual packet builder: This tool is mainly used to generate the Land attack and the Teardrop attack. For the Teardrop attack, two fragmented packets with overlapping offset values must be built which belong to the same original packet and also have the same ID. This ID has a value assigned by the sending host to help in reassembling the fragments. In the Land attack, a spoofed TCP SYN packet is used whose source IP address is set to the destination IP address, whose source port number is set to the destination port number, and whose destination MAC address is set to the MAC address of the attacker host's gateway.
HTTP Unbearable Load King (HULK): This tool is mainly used to avoid attack detection via known patterns. It generates a unique pattern for each request in order to obfuscate the traffic at the web server.
A comparison of the tools and the types of attacks they implement is provided in Appendix A.
The main advantages [33] of using IDPS technologies are identifying intrusions and logging the details of the intrusions for further analysis. These technologies can also hold attackers off for a limited period of time, which helps network administrators and CSPs to take countermeasures against the attacks. The prevention of attacks in IDPS technologies can be done in any of three ways:
The IDPS terminates the network connection or the session which was established to perform the attack.
Blocking access to the target from the offending user account or IP address.
"Block all access to the targeted host, service, application, or other resource" [33].
An IDPS uses detection methodologies to detect attacks. They are:
Signature-Based Detection: In signature-based detection, the IDPS has predefined signature patterns of attacks which it compares with the observed events to identify malicious incidents.
Anomaly-Based Detection: Anomaly-based detection uses predefined profiles of the normal behavior of users, hosts, network connections and applications. If any mismatch with respect to these profiles is observed, it raises an alert for a malicious incident.
Stateful Protocol Analysis: This is similar to anomaly-based detection. It uses vendor-developed universal profiles [33] of benign protocol activity, and attacks are detected by tracking the state of the protocols.
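As a minimal illustration of the signature-based approach (a hypothetical rule written for this description, not one of the rules used later in this thesis), a Snort signature can match a fixed pattern in observed traffic and raise an alert, for example on the oversized ICMP echo requests characteristic of the ping of death attack described above:

# Hypothetical signature: alert on ICMP echo requests with an unusually large payload
alert icmp any any -> $HOME_NET any (msg:"ICMP large packet - possible ping of death"; itype:8; dsize:>800; sid:1000001; rev:1;)

When Snort runs inline as an IPS, the alert action can be replaced with drop so that matching packets are discarded instead of only logged.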
2.3.1 IDS Classification
According to [11], IDS designed for CC can be classified into four categories. They are:
Host-Based IDS (HIDS): This monitors the log files, user login information, and security permission details of the user to detect intrusive user behavior.
Network-Based IDS (NIDS): The behavior of the data packets is tracked through the IP and transport layer headers, and the behavior of the packets is compared with previously logged behavior in real time.
Hypervisor-Based IDS (HyIDS): These are designed to analyze and scrutinize the communication between virtual machines (VMs) over the hypervisor-based virtual network, and between the hypervisor and the VMs.
Distributed IDS: A combination of two or more of the IDS types mentioned above, which communicate with each other via a central analyser. The individual IDS are placed across a large network.
2.3.2 IPS Types
According to [6][11][33][34], there are two types of IPS designed for CC. They are:
Traditional IPS (TIPS): The IPS technologies which have been around for many years are known as TIPS. The detection process in this type of IPS is implemented through in-band sensors or applications configured with commonly known threat signatures.
The main disadvantages of using TIPS are:
They cannot be updated with the latest detection profiles according to updated attack behavior patterns.
They cannot satisfy the requirements of high-speed networks.
There is no uniform standard metric for evaluation.
They cannot be applied efficiently to mobile networks.
There are frequent variations in traffic profiles, which makes training of the IPS difficult.
Next-Generation IPS (NGIPS): These IPS are designed to overcome the disadvantages of TIPS and, in addition, provide extra features such as:
Application awareness: NGIPS provide the flexibility to restrict access to specific applications.
Update profiles: NGIPS systems can update the intrusion detection profiles in a timely manner, manually or automatically, based on real-time traffic learning.
Automated response: NGIPS automatically respond to identified threats based on policy.
Figure 2: Research Area
3 LITERATURE REVIEW
This chapter focuses on the process carried out during the literature review. It is divided into four sub-sections and the structure is as follows:
Section 3.1: Gives introduction about Literature review and outline for the process
carried out for executing Literature Review.
Section 3.2: Describes the planning process for answering the framed research
question.
Section 3.3: Concentrates on how the research process is carried out.
Section 3.4: Epitomizes the results of the literature review.
3.1 Introduction
In the computer science and engineering discipline, there are mainly five research methods [35], [36]: literature review, experiment, simulation, interviews and surveys. In this research, the author selected the literature review as the research strategy to answer the first and second research questions because, according to Kitchenham [35], "literature review is the best research method for identifying, evaluating and interpreting all available research relevant to a particular research question, or topic area, or phenomenon of interest." The first and second research questions aim at identifying the existing work in the research area, which matches Kitchenham's statement, so the literature review is selected to answer those research questions. The literature review is conducted based on the guidelines provided by Kitchenham [35]. The steps for executing the SLR are as follows:
Planning: The need for a literature review in this research, its context, its development, and the need for review protocols are explained in this section.
Conducting: The identification of literature in the databases, the selection procedure and how the data is extracted are discussed.
Results and Analysis: The results and analysis of the literature review are presented.
Each of these steps is described in the subsections below.
RQ 2: What are the different metrics used for evaluating Next-Generation Intrusion
Prevention Systems (NGIPS) for Cloud Computing against DoS and DDoS attacks?
How: The literature review will help in identifying the general metrics used for
evaluating the prevention techniques which are used in securing the cloud
environment.
3.2.3 Selection of Keywords
The keywords for this research are selected using the guidelines provided in [37], which use the Population, Intervention, Comparison, Outcomes, Context (PICOC) criteria for selecting the keywords of the research.
Population: The area of the research is considered as the population. In this research, the population is "Cloud Computing".
Intervention: In this research, Intrusion Prevention Systems (IPS) address the security issues of the population, i.e. cloud computing. So intrusion prevention systems are considered as the intervention.
Comparison: The comparison concerns Next-Generation Intrusion Prevention Systems (NGIPS), the upcoming technology in Intrusion Prevention Systems (IPS). So the comparison in this research is Next-Generation Intrusion Prevention Systems.
Outcomes: In this research, the NGIPS techniques are identified and their performance is evaluated based on metrics. So "performance accuracy" is selected as the outcome.
Context: The comparison of IPS is carried out in an academic environment. So the context of this research is "academic".
Table 1: Study Selection Criteria
Inclusion Criteria
Search constraints: Literature from the years 2010 to 2016 is considered. The controlled vocabulary based on the research question is used to select the relevant papers.
Abstract/introduction: Theoretical studies and experimental facts describing Non-Traditional IPS are considered.
Full text: Papers describing Non-Traditional IPS in the cloud computing environment, with the experiment procedure and results clearly described.
Reference: The reference list should be well documented.
3.3 Conducting Literature Review
In this section the author explains how the literature review is conducted.
3.3.1 Identification of Existing Research
The aim of literature review in this research is to identify the previous studies and answer the
research question framed. To achieve this, a proper search string is required in order to
extract the most relevant literature from the databases.
For the search string formulation, we need keywords. Based on the keywords selected in section 3.2.3, the author divided them into groups, and their synonyms were placed into the respective groups as shown in Table 3. Blank cells indicate that no synonyms were available. A pictorial diagram of the literature that can be obtained using these keywords can be found in Figure 3.
Table 4: Search strings used for extracting literature from library databases
Database Search String
INSPEC (("Cloud computing" OR "Cloud
environment" OR virtualization) AND
(Intrusion OR Trespass OR Attack OR
Penetration) AND (Prevention OR Protection
OR Response OR Resilience) AND ("Next
generation" OR "Non-traditional"))
IEEE (("Cloud computing" OR "Cloud
environment" OR virtualization) AND
(Intrusion OR Trespass OR Attack OR
Penetration) AND (Prevention OR Protection
OR Response OR Resilience) AND ("Next
generation" OR "Non-traditional"))
SCOPUS TITLE-ABS-KEY (("Cloud computing" OR
"Cloud environment" OR virtualization)
AND (Intrusion OR Trespass OR Attack OR
Penetration) AND (Prevention OR Protection
OR Response OR Resilience) AND ("Next
generation" OR "Non-traditional"))
ACM (+( "Cloud computing" "Cloud environment"
virtualization )+( intrusion trespass attack
penetration )+( prevention protection
response resilience )+( "Next generation"
"Non-traditional" ))
WILEY "Cloud computing" OR "Cloud environment"
OR virtualization in Abstract AND Intrusion
OR Trespass OR Attack OR Penetration in
Abstract AND Prevention OR Protection OR
Response OR Resilience in Abstract AND
"Next generation" OR "Non-traditional" in
Abstract
INCLUSION CRITERIA:
The article should describe or summarize the NGIPS techniques for CC against DoS
and DDoS attacks.
The article should cover how security issues are overcome in the CC environment.
The article should discuss the effects of DoS and DDoS attacks.
The article should evaluate or present the outcomes for the proposed NGIPS
technique.
EXCLUSION CRITERIA:
The article that does not provide theoretical evidence or experimental evidence for
the proposed Non-Traditional IPS is excluded from the studies.
The articles which are not published in the English language are excluded.
The articles which are part of a book or magazine are excluded.
STEP 1: The search string formed with the selected keywords is applied to the databases to extract the literature. The literature is retrieved from each database using the formulated search string, as explained in section 3.3.2. The number of articles selected from the initial search of each database is shown in Table 5.
Table 5: articles resulted from each database after initial search
Database Initial Search Results
INSPEC 16
IEEE 15
SCOPUS 14
ACM 2
WILEY 2
Total 49
STEP 2: From the initial set of articles, further filtration is done to obtain the literature relevant to the research. The study selection process is explained in section 3.2.6. The literature is selected by going through the abstract: if the abstract is pertinent to the research question, the article is selected for the literature review; if the article is inappropriate, it is rejected. The search results after applying the selection criteria to each database are shown in Table 6.
Figure 4: Study Selection Procedure
STEP 3: After Step 2, some duplicate articles were found, which were selected from
different databases. Those duplicate articles are removed by using Microsoft Excel where
column B is filled with the literature titles and column C is filled with the authors.
The function used to remove the duplicate articles in Excel is:
=IF(SUMPRODUCT(($B$2:$B$68=B8)*1,($C$2:$C$68=C8)*1)>1,"Duplicates","No
duplicates")
The resultant number of articles after the removal of duplicate articles is tabulated in table 7.
INSPEC 13 6 7
IEEE 11 3 8
SCOPUS 10 5 6
ACM 1 0 1
WILEY 1 0 1
Total 36 14 18
From the selected library databases, a total of 18 articles were filtered. These 18 articles still include duplicates across databases, i.e. articles retrieved from the INSPEC database may also appear among the articles retrieved from the IEEE database. On merging the articles from all five databases, a total of 6 duplicate articles were found and 31 articles remained.
STEP 4: For the remaining articles, the study selection criteria are applied further to reduce research bias. The criteria are explained in section 3.3.3. Table 8 shows the resulting articles after this filtering.
STEP 5: Based on the quality assessment checklist designed in section 3.2.7, the articles from each database are reviewed and selected for the literature review. From a total of 68 articles, 28 articles are excluded from the study as they do not discuss topics related to the research. The articles remaining after quality assessment for each database are shown in Table 9.
The selected articles are reviewed to identify the Non-Traditional IPS which can be deployed in the cloud environment against DoS and DDoS attacks, and the metrics which are used by the authors in assessing the performance of the Non-Traditional IPS. The data which is vital for performing this research is highlighted while reading.
Table 10: Data extraction form
Data key | Value | Research Question | Additional Notes
General
Name of the Extractor: Name of the researcher who is performing the data extraction process
Name of the database: Name of the database from which the article is selected
Total no. of articles: Total number of articles obtained from the database
Article Title: Name of the article
Year of publication: Article published date
This table gives the name of the technique used to protect cloud servers from DoS and DDoS attacks. If the proposed method is evaluated, the metrics used for evaluating the proposed NGIPS technique are recorded. If the author of the article does not evaluate the technique but only proposes the method, the metric-used-for-evaluation column is recorded as "did not propose any metric". If the author of the article clearly states which DoS and DDoS attack protocol the proposed technique prevents, the type-of-attack column is recorded with that attack protocol.
2 [22] TCP mitigation strategy using SYN cookies DoS and DDoS
18 [52] Port Lock Mechanism using SFA algorithm DDoS
In [38] the author uses a "Packet Resonance Strategy (PRS)". The architecture of PRS mainly consists of two detection levels: the Packet Bouncer and Packet Transit detection levels. At the Packet Bouncer level there is a Reflection Mirror Node (RMN). At this level the node logs the incoming packet and a small packet is bounced back to the cloud user. The user should reply to the node with the same packet for authentication, which is verified by inspecting the MAC and IP address combination. At the Packet Transit level there is a Transparent Mirror Node (TMN). "In this level, the node will inquire for the origin Pass code which is created at the time of account creation." If the user fails at either detection level, the packets are considered malicious and are dropped.
In [22] the author uses a "Transmission Control Protocol strategy using SYN cookies". The author provides two layers of security (control packet security and data packet security) in this strategy. A hop count filtering method and a sequence number encoding strategy are used to protect cloud servers against DoS and DDoS attacks. A MAC generator differentiates the malicious packets from the legitimate packets.
In [39] the author provides evidence from survey results and concludes that the combination of a Software Defined Networking (SDN) architecture and cloud computing will successfully help to protect cloud servers from DoS and DDoS attacks. Yan, Q. et al. state that enhancing the features of SDN networking and fully utilizing the advantages of SDN will help in defeating DoS and DDoS attacks.
To prevent DDoS attacks, the application layer was targeted in paper [40]. As per the author, this system provides security against Oversized XML, Oversized Encryption, Coercive Parsing, HTTP flooding, and Web Service-Addressing spoofing. This is done by filtering all received service requests using a reverse proxy. This type of filter differs from that of paper [17] in that it does not affect the user service, since no overheads are added. The security feature is enforced by accepting service requests only from a defense server. The downside is that the server is prone to flooding attacks from known users. Therefore, by adding better authentication requirements for access to the web services, this approach can prove very useful.
In [41] the author proposes a tool-based IPS technique to prevent DoS and DDoS attacks. The tool used was the Snort IPS. The author has written the rules in source mode. The rule states that if the requests from a single source reach a rate of 30 within one second, the traffic is considered a DoS or DDoS attack and the filter is triggered.
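A minimal sketch of such a source-tracked configuration (the gen_id/sig_id values and timeout below are illustrative placeholders, not the exact rule used in [41]) uses Snort's rate_filter, which counts packets per source address and switches the matching rule to a drop action once the threshold is crossed:

# Hypothetical example: drop traffic from a source that exceeds 30 matching packets per second
rate_filter gen_id 1, sig_id 1000001, track by_src, count 30, seconds 1, new_action drop, timeout 60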
The authors in [42] propose a trace back filtering system to prevent DoS and DDoS attacks in the cloud, in which a tag is added to SOA packets to find out which route was taken by the attacker, and the traffic is filtered to provide security. However, both methods prove futile in identifying the attack source. Spoofed IP addresses were not considered in this paper; considering them could provide better results.
The authors in [47] propose a technique which helps identify the attackers, overcoming the shortcomings of paper [42]. A greedy algorithm, which involves the repeated interchange of known users between newly generated nodes, allows the identification of the attackers, provided they are insiders. This research was extended in paper [57], in which the selection of new algorithms to optimize the runtime is introduced, since the current method is considered 'near-optimal' by the author. Both approaches consider persistent bots whose purpose is to migrate between servers in order to calculate the optimized shuffle pattern. However, this value can only be estimated in real-world scenarios.
Paper [57] focuses yet again on security at the application layer, and also removes the requirement for stronger authentication of its users mentioned in paper [40]. This is possible because the DDoS protection is carried out by non-ISP organizations. Neither [47] nor [57] describes how the proxy node detects the attack. Overheads depend mostly on the number of shuffles required and sometimes on the size of the geographical area considered.
In [43] the authors Alosaimi, W. et al. proposed a framework named the "Enhanced DDoS Mitigation System". This framework mainly consists of five components: a firewall, verifier node(s), a client puzzle server, an IPS, and a Reverse Proxy (RP). Depending upon the outcome of the legitimate packet verification method (CAPTCHA) conducted by the verifier node, the firewall distinguishes malicious packets from legitimate packets. The IPS checks for hidden malicious software in the data packets. The RP server conceals and controls the location of the protected servers and maintains the load balance between them. The puzzle server comes into play to challenge suspected users only when they cross the threshold value at the RP.
In [44] the author T. Karnwal et al. introduces a filtering tree security service in the Service Oriented Architecture (SOA) model. The main proposal of this technique is that user requests are converted into an XML tree form and a virtual cloud defender is used to protect cloud servers from DoS and DDoS attacks. To ensure the legitimacy of the data packets, a signature reference element is added to the SOAP request. Double signatures, such as "number of children, number of header elements, number of body elements", are generated for extra protection against XML rewriting. They are generated using hashed characteristics of each SOAP envelope.
In [45] the author N. Jeyanthi et al. proposed a novel mechanism which contains "Cloud DDoS Attacks Protection (CDAP)" nodes that are to be installed at the cloud server. "These army nodes act as the virtual firewall without destroying the Cloud infrastructure and improve the availability of DC, even at the time of DDoS attacks" [45]. The CDAP nodes check the legitimacy of users through the session key which is generated and shared at the time of account creation. If the user's requests increase more than N times in a particular time period, then, after verifying the REGISTER_STATUS, the user is placed in the BLACKLIST_CLIENT table and the future requests made by that user are redirected to a DUMP terminal which is placed outside the cloud environment.
In [46] the author R. Saxena et al. proposed a Third Party Auditor (TPA) based IDPS technique for CC against DoS and DDoS attacks. This approach uses the Dempster-Shafer Theory (DST). The proposed method has three phases: a detection phase, a conversion phase and an attack assessment phase. In the detection phase, Snort is used to detect and log the values of flooded packets. In the conversion phase, the "front server convert alerts into basic probabilities assignments (BPA) based on the attack alerts generated by snort" [46]. In the assessment phase the converted BPAs are fused and the attack is assessed based on a normalization factor. To achieve this, Dempster's combination rule is used.
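For reference, Dempster's combination rule in its standard form (stated here generically; the notation used in [46] may differ) fuses two basic probability assignments $m_1$ and $m_2$ as

$$ m_{1,2}(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C) \quad (A \neq \emptyset), \qquad K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C), $$

where $K$ measures the conflict between the two bodies of evidence.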
In [4] the author M. Rahman proposed a hardware-based watermarking and filtering mechanism to provide an additional layer in defending CC against DoS and DDoS attacks. In this technique the legitimacy of the data is cross-checked using a trace back mechanism based on hop count and TTL. If the authenticity is not verified, the packet is classified as malicious and dropped without reaching the server. If the packets are approved and labeled as legitimate by the trace back mechanism, they undergo further verification against a "knowledge based database". If any suspicious packets are found, they are marked as untrusted packets and dropped. Only trusted packets reach the server.
In [61] the author proposes a tool-based IPS technique to prevent DoS and DDoS attacks. The tool used was the Snort IPS. The author has written the rules in the destination mode of the rate filter. The rule states that if the requests to the server reach a rate of 30 within one second, the traffic is considered a DoS or DDoS attack and the filter is triggered.
Reference [59] contains another trace back mechanism, which involves the use of
Deterministic Packet Marking (DPM) and training data for the neural network filters.
According to the authors, the system can detect attack traffic with up to 75% accuracy
within a time limit of between 20ms and 1s. This delay overhead can pose a problem even
for regular users. Also, the introduction of IPv6 renders the usability of DPM less significant
than specified.
In [48], S. Yu et al. proposed a dynamic resource allocation strategy for cloud customers
against DoS and DDoS attacks. A queuing-theory-based model was proposed to estimate the
resource allocation needed under DoS and DDoS attacks.
In [49], F. Guenane et al. proposed a hybrid cloud-based firewalling architecture. The main
components of this architecture are a physical part and a virtual part. The virtual part
consists of specialized virtual machines, each of which executes firewall operations such as
analysis, monitoring and reporting of the data packets entering the network, with dynamic
resource provisioning. The physical part provides physical security for the servers. When
the traffic to the physical servers becomes overloaded, it is redirected to the virtual servers.
The forwarding between the two parts is done by a Secure Forwarding Architecture (SFA).
The system proposed in [50] helps to address some limitations of network overlays, i.e.,
hiding the address of the target server by using gateway routers. According to the author,
protection is provided against attacks from the inside and from compromised user host
machines. This technique avoids the requirement of monitoring all the data in the network
traffic, thereby reducing overhead. A Bloom filter in each proxy node tests certain values
and efficiently identifies an attack. When an attack is detected, a warning is issued and the
number of users assigned to the node is halved until the attackers are recognized. The
results from the author's simulation were claimed to be promising, but no real-world
validation was provided.
In [51], N. Jeyanthi et al. proposed a three-phase authentication scheme, Reputation-based
Service for Cloud User Environment (RESCUE), for CC against DoS and DDoS attacks.
This approach classifies users into three different categories: well-reputed, reputed and
ill-reputed. In the first phase of filtration, a puzzle is given to distinguish between humans
and automated programs; packets failing this phase are dropped immediately. In the second
phase, network-level attacks are filtered by dropping packets based on predefined attack
pattern signatures. In the third phase, service-level attackers are dropped by observing the
request intervals between consecutive service requests.
In [52], R. Anandhi et al. proposed a solution against DoS and DDoS attacks on distributed
cloud servers using a port lock mechanism with the Service File Access (SFA) algorithm.
The main approach in this solution is to lock the port after it has been accessed by a user.
To handle the ports at the server end, a Group Policy Object (GPO) is built for users and
files. If a user requests access to a file, a tripping password with limited-time validity is
generated. As the port is locked, there is no chance for other users to enter the port, and if
the tripping password is not entered by the user within the time frame, the user is considered
malicious and dropped.
In [53], M. Malekzadeh et al. proposed two distinct security models. "The first proposed
model is called PCF-O2, which is based on the original HMAC-SHA2-256 algorithm
(O-hmac2)" [53]. This model mainly consists of three parts: a proposed Key Derivation
Algorithm (KDA), new security elements, and a replay attack prevention scheme. "For the
second proposed model, we modify the HMAC-SHA2-256, which is referred to as M-hmac2.
Then, we apply M-hmac2 as the underlying authentication algorithm of the second proposed
model, which is called PCF-M2. The M-hmac2 is developed to reduce the security cost and
communication overheads of the O-hmac2 that will consequently enhance and optimize
efficiency of the PCF-M2 model compared with the PCF-O2 model." [53]
In [29], E. Anitha et al. used a rule-set-based detection system (CLASSIE) and a modulo
marking method to avoid spoofing attacks. To decide whether packets should be dropped, a
Reconstruct and Drop method is used. CLASSIE, which is trained to identify the pattern
attributes of DoS attacks, is placed at one-hop distance from the user. If the attack pattern is
found, it drops the packets. After this stage, the packets are sent to the marking stage. In this
stage, a MACtoID table containing the physical addresses of the hosts is maintained by the
routers. For marking the packets, the routers use a modulo technique.
In [54], F. Guenane et al. propose a new cloud-based firewalling architecture which mainly
consists of three components: a Front-Gateway, virtual firewall instances and a
Back-Gateway. The Front-Gateway is a virtual router responsible for authenticating
incoming traffic and distributing it to the different virtual firewall instances; the distribution
decision is taken by a decision module. The firewalling module present in each virtual
firewall instance is responsible for authenticating the data packets based on an inspection
rule module which contains predefined rules describing attack traffic patterns. The
Back-Gateway is responsible for re-assembling the data packets and sending the legitimate
packets to the server, based on the packet reassembly and control modules.
In [56], W. Dou et al. proposed a Confidence-Based Filtering (CBF) method to protect
cloud servers against DoS and DDoS attacks. Confidence values are generated and stored in
a nominal profile based on the attribute value pairs in the TCP and IP headers. Based on the
nominal profile, CBF calculates scores for the incoming data packets from the users. The
confidence values represent the occurrence frequency of the attribute value pairs, so packets
whose attribute pairs occur rarely receive a low score. If the packet score is high, the
packets from that source are considered legitimate and are forwarded to the server to access
the cloud resources.
Huang, in paper [60], presented a low reflection ratio mitigation system containing source
identification, attack detection, Turing testing, and question generation modules. This
system is placed in front of the IaaS, and its computational efficiency and the overhead
imposed on actual users are evaluated. By recognizing IP addresses, the system sorts them
into a whitelist, black list, unknown list, and block list depending on whether they are a
threat or not. Administrators govern these APIs; however, this opens the system to potential
manipulation by insiders. An operational degradation of 8.5% is observed when monitoring
100,000 addresses.
In paper [55], an approach is proposed to detect flooding-based DDoS attacks. The authors
introduce a technique based on distance estimation for identifying the traffic rates, where the
distance is derived from the Time-To-Live (TTL) values. Real-time measurement of the IP
traffic is also provided using exponential smoothing. Finally, the deviation is calculated to
identify whether the behaviour is normal or not. The attribute dependencies present in
previous approaches are overcome in this approach using time delays. Generally, ISPs have
to be trusted with the implementation of filters on data traffic, which makes this approach
less likely to be adopted.
3.4.2 Metrics
In this section, the metrics identified from the literature are presented, which answers
research question 2. An overview of the identified metrics can be found in Table 12.
Table 12: List of identified metrics used for evaluation of NGIPS techniques
Response Time False positives
CPU Utilization time Hop count
Throughput Memory Utilization
Active legitimate connections Bandwidth
Number of connections aborted Latency
False negatives Processing time
Accuracy Traffic at nodes
Detection rate Round Trip Time
Packet Loss rate -
4 SNORT (INTRUSION PREVENTION SYSTEM)
To generalize the experiment evaluation, the IPS techniques proposed by the authors of [61]
and [41] are adapted. The adaptation, which does not change any of the filter parameters, is
explained in this section.
4.1 Introduction
Snort is a Network Intrusion Detection System (NIDS). It uses libpcap to sniff and log the
packets in the network. Its rule-based content pattern matching feature detects real-time
attacks such as buffer overflows, CGI attacks, and DoS and DDoS attacks. The filter
techniques that Snort provides turn the IDS into an IPS. Snort mainly offers three different
modes of operation: sniffer mode, packet logger mode, and Network Intrusion Detection
System mode. In sniffer mode, the packets transmitted in the network are sniffed and
displayed in the console. In packet logger mode, Snort logs the packets to a storage device.
In NIDS mode, Snort generates alert messages for the detection and analysis performed on
the network traffic. The rules written by the authors in [41] and [61] are in NIDS mode.
The configurations provided by Snort to filter malicious packets from legitimate packets are
the rate filter and the event filter. In this research, the author compared the two modes of the
rate filter technique.
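The rate filter rule discussed in the next paragraph is not reproduced here. A minimal
sketch consistent with that description (the timeout value is an assumption, mirroring the
other rate_filter examples in this chapter) is:
"rate_filter \
gen_id 1, sig_id 1000001, \
track by_src, \
count 30, seconds 1, \
new_action drop, timeout 30"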
The above rule specifies that Snort should drop TCP packets coming from any IP address
and any port, destined to any IP address on port 80, if the rule with sid 1000001 is matched
thirty times within one second. The detection of the packets is done using the "detection by
source" mode. The attribute used for this mode is "by_src". This mode tells Snort to track
the packet rate per source IP address. On triggering the rule in this mode, Snort drops the
packets from that source regardless of the destination.
“rate_filter \
gen_id 1, sig_id 100001, track by_dst, \
count 30, seconds 5, new_action drop, timeout 30”
The above rule tells Snort to track the packet rate per destination IP address; when the filter
is triggered, it drops the packets directed to that destination irrespective of the source.
The detection rule implemented by the author is described below
# TCP rule that detects TCP packet with the SYN flag on in destination of an FTP server.
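The detection rule itself is missing from the extracted text. A minimal sketch consistent
with the comment above and the description below is given here; the FTP port and the
message text are assumptions:
alert tcp $EXTERNAL_NET any -> $HOME_NET 21 (flags: S; msg:"TCP SYN packet
destined to the FTP server"; classtype:attempted-dos; sid:100001; rev:1;)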
The above rule creates a detection rule with sid 100001. It generates alert messages for TCP
packets coming from EXTERNAL_NET (the IP addresses of the user systems) on any port,
destined to HOME_NET (the IP address of the server which the IPS should protect).
#TCP rule to detect TCP packet with SYN flag in the network.
alert tcp !$HOME_NET any -> $HOME_NET 80 (flags: S; msg:"Attempt to access server is
made with TCP packets"; classtype:attempted-dos; sid:1000990; rev:1;)
alert udp !$HOME_NET any -> $HOME_NET 80 (msg:"Attempt to access server is made
with UDP packets"; classtype:attempted-dos; sid:1000991; rev:1;)
The above rules are crafted to generate alert messages for the packets crossing the IPS.
There is no rule written explicitly for HTTP packet flooding because HTTP traffic is carried
over the TCP protocol. To filter the malicious packets, the rules are written using the
following two techniques:
Detecting the packets by source and filtering (source mode) [41]: This tells Snort to
track the packet rate per source IP address and trigger the filter if the rate is reached.
The hypothesis [61] in this technique is that if a large number of source IP addresses
are used to generate the attack, then the filter will never be triggered. The IPS filter
rule in this technique is written as follows.
# IPS rule to filter UDP packets
rate_filter \
gen_id 1, sig_id 1000991, \
track by_src, \
count 10, seconds 2, \
new_action drop, timeout 30
The above filter tells Snort to track the data packets coming from each source; if the rate
of matching packets from a source reaches 10 within two seconds (the count and seconds
values of the rule above), the packets are dropped and the connections are blocked for
30 seconds.
Detecting the packets by destination and filtering (destination mode) [61]: This tells
Snort to track the packet rate per destination IP address and trigger the filter if the
rate is reached. The hypothesis [61] in this technique is that when the filter is
triggered, Snort cannot distinguish between malicious traffic and genuine traffic: it
drops all the packets directed towards the destination, which also drops the traffic of
legitimate cloud users. The IPS filter rule in this technique is written as follows.
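The destination-mode filter rule does not appear in the extracted text. A minimal sketch,
assuming the same signature as the source-mode rule and the thresholds stated in the
description below (30 requests in one second, a 30-second timeout), is:
"rate_filter \
gen_id 1, sig_id 1000991, \
track by_dst, \
count 30, seconds 1, \
new_action drop, timeout 30"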
The above filter tells Snort to track the data packets reaching the destination from any
source; if the rate of requests to the server reaches 30 within one second, the packets are
dropped and the connections are blocked for 30 seconds.
5 EXPERIMENT
This chapter focuses on the process carried out during the experimentation. It is divided
into five sub-sections, structured as follows:
Section 5.1 gives an introduction to the experimentation, outlines the process carried
out to execute it, and specifies the objective and purpose of the experiment in this
research.
Section 5.2 concentrates on how the experimentation is designed and discusses
the hypothesis that is tested in the experiment.
Section 5.3 explains how the experiment process is carried out.
Section 5.4 explains the data analysis procedure.
Section 5.5 summarizes the results of the experiment.
5.1 Introduction
In this research, the author selected the experiment as the research method to answer the
third research question because, according to C. Wohlin et al. [36], among the five research
methods [35], [36], experimentation is the best method to evaluate the accuracy of models.
In this chapter, the author evaluates the source mode and destination mode in Snort IPS and
compares the performance of the modes based on the metrics selected in section 3.4.2;
experimentation therefore best suits this research and also helps to validate the results. The
execution steps of the experiment are as follows:
Planning: The context of the experiment along with the dependent variables and
independent variables in the experiment are explained.
Experiment Design: Testbed design, malicious and legitimate packets generation
design and metrics evaluation design are explained.
Operation: The implementation of the experiment design is explained in this
section.
Results and Analysis: The experiment results and analysis are provided in this
section.
Considering the context of this research, i.e., to evaluate the NGIPS techniques that resulted
from the findings of the literature review, the author found that Snort is the most widely
used tool in the cloud environment to detect or prevent DoS and DDoS attacks. It was found
that the authors of [41] and [61] proposed two different techniques using the Snort tool. In
those papers, the authors did not mention the other mode, nor did they provide evidence on
whether the two techniques perform in a similar way or differ. This raised the curiosity of
the author of this thesis to find out whether there is any performance difference between
those two modes. To evaluate those techniques, the author used the most commonly used
metrics, which are listed in section 3.4.
Among the identified metrics, the most frequently used are bandwidth, CPU load, latency,
reliability, memory usage and packet loss rate. Based on resource availability, and
considering the list of the most widely used NGIPS techniques from the literature review,
Snort was selected for the experiment.
The other techniques are not considered for evaluation in this research because of limited
resource availability, and because the expected duration to set up the other techniques found
through the literature review and to conduct the experiments is large.
The main purpose of this experiment is to compare the two proposed configurations, based
on metrics such as bandwidth, CPU load, latency, reliability, memory usage and the rate of
malicious packets dropped, in the same environment, and to determine the better IPS
technique for preventing malicious DoS and DDoS packets from entering the cloud
environment. The entity studied in the experiment is Snort IPS.
5.1.2 Quality focus
To evaluate the IPS, real-time traffic should be generated, which is very difficult to create
manually. Therefore, an automated tool, LOIC, has been used in this framework, because it
is one of the best tools for generating real-time attack traffic towards the server
[63][64][65]. Three types of DDoS attacks are used to attack the server machine: TCP,
UDP, and HTTP-based attacks. A brief description of the attacks and the tool is given in
chapter 2. To maintain the same amount of attack traffic throughout the experiment process,
the Tcpreplay tool is used.
The process and results of the experiment are explained in the subsections below. The
entire process is designed and conducted by following the guidelines of Wohlin [36].
5.2 Planning
Planning of the experiment is vital for proper validation of the results [36]. The experiment
is designed and conducted by a non-professional, so it may not be as accurate as an
experiment conducted by professionals. In this section, the independent and dependent
variables are selected, which helps in designing the experiment.
Independent variables: the traffic rate of attack packets generated per second and
the IPS rule signatures, which differ between source mode and destination mode for
detecting and preventing the attack, are set as the independent variables in the experiment.
Dependent variables: the measured output values of the selected metrics are the
dependent variables.
5.3 Experiment Design
In this section, the author explains how the experiment process is designed to evaluate the
IPS techniques. This section is subdivided into three subsections, structured as follows.
Section 5.3.1: This section concentrates on how the testbed is designed; the
hardware and software requirements of the machines are explained.
Section 5.3.2: This section discusses how the attack and genuine traffic is generated
in the experiment environment.
Section 5.3.3: This section summarizes the metrics used in evaluating the IPS
techniques.
The software and hardware requirements used in the machines are described in table 13.
Tcpreplay: the machine used to replay the same data set of traffic.
Snort IPS: the IPS router responsible for dropping the malicious packets.
Server: a server providing the web resource to the cloud users.
These metrics are also the most frequently used ones in studies with a similar working
environment. The above-mentioned metrics were therefore selected; they are the metrics
most commonly used in evaluating IDSs, which also helps in evaluating the IPS, and they
are among the metrics most frequently reported in the literature review.
5.4 Operation
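The exact Snort invocation used in the thesis is not reproduced in the extracted text; a
minimal sketch of an inline-mode invocation matching the description below (the paths are
assumptions, and the DAQ/interface options are omitted) is:
snort -Q -c /etc/snort/snort.conf -A console -l /var/log/snort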
The above command orders Snort to run in inline mode, verifying the rules referenced in the
snort.conf file, printing the output to the console and keeping a copy in the log file. The -Q
option orders Snort to read packets from the inline queue.
The rules explained in chapter 4 are saved in a file DDoS.rules in the location
/var/log/snort/. The filtering rules are implemented in the "snort.conf" file.
5.4.2 Legitimate and Malicious Traffic Generation
As mentioned in section 5.3.2, the LOIC tool is used to generate the malicious traffic. This
tool requires the server IP address to generate the traffic, and the mode of the attack can be
selected from the GUI. The pictorial representation of using the LOIC tool to generate the
TCP flooding is given in figure 4.
To generate the legitimate traffic, the JMeter tool is used. This tool generates traffic based
on the configured user threads, ramp-up period and loop count. The author selected two user
threads with a ramp-up period of 4 seconds and a loop count of 5. The pictorial
representation of using the JMeter tool to generate the legitimate traffic is given in figure 5.
Figure 7: JMeter tool generating legitimate traffic to the server
The traffic generated by the tools is captured and replayed by the Tcpreplay tool. The
packets are captured using Wireshark and are saved as a ".pcap" file.
To evaluate the latency, the traffic is set to loop, as the experiment should run for a long
period of time. The command to replay the traffic is crafted as shown below.
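The original command did not survive extraction; a minimal sketch of a looping Tcpreplay
invocation (the interface name, capture file name and packet rate are assumptions) is:
tcpreplay --intf1=eth0 --loop=0 --pps=2000 attack_traffic.pcap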
In this experiment, the traffic is captured with Wireshark on the machine where the IPS is
installed. The captured packets are saved and used to replay the traffic from the attacker
machine.
Bandwidth: the available bandwidth is measured with the iperf tool, with the server
side started as follows:
Iperf -s
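For completeness, a matching client-side invocation could look as follows; the server
address mirrors the one used in the ping command below, and the duration is an
assumption:
iperf -c 192.168.248.2 -t 30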
Latency: the latency metric is measured as the Round Trip Time (RTT). To measure
the latency, the author used the ping tool [70]. 100 packets are sent over the network
to the server through the IPS, and the average time is calculated for the packets
acknowledged by the server. The command for ping is as follows:
Ping 192.168.248.2 -c 60
Data throughput: Wireshark is used to calculate the throughput of the IPS under attack.
Reliability: malicious traffic is replayed with Tcpreplay for five hours, and it is
evaluated whether the IPS is able to handle the attack for such a long period of time.
CPU load: uptime is the Unix tool used to calculate the CPU load. This metric helps
in calculating the effect of the malicious traffic on the resources of the system.
Memory Utilization: the memory consumption of the system is calculated using the
free Unix tool. This metric helps to evaluate the impact of the attack traffic on
memory consumption.
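For reference, the load and memory figures can be read with the standard invocations
below; the -m flag, which reports the values in megabytes, is an assumption about how the
figures were collected:
uptime
free -m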
Table 14 shows the list of tools used for calculating the metrics and the deployment
location of the tools.
5.5 Results
By following the experimental procedure explained in sections 5.3 and 5.4, the obtained
results are presented in this section. To ensure that the experiment environment is the same
when evaluating the IPS techniques with each metric, the Snort IPS machine is rebooted
after the collected data is saved, and the data is analyzed later. Each iteration of the
experiment is repeated ten times, and the mean value is calculated and recorded in the
results section.
5.5.1 CPU Load
The CPU load varies with the packet processing load of the IPS. The TCP protocol uses
fewer resources than the other types of attack: TCP packets cause about 50% to 60%
resource usage, whereas UDP and mixed packets cause about 70% CPU usage. This might
be because TCP packets take less processing time than the other packets. For these packets,
once the rate of 8000pps is reached, the load is stable (55% to 58%) until the rate reaches
30000pps. The CPU load is stable because there might be some optimization process taking
place in Snort when that rate is reached. The mix protocol follows the trend of the UDP
packets, which indicates that the UDP protocol needs more packet processing time. The
graphical representation of the CPU load for the different attack protocols in source and
destination modes is shown in figure 6 and figure 7.
Figure 8: CPU Load results in destination mode
5.5.2 Memory Utilization
The memory usage for both modes is unpredictable. This might be because a Linux virtual
machine running on the same host is used; with such a setup, the memory behaviour cannot
be predicted, as the host machine influences the memory of the guest.
Figure 10: Memory load results in destination mode
5.5.3 Bandwidth
When there is no traffic on the network, the measured bandwidth was about 60Mbps. When
the traffic was flooded into the network, the bandwidth was reduced, and when the packets
reach a rate of 14000pps the bandwidth is almost zero. Snort in destination mode cannot
keep bandwidth available when the traffic rate reaches about 14000pps.
Figure 12: Bandwidth availability results in destination mode
5.5.4 Latency
As the traffic rate in the network increases, the latency increases. When the rate is about
2000pps, the latency of all the attack protocols lies in roughly the same range of about 5ms.
Between traffic rates of 10000pps and 24000pps, the latency of the protocols decreased,
which suggests that Snort in destination mode tries to optimize when latency becomes more
important, up to its threshold limit.
Figure 14: Latency results in destination mode
In source mode, however, the results vary slightly: the mix protocol shows a lower packet
loss rate, which can be attributed to the processing rate of the UDP protocol, because the
time taken to process the ICMP packets is shorter.
Figure 16: Packet Loss rate in Source mode
6 ANALYSIS
To validate the results obtained from the experiment, statistical analysis should be
performed [71] [72]. The statistical analysis process carried out in this research follows the
guidelines in [36], [71].
To validate the results, the author performed the Wilcoxon signed-rank test. This is a
non-parametric test used to compare the difference between two groups whose samples are
paired with each other. The author chose this test because the data form two matched groups
of paired measurements. Support for the selection of this statistical test can be found in
[36], [73], [74], where the authors performed the Wilcoxon test to compare two algorithms
on data similar to the data in this thesis.
“you can evaluate the MSE to know about how your mean estimator is good with respect the
standard deviation of the sample data. If your MSE is small, it means that the average value
of the parameters is a good representation of the measured data” -Emiliano Casalicchio
The procedure followed by the author in calculating the MSE is shown step by step below.
Step 1: The error value is calculated for each repetition using the formula
Error = (X - µ)
where µ is the mean value of the data set and X is the corresponding data value.
Step 2: Calculate the square of each error value, (Error)². Executing this step eliminates the
negative values present in the error data set.
Step 3: Calculate the mean of the squared errors,
MSE = SSE / N
where SSE = Σ(Error)² is the sum of squared errors and N is the number of samples in the
data set (the number of repetitions).
It is found that the MSE for all data samples ranges between 0.06 and 0.71. From this we
can conclude that the mean values recorded for the Wilcoxon test are good representations
of the measured data.
The example calculation of the MSE in source mode for the CPU load metric at a traffic
rate of 2000pps is shown in Table 16.
Table 16: Calculation of MSE for 2000pps in source mode (CPU Load metric); the mean of
the ten values is µ = 11.58%.
X value (%)   Error = (X - µ)   (Error)²
10.5 -1.08 1.16
12.1 0.52 0.27
12.4 0.82 0.67
12.6 1.02 1.04
11.6 0.02 0.00
10.8 -0.78 0.60
11.6 0.02 0.00
11.6 0.02 0.00
12.4 0.82 0.67
10.2 -1.38 1.90
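As a check on the table above, summing the rounded squared errors gives
SSE ≈ 1.16 + 0.27 + 0.67 + 1.04 + 0.00 + 0.60 + 0.00 + 0.00 + 0.67 + 1.90 ≈ 6.31, so
MSE ≈ 6.31 / 10 ≈ 0.63, which falls within the 0.06 to 0.71 range reported above.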
Hypothesis assumption
To perform the statistical analysis, a null hypothesis and an alternative hypothesis are
designed. They are:
H0: the two modes in Snort have no difference between them.
H1: there is a difference between the two modes.
The performance difference d = (Xi - Yi), where Xi is the destination mode value and Yi is
the source mode value of a data pair, is calculated between the two modes, and a sign S is
given to each difference value: +1 for a positive difference and -1 for a negative difference.
Next, the absolute values of the performance differences are calculated and the
corresponding ranks are assigned from 1 to 16, since there are 16 data pairs. The signed
rank is calculated by multiplying S by the rank of each data pair. Then the positive sum T+
and the negative sum T- of the signed ranks are calculated. The author observed that the
smaller of T+ and T- for every metric is less than the critical value at a significance level of
0.05.
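Written compactly, the procedure above computes T+ as the sum of the ranks of the pairs
with a positive difference, T- as the sum of the ranks of the pairs with a negative difference,
and the test statistic W = min(T+, T-); the null hypothesis is rejected when W is less than or
equal to the critical value for n = 16 pairs at the 0.05 significance level. This is the standard
formulation of the Wilcoxon signed-rank test, restated here for clarity rather than taken from
the thesis text.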
Based on this, the null hypothesis is rejected for every metric considered in this thesis,
which indicates that there is some performance difference between the two rate filter modes
of the Snort NGIPS technique.
The statistical table for the CPU Load metric for the source and destination modes is shown
in the table.
For all the metrics, the resulting data were similar, indicating that the null hypothesis can be
rejected. So, by statistical analysis, we can say that there is some difference between the two
modes in the Snort NGIPS. As this is the case for all the metrics, there is enough evidence
to conclude that the two modes have some performance difference between them.
To determine which mode performs better, the T+ and T- values are observed: the negative
sum T- is the value below the critical value, so we can say that the destination mode
performs better compared to the source mode.
7 DISCUSSION
This chapter discusses the validity threats in this research and the answers to the research
questions.
RQ 1: What are the existing Next-Generation Intrusion Prevention System (NGIPS)
techniques that can be deployed in the cloud computing environment against DoS and
DDoS attacks?
ANS: A literature review was conducted to answer this research question. The NGIPS
techniques that can be deployed in the cloud environment are presented in the table in
section 3.4.1. There is no single standard technique for deploying an NGIPS in cloud
computing, and we can conclude that there are no standard semantics considered for
developing an NGIPS. From the results of the literature review, it was found that Snort is
one of the most widely used tools for either detecting or preventing the attacks. The authors
of [59], [41] and [52] used Snort along with their proposed architectures to defend against
DoS and DDoS attacks.
RQ 2: What are the different metrics used for evaluating Next-Generation Intrusion
Prevention Systems (NGIPS) for Cloud Computing against DoS and DDOS attacks?
ANS: Different metrics were collected from the literature obtained by performing the
literature review. The list of identified metrics is given in section 3.4.2. There are some
common metrics that are used in evaluating the developed techniques. Some common
metrics used for evaluating the NGIPS techniques are CPU load, memory usage, bandwidth
availability, throughput, true positive rate, false positive rate, true negative rate, false
negative rate and accuracy. For detecting the attack packets, most of the NGIPS techniques
sniff the network and rely on signature-based detection algorithms. This can be explained by
the fact that the developed NGIPS techniques are mostly network-based IPSs and follow
signature-based algorithms. But there are certain mechanisms, such as the packet resonance
strategy implementation [38], which use a schematic detection method to prevent the
attacks from reaching the data center.
RQ 3: What is the performance of “track by source” mode and “track by destination” mode
in Snort Non-Traditional Intrusion Prevention System (IPS) against DoS and DDoS attacks?
ANS: When the two RATE_FILTER modes in Snort are compared using the metrics CPU
utilization, memory usage, latency, bandwidth and rate of packet loss, the results show that
there is some difference between the two modes. From the statistical analysis results, there
is enough evidence to conclude that the destination mode provides better results compared
to the source mode.
The CPU load depends on the traffic rate in the network: when the rate of traffic increases,
Snort drops packets in proportion to the traffic rate. Regarding the packet drop rate, both
modes show similar results based on the trigger rate of the rule.
Using the source mode gives a stable memory usage, but in destination mode the memory
usage increases with the traffic rate when TCP traffic is not considered. As the testbed is
implemented in a virtual machine on the same host, the memory results cannot be validated.
The rate filter in source mode does not protect the cloud service provider if the attacker uses
different zombie machines, each sending requests at a rate below the threshold limit of the
rule filter. And the rate filter in destination mode may also drop the packets of legitimate
users once the filter is triggered by the malicious activity of the attackers.
When the characteristics of the two modes are studied, the time at which the rule triggers
can vary based on the attacker. If the attacker uses many botnet machines, each below the
rate limit declared in the Snort rules, then the rate filter in source mode will never trigger.
In destination mode, there is a chance of dropping the legitimate users' packets if the
requests to the server reach the threshold limit.
8 CONCLUSION AND FUTURE WORK
This research was an attempt to compare the two rate-filter modes of the Snort NGIPS
technique, because Snort is the most widely used technique in the cloud environment to
defend against DoS and DDoS attacks. The evaluation is done based on the selected metrics
to determine which mode is better, and statistical analysis is performed to validate the data.
The literature review results show that NGIPS techniques mostly concentrate on
network-based intrusion detection for handling DoS and DDoS attacks. While sniffing the
network for malicious packets, the NGIPS techniques use signature-based methods.
The experiment results show that Snort (in both source and destination modes) is capable of
handling DoS and DDoS attacks effectively until the packet rate in the network reaches
about 30000pps. But the main drawback is the rate of packet drop: Snort drops most of the
legitimate packets along with the malicious packets (the true positives). As a result,
legitimate users cannot access the server.
Both configurations have shown high reliability in handling the attacks for up to five hours.
However, the other metrics were not measured during this long-run test, so there might be
drops of legitimate users' packets along with the malicious packets as the duration
increases.
Real-world DoS and DDoS attacks may be much more complex than the experiment
environment designed in this research, but the attack protocols used in this experiment give
strength to the validation of the results of the two modes of the Snort rate_filter technique.
In the future, this work can be extended by considering other attack tools in the same
testbed and evaluating the results. The testbed designed for this experiment is capable of
being tested with other data sets such as DARPA, or with other tools such as Bro. It can
also be extended with other types of DoS and DDoS attacks and other metrics implemented
in the same testbed.
REFERENCES
[1] P. Yadav and S. Sujata, “Security Issues in Cloud Computing Solution of DDOS and
Introducing Two-Tier CAPTCHA,” Int. J. Cloud Comput. Serv. Archit., vol. 3, no. 3,
pp. 25–40, 2013.
[2] K. Hwang, S. Kulkarni, and Y. Hu, “Cloud security with virtualized defense and
reputation-based trust management,” in 8th IEEE International Symposium on
Dependable, Autonomic and Secure Computing, DASC 2009, 2009, pp. 717–722.
[3] A. Chonka, Y. Xiang, W. Zhou, and A. Bonti, “Cloud security defence to protect
cloud computing against HTTP-DoS and XML-DoS attacks,” J. Netw. Comput.
Appl., vol. 34, no. 4, pp. 1097–1107, 2011.
[4] M. Rahman and W. M. Cheung, “A Novel Cloud Computing Security Model to
Detect and Prevent DoS and DDoS Attack,” J. Adv. Comput. Sci. Appl., vol. 5, no. 6,
pp. 119–122, 2014.
[5] D. Sequeira, “Types of,” in Intrusion Prevention Systems: Security’s Silver Bullet?,
2003, pp. 36–41.
[6] S. Piper, CISSP, and SFCP, Intrusion Prevention Systems for Dummies. 2011.
[7] K. Nagaraju and R. Dr.Sridaran, “An Overview of DDoS Attacks in Cloud
Environment,” Int. J. Adv. Netw. Appl., pp. 124–127.
[8] A. Harper, S. Harris, J. Ness, C. Eagle, G. Lenkey, and T. Williams, “Gray Hat
Hacking The Ethical Hackers Handbook,” Jan. 2011.
[9] R. V. Deshmukh and K. K. Devadkar, “Understanding DDoS attack & its effect in
cloud environment,” Procedia Comput. Sci., vol. 49, no. 1, pp. 202–210, 2015.
[10] S. Subashini and V. Kavitha, “A survey on security issues in service delivery models
of cloud computing,” J. Netw. Comput. Appl., vol. 34, no. 1, pp. 1–11, Jan. 2011.
[11] A. Carlin, M. Hammoudeh, and O. Aldabbas, “Defence for Distributed Denial of
Service Attacks in Cloud Computing,” in The International Conference on Advanced
Wireless, Information, and Communication Technologies (AWICT 2015), 2015, vol.
73, no. (AWICT 2015), pp. 490–497.
[12] N. Gruschka and M. Jensen, “Attack surfaces: A taxonomy for attacks on cloud
services,” Proc. - 2010 IEEE 3rd Int. Conf. Cloud Comput. CLOUD 2010, pp. 276–
279, 2010.
[13] S. M. Specht and R. B. Lee, “Distributed Denial of Service: Taxonomies of Attacks,
Tools and Countermeasures,” in 17th International Conference on Parallel and
Distributed Computing Systems, 2004, no. September, pp. 543–550.
[14] A. Patel, M. Taghavi, K. Bakhtiyari, and J. Celestino Júnior, “An intrusion detection
and prevention system in cloud computing: A systematic review,” J. Netw. Comput.
Appl., vol. 36, no. 1, pp. 25–41, 2013.
[15] B. B. Gupta, R. C. Joshi, and M. Misra, “Distributed Denial of Service Prevention
Techniques,” Int. J. Comput. Electr. Eng., vol. 2, no. 2, pp. 268–276, 2012.
[16] B. Di Martino, G. Cretella, and A. Esposito, “Classification and positioning of cloud
definitions and use case scenarios for portability and interoperability,” Proc. - 2015
Int. Conf. Futur. Internet Things Cloud, FiCloud 2015 2015 Int. Conf. Open Big
Data, OBD 2015, pp. 538–544, 2015.
[17] L. Wenjuan, P. Lingdi, and X. Pan, “Use trust management module to achieve
effective security mechanisms in cloud environment,” in Electronics and Information
Engineering (ICEIE), 2010 International Conference, 2010, pp. V1– 14–V1–19.
[18] S. XIaodong, C. Guiran, and L. Fengyun, “A Trust Management Model to Enhance
Security of Cloud Computing Environments,” in Second International Conference on
Networking and Distributed Computing (ICNDC), 2011, pp. 244–248.
[19] P. Mell and T. Grance, “The NIST definition of cloud computing,” NIST Spec. Publ.,
vol. 145, p. 7, 2011.
[20] N. Jeyanthi, N. C. S. N. Iyengar, P. C. M. Kumar, and A. Kannammal, “An enhanced
entropy approach to detect and prevent DDOS in cloud environment,” Int. J.
Commun. Networks Inf. Secur., vol. 5, no. 2, pp. 110–119, 2013.
[21] I. Sattar and M. Shahid, “A Review of Techniques to Detect and Prevent Distributed
Denial of Service ( DDoS ) Attack in Cloud Computing Environment,” vol. 115, no.
8, pp. 23–27, 2015.
[22] R. Aishwarya and S. Malliga, “Intrusion Detection System- An Efficient way to
Thwart against Dos / DDos Attack in the Cloud Environment,” Ieee, 2014.
[23] W. Alosaimi and K. Al-Begain, “An enhanced economical denial of sustainability
mitigation system for the cloud,” in International Conference on Next Generation
Mobile Applications, Services, and Technologies, 2013, pp. 19–25.
[24] M. V. and J. V. Kouril, Daniel, Tomas Rebok, Tomáš Jirsik, Jakub Čegan, Martin
Drasar, “Cloud-based Testbed for Simulation of Cyber Attacks,” Proc. 2014 IEEE
Netw. Oper. Manag. Symp., 2014.
[25] R. Latif, H. Abbas, and S. Assar, “Distributed Denial of Service (DDoS) Attack in
Cloud- Assisted Wireless Body Area Networks: A Systematic Literature Review,” J.
Med. Syst., vol. 38, no. 11, 2014.
[26] L. Garber, “Denial-of-service attacks rip the internet,” Computer (Long. Beach.
Calif)., vol. 33, no. 4, pp. 12–17, 2000.
[27] J. M. Kim, H. Y. Jeong, I. Cho, S. M. Kang, and J. H. Park, “A secure smart-work
service model based OpenStack for Cloud computing,” Cluster Comput., vol. 17, no.
3, pp. 691–702, 2014.
[28] Z. Trabelsi, K. Hayawi, A. Al Braiki, and S. S. Mathew, Network Attacks and
Defenses: A Hands-on Approach. CRC Press, 2012.
[29] E. Anitha and S. Malliga, “A packet marking approach to protect cloud environment
against DDoS attacks,” 2013 Int. Conf. Inf. Commun. Embed. Syst. ICICES 2013, pp.
367–370, 2013.
[30] J. Mirkovic and P. Reiher, “A taxonomy of DDoS attack and DDoS defense
mechanisms,” SIGCOMM Comput. Commun. Rev., vol. 34, no. 2, pp. 39–53, 2004.
[31] A. R. Kumar, P. Selvakumar, and S. Selvakumar, “Distributed denial-of-service
(DDoS) threat in collaborative environment - A survey on DDoS attack tools and
traceback mechanisms,” 2009 IEEE Int. Adv. Comput. Conf. IACC 2009, no. March,
pp. 1275–1280, 2009.
[32] S. Dietrich, N. Long, and D. Dittrich, “Analyzing distributed denial of service tools:
The shaft case,” Usenix Assoc. Proc. Fourteenth Syst. Adm. Conf. (Lisa Xiv), no.
Lisa, pp. 329–339, 2000.
[33] K. Scarfone and P. Mell, “Guide to Intrusion Detection and Prevention Systems (
IDPS ) Recommendations of the National Institute of Standards and Technology,”
Nist Spec. Publ., vol. 800–94, p. 127, 2007.
[34] A. Bakshi and B. Yogesh, “Securing cloud from DDOS attacks using intrusion
detection system in virtual machine,” 2nd Int. Conf. Commun. Softw. Networks,
ICCSN 2010, pp. 260–264, 2010.
[35] B. Kitchenham and S. Charters, “Guidelines for performing Systematic Literature
Reviews in Software Engineering,” Engineering, vol. 2, p. 1051, 2007.
[36] C. Wohlin, Experimentation in Software Engineering, vol. 53, no. 9. 2013.
[37] M. Petticrew and H. Roberts, “Systematic Reviews in the Social Sciences,” vol. 42,
no. 5, p. 336, 2008.
[38] N. Jeyanthi and N. C. S. N. Iyengar, “Packet resonance strategy: A spoof attack
detection and prevention mechanism in cloud computing environment,” Int. J.
Commun. Networks Inf. Secur., vol. 4, no. 3, pp. 163–173, 2012.
[39] Q. Yan, F. R. Yu, S. Member, Q. Gong, and J. Li, “Software-Defined Networking (
SDN ) and Distributed Denial of Service ( DDoS ) Attacks in Cloud Computing
Environments : A Survey , Some Research Issues , and Challenges,” IEEE Commun.
Surv. Tutorials, vol. 18, no. c, pp. 2–23, 2015.
[40] T. Vissers, “DDoS defense system for web services in a cloud environment,” Futur.
Gener. Comput. Syst., vol. 37, pp. 37–45, 2014.
[41] B. Khadka, C. Withana, A. Alsadoon, and A. Elchouemi, “Distributed Denial of
Service attack on Cloud : Detection and Prevention,” vol. 4, no. September, pp. 210–
215, 2015.
[42] Y. Lanjuan, “Defense of DDoS attack for cloud computing,” in IEEE International
Conference on Computer Science and Automation Engineering (CSAE), 2012, pp.
626–629.
[43] W. Alosaimi, M. Zak, and K. Al-Begain, “Denial of Service Attacks Mitigation in the
Cloud,” in Proceedings - NGMAST 2015: The 9th International Conference on Next
Generation Mobile Applications, Services and Technologies, 2015, pp. 47–53.
[44] T. Karnwal, “A Comber Approach to Protect Cloud Computing against XML DDoS
and HTTP DDoS attack,” in Electrical, Electronics and Computer Science, IEEE
Students’ Conference, 2012, pp. 1–5.
[45] N. Jeyanthi and P. C. Mogankumar, “A virtual firewall mechanism using army nodes
to protect cloud infrastructure from DDoS attacks,” Cybern. Inf. Technol., vol. 14, no.
3, pp. 71–85, 2014.
[46] C. Borean, R. Giannantonio, M. Mamei, D. Mana, A. Sassi, and F. Z. B, “Internet and
Distributed Computing Systems,” vol. 9258, pp. 143–154, 2015.
[47] H. Wang, “A moving target DDoS defense mechanism,” Comput. Commun., vol. 46,
pp. 10–21, 2014.
[48] S. Yu, Y. Tian, S. Guo, and D. Wu, “Can We Beat DDoS Attacks in
Clouds?(Supplementary Material),” Nsp.Org.Au, vol. 25, no. 9, pp. 1–4, 2000.
[49] F. Guenane, M. Nogueira, and G. Pujolle, “Reducing DDoS attacks impact using a
hybrid cloud-based firewalling architecture,” 2014 Glob. Inf. Infrastruct. Netw. Symp.
GIIS 2014, 2014.
[50] H. Fujinoki, “Dynamic Binary User-Splits to protect cloud servers from DDoS
attacks,” 2013 2nd Int. Conf. Innov. Comput. Cloud Comput. ICCC 2013, pp. 125–
130, 2013.
[51] N. Jeyanthi, H. Shabeeb, M. A. S. Durai, and R. Thandeeswaran, “Reputation Based
Service for Cloud User Environment,” Int. J. Eng., vol. 27, no. 8, pp. 1179–1184,
2014.
[52] R. Anandhi and V. N. Raj, “Prevention Of DDoS Attacks On Distributed Cloud
Servers By Port Lock Mechanism,” ARPN J. Eng. Appl. Sci., vol. 11, no. 5, pp. 3013–
3019, 2016.
[53] M. Malekzadeh, A. A. A. Ghani, and S. Subramaniam, “A new security model to
prevent denial-of-service attacks and violation of availability in wireless networks,”
Int. J. Commun. Syst., vol. 25, no. 7, pp. 903–925, Jul. 2012.
[54] F. Guenane, M. Nogueira, and A. Serhrouchni, “DDOS Mitigation Cloud-Based
Service,” 2015 IEEE Trust., pp. 1363–1368, 2015.
[55] S. S. Chopade, K. U. Pandey, and D. S. Bhade, “Securing cloud servers against
flooding based DDOS attacks,” Proc. - 2013 Int. Conf. Commun. Syst. Netw. Technol.
CSNT 2013, pp. 524–528, 2013.
[56] W. Dou, Q. Chen, and J. Chen, “A confidence-based filtering method for DDoS
attack defense in cloud environment,” Futur. Gener. Comput. Syst., vol. 29, no. 7, pp.
1838–1850, 2013.
[57] Q. Jia, H. Wang, D. Fleck, F. Li, A. Stavrou, and W. Powell, “Catch me if you can: A
cloud-enabled ddos defense,” in Proceedings of the International Conference on
Dependable Systems and Networks, 2014, pp. 264–275.
[58] J. B. D. Joshi, H. Takabi, and S. T. Zargar, “DCDIDP: A distributed, collaborative,
and data-driven intrusion detection and prevention framework for cloud computing
environments,” in 7th International Conference on Collaborative Computing:
Networking, Applications and Worksharing (CollaborateCom), 2011, pp. 332–341.
[59] B. Joshi, “Securing cloud computing environment against DDoS attacks,” in 2012
International Conference on Computer Communication and Informatics (ICCCI),
2012, pp. 1–5.
[60] V. Huang, "A DDoS Mitigation System with Multi-stage Detection and Text-Based
Turing Testing in Cloud Computing," in 27th International Conference on Advanced
Information Networking and Applications Workshops (WAINA), 2013.
[61] B. Buchanan, F. Flandrin, R. Macfarlane, and J. Graves, "A Methodology To Evaluate
Rate-Based Intrusion Prevention System Against Distributed Denial of Service,"
pp. 17–22, 2010.
[62] M. N. Banu, “Cloud Computing Simulation Tools - A Study,” vol. 7, no. 1, pp. 13–
25, 2015.
[63] M. Sauter, "'LOIC Will Tear Us Apart': The Impact of Tool Design and Media
Portrayals in the Success of Activist DDOS Attacks," Am. Behav. Sci., vol. 57, no. 7,
pp. 983–1007, 2013.
[64] N. Hoque, M. H. Bhuyan, R. C. Baishya, D. K. Bhattacharyya, and J. K. Kalita,
“Network attacks: Taxonomy, tools and systems,” J. Netw. Comput. Appl., vol. 40,
no. 1, pp. 307–324, 2014.
[65] M. H. Bhuyan, H. J. Kashyap, D. K. Bhattacharyya, and J. K. Kalita, “Detecting
distributed denial of service attacks: Methods, tools and future directions,” Comput.
J., vol. 57, no. 4, pp. 537–556, 2014.
[66] X. Xia, Q. Pei, Y. Liu, J. Wu, and C. Liu, “Multi-level logs based web performance
evaluation and analysis,” ICCASM 2010 - 2010 Int. Conf. Comput. Appl. Syst. Model.
Proc., vol. 4, no. Iccasm, pp. 37–41, 2010.
[67] L. Hu, T. Li, N. Xie, and J. Hu, “False Positive Elimination in Intrusion Detection
Based on Clustering,” pp. 519–523, 2015.
[68] J. Corsini, “Analysis and Evaluation of Network Intrusion Detection Methods To
Uncover Data Theft,” Napier University, 2009.
[69] J. Sommers, P. Barford, and V. Yegneswaran, “Toward Comprehensive Traffic
Generation for Online IDS Evaluation,” Development, p. 12, 2005.
[70] H. Jiang and C. Dovrolis, “Passive estimation of TCP round-trip times,” ACM
SIGCOMM Comput. Commun. Rev., vol. 32, no. 3, pp. 75–88, 2002.
[71] E. McCrum-Gardner, “Which is the correct statistical test to use?,” Br. J. Oral
Maxillofac. Surg., vol. 46, no. 1, pp. 38–41, 2008.
[72] M. Berndtsson, J. Hansson, B. Olsson, and B. Lundell, Thesis Projects: A Guide for
Students in Computer Science and Information Systems. 2008.
[73] W. Lu, “Study on characteristic of emerging technology,” in International
Conference on Management and Service Science, MASS 2011, 2011.
[74] H. Oskar, “Mobile Phones and cloud computing,” 2012.
[75] D. Rubin, “Research Guides: Quantitative Analysis Guide: Choose Statistical Test for
1 Dependent Variable.”
APPENDIX A
Table 18: Comparison of DoS and DDoS attack generation tools. Features compared: Tool,
Programming Language, Operating System, Requirements, Attack Type, GUI, Performed
attack type, and Limitation.
APPENDIX B
In the given tables SRC represents source mode, Dest represents destination mode.
Table 21: CPU Load for HTTP flood
PPS   Src (%)   Dest (%)
0 10 10
2000 29.8 30.1
4000 38.1 39.1
6000 66.9 68.5
8000 65.5 66.2
10000 64.9 65.8
12000 64.3 65.3
14000 63.9 64.2
16000 63.1 63.9
18000 62.8 63.5
20000 62.1 63.1
22000 61.2 62.8
24000 59.7 62.6
26000 59.1 61.1
28000 58.2 60.8
30000 57.9 60.2
Table 23: Memory utilization for TCP flood
PPS   Src (MB)   Dest (MB)
0 316 315
2000 321 320
4000 336 339
6000 332 335
8000 339 339
10000 342 340
12000 335 338
14000 335 336
16000 339 338
18000 337 338
20000 340 338
22000 335 337
24000 336 336
26000 337 335
28000 337 334
30000 335 334
Table 25: Memory utilization for HTTP flood
PPS   Src (MB)   Dest (MB)
0 313 312
2000 319 318
4000 325 322
6000 330 328
8000 325 328
10000 333 331
12000 336 335
14000 335 338
16000 341 340
18000 338 339
20000 345 342
22000 335 340
24000 338 339
26000 338 335
28000 339 339
30000 337 336
Table 27: Bandwidth availability for TCP flood
PPS   Src (Mbps)   Dest (Mbps)
0 60.3 60.3
2000 36.2 37.2
4000 19.7 20.95
6000 11.9 12.7
8000 8.89 9.89
10000 7.08 7.08
12000 4.168 4.168
14000 2.15 1.256
16000 1.16 1.076
18000 0.896 0.896
20000 0.705 0.507
22000 0.507 0.119
24000 0.129 0.102
26000 0.105 0.086
28000 0.08 0.07
30000 0.054 0.054
Table 29: Bandwidth availability for HTTP flood
PPS   Src (Mbps)   Dest (Mbps)
0 60.3 60.3
2000 38.861 40.2
4000 25.63 24.4
6000 11.5 11.5
8000 9.89 8.98
10000 6.46 9.89
12000 5.39 5.28
14000 2.62 2.125
16000 1.35 1.18
18000 0.795 1.076
20000 0.599 0.557
22000 0.365 0.1
24000 0.1 0.076
26000 0.076 0.066
28000 0.052 0.0369
30000 0.013 0.0296
Table 31: Latency for TCP flood
PPS   Src (ms)   Dest (ms)
0 1.876 1.876
2000 1.994 2.125
4000 2.215 2.993
6000 2.425 3.632
8000 37.765 40.765
10000 32.453 35.765
12000 32.899 33.899
14000 30.993 31.993
16000 29.899 30.889
18000 30.523 31.523
20000 31.626 32.626
22000 31.898 32.898
24000 32.628 33.628
26000 32.958 33.958
28000 33.485 34.485
30000 33.412 34.995
Table 32: Latency for UDP flood
PPS   Src (ms)   Dest (ms)
0 1.876 1.876
2000 2.113 2.659
4000 2.659 3.123
6000 3.123 3.985
8000 38.657 39.657
10000 38.898 40.895
12000 38.834 39.834
14000 35.357 36.357
16000 34.924 35.924
18000 35.254 36.245
20000 36.215 36.954
22000 36.751 37.751
24000 36.148 37.148
26000 35.851 36.851
28000 35.519 36.519
30000 36.112 36.425
Table 33: Latency for HTTP flood
PPS   Src (ms)   Dest (ms)
0 1.876 1.876
2000 1.958 1.958
4000 2.324 2.324
6000 3.245 2.955
8000 36.324 37.324
10000 37.965 38.965
12000 38.335 39.335
14000 38.992 39.992
16000 37.125 38.125
18000 36.895 37.895
20000 36.325 37.325
22000 36.795 37.759
24000 36.982 37.982
26000 36.743 37.743
28000 36.143 37.143
30000 35.124 36.941
Table 34: Latency for MIX flood
PPS   Src (ms)   Dest (ms)
0 1.876 1.876
2000 2.548 3.548
4000 2.245 4.245
6000 1.127 2.127
8000 31.548 32.548
10000 33.731 34.731
12000 34.234 35.234
14000 34.992 36.129
16000 34.212 35.421
18000 34.324 35.324
20000 33.752 34.752
22000 34.987 35.987
24000 35.578 36.578
26000 36.245 37.245
28000 37.147 38.147
30000 39.125 39.357
Table 35: Packet loss rate in TCP flood
PPS   Src (%)   Dest (%)
0 0 0
2000 0 0
4000 0 0
6000 0 6
8000 12 18
10000 19 27
12000 26 33
14000 34 40
16000 39 45
18000 46 54
20000 50 64
22000 57 67
24000 66 71
26000 70 74
28000 75 76
30000 76 80
Table 37: Packet loss rate in HTTP flood
PPS   Src (%)   Dest (%)
0 0 0
2000 0 0
4000 0 0
6000 1 3
8000 13 19
10000 21 30
12000 30 36
14000 36 43
16000 47 52
18000 51 56
20000 56 59
22000 58 61
24000 61 65
26000 64 69
28000 69 73
30000 70 75