
Received December 23, 2018, accepted January 5, 2019, date of publication January 31, 2019, date of current version February 20, 2019.


Digital Object Identifier 10.1109/ACCESS.2019.2895898

A New Architecture for Network Intrusion Detection and Prevention
WALEED BUL’AJOUL1,2, ANNE JAMES1, AND SIRAJ SHAIKH3
1 Computing and Technology Department, New Hall, Nottingham Trent University, Clifton Campus, Nottingham NG11 8PT, U.K.
2 Computing Department, School of Science, University of Omar Al-Mukhtar, Al Bayda’ 543, Libya
3 Systems Security Group, Institute for Future Transport and Cities, Coventry University, Coventry CV1 5FB, U.K.

Corresponding author: Waleed Bul’ajoul ([email protected])


This work was supported in part by the Nottingham Trent University, Nottingham, U.K., and in part by the University of Omar
Al-Mukhtar, Al-Bayda, Libya.

ABSTRACT This paper presents an investigation, involving experiments, which shows that current network intrusion detection and prevention systems (NIDPSs) have several shortcomings in detecting or preventing rising unwanted traffic and face several threats in high-speed environments. It shows that NIDPS performance can be weak in the face of high-speed and high-load malicious traffic in terms of packet drops, packets left outstanding without analysis, and failure to detect/prevent unwanted traffic. A novel quality of service (QoS) architecture has been designed to increase the intrusion detection and prevention performance. Our research has proposed and evaluated a solution using a novel QoS configuration in a multi-layer switch to organize packets/traffic, together with parallel techniques to increase the packet-processing speed. The new architecture was tested under different traffic speeds, types, and tasks. The experimental results show that the architecture improves network and security performance, covering up to 8 Gb/s with zero packets dropped. This paper also shows that this figure (8 Gb/s) can be improved upon, although it depends on the system capacity, which is always limited.

INDEX TERMS Computer security, computer networks, intrusion detection system, intrusion prevention
system, network architecture, network security, open source, quality of service, security, switch
configuration.

I. INTRODUCTION
Information technology (IT) influences almost every aspect of modern life. Today, various devices are available to meet users’ requirements, such as high machine processor speed and fast networks. Alongside our increasing dependence on IT, there has unfortunately been a rise in security incidents. Threats and attacks may range from stealing personal information from a laptop or network server to stealing the most top-secret information stored in a Security Intelligence Service (SIS). Furthermore, hackers can snoop on users’ online purchases by eavesdropping on their credit card details, or, even more alarmingly, safety-critical systems can be compromised. Multi-faceted attacks and threats have made the implementation of security systems more challenging. Hackers have evolved along with the sophistication of the IT industry. For example, hackers exploit the developments in computer processors and network speeds to increase the volume and speed of malicious traffic that might constitute a Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack [1]–[3]. Network security is therefore extremely important and has developed into an industry aimed at improving applications and hardware platforms to identify and stop network threats.
One of the most established concepts in information security is a defense-in-depth approach, which utilizes a multi-layered structural design in which firewalls, vulnerability assessment tools (anti-virus and anti-worm), and IDPS (Intrusion Detection and Prevention Systems) are employed to prevent any hostile endeavours on network systems and servers. The Network Intrusion Detection and Prevention System (NIDPS) has been designed to serve as the last point of defense in the network architecture. NIDPSs monitor network traffic for any malicious or suspicious activity, creating alerts when operating in detection mode or blocking the offending packets when operating in prevention mode [4], [5].
The detection and prevention mechanisms of the NIDPS are grounded in comparing ingress packets (traffic) to known attack patterns (signature

The associate editor coordinating the review of this manuscript and approving it for publication was Ali Kashif Bashir.

2169-3536 © 2019 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
18558 VOLUME 7, 2019
W. Bul’ajoul et al.: New Architecture for Network Intrusion Detection and Prevention

TABLE 1. Snort-NIDPS reaction to detect malicious packets.

NIDPS mechanism) or identifying unknown malicious patterns in ingress traffic (anomaly NIDPS mechanism). NIDPSs are important in that they:
• counter intrusions or malicious attempts to access networks and systems;
• analyze network traffic and identify hackers’ targets and techniques; and
• detect or prevent unwanted and malicious traffic.
Open source is the most common category of NIDPS software platforms [6]; however, performance in high-speed network communication remains a major issue. Irrelevant alerts (false positives) occur, creating a more difficult job for system security managers. Moreover, despite claims of increased capabilities and efficient performance by several NIDPS vendors, research has shown that systems lack the capabilities required to monitor and analyze high-speed network traffic [7]–[9].
Innovators have created hardware IDPSs that process millions of packets at the same time [10], [11], but these are limited in their capability to perform particular software tasks. In addition, limited memory size is a problem for hardware-based NIDPS solutions. Furthermore, hardware-based NIDPSs offer a high range of processing speeds but are very costly. Software solutions are popular because they are cheaper and offer more flexibility than hardware solutions. This paper focuses on open-source software solutions.
Computer network and internet security face increasing challenges, and many companies rely on NIDPSs to secure their data sources and systems. The need to ensure that the NIDPS can keep up with the increasing demands caused by greater network usage, higher-speed networks and increased malicious activity makes this an interesting area of research and motivated this study.
The paper is organized as follows. Our investigation testbed is described in section II. Section III presents our proposed solution and section IV its evaluation. A discussion and comparison to related research is provided in section V. Finally, section VI gives a conclusion and recommendations for future work.

II. INVESTIGATION TESTBED
To investigate the problem, two experiments were carried out. The Snort NIDPS was configured in its detection (NID) and prevention (NIP) modes. The experiments were conducted to test the performance of Snort NID and NIP modes in detecting and preventing malicious packets under high-speed traffic.
The experimental testbed also incorporated traffic generator tools such as NetScanPro, Packets Generator, the WinPcap capture tool, Packets Traceroute, TCPreplay and Packets flooder. The experiments used performance metrics such as the number of packets analysed, the number of malicious packets detected or prevented, and the number of packets dropped. In this section the two experimental setups are described.

A. DETECTING MALICIOUS PACKETS
In this experiment, the WinPcap, Packets flooder and TCPreplay tools were used to send flood traffic containing signed (known) malicious UDP packets (255 threads at 1 msec intervals) to a physical system at different speeds (see Table 1). The malicious UDP packets were interspersed among other packets transmitted at varying speeds. The following rule was designed to require Snort to detect (alert and log) any UDP threads or malicious packets that contain the content ‘ab.H`..OK..cdef’ and a time to live (TTL) of 132, coming from any source address and port and going to any destination address and port:

alert udp any any -> any any (msg:"Detect Malicious UDP Packets"; ttl:132; content:"|61 62 C2 48 60 AE 97 4F 4B C3 63 64 65 66|"; sid:100004;)

Flood TCP/IP traffic was sent at different bandwidths (Bps) with the 255 malicious UDP packets (threads) interleaved at intervals of 1 millisecond (1 msec). The NIDS rule was set up to check the pattern inside the packets and to flag only the malicious UDP threads for which both conditions (TTL and content) matched.
As shown in Table 1 and Figure 1, Snort NIDS initially analysed every packet that reached the wire. When the 255 malicious UDP packets were sent at 1 msec intervals with TCP/IP flood traffic at 16 bytes per second (16 Bps), Snort alerted and logged more than 99% of the total UDP packets that it analysed. As the flood traffic speed was increased to 200, 1200, 4800 and 60000 bytes per second (Bps), Snort alerted and logged, respectively, 98.84, 97.17, 49.40 and 35.75% of the total malicious packets analysed (see Table 1). Figure 1 shows that the number of missed malicious packet alerts increased as the speed increased. The experiment shows that, when the speed was 60000 Bps, Snort detected less than 36% of the malicious packets analysed (see Table 1).

FIGURE 1. Malicious packets detection.

TABLE 2. Snort-NIDPS reaction to prevent malicious packets.

B. PREVENTING MALICIOUS PACKETS
In this experiment, TCP/IP flood traffic was sent at differing speeds (see Table 2), with 255 malicious UDP packets (threads) again sent at 1 millisecond (1 msec) intervals. Snort was set to prevent UDP threads by using two rule conditions (TTL and content) as follows:

reject udp any any -> any any (msg:"Prevent Malicious UDP Packets"; ttl:120; content:"|C2 48 60 AE 97 4F 4B C3|"; sid:100007;)

These options prevent any malicious UDP packet that matches a TTL value of 120 and a data pattern inside the packet. The hexadecimal sequence in the rule (‘C2 48 60 AE 97 4F 4B C3’) corresponds to the ASCII characters ‘.H`..OK.’ (non-printable bytes shown as dots).
As shown in Table 2 and Figure 2, when the 255 malicious UDP packets were sent at 1 msec intervals and TCP/IP flood traffic at 100 bytes per second (Bps), Snort prevented 100% of the total UDP packets that it analysed. As the flood traffic speed was increased to 10000 bytes per second (10000 Bps), Snort prevented less than 51% of the total malicious packets analysed (see Table 2). Figure 2 shows that the number of missed malicious packets increased as the speed increased. The experiment shows that, when the speed was 60000 Bps, Snort prevented less than 18% of the malicious packets analysed (see Table 2).

III. PROPOSED SOLUTION
The results of the experiments described in section II show that the NIDPS’s performance decreases when faced with heavy and high-speed attacks. This section analyses the problem and then outlines a novel solution to increase NIDPS performance in the analysis, detection, and prevention of malicious attacks.

A. NOVEL NIDPS ARCHITECTURE
Critical analyses were carried out for the experiments presented in sections II(A) and II(B) (see Figures 1 and 2, respectively). The figures show that NIDPS throughput performance suffers when the NIDPS is exposed to a high volume and speed of traffic; more packets are dropped and left outstanding as the speed of traffic increases. Figure 1 shows that the NIDPS’s detection performance decreased when the traffic speed increased: there were more missed alerts and missed logs for packets as the speed of traffic increased.
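Both experiments above hinge on Snort firing only when the two rule conditions (TTL and content) match at once. The following hypothetical Python sketch (not the authors' implementation; the packet representation is an assumption made for illustration) shows that two-condition matching logic:

```python
# Hex content from the detection rule ('ab' + marker bytes + 'cdef').
RULE_CONTENT = bytes.fromhex("61 62 C2 48 60 AE 97 4F 4B C3 63 64 65 66")
RULE_TTL = 132  # ttl:132 in the rule

def rule_matches(ttl: int, payload: bytes) -> bool:
    """Alert only when BOTH conditions hold, as in the Snort rule."""
    return ttl == RULE_TTL and RULE_CONTENT in payload

# Three illustrative packets: only the first satisfies both conditions.
packets = [
    (132, b"junk" + RULE_CONTENT + b"junk"),  # TTL and content match -> alert
    (64, RULE_CONTENT),                       # wrong TTL -> no alert
    (132, b"benign payload"),                 # content absent -> no alert
]
alerts = sum(rule_matches(ttl, payload) for ttl, payload in packets)
print(alerts)
```

Note that the rule itself never drops traffic in detection mode; the missed detections reported above occur earlier, when high-speed traffic outruns the capture buffers and packets are dropped before analysis.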


FIGURE 2. Prevent malicious packets.

Figure 2 shows that the NIDPS prevention performance likewise decreased when the traffic speed increased.
When traffic moves through the network interface card (NIC) to the NIDPS node, packets are stored in the buffer until the other relevant packets have completed transmission to the processing nodes. In the event of high-speed and heavy traffic in multiple directions, the buffer fills up, and packets may then be dropped or left outstanding [13]–[15]. In this case, there is no security concern about the dropped packets, since they are dropped outside the system. However, the existence of outstanding packets that are waiting or have not been processed by a security system (i.e. the NIDPS node) does affect system efficiency.
Packets can also be lost in a host-based IDPS. Most software tools rely on a program such as the kernel, which manages input/output (I/O) requests from software and decodes the requests into instructions to direct the CPU’s data processing. When traffic moves from the interface (NIC) through the kernel’s buffer to the processor space, where most processing nodes are executed, packets are held in the kernel buffer before being processed by the CPU. When some nodes experience a high volume of data, the buffer fills up and packets may be dropped.

FIGURE 3. General model of buffer packet drops.

There are therefore three places where packets can be dropped: in the network, in the host, or in the processor, because all of them depend on buffer size and processing speed. If the packet arrival rate (λ) is greater than the network or host buffer service rate (β), dropped packets (λd) may occur (see Figure 3), and even increasing the buffer speed can affect processor speed and cause packet drops.
For network-based packet loss, the NIDPS node fails to analyze the traffic because the network drops the packets and the node cannot see them. This packet loss has no negative impact on the node’s ability to detect or prevent the malicious packets it receives, but it does have an impact on the receiving system in that useful packets are not delivered. In host-based and processor-based packet loss, the NIDPS node has analysed the traffic because the packets have reached the host system, but the NIDPS node has not been able to process them; therefore, useful packets can be lost. To solve the problem of lost packets in NIDPS, our study investigated the use of a QoS configuration in layer 3 switches with parallel NIDPS technology to organize traffic and improve processing performance. Our study develops a novel QoS architecture (see Figure 4) based on a layer 3 network switch.
A layer 3 switch enables a network to get the best performance from a network traffic delivery system. Through it, packets of various priorities can be delivered on a network in a timely manner. Otherwise, when networks experience high-speed and heavy traffic, each packet has a similar chance of being dropped or modified.
Implementing QoS methods such as queueing, memory reservation, congestion management, and congestion avoidance can give preferential treatment to priority traffic according to its relative importance. Furthermore, QoS technology ensures that network performance is more predictable and that bandwidth utilization is more effective. QoS is used to improve performance in high-speed network events [16]–[18]. QoS can be configured on physical interfaces such as ports and on switch virtual interfaces (SVIs) [16, pp. 294–299], [19], [20], [21, p. 826].
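The drop condition just described (arrival rate λ exceeding buffer service rate β) can be sketched with a toy discrete-time model. The rates and buffer size below are illustrative values chosen for the sketch, not measurements from the paper:

```python
def simulate_buffer(arrival_rate: int, service_rate: int,
                    buffer_size: int, ticks: int):
    """Toy model of Figure 3: packets that find the buffer full are dropped.

    Per tick, `arrival_rate` packets arrive and up to `service_rate`
    are processed. Returns (processed, dropped) totals.
    """
    queued = processed = dropped = 0
    for _ in range(ticks):
        accepted = min(arrival_rate, buffer_size - queued)
        dropped += arrival_rate - accepted   # lambda_d: no buffer space left
        queued += accepted
        served = min(service_rate, queued)   # beta packets leave per tick
        queued -= served
        processed += served
    return processed, dropped

# lambda <= beta: the buffer never overflows, nothing is dropped.
print(simulate_buffer(arrival_rate=80, service_rate=100, buffer_size=200, ticks=100))
# lambda > beta: once the buffer fills, about (lambda - beta) packets drop per tick.
print(simulate_buffer(arrival_rate=150, service_rate=100, buffer_size=200, ticks=100))
```

Increasing `buffer_size` only delays the onset of drops; once the buffer is full, the steady-state drop rate is governed by λ − β, which is why the proposed architecture attacks the service-rate side with queueing and parallel NIDPS nodes.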


In our study, QoS has been used to configure a novel architecture in order to improve overall network traffic and security performance. As shown in Figure 4, the system (switch) interface has been configured with two input queues and four output queues. The queues’ parameters were configured to allow the queues to process traffic as groups of bytes. These load sets of packets equally among the queues and divide traffic into parallel streams in order to increase the rate of packet processing. The system then uses parallel NIDPS nodes to increase NIDPS throughput performance and analyses each egress queue separately to determine whether it is free of malicious code.

FIGURE 4. Novel NIDPS architecture.

A class map and a policy map were made for each input queue. The class map recognizes and classifies a certain type of traffic for each input queue, while the policy map controls and organizes the speed limit for each input queue and applies the limit to all interfaces. The bandwidth, threshold, buffer, memory reservation, and priority (queue and traffic) were configured for all ingress and egress queues to treat and control traffic, in order to help prevent congestion or complete failure through overload.
One queue was configured as an expedited queue. It received prioritized QoS services, and the other queues were not serviced until the bandwidth of the prioritized queue reached its limit. A memory buffer reservation technique was implemented in our novel QoS architecture for each queue, to guarantee that each queue’s buffer could attain more space once it reached its limit. This was achieved by reserving space from an available queue buffer, from the SVI or port interface memory buffers, or by switching to a common memory pool buffer.
ICMP, TCP, and UDP packets, as well as malicious packets, have different characteristics and require different processing. The shaped or shared round robin (SRR), threshold, and priority methods for each output queue offer opportunities to manage various packet types and behaviors differently. For example, when all input and output queue buffers are flooded with traffic, priority queue and threshold map values can prevent buffer overflow [19], [20].
The main idea of our novel QoS architecture is to manage and allocate a specific traffic weight, or set of bytes, into each input queue and to process each output queue individually in parallel, thereby increasing NIDPS processing speed and reducing traffic congestion, even if the traffic is high-load and high-speed. The next section (B) gives more detailed information about our QoS configuration methods, i.e., class and policy maps, ingress and egress queues, SRR, bandwidth, thresholds, buffers, queue memory reservation and priority.

B. THEORETICAL AND TECHNICAL BACKGROUND OF NOVEL ARCHITECTURE
NIDPSs process packets which are carried by IP protocols, e.g. UDP, TCP and ICMP. The IP protocols are checked by NIDPS rules based on a signature database (known signatures/attacks). However, to get the best NIDPS performance, the NIDPS should be implemented in a system which can manage the layer 3 network protocol (the IP layer). In our study, a layer 3 switch has been used to support and improve NIDPS performance. The switch supports QoS configuration as well as the Differentiated Services (DiffServ) architecture.

1) MAPPING TO LAYER 3
Most switches work at layer 2, the data link layer. These switches use the class of service (CoS) value (see Figure 5), which enables differentiation of packets [16], [21, pp. 827–828]. However, layer 2 provides insufficient methods to support switch features such as QoS, dynamic access control lists (ACLs), VLAN features, static IP routing, and policy-based routing (PBR) [5], [21], [22].
Other mechanisms operate at layer 3 of the OSI (Open Systems Interconnection) model, where the DiffServ architecture can be implemented (see Figure 5). For example, DiffServ allows different types of services to be offered depending on a code [18], and allows a policy that gives priority to a certain type of packet [16], [21, pp. 827–828]. The DiffServ architecture is the basis for the QoS implementation in this research. It assigns each packet a classification upon entry that states its priority and its likelihood of being delivered into the network before packets are distributed, and it adjusts each packet for different traffic speeds to ensure timely delivery.
Figure 5 illustrates the relative layers at which CoS and Differentiated Services Code Point (DSCP) values operate. It also illustrates the relevant models (i.e., TCP/IP, Proto-


FIGURE 5. Positioning of CoS and DSCP values.

cols, QoS, and Data Unit) based on the OSI layers [16, p. 191], [21, pp. 827 and 830–833].
Setting the type of service (ToS) field in the IP header can be used to achieve a simple classification which can be carried with the packet across the network [16, p. 32], [21, pp. 830–833]. At layer 2, 802.1Q and 802.1p frames use 3 bits (see Figure 5) for the IP ToS field; at layer 3, IPv4 packets use 6 bits (see Figure 5) for the DSCP in the ToS field to carry the classification information. Regardless of a network’s capability to identify and classify IP packets, hops can offer each IP packet a QoS service.
In the configuration method utilized by this research, the first action changes the switch frame from layer 2 to layer 3 by mapping values from CoS to DSCP. We considered DSCP to be the best choice for the intended usage because differentiated services technology can offer more precise handling of traffic on the network, can classify each packet upon entry into the network interfaces, and allows adjustments to be made for different traffic speeds and loads. The mapping between values determines the delay priority of the packets. CoS has 8 values and DSCP has 64; thus, the DSCP values allow for a higher degree of differentiation [5], [16].

2) QOS CLASSIFICATION AND POLICY METHODS
Classification is the process of assigning data packets to a class or group in order to manage the packets appropriately [16]. QoS features such as a policy map and a class map can be used to achieve this. The class information can be assigned by a switch, a router, or an end host. Policing involves creating a policy that defines a group weight (the number of bytes to be processed together) for the traffic and applying it to the interface. Policing can be applied per direction and can occur on the ingress and egress interfaces. Different types of traffic can be recognized by type and port, and differentiated policies can be set accordingly.
Network QoS technology enables the implementation of a new logical throughput-traffic-forwarding plan in the switch. For the purpose of this research, a physical interface was configured with two input queues and four output queues (see Figure 4). This configuration helps to prevent traffic congestion (which would cause buffer overflow) and to improve buffer throughput performance. A buffer was set for each queue, and a memory reservation method using dynamic memory reservation technology was implemented in order to manage higher traffic loads. After packets were placed into the input queues, class and policy maps were applied to handle packets based on their QoS requirements. Appropriate services were then provided, including bandwidth guarantees, thresholds, queue settings, and priority servicing, through an intelligent ingress and egress queueing mechanism.
The class map information is assigned along the path of a switch and can be used to limit the volume of incoming packets distributed to each traffic class. The default behavior in layer 3 switches using the DiffServ architecture is the ‘‘per-hop’’ method [16, pp. 6 and 940–941], [21, p. 828]. If a switch along the path does not provide consistent per-hop behavior, QoS provides a constructed solution, such as an end-to-end queue solution. The solution is based on a configurable policy map that allows the system to examine packet information closer to the edge of the network, which prevents the core switch from experiencing overload. The output queues are processed individually, with parallel Snort NIDPS nodes implemented so that each output queue has its own NIDPS node (see Figure 4).

3) PARALLEL TECHNOLOGY WITH QOS
Parallel NIDPS is a form of computation in which many NIDPS nodes work simultaneously, operating on the principle that a large incoming data stream can be divided into smaller sets which are processed at the same time. Parallelism of NIDPS


FIGURE 6. High level parallel process for one interface.

can occur at three general levels: the high-level processing node (entire system), the component level (specific tasks are isolated and parallelized), and the sub-component level (functions within a specific task) [23]. The handling of data can also be parallelized, with traffic being split into separate streams to be processed by parallel nodes or components. This is data parallelism, which can occur in various ways at the three general levels of parallelization. In our architecture, parallel management of traffic was implemented through the use of queues (2 input queues and 4 output queues) on an SVI, where component-level parallelism of NIDPS nodes was implemented (see Figure 4 and Figure 6).
The parallelization of data (traffic) distributed through ingress and egress queues into critical and non-critical streams is viewed as multiple traffic parallelism (MTP). Critical pre-processing of traffic is performed on queues to create particular groups of packets (threads) before the traffic is examined by an ingress queue algorithm. Non-critical pre-processing occurred after the packets had been matched to ingress and egress queue policies. The NIDPS node component can be parallelized in either a non-functional or a functional manner. Component-level parallelism is defined as function parallelism of the NIDPS processing node. Individual components of the NIDPS were isolated, and each output queue was given its own processing element (see Figure 4 and Figure 6). The NIDPS was configured from a single-node NIDPS to a multi-node NIDPS. Each node was configured to check for a certain type of packet (e.g. UDP, TCP and ICMP) and was able to access discrete parts of a centralized, common rule base in order to carry out its task. The kernel buffer parameters for each NIDPS node were configured to match the rate of its output queue.

FIGURE 7. Packets classification and marking.

4) QOS CLASSIFICATION, POLICING AND MARKING FOR INGRESS AND EGRESS INTERFACES (QUEUES)
Queue, class, and policy technologies can use access control lists (ACLs) to allow the processing management of different types and patterns of incoming and outgoing packets [22, p. 1]. The novel configuration proposed in this paper uses ACL technology with a class map and SVI queues, as well as a policy map that specifies each type of IP traffic (e.g., ICMP, TCP and UDP) to be processed by implementing parallel output queues with associated parallel NIDPS nodes.
When traffic arrives at the ingress interface of the system, packets are classified through a class map (see Figure 7) that enables packets to be processed as a group of bytes defined by a policy and by ACLs matched with DSCP values. A policy map (see Figure 8) was made to specify the required action for each class. The following procedures constitute the method:
• Classify the traffic with a class map for the SVI and ports. Set ACL rules depending on the kind of traffic/attacks to be detected or prevented. In our experiment, we detect and also prevent malicious UDP packets which came with random high-speed flood traffic. We allowed UDP traffic to be processed in a separate egress interface (queue) and then analysed by a parallel NIDPS node. The other traffic (e.g. TCP, ICMP, etc.) was processed in the other egress queues.
• Organize a rate limit for the system ingress interface processing speed (setting a set group of packets in bytes) for the class traffic. The rate depends on the maximum limit of SVI bandwidth, including memory. In our system we set ‘‘1.124 million’’ bytes (nearly 1 Gb of packets) for


limit of SVI bandwidth including memory. In our system, we set ''1.124 million'' bytes (nearly 1 Gb of packets) for the set of classes, because the maximum limit for each interface in our system is 1 Gbps.
• Each class of traffic is matched to a DSCP value, so packets in that class are marked down to a new DSCP label.

After packets were classified and policed with a specific bandwidth, some were dropped out of the profile (fabric). Each policy can specify what actions should be carried out [16], [21, p. 833], including dropping packets, allowing dropped packets to be modified, allowing packets to pass through without modification, and deciding on a packet-by-packet basis whether a packet is inside or outside the profile. The novel QoS policy-map architecture (see Figure 8) was proposed as follows:
• Dropped packets were modified to be re-processed and mapped with new DSCP values based on the original QoS label. This modification helps to correct and reduce the packets dropped inside the profile.
• When packets are re-processed, they may get out of order. To prevent this, a policy was designed to allow packets to be re-processed in the same queue as the original QoS label.
• The system has the ability to set a speed limit (as a number of bytes) for each input queue.
• If packets are not matched with DSCP values, they are dropped. See Figure 8 for an illustration of the architecture.

FIGURE 8. Packet policing and marking.

A hierarchical policy map was created and applied to the traffic inside the ingress queues. The policy map targeted SVIs and ports. Two types of QoS policy were created: individual and aggregate. Individual QoS applies a separate policy to specify a bandwidth limit for each traffic class. Aggregate QoS specifies an aggregate policy that applies a bandwidth limit to all matched traffic flows. An individual policer affects only packets on a physical interface (i.e., an SVI/port). Furthermore, if more than one type of traffic needs to be classified, it is possible to create more ACLs, class maps, and policy maps [16]. In our experiments, three types of traffic (TCP, UDP, and ICMP) were classified using the ACL, class-map, and policy-map methods.

Switches receive each traffic frame in a token bucket [16, p. 62], [21, p. 835], where an algorithm is used to check for leaks in data transmission. The token-bucket processing speed is set at the same rate as the configured average packet rate and conforms to defined bandwidth limits, allowing a burst of traffic for short periods. Each time a token is added to the bucket, the algorithm checks whether sufficient room is available in the bucket. If not, the packet is marked as non-conforming, and the specified policy action is taken. In our QoS architecture (see Figure 8), packets dropped out of profile were marked down with new DSCP values, and the modified DSCP value was used to generate a new QoS label.

5) QOS THRESHOLDS FOR INGRESS AND EGRESS INTERFACES (QUEUES).
QoS stores packets in the input and output queues according to the QoS label, which is defined and identified by the DSCP values in the packets. Threshold map values can be assigned to the queues [16, p. 260], [21, p. 838], [22]. Our architecture employed weighted tail drop (WTD) thresholds on the ingress and egress queues to manage the bandwidth of each queue and deliver drop precedence for different classifications of packets. If the available space on a destination queue is less than the volume of arriving packets, the threshold is exceeded for that QoS label and the switch drops the packet. DSCP values (which are carried by packets) were mapped to ingress and egress queues, each of which has an allocated buffer space. WTD thresholds were set for the input and output queues (see Figure 9). Each queue has three drop thresholds, which means that different thresholds can be set for different types of packets. Each threshold value represents a percentage of the queue's total buffer.

In our configuration, each ingress queue was assigned a different threshold value. One ingress queue was assigned as a high-priority queue with the maximum queue threshold (the queue can hold frames up to its limit, i.e., a 100% threshold). The other ingress queue was assigned as a lower-priority queue with a lower queue threshold (<100%). Higher-priority packets were directed to the high-priority queue.

6) QOS BUFFER RESERVATION FOR INGRESS AND EGRESS INTERFACES (QUEUES).
Buffers are universal throughout the software and hardware layers of any networked computer system. They are valuable in reducing the impact of traffic-rate variability on the network, especially in the case of traffic-rate spikes. By having


sufficiently large buffers to absorb traffic-rate spikes, the high latencies associated with retransmissions and lost data can be avoided. Buffers are also useful if there is a temporary difference in the rate at which traffic is produced and consumed. However, increasing the buffer size cannot compensate for packet processing that is perpetually slower than the packet arrival rate. Systems (e.g., switches, routers, etc.) may have different buffer configurations. The total rate of all buffers is β, and each ingress and egress buffer of an interface is limited to rate α. The same α applies to all interfaces. The rate of a buffer is the speed at which packets move through it, and this depends on the underlying processing system.

The switch manages buffering across a number of interfaces. An ingress interface has ingress buffers connected to common egress buffers. The switch algorithm is also responsible for scheduling. At each scheduling event, the switch algorithm selects one of the ingress buffers. The packet at the head of the selected buffer is then transmitted to the inside of the switch and, via the egress buffer, to the target system. There are formulations that model the entire switch rather than just one interface [24]. For example, a switch consists of m ingress and n egress interfaces, where each interface has a buffer. Arrival events (packets) arrive at the ingress interfaces (which have specified destination egress interfaces). At the scheduling event, packets at the head of the ingress interface buffers are sent to the egress interface buffers. Here, the switch algorithm matches the packets in the ingress and egress interfaces. According to this matching, the packets in the ingress interfaces are transmitted to the corresponding egress interfaces. In this scheduling task, there is also a buffer for each pair of ingress and egress interfaces; thus, another buffer-management problem arises at the scheduling phase. In the implementation of the QoS architecture proposed in this study, QoS DiffServ was used to assign a value to each packet according to its importance, and the order in which packets are processed through the queues is then decided based on those values. Additional buffers were provided dynamically to the ingress and egress interfaces. A QoS switch was used to control the input and output traffic. A priority queue was implemented for one of the ingress queues in the interface.

FIGURE 9. Novel scheduling architecture for ingress and egress queues.

By default, the ingress buffer rate is the same as the egress buffer rate. However, when the traffic arrival rate (λin) is greater than the total rate of the egress buffers, or one of the n egress buffers has already reached α, packets are dropped (λd > 0). In the novel configuration, the sharing policy was configured so that each ingress queue's buffer corresponds to rate α/2 and each egress queue's buffer to rate α/4, where α is the maximum rate of the interface's buffer. All buffers were assumed to have the same maximum rate.

A buffer reservation technique (see Figure 10) was used to increase buffer size, along with parallel nodes of buffers implemented to increase buffer-speed performance. The total buffer memory space was provided by the system's main common memory pool, with subdivisions for the SVI pools and, further, for the ingress-queue and egress-queue pools (see Figure 10). A buffer distribution scheme was implemented to reserve more space for each egress buffer. Thus, all buffers cannot be consumed by one egress queue, and the system can match buffer space to queue demand. The remaining free common-pool interfaces were set to reserve up to 50% of the available switch memory pool.

FIGURE 10. QoS buffer reservation.

A switch identifies whether the target queue has consumed less buffer space than its reserved volume (under-limit), whether the target queue has consumed all of its reserved amount (over-limit), and whether the system (switch) memory pool has free buffers. If the queue is not over-limit, the switch can reserve buffer space from the interface


pool or from the switch's main common memory pool. If no more space is available in the common pool, or if a queue is over-limit, the switch drops the packet.

7) QOS SHAPED/SHARED ROUND ROBIN (SRR).
After traffic has been classified, marked, and policed in the two ingress queues, each packet is processed into one of four output queues, which feed the parallel NIDPS nodes. QoS also offers SRR technologies, which can vary the interface queue bandwidth and control the buffer rate at which packets are sent [16, p. 260], [21, p. 840]. The shaped SRR function can guarantee each queue a bandwidth limit, but queues cannot share with each other if one or more queues reach their bandwidth limits. The shared SRR function can also guarantee a bandwidth limit for each queue, but the queues can share bandwidth with each other if one or more of them reach their limits. This research used shared mode in the ingress queues and shaped mode in the first three egress queues. In the fourth egress queue, shared mode was set to allow the queue to share traffic with the other available output queues.

Queue technology is placed at specific points in a system (e.g., Layer 3 switches) to help prevent congestion. The total inbound bandwidth of all interfaces may exceed the ring space of internal bandwidth. After packets are processed through classification, policing, and marking, and before they pass into the system (switch) fabric, the system allocates them to input queues. Because multiple input-queue interfaces can simultaneously send packets to output-queue interfaces, ingress packets exit to an internal ring, and outbound queues are allocated from the internal ring. This avoids congestion. The SRR ingress queue transmits packets to the internal ring, while the SRR egress queue sends the packets to the output link.

The novel configurable architecture has a large buffer-space limit and a generous bandwidth allocation for each queue. One of the ingress queues was set as a priority queue, which allowed the system to prioritize packets with particular DSCP values and thereby allocate them a large buffer. It also allows buffer space to be used dynamically, and then adjusts the thresholds for each queue so that packets with lower priorities are dropped when queues are full. This allows the system to ensure that high-priority traffic is not dropped. When traffic arrives at the interface, packets are buffered in the priority ingress buffer (priority queue); if the priority buffer is getting full, the traffic is transmitted to the second ingress buffer. If all ingress buffers are getting full, the packets are dropped, or the switch can reserve another ingress buffer with the same priority, up to n, where n is the maximum number of ingress buffers.

When arriving packets pass through the ingress buffers, the traffic speed is re-controlled to an interface speed limit µ (a number of bytes per second), and the traffic is stored in the SVI QoS ring space, where packets are arranged and managed to exit the interface through the egress queues. µ should be equal to the maximum bandwidth limit of each system interface.

Every arriving packet needs to be sent out of an ingress interface and then placed in egress buffers, which permit an interface to hold packets when there are more packets to be transmitted than can physically be sent (i.e., when experiencing congestion). If the system (switch) cannot allocate sufficient buffers to hold all incoming traffic, the packets are dropped. The availability of egress buffers determines whether a packet is transmitted. When it comes to reducing packet drops, the system does not concern itself with individual packets; rather, it is concerned with the number of requested (reserved) buffers to which unbuffered packets can be added. The volume of egress buffering differs from platform to platform. Typically, two reservation pools are available on Layer 3 network switches: the SVI reserved pool and the switch memory common pool (see Figure 10). The switch memory common pool is used when the SVI reservation pool has previously been consumed.

When packets (traffic) go through the output queues, the switch reserves buffers from the SVI reservation pool for all egress buffers; then, if one egress buffer is fully consumed, it can reserve buffer space from the available buffers of the other egress queues. When the SVI reservation pool is consumed by all egress buffers, the switch reserves more buffers from the common memory pool. If all of the switch's buffers are consumed, the packets are dropped because no more space is available. All of the ingress and egress buffers above are collectively called one node of the switch's buffer (ηi). The common memory switch pool is the ''holdup'' storage area, where packets are held until they can be processed. If the holdup area is full, more reservation buffer can be obtained by reserving memory from another switch's memory pool in the LAN (see Figure 11), and even from WLANs. However, all egress buffers were controlled to a limit rate βk (the kernel buffer speed) to prevent host-based packet drops. The kernel buffer rate should be equal to the output queue rate.

C. SUMMARY OF PROPOSED SOLUTION
NIDPSs are often unable to detect or prevent all unwanted traffic or malicious activity when traffic arrives at high speed and volume. As a solution, this paper describes a novel architecture in which Layer 3 Cisco switch technology is combined with parallel NIDPS nodes to create queues with specific buffer and bandwidth sizes.

The system thus increases queue buffer size automatically, up to a network limit. It also sources buffer space from an available queue buffer, port buffer, or switch memory pool buffer to hold more packets. This allows the system to organize and increase the processing speed of arriving packets (which have been reconfigured and reordered as groups of bytes) by setting a number of parallel egress queues to be processed by parallel NIDPS nodes. The number of parallel NIDPS processing nodes needed in any particular system depends on network arrival rates. Therefore, it was necessary to operate with the class and QoS technologies within the network switch.
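The shaped versus shared SRR semantics described in Section 7 above can be illustrated with a toy single-round allocation function. This is a simplified sketch under our own assumptions (arbitrary units, one scheduling round), not Cisco's actual SRR scheduler:

```python
def allocate_bandwidth(link_bps, demands, weights, mode):
    """One round of a toy SRR-like allocator.

    'shaped' caps every queue at its weighted share of the link;
    'shared' additionally lets queues borrow bandwidth that other
    queues leave unused. A sketch of the semantics described in
    the text, not Cisco's algorithm.
    """
    total_weight = sum(weights)
    guaranteed = [link_bps * w / total_weight for w in weights]
    # Each queue first receives min(its demand, its guaranteed share).
    alloc = [min(d, g) for d, g in zip(demands, guaranteed)]
    if mode == "shared":
        spare = link_bps - sum(alloc)
        for i, d in enumerate(demands):
            extra = min(d - alloc[i], spare)  # borrow unused capacity
            alloc[i] += extra
            spare -= extra
    return alloc
```

With four equally weighted queues on a 100-unit link and one busy queue demanding 80 units, shaped mode caps the busy queue at its 25-unit guarantee, while shared mode lets it absorb the capacity the lightly loaded queues leave unused.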


An assumption is that there will be an underlying parallel implementation of the target destination (the NIDPS in this case) and that, for each egress buffer commissioned, there will be a port to a parallel node of the target system. This enables better performance and allows higher volumes of traffic to be processed successfully. The difference from the previous studies [5], [12] is that this study gives a clear picture of how a QoS architecture along with parallel technology can improve NIDPS performance. The QoS configuration boosts NIDPS performance with regard to congestion management and congestion avoidance. Congestion management creates balanced queuing by evaluating the internal DSCP and determining in which of the four egress queues to place the packets.

FIGURE 11. Reservation buffer from n nodes of switches.

Other items related to queuing are also configured to reduce dropped buffer packets at the interfaces in order to improve NIDPS performance, e.g., defining the priority queue, defining a queue set, guaranteeing buffer availability, limiting memory allocation, specifying buffer allocation, setting drop thresholds, mapping the CoS to the DSCP value, configuring SRR, and limiting the bandwidth on each of the outbound queues. The congestion avoidance method also helps the performance of the NIDPS by, e.g., setting output queuing and configuring WTD parameters for the ingress and egress queues. The use of parallel NIDPS nodes to match each of the system's egress queues enables NIDPS packet checking to keep up with the increased arrival rates typical of an attack.

IV. EVALUATION OF THE PROPOSED SOLUTION
This section presents an evaluation of the proposed architecture.

A. EVALUATION OF NOVEL NIDPS ARCHITECTURE
The experiments described in Section II were repeated, but here the novel architecture was implemented to test performance in terms of throughput with the support of the proposed solution (QoS and parallel technologies). Each experiment tested Snort NIDPS throughput when analyzing traffic such as TCP/IP headers and then detecting or preventing unwanted traffic (malicious UDP packets) arriving at high speed.

As shown in Figure 1, when malicious UDP packets were sent at 1 ms intervals with different TCP/IP flood traffic at 16 to 60000 bytes per second (Bps), Snort NIDPS started effectively, but overall it missed detecting up to 65% of the malicious packets that the system received (see Table 1). Furthermore, it was unable to prevent all unwanted packets. The experiment shows that, when the speed was 60000 Bps, Snort prevented less than 18% of the malicious packets analysed (see Figure 2 and Table 2). When the QoS architecture was implemented, Snort NIDPS detected almost 100% of the malicious packets that the system received (see Table 3 and Figure 12). The experimental results show that Snort NIDPS performance increased greatly when QoS was used: it prevented almost 100% of the malicious packets that it analysed (see Table 4 and Figure 13).

B. EVALUATION AT HIGHER SPEED
In this section, two tests of NIDPS analytic performance under various high-speed traffic loads were carried out. The first test was of the NIDPS without the novel architecture, and the second was of our novel NIDPS architecture. In this experiment, the Tcpreplay tool was used to generate traffic at different speeds (Gbps) through the system. The same system (Cisco Catalyst 3560 switch) was used. As the results in Figure 14 show, when we tested the NIDPS without the novel architecture, Snort NIDPS attempted to process every single packet that reached the wire. The results show that when packets are sent at 1 Gbps, Snort analyses nearly 100% of received packets, but as the traffic speed increases, the NIDPS starts losing/dropping packets (see Figure 14). Furthermore, when the speed was 8 Gbps, the NIDPS analysed just 599818 of 10994568 packets, which is less than 6% of the total packets received (see Figure 14).

When we implemented our novel architecture, Snort processed 100% of the packets received while the traffic speed was 8 Gbps (see Figure 15). When the traffic speed was increased to 10 Gbps, Snort started to drop packets. Using 2x 1 Gb interfaces, the experimental results showed that Snort NIDPS processed up to 8 Gbps with 0 packets dropped; without our architecture, Snort dropped more than 94% of the total packets it received (see Figure 14), whereas with the novel architecture the NIDPS dropped none. However, successful processing of more than 8 Gbps depends on the system capacity, which always has some limit and cost. The total capacity of our system's main memory buffer is 32 Gbps. The system used (Cisco Catalyst 3560) can hold up to 32 queues, and each queue can have up to a 1 Gbps buffer.

V. DISCUSSION AND RELATED RESEARCH
This section discusses the proposed solution and compares it to related research on parallelism in intrusion detection.
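Before turning to related work, the headline figures from the higher-speed evaluation above can be sanity-checked with a short calculation (all numbers taken directly from the experiment as reported):

```python
# Packets analysed versus received at 8 Gbps without the novel
# architecture, as reported in Section IV-B.
analysed, received = 599_818, 10_994_568
pct = 100 * analysed / received          # share of traffic Snort kept up with

# Capacity bound of the testbed switch: 32 queues of up to 1 Gbps each.
total_capacity_gbps = 32 * 1

assert pct < 6                           # consistent with "less than 6% analysed"
assert 100 - pct > 94                    # consistent with ">94% dropped"
assert total_capacity_gbps == 32         # the stated 32 Gbps ceiling
```

The ratio works out to roughly 5.5%, matching both the "less than 6% analysed" and "more than 94% dropped" statements in the text.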


FIGURE 12. Detecting malicious packets in high-speed traffic.

TABLE 3. Novel NIDPS architecture reaction to detect malicious packets.

FIGURE 13. Preventing malicious UDP packets in high-speed traffic.

Vasiliadis et al. [25] proposed a multi-parallel IDS architecture (MIDeA) for high-performance processing and stateful analysis of network traffic. Their solution offers parallelism at a subcomponent level, with NICs, CPUs, and GPUs performing specialized tasks to improve scalability and running time. They showed that processing speeds can reach up to 5.2 Gbps with zero packet loss in a multi-processor system. Jiang et al. [26] proposed a parallel design for an NIDS on a TILERA GX36 many-core processor. They explored data and pipeline parallelism and optimized the architecture by exploiting existing features of the TILERA GX36 to break the bottlenecks in the parallel design. They designed a system for parallel network-traffic processing by implementing an NIDS on the TILERA GX36, which has a 36-core processor.


TABLE 4. Novel NIDPS architecture reaction to prevent malicious packets.

FIGURE 14. NIDPS at >= 8 Gbps traffic speed.

FIGURE 15. Novel NIDPS architecture at >= 8 Gbps traffic speed.

The system was designed according to two strategies: first, a hybrid parallel architecture was used, combining data and pipeline parallelism; and second, a hybrid load-balancing scheme was used. They took advantage of the parallelism offered by combining data parallelism, pipeline parallelism, and multiple cores, using both rule-set and flow-space partitioning. They showed that processing speeds can reach up to 13.5 Gbps for 512-byte packets. Jamshed et al. [27] presented the Kargus system, which exploits high processing parallelism by balancing the pattern-matching workloads across multi-core CPUs and heterogeneous GPUs. Kargus adapts its resource usage to the input rate in order to save power. The research shows that Kargus handles up to 33 Gbps of normal traffic and achieves 9 to 10 Gbps even when all packets contain attack signatures. The two approaches described in this paragraph are not directly comparable in terms of throughput, as different numbers of processors are used in each. However, the experiments show that high gains can be made by parallelizing NIDPSs in order to combat the problems of higher speeds and increasing traffic. Our research uses a multi-layer switch


along with parallel technology to improve packet-processing performance, which increases the ability to handle different speeds and data volumes. Further enhancements occur when queuing is combined with parallel processor technologies. The approach of this study has shown how parallelism at a higher level of granularity, which is simpler to implement, can also make impressive improvements to security performance in terms of throughput and the number of dropped packets. Using two machines connected to two interfaces, our NIDPS processed up to 8 Gbps with zero drops for 1 KB packets. This can be increased up to 32 Gbps, the full forwarding bandwidth of the system, by implementing more NIDPS nodes.

Chen et al. [28] proposed an application-specific integrated circuit (ASIC) design with a parallel exact matching (PEM) architecture to accelerate packet-processing speed. The ASIC hardware is designed to operate at 435 MHz and to deliver up to 13.9 Gbps of throughput, meeting the high-speed and high-accuracy requirements of an IDS and resolving the information-security limitation in managing data received from a 10 Gbps core network. They proposed an SRA (Snort Rule Accelerator) with parallel rules to increase the performance of the IDS. The SRA uses a stateless parallel-matching scheme to perform high-throughput packet filtering as an accelerator for the Snort detection engine. The ASIC is composed of five major modules, namely the Inspector, Counter, Parallel Matching, Conformity, and Compare modules. The parallel-matching scheme compares a packet's payload with the stored rules. When an entry packet matches the Snort rules, the ASIC is in an idle state and sends a compare signal to the Conformity module, which integrates all signals and determines whether an abnormal payload is present. The authors designed a half-mesh architecture in the parallel matching-rules module, which allows the traffic to be compared with several rules. Our research addressed performance in a different way. It embeds open-source security software together with Layer 3 switch technology (QoS, dynamic memory and buffer reservation, and parallel queue technologies) to improve the network's forward-throughput-traffic architecture and, hence, security performance. It configures an interface into queues (interface-to-queues), which allows packets to be processed through the component-level parallel NIDPS nodes. The approach is designed to deal with the speed limitations of real networks and finds solutions to the problems that degrade NIDPS performance. It can deal with incoming traffic speeds that might otherwise allow malicious packets to enter the system and prevent the NIDPS from detecting or blocking them. It does this by imposing advanced management of network packet traffic. The advantage of the proposed approach is that everyday equipment can be utilized in a new way to achieve improvements, and it is also more scalable than the proposal of Chen et al. [28].

In the context of big data and distributed systems, Zhao et al. [29] have developed a security framework in G-Hadoop. This work focuses on authentication and access rather than intrusion detection but offers an interesting new direction. The framework could be enhanced with intrusion detection and protection functionality to create a more complete solution. Our research has focused on standard business infrastructure, whereas the work of Zhao et al. has concentrated on a single cluster across cloud data centers. Cross-cluster security services in a high-performance environment, such as that afforded by G-Hadoop, is an area where attention is welcome.

Vendor companies are aiming to develop security solutions to protect the enterprise network, and equipment has been designed to meet connectivity speed and load standards. The improvements in NIDPS throughput shown in this research are achieved by pairing Cisco ASA (Adaptive Security Appliance) equipment [10] with multiple implementations of Snort. The principles of the method proposed in this research could be applied to other equipment combinations where similar facilities are offered.

To summarize, our research differs from previous research in terms of the architecture used. It investigates how QoS, including DiffServ technology, and parallelism can have an impact in high-speed, heavy-traffic networks using an industry-standard switch and standard desktop processors. This solution is a more accessible way of achieving good results, as it can be activated at a higher level, namely at the level of configuring the Cisco switch software and replicating Snort on standard machines. Further improvements could be made if higher-performance equipment were used. Cost is generally an important concern, and the design proposed in this research meets network security requirements at low cost.

VI. CONCLUSION AND RECOMMENDATIONS FOR FUTURE WORK
This section summarizes the outcomes achieved in the paper and then provides recommendations for future research.

A. CONCLUSION
A new architecture for NIDPS deployment was designed, implemented, and evaluated. There has recently been massive development in computer networks regarding their ability to handle different speeds and data volumes. As a result of this rapid development, computer networks are now more vulnerable than ever to high-speed attacks and threats, which can cause considerable trouble to computer networks and systems. Network intrusions can be categorized at various levels, and many high-speed attacks can be classified as difficult to detect or prevent. It will become ever more difficult to analyze increasing volumes of traffic due to the rapid shifts in technology that are increasing network speeds.

Recently, various open-source tools have become available to cover the security requirements of network systems and users. In this paper, the performance of an open-source NIDPS has been evaluated in the context of high-speed and high-volume attacks. The purpose of the evaluation was to determine the performance of the NIDPS under high-speed traffic when restricted to off-the-shelf hardware, and then to find ways to improve it.


This study focused on the weakness of such security sys- In the testbed, we identified that there is limitation for the
tems, i.e. NIDPS in high-speed network connectivity. We number of packets processing which is 8.0 Gbps with 0%
proposed a solution for reducing this weakness and presented packets dropped. The idea has been examined further in terms
a novel architecture in NIDPS development that utilizes QoS of performance limitation above 8.0 Gbps, and therefore
and parallel technologies to organize and improve network modification may be made for better response. As experiment
management and traffic processing performance in order to 4 showed, packets started to be dropped when load-balancing
improve the performance of the NIDPS. for traffic exceeding 8.0 Gbps. Analysis is still in develop-
With our novel architecture, Snort’s performance improved ment and shall be covered in the future efforts.
markedly, allowing more packets to be checked before they Establishing a relationship between traffic size and number
were delivered into the network. The performance (analysis, of NIDPS cluster nodes for an efficient performance is also
detection and prevention rate) of Snort NIDPS increased to an interesting research area. Defining parameters to identify
more than 99%. By using 2 machines (PCs) connected to two the number of nodes for a scalable response to network speed
1Gb interfaces, Snort NIDPS processed up to 8 Gbps with type and will be a good addition.
0 drop. This number can be increased up to 32Gbps which is
the full system capacity forward bandwidth by implementing REFERENCES
more nodes of NIDPS. [1] B. Wang, Y. Zheng, W. Lou, and Y. T. Hou, ‘‘DDoS attack protection in the
The research focused on establishing a technical solution era of cloud computing and software-defined networking,’’ Comput. Netw.,
vol. 81, pp. 308–319, Mar. 2015.
with a theoretical foundation. This information generalizes
[2] K. Chauhan and V. Prasad, ‘‘Distributed denial of service (DDoS) attack
the problem and solution and thus enables the proposed techniques and prevention on cloud environment,’’ Int. J. Innov. Advance-
approach to be applied more easily to infrastructures that are ment Comput. Sci., vol. 4, pp. 210–215, Sep. 2015.
different to the testbed used in this research. [3] M. D. Samani, M. Karamta, J. Bhatia, and M. B. Potdar, ‘‘Intrusion
detection system for DoS attack in cloud,’’ International Journal of Applied
Information Systems (Foundation of Computer Science), vol. 10, no. 5.
New York, NY, USA: FCS, 2016.
B. RECOMMENDATION AND FURTHER RESEARCH

NIDPSs are used to capture data, detect malicious packets travelling on network media, and match them against a database of signatures. Signature-based NIDPSs are able to detect known attacks, but the major problem of the signature-based approach is that every signature must have an entry in the database for comparison with incoming packets. New signatures arise constantly, and keeping the database up to date with them is an open issue. Another problem is the processing time required to check all signatures. Knowledge sharing may provide a solution. Cloud computing, which provides for massive processing distribution and sharing, is a possible future direction, but it also raises issues of trust. One avenue of future investigation should aim to develop a trusted cloud solution for NIDPS deployment, such that if a threshold monitoring tool indicates that traffic is increasing, extra Snort nodes can be brought into play from the cloud. Future work should investigate the use of specialized and trustworthy security clouds, e.g. parallel NIDPS nodes implemented in a multi-core/multi-processor cloud environment, which can increase NIDPS processing speed and thereby improve performance.
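The threshold-triggered scaling idea could be sketched as follows. This is a minimal illustration only: the pool abstraction, the 25,000 pkts/s per-node capacity, and the `nodes_needed` sizing rule are our own assumptions, not figures from this study.

```python
# Illustrative sketch: scale the number of Snort analysis nodes with the
# observed packet rate.  All thresholds and hooks here are hypothetical.
import math
from dataclasses import dataclass, field


@dataclass
class SnortPool:
    """Tracks how many Snort analysis nodes are currently active."""
    min_nodes: int = 1
    max_nodes: int = 8
    active: list = field(default_factory=list)

    def scale_to(self, wanted: int) -> None:
        """Clamp to the pool limits, then start or release nodes to match."""
        wanted = max(self.min_nodes, min(self.max_nodes, wanted))
        while len(self.active) < wanted:      # bring extra cloud nodes into play
            self.active.append(f"snort-node-{len(self.active)}")
        while len(self.active) > wanted:      # release nodes when load falls
            self.active.pop()


def nodes_needed(pkts_per_sec: float, per_node_capacity: float) -> int:
    """One Snort node per `per_node_capacity` pkts/s of load, rounded up."""
    return max(1, math.ceil(pkts_per_sec / per_node_capacity))


pool = SnortPool()
for rate in (8_000, 45_000, 120_000, 20_000):   # sampled traffic rates (pkts/s)
    pool.scale_to(nodes_needed(rate, per_node_capacity=25_000))
    print(f"{rate:>7} pkts/s -> {len(pool.active)} Snort node(s)")
# scales to 1, 2, 5, then back to 1 node
```

In a real deployment the `scale_to` loop would provision and tear down cloud instances rather than append strings, and the capacity figure would come from benchmarking Snort on the target hardware.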
Statistical anomaly detection is designed to detect deviations from a baseline model of network behaviour. When the rate of malicious packet transmission is very high, the attack will almost certainly be detected by a statistical anomaly detector. Therefore, statistical anomaly detection can be combined with the proposed novel architecture to redirect traffic to a secure haven for processing when attacks are detected. The secure haven can use the proposed architecture to maintain throughput of good traffic while bad traffic is halted. We consider that this area of development needs further investigation.

18572 VOLUME 7, 2019



WALEED BUL'AJOUL received the M.Sc. and Ph.D. degrees, specializing in computer networking and cybersecurity, from Coventry University, U.K. He joined the Computer Science Department, Omar Al-Mukhtar University, Libya, as a Program Developer and a Lecturer. From 2012 to 2017, he was a Researcher with Coventry University. He is currently a Lecturer/Senior Lecturer with Nottingham Trent University, U.K., and a Researcher in Industry 4.0 and cyber-physical systems with the Computing and Technology Department. His first research project was completed during his undergraduate studies at the university; the project was supported by education and training at Libyan higher education institutions and achieved the Highest Achievement Award of the University, in 2002. In 2010, he embarked on another project on privacy in mobile computing. His research interests include the general area of computer network and cybersecurity performance, including wireless and network communications, modeling, and simulation. An innovative contribution of his work is the design of a new architecture to improve security and privacy. Most recently, his research has focused on network architecture and security, improving security performance in high-speed environments based on intrusion detection and prevention systems, QoS, and parallel technologies; a new security architecture was designed and evaluated. His research has received four achievement awards from different institutions and from external and internal events.

ANNE JAMES received the B.Sc. degree from Aston University, U.K., and the Ph.D. degree, specializing in data processing, from the University of Wolverhampton, U.K. She was a Professor of data systems architecture with Coventry University. Her work involves ensuring that excellent teaching, which covers the latest developments in the field, is delivered at undergraduate and postgraduate levels. She promotes research within the department, encouraging investigation into innovative systems that advance and support society. She is currently a Professor and the Head of the Department of Computing and Technology. Examples of her current projects are the development of enhanced methods for the detection of biometric identity fraud, the construction of new methods for cloud forensics, the use of blockchain technology, and natural language processing for document content analysis. She has successfully supervised over 30 research degree students and has published about 200 papers in peer-reviewed journals or conferences. Her research interests include the general area of creating distributed and intelligent systems to meet new challenges, particularly in the area of cyber security.

SIRAJ SHAIKH was seconded to the Knowledge Transfer Network (KTN), the innovation network in Britain, from 2015 to 2016, serving as Cyber Security Lead for the KTN and coordinating activities across academia, industry, and national policy. From 2015 to 2016, he was also seconded to HORIBA MIRA, as part of the Royal Academy of Engineering's Industrial Secondment Scheme. In 2016, he co-founded CyberOwl, a VC-backed commercial venture developing early warning systems for cyber threats, cyber-physical platform health, and prognostics; CyberOwl was part of the U.K.'s first GCHQ Cyber Accelerator, in 2017. He is currently a Professor of systems security with the Future Transport and Cities Research Institute, Coventry University, where he leads the Systems Security Group. He is currently involved in the EPSRC-funded project ECSEPA, jointly with University College London, investigating evidence-based policy-making for cyber security, working with policy partners including the U.K.'s GCHQ/NCSC. He has been involved in the research, development, and evaluation of large-scale distributed secure systems for nearly 20 years. His doctoral and postdoctoral research involved the design and verification of security and safety-critical systems. From 2015 to 2017, he was involved in the EPSRC-funded research project ACiD, investigating collusion attacks on smartphone platforms, in collaboration with City University and Swansea University, U.K., and with Intel Security as an industrial partner.
