Two Factor Worm Detection Based on Signature & Anomaly

The document outlines a project focused on developing a Two Factor Worm Detection system that combines Signature and Anomaly detection techniques to enhance cybersecurity against internet worms. It includes a comprehensive analysis of existing systems, proposed methodologies, system requirements, and a literature survey of related works. The project aims to improve detection accuracy and adaptability to emerging threats while addressing the limitations of current detection methods.


INDEX

TOPICS Page No’s

➢ Certificates

➢ Acknowledgement

➢ Abstract

➢ Figures/Tables

CHAPTER-1: INTRODUCTION 1

CHAPTER-2: LITERATURE SURVEY 2-4

CHAPTER-3: SYSTEM ANALYSIS

3.1 Existing System 5

3.2 Proposed System 5-6

CHAPTER-4: SYSTEM REQUIREMENTS

4.1 Functional Requirement 7

4.2 Non-Functional Requirements 7

CHAPTER-5: SYSTEM STUDY

5.1 Feasibility Study 8

5.2 Feasibility Analysis 8-9

CHAPTER-6: SYSTEM DESIGN

6.1 SYSTEM ARCHITECTURE 10

6.2 UML DIAGRAMS 10-18


6.2.1 Use Case Diagram

6.2.2 Class Diagram

6.2.3 Sequence Diagram

6.2.4 Collaboration Diagram

6.2.5 Activity Diagram

6.2.6 Component Diagram

6.2.7 Deployment Diagram

6.2.8 ER Diagram

6.2.9 Data Dictionary

CHAPTER-7: INPUT AND OUTPUT DESIGN

7.1 Input Design 19

7.2 Output Design 19-20

CHAPTER-8: IMPLEMENTATION

8.1 Modules 21

8.1.1 Module Description 21

CHAPTER-9: SOFTWARE ENVIRONMENT

9.1 Python 22-44

9.2 Source Code 44-58

CHAPTER-10: RESULTS/DISCUSSIONS

10.1 System Testing 59-62

10.1.1 Test Cases 63

10.2 Output Screens 64-72


CHAPTER-11: CONCLUSION

11.1 Conclusion 73

11.2 Future Scope 73

CHAPTER-12: REFERENCES 74-76


LIST OF FIGURES

S.NO TABLES/FIGURES PAGE NO’S

1 System Architecture 12
2 UML Diagrams 13-21

2.1 Use Case Diagram 13

2.2 Class Diagram 14

2.3 Sequence Diagram 15

2.4 Collaboration Diagram 16

2.5 Activity Diagram 17

2.6 Component Diagram 18

2.7 Deployment Diagram 18

2.8 ER Diagram 19

2.9 Data Dictionary 19-21

3 Compiling and interpretation of Python source code 22-44

4 Screenshots 64-72
TWO FACTOR WORM DETECTION BASED ON
SIGNATURE & ANOMALY
ABSTRACT
In our proposed work, we present a comprehensive approach to worm detection employing a
Two Factor system that combines Signature and Anomaly detection techniques.
Internet worms pose a serious threat by infiltrating users' computers through online channels,
installing themselves and initiating malicious activities such as corrupting files or sending
stolen information to attackers. To counteract such threats, various detection methods have
been introduced:

Signature-based Detection: This method involves analyzing internet traffic signatures,
matching them against predefined rules to discern whether the traffic exhibits normal or attack
signatures. The analysis is often conducted using PCAP (packet capture) files.

Detection through Honeypot logs: By strategically placing a Honeypot server between the main
server and user requests, any malicious requests from users are logged. Subsequently, these
logs are inspected to identify and block IP addresses associated with attackers.
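As a hedged illustration of this log-inspection step, the IP-blocking idea can be sketched as follows. The log format shown (alert entries carrying a `src=` field) is an assumption for the sketch, not the actual Honeypot log layout:

```python
import re

# Hypothetical honeypot log lines; the format is assumed for illustration.
LOG_LINES = [
    "2024-01-10 12:00:01 ALERT src=203.0.113.7 action=exploit-attempt",
    "2024-01-10 12:00:05 INFO  src=198.51.100.2 action=page-view",
    "2024-01-10 12:00:09 ALERT src=203.0.113.7 action=exploit-attempt",
]

def attacker_ips(lines):
    """Collect source IPs from ALERT entries so they can be blocked."""
    blocked = set()
    for line in lines:
        match = re.search(r"ALERT\s+src=(\d+\.\d+\.\d+\.\d+)", line)
        if match:
            blocked.add(match.group(1))
    return blocked

print(attacker_ips(LOG_LINES))  # {'203.0.113.7'}
```

The resulting set would then be fed to whatever blocking mechanism the deployment uses (firewall rules, access lists, and so on).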

Netflow-based Detection: This technique inspects UDP and TCP signatures, verifying whether
requests contain normal or attack signatures, contributing to a robust detection mechanism.

Anomaly Detection Models Based on Traffic Behavior: Leveraging machine learning
algorithms like Random Forest, Decision Tree, and Bayesian Networks, we train these models
on previous datasets. The trained models are then utilized to analyze traffic behavior,
classifying it as either NORMAL or ABNORMAL.
CHAPTER-1
INTRODUCTION
In the contemporary digital landscape, safeguarding computer networks from the menace of
internet worms is imperative to ensure data integrity and user security. Our project focuses on
enhancing the efficiency and reliability of worm detection through a comprehensive Two
Factor detection system that incorporates both Signature and Anomaly detection
techniques. Internet worms, malicious programs downloaded onto users' computers through
online channels, have the potential to corrupt files and compromise user information, posing
significant cybersecurity threats. In response to these challenges, our project leverages
advanced detection methodologies to fortify network defenses and thwart potential
cyberattacks. Signature-based detection identifies worms by comparing their characteristics,
known as signatures, against a database of known worm signatures; if a match is found, the
system flags it as a worm. However, this method may miss new, unknown worms.
Anomaly-based detection instead focuses on deviations from normal network behavior: it
establishes a baseline of regular activity and flags any unusual patterns that may indicate a
worm attack. It is effective against new, unidentified worms but can generate false positives.
Combining both methods creates a robust defense system: signature-based detection catches
known threats, while anomaly-based detection helps identify novel or evolving worm attacks.
This two-factor approach enhances the overall security posture of a network.

CHAPTER-2

LITERATURE SURVEY
TITLE: A Worm Detection System Based on Deep Learning
AUTHOR: Hanxun Zhou; Yeshuai Hu; Xinlin Yang; Hong Pan; Wei Guo; Cliff C. Zou
ABSTRACT: In today's cyber world, worms pose a great threat to the global network
infrastructure. In this paper, we propose a worm detection system based on deep learning. It
includes two main modules: one worm detection module based on a convolutional neural
network (CNN) and one automatic worm signature generation module based on a deep neural
network (DNN). In the CNN-based worm detection module, we propose three kinds of data
preprocessing methods: frequency processing, frequency weighted processing, and difference
processing, and use CNN to train the model for worm detection. In the DNN-based worm
signature generation module, there are two phrase: DNN is firstly utilized for training the model
with worm payloads and their corresponding signatures as input in the training phrase. After
worm payloads are fed into the trained DNN model in the test phrase, worm signatures are
generated by our proposed Signature Beam Search algorithm. In the experiment, we firstly
analyzed the impact of different data preprocessing methods and the number of convolution-
pooling layers in the CNN model on the worm detection performance. Then we analyzed the
effects of different signatures in the DNN algorithm on the automatic generation of worm
signatures. Experiments show that the generated signatures have a good detection performance.
TITLE: Two Factor Worm Detection on Signature and Anomaly
AUTHOR: Prof. Kiran Kumar A, Sai Bhavya Reddy T, Bollam Sri Sai Vignesh,
Kolhapuram Medha
ABSTRACT: Our project presents a Two-Factor Worm Detection framework that combines
Signature and Anomaly based methods to strengthen internet security. Internet worms continue
to compromise user data and privacy, making effective detection essential. We employ several
advanced techniques to achieve this goal. First, our Signature-Based Detection analyzes
internet traffic signatures against predefined rules using packet capture (PCAP) files, enabling
real-time identification of malicious traffic. Our framework also conducts Netflow-Based
Analysis by inspecting UDP and TCP signatures to distinguish normal from attack signatures.
Finally, we use Anomaly Detection Models, trained on historical datasets using machine
learning algorithms such as Random Forest, Decision Tree, and Bayesian Networks, to
recognize abnormal traffic behavior. These combined approaches, supported by diverse
datasets, provide a holistic defense against evolving internet worm threats and attacks,
ensuring robust user protection.
TITLE: Anomaly Detection Based on CNN and Regularization Techniques Against Zero-Day
Attacks in IoT
AUTHOR: Belal Ibrahim Hairab, Mahmoud Said Elsayed
ABSTRACT: The fast expansion of the Internet of Things (IoT) in the technology and
communication industries necessitates a continuously updated cyber-security mechanism to
keep protecting the systems’ users from any possible attack that might target their data and
privacy. Botnets pose a severe risk to the IoT, they use malicious nodes in order to compromise
other nodes inside the network to launch several types of attacks causing service disruption.
Examples of these attacks are Denial of Service (DoS), Distributed Denial of Service (DDoS),
Service Scan, and OS Fingerprint. DoS and DDoS attacks are the most severe attacks in IoT
launched from Botnets. Where the Botnet commands previously compromised single or
multiple nodes in the network to launch network traffic towards a specific node or service. This
leads to computational, power, or network bandwidth draining, which causes specific services
to shutdown or behave unexpectedly. In this paper, we aim to verify the detection approach
reliability when it encounters an attack that it was not trained on before. Therefore, we evaluate
the performance of Convolutional Neural Networks (CNN) classifier in order to detect the
malicious attack traffic especially the attacks that never reported before in the network i.e.
Zero-Day attacks. Different regularization techniques i.e. L1 and L2 have been used to address
the problem of overfitting and to control the complexity of the classifier. The experimental
results show that using the regularization methods gives a higher performance in all the
evaluation metrics compared to the standard CNN model. In addition, the enhanced CNN
technique improves the capability of IDSs in detection of unseen intrusion events.
INDEX TERMS: Botnet, convolutional neural networks

TITLE: Anomalous Payload-Based Worm Detection and Signature Generation


AUTHOR: Ke Wang, Gabriela F. Ciocarlie

ABSTRACT: New features of the PAYL anomalous payload detection sensor are
demonstrated to accurately detect and generate signatures for zero-day worms. Experimental
evidence demonstrates that site-specific packet content models are capable of detecting new
worms with high accuracy in a collaborative security system. A new approach is proposed that

correlates ingress/egress payload alerts to identify the worm's initial propagation. The method
also enables automatic signature generation that can be deployed immediately to network
firewalls and content filters to proactively protect other hosts. We also propose a collaborative
privacy-preserving security strategy whereby different hosts can exchange PAYL signatures to
increase accuracy and mitigate against false positives. The important principle demonstrated is
that correlating multiple alerts identifies true positives from the set of anomaly alerts and
reduces incorrect decisions producing accurate mitigation.

TITLE: A Double-Layered Hybrid Approach for Network Intrusion Detection System Using
Combined Naive Bayes and SVM
AUTHOR: Treepop Wisanwanichthan and Mason Thammawichai

ABSTRACT: A pattern matching method (signature-based) is widely used in basic network
intrusion detection systems (IDS). A more robust method is to use a machine learning classifier
to detect anomalies and unseen attacks. However, a single machine learning classifier is
unlikely to be able to accurately detect all types of attacks, especially uncommon attacks e.g.,
Remote2Local (R2L) and User2Root (U2R) due to a large difference in the patterns of attacks.
Thus, a hybrid approach offers more promising performance. In this paper, we proposed a
Double-Layered Hybrid Approach (DLHA) designed specifically to address the
aforementioned problem. We studied common characteristics of different attack categories by
creating Principal Component Analysis (PCA) variables that maximize variance from each
attack type, and found that R2L and U2R attacks have similar behaviour to normal users.
DLHA deploys Naive Bayes classifier as Layer 1 to detect DoS and Probe, and adopts SVM
as Layer 2 to distinguish R2L and U2R from normal instances. We compared our work with
other published research articles using the NSL-KDD data set. The experimental results suggest
that DLHA outperforms several existing state-of-the-art IDS techniques, and is significantly
better than any single machine learning classifier by large margins. DLHA also displays an
outstanding performance in detecting rare attacks by obtaining a detection rate of 96.67% and
100% from R2L and U2R respectively.

CHAPTER-3

SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

The current cybersecurity landscape employs various techniques to detect and counteract
internet worms. These include Signature-based detection, which analyzes internet traffic
signatures against predefined rules using PCAP files, and Honeypot-based detection, where a
server logs malicious user requests for subsequent analysis. Additionally, Netflow-based
techniques inspect UDP and TCP signatures, while Anomaly Detection Models, trained on
historical datasets using machine learning algorithms, analyze traffic behavior to identify
abnormalities. Despite these efforts, there remains room for improvement in terms of accuracy,
real-time responsiveness, and the ability to tackle evolving cyber threats.
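The signature-matching idea described above can be illustrated with a short sketch. The rule names, fields and values below are hypothetical examples, not the project's actual rule set, and a real system would first extract such records from PCAP files:

```python
# Minimal sketch of signature-based matching against predefined rules.
# The rule patterns and record fields are hypothetical, for illustration only.

SIGNATURE_RULES = {
    "worm_scan": {"protocol": "TCP", "dst_port": 445, "flags": "S"},
    "worm_payload": {"protocol": "UDP", "dst_port": 1434},
}

def classify_record(record):
    """Return the name of the first matching signature rule, or 'NORMAL'."""
    for name, rule in SIGNATURE_RULES.items():
        if all(record.get(field) == value for field, value in rule.items()):
            return name
    return "NORMAL"

# A packet summary as it might be extracted from a PCAP file
print(classify_record({"protocol": "TCP", "dst_port": 445, "flags": "S"}))  # worm_scan
print(classify_record({"protocol": "TCP", "dst_port": 80, "flags": "A"}))   # NORMAL
```

In practice a packet-parsing library (such as Scapy or dpkt) would turn each PCAP packet into such a record; the plain dictionaries above only keep the sketch self-contained.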

3.1.1 DISADVANTAGES

• Implementing both signature-based and anomaly-based detection systems can be resource-
intensive in terms of processing power and storage requirements. Anomaly detection, in
particular, may need continuous monitoring and analysis of network traffic, which can strain
resources.

• Managing and fine-tuning two detection methods simultaneously can introduce complexity.
It may require expertise to configure and maintain both systems effectively, potentially leading
to increased operational overhead.

• While anomaly detection helps reduce false positives compared to signature-based detection
alone, it can still generate false alarms due to legitimate but unusual network activities.
Balancing between false positives and false negatives can be a challenge.

• Sophisticated worms that are designed to evade signature detection or mimic normal
behavior closely can sometimes bypass both signature and anomaly detection methods. In such
cases, relying solely on these two factors may not be sufficient.

3.2 PROPOSED SYSTEM

Our proposed Two Factor Worm Detection system combines the strengths of Signature and
Anomaly detection techniques. In Signature-based detection, we utilize PCAP datasets to
analyze internet traffic signatures. Simultaneously, Anomaly detection leverages machine
learning algorithms, including Random Forest, Decision Tree, and Bayesian Networks, trained

on historical traffic datasets. The integration of these techniques provides a robust and multi-
faceted approach to worm detection, offering improved accuracy, real-time responsiveness,
and adaptability to emerging cyber threats. By addressing the limitations of the existing system,
our proposed approach aims to fortify network security and enhance overall cyber resilience.
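A minimal sketch of the anomaly-detection side, using scikit-learn's RandomForestClassifier. The feature names and toy data are assumptions for illustration, not the historical traffic datasets the system would actually train on:

```python
# Illustrative sketch: train a Random Forest to label traffic NORMAL/ABNORMAL.
# Features and data are made up; a real model would use historical datasets.
from sklearn.ensemble import RandomForestClassifier

# Toy features: [packets_per_second, bytes_per_packet, distinct_dst_ports]
X_train = [
    [10, 500, 2], [12, 480, 3], [9, 520, 2],        # normal traffic
    [900, 60, 250], [850, 64, 300], [950, 58, 280],  # worm-like scanning
]
y_train = ["NORMAL", "NORMAL", "NORMAL", "ABNORMAL", "ABNORMAL", "ABNORMAL"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

print(model.predict([[11, 505, 2]])[0])    # expected: NORMAL
print(model.predict([[880, 61, 270]])[0])  # expected: ABNORMAL
```

The Decision Tree and Bayesian Network models mentioned above would be trained and queried the same way, and their accuracies compared to produce the comparison graph and table.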

3.2.1 ADVANTAGES

• By combining signature-based and anomaly-based detection, you get broader coverage:
signature-based detection can catch known worms, while anomaly-based detection can pick
up on new, unidentified threats that don't match any known signatures.

• Anomaly detection is particularly useful for identifying new or evolving worms that may not
have established signatures yet. This early detection can help mitigate potential damage before
a worm spreads widely.

• While signature-based detection is precise, it can sometimes generate false positives.
Anomaly detection helps reduce false alarms by focusing on deviations from normal behavior
rather than just specific signatures.

• The combination of both methods allows for a more adaptive and dynamic defense system.
It can adjust to changing threat landscapes and provide a more resilient defense against a
variety of worm attacks.

CHAPTER-4

SYSTEM REQUIREMENTS

4.1 FUNCTIONAL REQUIREMENTS
• UTILIZER
The utilizer module for two-factor worm detection based on signature and anomaly
methods refers to the component of the system that applies both signature-based and
anomaly-based detection techniques. It combines data from both detection methods to
provide a comprehensive view of potential worm activity. The module correlates the
information obtained from both methods to accurately determine the likelihood of a
worm outbreak, and it assists in deciding whether an identified activity is indeed a
worm attack by analyzing the combined results from signature and anomaly detection.
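One plausible way to sketch this combination step is shown below. The decision policy (flag when either detector fires, with higher confidence when both agree) is an assumption for illustration, not the project's exact logic:

```python
# Hedged sketch of combining the two detection verdicts into one decision.
# The policy and labels are illustrative assumptions.

def combine_verdicts(signature_hit, anomaly_hit):
    """Fuse the two detector outputs into a single classification."""
    if signature_hit and anomaly_hit:
        return "WORM (high confidence)"   # both factors agree
    if signature_hit or anomaly_hit:
        return "SUSPICIOUS (review)"      # only one factor fired
    return "NORMAL"

print(combine_verdicts(True, True))    # WORM (high confidence)
print(combine_verdicts(False, True))   # SUSPICIOUS (review)
print(combine_verdicts(False, False))  # NORMAL
```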

4.2 NON-FUNCTIONAL REQUIREMENTS

4.2.1 HARDWARE REQUIREMENTS

System : i3 or above.

Ram : 4 GB.

Hard Disk : 40 GB

4.2.2 SOFTWARE REQUIREMENTS

Operating system : Windows 8 or above

Coding Language : Python

CHAPTER-5
SYSTEM STUDY

5.1 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase and business proposal is put forth
with a very general plan for the project and some cost estimates. During system analysis
the feasibility study of the proposed system is to be carried out. This is to ensure that
the proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential.

5.2 FEASIBILITY ANALYSIS

Three key considerations involved in the feasibility analysis are

• ECONOMICAL FEASIBILITY
• TECHNICAL FEASIBILITY
• SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will have on the
organization. The amount of fund that the company can pour into the research and development
of the system is limited. The expenditures must be justified. Thus the developed system is well
within the budget, and this was achieved because most of the technologies used are freely
available. Only the customized products had to be purchased.

TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of
the system. Any system developed must not have a high demand on the available technical
resources, as this would lead to high demands being placed on the client. The developed system
must have modest requirements, as only minimal or no changes are required for implementing
this system.

SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes
the process of training the user to use the system efficiently. The user must not feel threatened
by the system, but must instead accept it as a necessity. The level of acceptance by the users
solely depends on the methods employed to educate users about the system and to make them
familiar with it. Their level of confidence must be raised so that they are also able to offer
constructive criticism, which is welcomed, as they are the final users of the system.

CHAPTER-6

SYSTEM DESIGN

6.1 SYSTEM ARCHITECTURE


Figure 1: System Architecture. The utilizer interacts with the application through the
following steps: upload PCAP signature dataset, run signature based worm detection, upload
anomaly dataset, run machine learning based anomaly detection, view the comparison graph
and comparison table, and predict attacks from test data.

6.2 UML DIAGRAMS


UML stands for Unified Modeling Language. UML is a standardized general-purpose
modeling language in the field of object-oriented software engineering. The standard is
managed, and was created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-
oriented computer software. In its current form UML comprises two major components:
a Meta-model and a notation. In the future, some form of method or process may also be added
to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing,
constructing and documenting the artifacts of a software system, as well as for business
modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven
successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and of the
software development process. The UML uses mostly graphical notations to express the design
of software projects.

GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling Language so that they can
develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development process.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations, frameworks,
patterns and components.
7. Integrate best practices.

6.2.1 USE CASE DIAGRAM


A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented
as use cases), and any dependencies between those use cases. The main purpose of a use case
diagram is to show what system functions are performed for which actor. Roles of the actors in
the system can be depicted.

Figure 2.1: Use Case Diagram. The utilizer is the actor for the use cases: upload PCAP
signature dataset, run signature based worm detection, upload anomaly dataset, run machine
learning based anomaly detection, comparison graph, comparison table, and predict attack
from test data.
6.2.2 CLASS DIAGRAM

In software engineering, a class diagram in the Unified Modeling Language (UML) is a type
of static structure diagram that describes the structure of a system by showing the system's
classes, their attributes, operations (or methods), and the relationships among the classes. It
explains which class contains information.

6.2.3 SEQUENCE DIAGRAM

A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram


that shows how processes operate with one another and in what order. It is a construct of a
Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event
scenarios, and timing diagrams.

Figure 2.3: Sequence Diagram. The utilizer sends a request to the application and logs in,
then uploads the PCAP signature dataset, runs signature based worm detection, uploads the
anomaly dataset, runs machine learning based anomaly detection, views the comparison graph
and comparison table, predicts attacks from test data, and finally logs out.

6.2.4 COLLABORATION DIAGRAM

Figure 2.4: Collaboration Diagram. The utilizer-application interactions, numbered:
1 request, 2 login, 3 upload PCAP signature dataset, 4 run signature based worm detection,
5 upload anomaly dataset, 6 run machine learning based anomaly detection, 7 comparison
graph, 8 comparison table, 9 predict attack from test data, 10 logout.
6.2.5 ACTIVITY DIAGRAM

Activity diagrams are graphical representations of workflows of stepwise activities and actions
with support for choice, iteration and concurrency. In the Unified Modeling Language, activity
diagrams can be used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.

6.2.6 COMPONENT DIAGRAM

Figure 2.6: Component Diagram. The utilizer and application components connect through
interfaces for uploading the PCAP signature dataset, running signature based worm detection,
uploading the anomaly dataset, viewing the comparison table, and predicting attacks from test
data.
6.2.7 DEPLOYMENT DIAGRAM

Figure 2.7: Deployment Diagram. The utilizer node connects to the application node, which
hosts the dataset upload, signature detection, machine learning based anomaly detection,
comparison and prediction functions, backed by a database.

6.2.8 ER DIAGRAM
Figure 2.8: ER Diagram. Entities for the administrator and the utilizer, with relationships
such as add, view users, and logout.
6.2.9 DATA DICTIONARY
CHAPTER-7

INPUT AND OUTPUT DESIGN

7.1 INPUT DESIGN

The input design is the link between the information system and the user. It
comprises the specifications and procedures for data preparation and the steps necessary to
put transaction data into a usable form for processing. This can be achieved by instructing the
computer to read data from a written or printed document, or by having people key the data
directly into the system. The design of input focuses on controlling the amount of input
required, controlling errors, avoiding delay, avoiding extra steps and keeping the process
simple. The input is designed in such a way that it provides security and ease of use while
retaining privacy. Input design considered the following things:

• What data should be given as input?
• How should the data be arranged or coded?
• The dialog to guide the operating personnel in providing input.
• Methods for preparing input validations, and the steps to follow when errors occur.

7.1.1 OBJECTIVES

1. Input Design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and
to show the correct direction to the management for getting correct information from the
computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large volumes of
data. The goal of designing input is to make data entry easier and free from errors. The data
entry screen is designed in such a way that all data manipulations can be performed. It also
provides record viewing facilities.

3. When data is entered, it is checked for validity. Data can be entered with the help of
screens, and appropriate messages are provided as and when needed so that the user is never
left confused. Thus the objective of input design is to create an input layout that is easy to
follow.

7.2 OUTPUT DESIGN

A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users and
to other systems through outputs. In output design it is determined how the information is to
be displayed for immediate need, as well as the hard copy output. It is the most important and
direct source of information to the user. Efficient and intelligent output design improves the
system's relationship with the user and supports decision-making.

1. Designing computer output should proceed in an organized, well thought out manner; the
right output must be developed while ensuring that each output element is designed so that
people will find the system easy and effective to use. When analysts design computer output,
they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create documents, reports, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following
objectives:

• Convey information about past activities, current status or projections of the future.
• Signal important events, opportunities, problems, or warnings.
• Trigger an action.
• Confirm an action.

CHAPTER-8

IMPLEMENTATION

8.1 MODULES

• UTILIZER

8.1.1 MODULE DESCRIPTION


• Utilizer
The utilizer module plays a crucial role in coordinating the information gathered
from signature-based and anomaly-based detection. It combines data from both
methods to provide a comprehensive view of potential worm activity, and it
correlates the information obtained from both detection methods to accurately
determine the likelihood of a worm outbreak.
It assists in deciding whether an identified activity is indeed a worm attack by
analyzing the combined results from signature and anomaly detection. The
utilizer module generates alerts or notifications based on the analyzed data,
informing security teams about potential worm threats that need attention. It
also helps coordinate the response actions to mitigate the worm attack, such as
isolating infected systems, applying patches, or deploying countermeasures.
The utilizer module acts as a central component that maximizes the
effectiveness of two-factor worm detection by leveraging the strengths of both
signature and anomaly detection techniques. Its role is critical in ensuring a
proactive and robust defense against worm attacks.
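The alerting behaviour described above might be sketched as follows; the field names, severity levels and recommended actions are hypothetical assumptions, not the module's actual output format:

```python
# Hypothetical sketch of the utilizer module turning combined detection
# results into an alert for the security team.

def build_alert(source_ip, signature_hit, anomaly_hit):
    """Create an alert dict when either detector flags the traffic, else None."""
    if not (signature_hit or anomaly_hit):
        return None  # nothing to report
    severity = "HIGH" if (signature_hit and anomaly_hit) else "MEDIUM"
    actions = ["isolate host", "apply patches"] if severity == "HIGH" else ["review logs"]
    return {"source_ip": source_ip, "severity": severity, "recommended_actions": actions}

print(build_alert("203.0.113.7", True, True))
print(build_alert("198.51.100.2", False, False))  # None
```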

CHAPTER-9

SOFTWARE ENVIRONMENT

9.1 PYTHON

Below are some facts about Python.

• Python is currently one of the most widely used multi-purpose, high-level programming
languages.

• Python allows programming in Object-Oriented and Procedural paradigms. Python
programs are generally smaller than equivalent programs in other languages like Java.

• Programmers have to type relatively little, and the language's indentation requirement
keeps programs readable.

• The Python language is used by almost all tech-giant companies, such as Google,
Amazon, Facebook, Instagram, Dropbox and Uber.

• The biggest strength of Python is its huge collection of standard libraries, which can be
used for the following:

• Machine Learning
• GUI Applications (like Kivy, Tkinter, PyQt etc. )
• Web frameworks like Django (used by YouTube, Instagram, Dropbox)
• Image processing (like Opencv, Pillow)
• Web scraping (like Scrapy, BeautifulSoup, Selenium)
• Test frameworks
• Multimedia

Advantages of Python:
Let's see how Python dominates over other languages.

1. Extensive Libraries
Python ships with an extensive standard library that contains code for various purposes like
regular expressions, documentation generation, unit testing, web browsers, threading,
databases, CGI, email, image manipulation, and more, so we don't have to write the
complete code for that manually.

2. Extensible
As we have seen earlier, Python can be extended with other languages. You can write some
of your code in languages like C++ or C. This comes in handy, especially in performance-
critical projects.

3. Embeddable
Complimentary to extensibility, Python is embeddable as well. You can put your Python
code in your source code of a different language, like C++. This lets us add scripting
capabilities to our code in the other language.

4. Improved Productivity
The language's simplicity and extensive libraries render programmers more
productive than languages like Java and C++ do. You also need to write less code to get
more done.

5. IoT Opportunities
Since Python forms the basis of new platforms like the Raspberry Pi, its future in the
Internet of Things looks bright. This is a way to connect the language with the real world.

6. Simple and Easy
When working with Java, you may have to create a whole class just to print 'Hello World'.
But in Python, a single print statement will do. It is also quite easy to learn, understand, and
code. This is why, when people pick up Python, they can have a hard time adjusting to other
more verbose languages like Java.

7. Readable
Because it is not such a verbose language, reading Python is much like reading English.
This is the reason why it is so easy to learn, understand, and code. It also does not need
curly braces to define blocks, and indentation is mandatory. This further aids the
readability of the code.
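The two points above (no curly braces, mandatory indentation) can be seen in a tiny sketch; the `classify` function and its threshold are invented purely for illustration:

```python
def classify(score):
    # Indentation alone delimits the if-block; there are no braces.
    if score > 0.5:
        return "worm"      # this line belongs to the if-block
    return "benign"        # the dedent ends the block

print(classify(0.9))
print(classify(0.1))
```

Removing or changing the indentation of a line changes which block it belongs to, which is exactly why indentation is mandatory rather than cosmetic.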

8. Object-Oriented
This language supports both the procedural and object-oriented programming
paradigms. While functions help us with code reusability, classes and objects let us model
the real world. A class allows the encapsulation of data and functions into one.

9. Free and Open-Source
As we said earlier, Python is freely available. Not only can you download
Python for free, but you can also download its source code, make changes to it, and even
distribute it. It comes with an extensive collection of libraries to help you with your
tasks.

10. Portable
When you code your project in a language like C++, you may need to make some changes
to it if you want to run it on another platform. But it isn't the same with Python. Here, you
need to code only once, and you can run it anywhere. This is called Write Once Run
Anywhere (WORA). However, you need to be careful enough not to include any
system-dependent features.

11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are executed one
by one, debugging is easier than in compiled languages.

Advantages of Python Over Other Languages :

1. Less Coding
Almost all tasks done in Python require less code than the same tasks done in
other languages. Python also has awesome standard library support, so you don't have
to search for any third-party libraries to get your job done. This is the reason many
people suggest learning Python to beginners.

2. Affordable
Python is free, so individuals, small companies, or big organizations can leverage
the freely available resources to build applications. Python is popular and widely used, so it
gives you better community support.

The 2019 GitHub annual survey showed us that Python has overtaken Java in the
most popular programming language category.

3. Python is for Everyone

Python code can run on any machine, whether it is Linux, Mac or Windows. Programmers
need to learn different languages for different jobs, but with Python, you can professionally
build web apps, perform data analysis and machine learning, automate things, do web
scraping and also build games and powerful visualizations. It is an all-rounder
programming language.

Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you choose it, you
should be aware of its consequences as well. Let’s now see the downsides of choosing
Python over another language.

1. Speed Limitations

We have seen that Python code is executed line by line. But since Python is interpreted, this
often results in slow execution. This, however, isn't a problem unless speed is a focal point
for the project. In other words, unless high speed is a requirement, the benefits offered by
Python are enough to outweigh its speed limitations.

2. Weak in Mobile Computing and Browsers

While it serves as an excellent server-side language, Python is rarely seen on
the client side. Besides that, it is rarely ever used to implement smartphone-based
applications. One such application is called Carbonnelle.
The reason it is not so popular on the client side, despite the existence of Brython, is that it
isn't that secure.

3. Design Restrictions

As you know, Python is dynamically-typed. This means that you don’t need to declare the
type of variable while writing the code. It uses duck-typing. But wait, what’s that? Well,
it just means that if it looks like a duck, it must be a duck.

While this is easy on the programmers during coding, it can raise run-time errors.
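Duck typing can be sketched in a few lines; the `Duck` and `Person` classes below are invented for illustration. Any object with a `quack()` method is accepted, and passing one without it fails only at run time, which is exactly the kind of error mentioned above:

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I'm quacking!"

def make_it_quack(thing):
    # No declared parameter type: anything with a quack() method works.
    return thing.quack()

print(make_it_quack(Duck()))
print(make_it_quack(Person()))

# make_it_quack(42) would raise AttributeError, but only when executed,
# not when the code is written or loaded.
```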

4. Underdeveloped Database Access Layers

Compared to more widely used technologies like JDBC (Java DataBase
Connectivity) and ODBC (Open DataBase Connectivity), Python's database access
layers are a bit underdeveloped. Consequently, it is less often applied in huge enterprises.

5. Simple

No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my example. I
don’t do Java, I’m more of a Python person. To me, its syntax is so simple that the verbosity
of Java code seems unnecessary.

This was all about the Advantages and Disadvantages of Python Programming Language.

History of Python : -
What do the alphabet and the programming language Python have in common? Right, both
start with ABC. If we are talking about ABC in the Python context, it's clear that the
programming language ABC is meant. ABC is a general-purpose programming language
and programming environment, which was developed in the Netherlands, Amsterdam,
at the CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to
influence the design of Python. Python was conceptualized in the late 1980s. Guido van
Rossum worked at that time at the CWI on a project called Amoeba, a distributed operating
system. In an interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I
worked as an implementer on a team building a language called ABC at Centrum voor
Wiskunde en Informatica (CWI). I don't know how well people know ABC's influence on
Python. I try to mention ABC's influence because I'm indebted to everything I learned
during that project and to the people who worked on it." Later in the same interview,
Guido van Rossum continued: "I remembered all my experience and some of my
frustration with ABC. I decided to try to design a simple scripting language that possessed
some of ABC's better properties, but without its problems. So I started typing. I created a
simple virtual machine, a simple parser, and a simple runtime. I made my own version of
the various ABC parts that I liked. I created a basic syntax, used indentation for statement
grouping instead of curly braces or begin-end blocks, and developed a small number of
powerful data types: a hash table (or dictionary, as we call it), a list, strings, and numbers."

What is Machine Learning : -
Before we take a look at the details of various machine learning methods, let's start by
looking at what machine learning is, and what it isn't. Machine learning is often categorized
as a subfield of artificial intelligence, but I find that categorization can often be misleading
at first brush. The study of machine learning certainly arose from research in this context,
but in the data science application of machine learning methods, it's more helpful to think
of machine learning as a means of building models of data.

Fundamentally, machine learning involves building mathematical models to help
understand data. "Learning" enters the fray when we give these models tunable
parameters that can be adapted to observed data; in this way the program can be considered
to be "learning" from the data. Once these models have been fit to previously seen data,
they can be used to predict and understand aspects of newly observed data. I'll leave to the
reader the more philosophical digression regarding the extent to which this type of
mathematical, model-based "learning" is similar to the "learning" exhibited by the human
brain. Understanding the problem setting in machine learning is essential to using these
tools effectively, and so we will start with some broad categorizations of the types of
approaches we'll discuss here.

Categories Of Machine Leaning :-

At the most fundamental level, machine learning can be categorized into two main types:
supervised learning and unsupervised learning. Supervised learning involves somehow
modeling the relationship between measured features of data and some label associated
with the data; once this model is determined, it can be used to apply labels to new, unknown
data. This is further subdivided into classification tasks and regression tasks: in
classification, the labels are discrete categories, while in regression, the labels are
continuous quantities.

We will see examples of both types of supervised learning in the following section.

Unsupervised learning involves modeling the features of a dataset without reference to any
label, and is often described as "letting the dataset speak for itself." These models include
tasks such as clustering and dimensionality reduction. Clustering algorithms identify
distinct groups of data, while dimensionality reduction algorithms search for more succinct
representations of the data. We will see examples of both types of unsupervised learning
in the following section.
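The contrast can be sketched with scikit-learn on a tiny invented dataset: the classifier is given labels to learn from, while the clustering algorithm must discover the two groups on its own. The data and model choices here are illustrative, not part of the project:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: features X come with labels y; the fitted model labels new data.
X = np.array([[0.0], [0.5], [2.5], [3.0]])
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
pred = clf.predict([[2.8]])          # a point near the class-1 examples

# Unsupervised: the same features without y; KMeans finds two groups itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(pred, km.labels_)
```

Note that KMeans recovers the same grouping as the labels, but its cluster numbers (0/1) are arbitrary, since no labels were given.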

Need for Machine Learning

Human beings, at this moment, are the most intelligent and advanced species on earth
because they can think, evaluate and solve complex problems. On the other side, AI is still
in its initial stage and hasn't surpassed human intelligence in many aspects. The question,
then, is why we need to make machines learn. The most suitable reason for doing
this is, "to make decisions, based on data, with efficiency and scale".

Lately, organizations have been investing heavily in newer technologies like Artificial
Intelligence, Machine Learning and Deep Learning to get the key information from data to
perform several real-world tasks and solve problems. We can call it data-driven decisions
taken by machines, particularly to automate the process. These data-driven decisions can
be used, instead of programming logic, in problems that cannot be programmed
inherently. The fact is that we can't do without human intelligence, but the other aspect is
that we all need to solve real-world problems with efficiency at a huge scale. That is why
the need for machine learning arises.

Challenges in Machines Learning :-

While Machine Learning is rapidly evolving, making significant strides with cybersecurity
and autonomous cars, this segment of AI as a whole still has a long way to go. The reason
is that ML has not yet been able to overcome a number of challenges. The challenges
that ML is facing currently are −

Quality of data − Having good-quality data for ML algorithms is one of the biggest
challenges. Use of low-quality data leads to problems related to data preprocessing and
feature extraction.

Time-consuming task − Another challenge faced by ML models is the consumption of
time, especially for data acquisition, feature extraction and retrieval.

Lack of specialist persons − As ML technology is still in its infancy stage, availability of
expert resources is a tough job.

No clear objective for formulating business problems − Having no clear objective and
well-defined goal for business problems is another key challenge for ML because this
technology is not that mature yet.

Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot
represent the problem well.

Curse of dimensionality − Another challenge an ML model faces is too many features of
data points. This can be a real hindrance.

Difficulty in deployment − The complexity of the ML model makes it quite difficult to
deploy in real life.

Applications of Machines Learning :-

Machine Learning is the most rapidly growing technology, and according to researchers we
are in the golden year of AI and ML. It is used to solve many real-world complex problems
which cannot be solved with a traditional approach. Following are some real-world
applications of ML −

• Emotion analysis

• Sentiment analysis

• Error detection and prevention

• Weather forecasting and prediction

• Stock market analysis and forecasting

• Speech synthesis

• Speech recognition

• Customer segmentation

• Object recognition

• Fraud detection

• Fraud prevention

• Recommendation of products to customer in online shopping.

How to Start Learning Machine Learning?

Arthur Samuel coined the term "Machine Learning" in 1959 and defined it as a "Field of
study that gives computers the capability to learn without being explicitly
programmed".
And that was the beginning of Machine Learning! In modern times, Machine Learning is
one of the most popular (if not the most!) career choices. According to Indeed, Machine
Learning Engineer was the best job of 2019, with 344% growth and an average base salary
of $146,085 per year.
But there is still a lot of doubt about what exactly Machine Learning is and how to start
learning it. So this section deals with the basics of Machine Learning and also the path you
can follow to eventually become a full-fledged Machine Learning Engineer. Now let's get
started!

How to start learning ML?

This is a rough roadmap you can follow on your way to becoming an insanely talented
Machine Learning Engineer. Of course, you can always modify the steps according to your
needs to reach your desired end-goal!

Step 1 – Understand the Prerequisites: In case you are a genius, you could start ML directly,
but normally there are some prerequisites that you need to know, which include Linear Algebra,
Multivariate Calculus, Statistics, and Python. And if you don't know these, never fear! You
don't need a Ph.D. in these topics to get started, but you do need a basic understanding.

(a) Learn Linear Algebra and Multivariate Calculus

Both Linear Algebra and Multivariate Calculus are important in Machine Learning.
However, the extent to which you need them depends on your role as a data scientist. If you
are more focused on application heavy machine learning, then you will not be that heavily
focused on maths as there are many common libraries available. But if you want to focus
on R&D in Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is
very important as you will have to implement many ML algorithms from scratch.

(b) Learn Statistics

Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML
expert will be spent collecting and cleaning data. And statistics is the field that handles the
collection, analysis, and presentation of data. So it is no surprise that you need to learn it!
Some of the key concepts in statistics that are important are Statistical Significance,
Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian Thinking is
also a very important part of ML, which deals with concepts like Conditional
Probability, Priors and Posteriors, Maximum Likelihood, etc.
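As a worked example of conditional probability and Bayes' rule, with all numbers hypothetical: suppose a worm infects 1% of hosts, and a detector flags 99% of infected hosts but also 5% of clean ones. The posterior probability that a flagged host is really infected follows directly:

```python
# Hypothetical prior and likelihoods
p_worm = 0.01                 # P(worm)
p_flag_given_worm = 0.99      # P(flag | worm)
p_flag_given_clean = 0.05     # P(flag | clean)

# Law of total probability for P(flag), then Bayes' rule for the posterior
p_flag = p_flag_given_worm * p_worm + p_flag_given_clean * (1 - p_worm)
p_worm_given_flag = p_flag_given_worm * p_worm / p_flag

print(round(p_worm_given_flag, 3))
```

Despite the detector's high accuracy, the posterior is only about 0.167, because the prior is so small: most flags come from the large clean population. This is why priors matter.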

(c) Learn Python

Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn
them as they go along, with trial and error. But the one thing that you absolutely cannot skip
is Python! While there are other languages you can use for Machine Learning, like R, Scala,
etc., Python is currently the most popular language for ML. In fact, there are many Python
libraries that are specifically useful for Artificial Intelligence and Machine Learning, such
as Keras, TensorFlow, Scikit-learn, etc.
So if you want to learn ML, it's best if you learn Python! You can do that using various
online resources and courses.

Step 2 – Learn Various ML Concepts

Now that you are done with the prerequisites, you can move on to actually learning ML
(Which is the fun part!!!) It’s best to start with the basics and then move on to the more
complicated stuff. Some of the basic concepts in ML are:

(a) Terminologies of Machine Learning

• Model – A model is a specific representation learned from data by applying some machine
learning algorithm. A model is also called a hypothesis.
• Feature – A feature is an individual measurable property of the data. A set of numeric
features can be conveniently described by a feature vector. Feature vectors are fed as input
to the model. For example, in order to predict a fruit, there may be features like color, smell,
taste, etc.
• Target (Label) – A target variable or label is the value to be predicted by our model. For the
fruit example discussed in the feature section, the label with each set of input would be the
name of the fruit like apple, orange, banana, etc.
• Training – The idea is to give a set of inputs (features) and their expected outputs (labels), so
after training, we will have a model (hypothesis) that will then map new data to one of the
categories trained on.
• Prediction – Once our model is ready, it can be fed a set of inputs to which it will provide a
predicted output (label).
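These terms map directly onto a few lines of scikit-learn; the fruit data below is invented purely to mirror the fruit example in the list above:

```python
from sklearn.tree import DecisionTreeClassifier

# Features: each row is a feature vector [weight_in_grams, smoothness_0_to_10]
X_train = [[150, 9], [170, 8], [120, 3], [110, 2]]
# Targets (labels): the values the model should learn to predict
y_train = ["apple", "apple", "orange", "orange"]

model = DecisionTreeClassifier(random_state=0)   # the model (hypothesis)
model.fit(X_train, y_train)                      # training

prediction = model.predict([[160, 8]])           # prediction on a new input
print(prediction[0])
```

A heavy, smooth fruit lands on the "apple" side of the learned split, so the prediction for the new feature vector is "apple".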

(b) Types of Machine Learning

• Supervised Learning – This involves learning from a training dataset with labeled data
using classification and regression models. This learning process continues until the required
level of performance is achieved.
• Unsupervised Learning – This involves using unlabelled data and then finding the
underlying structure in the data in order to learn more and more about the data itself using
factor and cluster analysis models.
• Semi-supervised Learning – This involves using unlabelled data like Unsupervised
Learning with a small amount of labeled data. Using labeled data vastly increases the
learning accuracy and is also more cost-effective than Supervised Learning.
• Reinforcement Learning – This involves learning optimal actions through trial and error.
So the next action is decided by learning behaviors that are based on the current state and
that will maximize the reward in the future.

Advantages of Machine learning :-

1. Easily identifies trends and patterns -

Machine Learning can review large volumes of data and discover specific trends and patterns
that would not be apparent to humans. For instance, for an e-commerce website like Amazon,
it serves to understand the browsing behaviors and purchase histories of its users to help cater
to the right products, deals, and reminders relevant to them. It uses the results to reveal
relevant advertisements to them.

2. No human intervention needed (automation)

With ML, you don't need to babysit your project every step of the way. Since it means giving
machines the ability to learn, it lets them make predictions and also improve the algorithms
on their own. A common example of this is anti-virus software; it learns to filter new threats
as they are recognized. ML is also good at recognizing spam.

3. Continuous Improvement

As ML algorithms gain experience, they keep improving in accuracy and efficiency. This
lets them make better decisions. Say you need to make a weather forecast model. As the
amount of data you have keeps growing, your algorithms learn to make more accurate
predictions faster.

4. Handling multi-dimensional and multi-variety data

Machine Learning algorithms are good at handling data that are multi-dimensional and multi-
variety, and they can do this in dynamic or uncertain environments.

5. Wide Applications

You could be an e-tailer or a healthcare provider and make ML work for you. Where it does
apply, it holds the capability to help deliver a much more personal experience to customers
while also targeting the right customers.

Disadvantages of Machine Learning :-

1. Data Acquisition

Machine Learning requires massive data sets to train on, and these should be
inclusive/unbiased, and of good quality. There can also be times where they must wait for
new data to be generated.

2. Time and Resources

ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose
with a considerable amount of accuracy and relevancy. It also needs massive resources to
function. This can mean additional requirements of computer power for you.

3. Interpretation of Results

Another major challenge is the ability to accurately interpret results generated by the
algorithms. You must also carefully choose the algorithms for your purpose.

4. High error-susceptibility

Machine Learning is autonomous but highly susceptible to errors. Suppose you train an
algorithm with data sets small enough to not be inclusive. You end up with biased predictions
coming from a biased training set. This leads to irrelevant advertisements being displayed to
customers. In the case of ML, such blunders can set off a chain of errors that can go undetected
for long periods of time. And when they do get noticed, it takes quite some time to recognize
the source of the issue, and even longer to correct it.

Python Development Steps : -

Guido van Rossum published the first version of the Python code (version 0.9.0) at alt.sources
in February 1991. This release already included exception handling, functions, and the core
data types list, dict, str and others. It was also object-oriented and had a module system.
Python version 1.0 was released in January 1994. The major new features included in this
release were the functional programming tools lambda, map, filter and reduce, which Guido
van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was
introduced. This release included list comprehensions and a full garbage collector, and it
supported Unicode. Python flourished for another 8 years in the versions 2.x before the next
major release as Python 3.0 (also known as "Python 3000" and "Py3K") was released. Python
3 is not backwards compatible with Python 2.x. The emphasis in Python 3 had been on the
removal of duplicate programming constructs and modules, thus fulfilling or coming close to
fulfilling the 13th law of the Zen of Python: "There should be one -- and preferably only one
-- obvious way to do it." Some changes in Python 3.0:

• print is now a function
• Views and iterators instead of lists
• The rules for ordering comparisons have been simplified. E.g. a heterogeneous list cannot
be sorted, because all the elements of a list must be comparable to each other.
• There is only one integer type left, i.e. int; long is int as well.
• The division of two integers returns a float instead of an integer. "//" can be used to have
the "old" behaviour.
• Text vs. data instead of Unicode vs. 8-bit
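The print and division changes in the list above can be checked directly at any Python 3 prompt:

```python
# print is a function call in Python 3, not a statement
print("Hello")

# True division always returns a float; "//" gives the old floor behaviour
assert 7 / 2 == 3.5
assert 7 // 2 == 3

# A single arbitrary-precision int type replaces the old int/long split
assert isinstance(10 ** 100, int)
```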

Purpose :-

We demonstrated that our approach enables successful segmentation of intra-retinal
layers—even with low-quality images containing speckle noise, low contrast, and different
intensity ranges throughout—with the assistance of the ANIS feature.

Python

Python is an interpreted high-level programming language for general-purpose
programming. Created by Guido van Rossum and first released in 1991, Python has a
design philosophy that emphasizes code readability, notably using significant whitespace.

Python features a dynamic type system and automatic memory management. It supports
multiple programming paradigms, including object-oriented, imperative, functional and
procedural, and has a large and comprehensive standard library.

• Python is Interpreted − Python is processed at runtime by the interpreter. You do not need
to compile your program before executing it. This is similar to PERL and PHP.
• Python is Interactive − you can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
Python also acknowledges that speed of development is important. Readable and terse code
is part of this, and so is access to powerful constructs that avoid tedious repetition of code.
Maintainability also ties into this; it may be an all but useless metric, but it does say something
about how much code you have to scan, read and/or understand to troubleshoot problems
or tweak behaviors. This speed of development, the ease with which a programmer of other
languages can pick up basic Python skills, and the huge standard library are key to another
area where Python excels. All its tools have been quick to implement, saved a lot of time,
and several of them have later been patched and updated by people with no Python
background - without breaking.

Modules Used in Project :-

TensorFlow
TensorFlow is a free and open-source software library for dataflow and differentiable
programming across a range of tasks. It is a symbolic math library, and is also used
for machine learning applications such as neural networks. It is used for both research and
production at Google.

TensorFlow was developed by the Google Brain team for internal Google use. It was
released under the Apache 2.0 open-source license on November 9, 2015.

Numpy : Numpy is a general-purpose array-processing package. It provides a
high-performance multidimensional array object, and tools for working with these arrays.

It is the fundamental package for scientific computing with Python. It contains various
features including these important ones:

▪ A powerful N-dimensional array object
▪ Sophisticated (broadcasting) functions
▪ Tools for integrating C/C++ and Fortran code
▪ Useful linear algebra, Fourier transform, and random number capabilities
Besides its obvious scientific uses, Numpy can also be used as an efficient
multi-dimensional container of generic data. Arbitrary data-types can be defined using Numpy,
which allows Numpy to seamlessly and speedily integrate with a wide variety of databases.
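A minimal sketch of the features listed above (N-dimensional array object, broadcasting, linear algebra); the numbers are arbitrary:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)   # a 2x3 N-dimensional array object
b = np.array([10, 20, 30])

c = a + b                        # broadcasting: b is added to every row of a
print(c)

print(c.mean())                  # vectorised statistics over the whole array
print(np.dot(b, b))              # linear algebra: inner product
```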

Pandas

Pandas is an open-source Python library providing high-performance data manipulation
and analysis tools using its powerful data structures. Python was majorly used for data
munging and preparation; it had very little contribution towards data analysis. Pandas
solved this problem. Using Pandas, we can accomplish five typical steps in the processing
and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model,
and analyze. Python with Pandas is used in a wide range of fields, including academic and
commercial domains such as finance, economics, statistics, analytics, etc.
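The load-prepare-manipulate-analyze steps can be sketched on a small invented table of traffic records (the column names and values are hypothetical):

```python
import pandas as pd

# Load: build a small frame of hypothetical traffic records
df = pd.DataFrame({
    "protocol": ["tcp", "udp", "tcp", "icmp"],
    "bytes": [500, 120, 750, 60],
})

# Prepare/manipulate: derive a new column from an existing one
df["kilobytes"] = df["bytes"] / 1024

# Analyze: total bytes per protocol
by_proto = df.groupby("protocol")["bytes"].sum()
print(by_proto["tcp"])
```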

Matplotlib

Matplotlib is a Python 2D plotting library which produces publication quality figures in a
variety of hardcopy formats and interactive environments across platforms. Matplotlib can
be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web
application servers, and four graphical user interface toolkits. Matplotlib tries to make easy
things easy and hard things possible. You can generate plots, histograms, power spectra,
bar charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see
the sample plots and thumbnail gallery.

For simple plotting the pyplot module provides a MATLAB-like interface, particularly
when combined with IPython. For the power user, you have full control of line styles, font
properties, axes properties, etc, via an object oriented interface or via a set of functions
familiar to MATLAB users.
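A minimal pyplot sketch producing a hardcopy file (the data and the filename are arbitrary); the Agg backend is selected so it runs without a display:

```python
import matplotlib
matplotlib.use("Agg")            # non-interactive backend: render to a file
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [2, 4, 1], label="packets")  # a simple line plot
ax.set_xlabel("time")
ax.set_ylabel("count")
ax.legend()
fig.savefig("plot.png")          # hardcopy output in PNG format
```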

Scikit – learn

Scikit-learn provides a range of supervised and unsupervised learning algorithms via a
consistent interface in Python. It is licensed under a permissive simplified BSD license
and is distributed under many Linux distributions, encouraging academic and commercial
use.


Install Python Step-by-Step in Windows and Mac :

Python, a versatile programming language, doesn't come pre-installed on your computer.
Python was first released in the year 1991, and to this day it is a very popular high-level
programming language. Its style philosophy emphasizes code readability, with its notable
use of significant whitespace.
The object-oriented approach and language constructs provided by Python enable
programmers to write both clear and logical code for projects. This software does not come
pre-packaged with Windows.

How to Install Python on Windows and Mac :

There have been several updates to the Python version over the years. The question is how to
install Python. It might be confusing for a beginner who is willing to start learning Python,
but this tutorial will solve your query. The latest version of Python is version
3.7.4; in other words, it is Python 3.
Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices.

Before you start with the installation process of Python, you first need to know about
your system requirements. Based on your system type, i.e. operating system and
processor, you must download the matching Python version. My system type is a Windows
64-bit operating system, so the steps below are to install Python version 3.7.4 on a Windows 7
device, i.e. to install Python 3. The steps on how to install Python on Windows 10, 8 and 7 are
divided into 4 parts to help understand better.

Download the Correct version into the system

Step 1: Go to the official site to download and install Python using Google Chrome or any
other web browser, or click on the following link: https://www.python.org

Now, check for the latest and the correct version for your operating system.

Step 2: Click on the Download Tab.

Step 3: You can either select the Download Python 3.7.4 for Windows button in yellow color,
or you can scroll further down and click on the download for the respective version. Here,
we are downloading the most recent Python version for Windows, 3.7.4.

Step 4: Scroll down the page until you find the Files option.

Step 5: Here you see a different version of python along with the operating system.

• To download Windows 32-bit python, you can select any one from the three options:
Windows x86 embeddable zip file, Windows x86 executable installer or Windows x86
web-based installer.

• To download Windows 64-bit python, you can select any one from the three options:
Windows x86-64 embeddable zip file, Windows x86-64 executable installer or Windows
x86-64 web-based installer.
Here we will install the Windows x86-64 web-based installer. With this, the first part, regarding
which version of Python is to be downloaded, is completed. Now we move ahead with the
second part of installing Python, i.e. the installation itself.
Note: To know the changes or updates that are made in the version, you can click on the
Release Note option.

Installation of Python

Step 1: Go to Downloads and open the downloaded Python version to carry out the installation
process.

Step 2: Before you click on Install Now, make sure to tick Add Python 3.7 to
PATH.

Step 3: Click on Install Now. After the installation is successful, click on Close.

With these above three steps of Python installation, you have successfully and correctly
installed Python. Now is the time to verify the installation.
Note: The installation process might take a couple of minutes.

Verify the Python Installation

Step 1: Click on Start
Step 2: In the Windows Run Command, type “cmd”.

Step 3: Open the Command prompt option.


Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter.

Step 5: The output will show Python 3.7.4.


Note: If you have an earlier version of Python already installed, you must first uninstall it and then install the new one.
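The same check can be performed from inside Python itself. The helper below is a small illustrative sketch (it is not part of the installer), assuming any Python 3 interpreter:

```python
# Programmatic counterpart to "python -V": confirm the interpreter version.
import sys

def python_version_ok(minimum=(3, 7)):
    """Return True when the running interpreter is at least `minimum`."""
    return sys.version_info[:2] >= minimum

print(python_version_ok())
```

On the 3.7.4 installation described above this should print True.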

Check how the Python IDLE works

Step 1: Click on Start

Step 2: In the Windows Run command, type “python idle”.

Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program.

Step 4: To go ahead with working in IDLE, you must first save the file. Click on File > Save.

Step 5: Name the file, keeping the save-as type as Python files, and click on Save. Here the file is named Hey World.
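The Hey World file saved above is not shown in the document; as an illustration, it might contain a minimal script such as:

```python
# Illustrative contents of the "Hey World" file saved in IDLE.
def greet(name="World"):
    """Build a simple greeting string."""
    return "Hey " + name

if __name__ == "__main__":
    print(greet())  # prints: Hey World
```

Pressing F5 (Run > Run Module) in IDLE executes the saved file and shows the output in the Python shell.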

9.2 SOURCE CODE

Internet Worm Detection

from tkinter import messagebox
from tkinter import *
from tkinter.filedialog import askopenfilename
from tkinter import simpledialog
import tkinter
import numpy as np
from tkinter import filedialog
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
from scapy.all import *
from multiprocessing import Queue
from SignatureBasedDetection import *
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
import webbrowser

main = tkinter.Tk()
main.title("Two Factor Worm Detection Based on Signature & Anomaly")
main.geometry("1300x1200")

worms = ['back', 'buffer_overflow', 'ftp_write', 'guess_passwd', 'imap', 'ipsweep', 'multihop',
         'neptune', 'nmap', 'normal', 'pod', 'portsweep', 'rootkit',
         'satan', 'smurf', 'teardrop', 'warezclient', 'warezmaster']

global filename
accuracy = []
global dataset
global X, Y
global X_train, X_test, y_train, y_test
global output
global classifier
global le

def uploadPCAP():
    global filename
    filename = filedialog.askopenfilename(initialdir="PCAP_Signatures")
    pathlabel.config(text=filename)
    text.delete('1.0', END)
    text.insert(END, 'PCAP Signatures loaded\n')

def runSignatureDetection():
    text.delete('1.0', END)
    queue = Queue()
    packets = rdpcap(filename)
    for pkt in packets:
        queue.put(pkt)
    total_packets = queue.qsize()
    text.insert(END, "Packets loaded to Queue\n")
    text.insert(END, "Total available packets in Queue are : " + str(queue.qsize()))
    sbd = SignatureBasedDetection(queue, text)
    sbd.start()

def uploadAnomaly():
    global le
    text.delete('1.0', END)
    global dataset
    global X, Y
    global X_train, X_test, y_train, y_test
    global filename
    filename = filedialog.askopenfilename(initialdir="AnomalyDataset")
    pathlabel.config(text=filename)
    text.delete('1.0', END)
    text.insert(END, 'IDS dataset loaded\n')
    dataset = pd.read_csv(filename)
    temp = pd.read_csv(filename)
    le = LabelEncoder()
    dataset['protocol_type'] = pd.Series(le.fit_transform(dataset['protocol_type']))
    dataset['service'] = pd.Series(le.fit_transform(dataset['service']))
    dataset['flag'] = pd.Series(le.fit_transform(dataset['flag']))
    dataset['label'] = pd.Series(le.fit_transform(dataset['label']))
    temp = temp.values
    worm = temp[:, temp.shape[1] - 1]
    (worm, count) = np.unique(worm, return_counts=True)
    dataset = dataset.values
    X = dataset[:, 0:dataset.shape[1] - 2]
    Y = dataset[:, dataset.shape[1] - 1]
    print(Y)
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
    text.insert(END, "Dataset contains total records : " + str(len(X)) + "\n")
    text.insert(END, "Train & Test Dataset Splits to 80 and 20%\n")
    text.insert(END, "Dataset records to train classification model : " + str(len(X_train)) + "\n")
    text.insert(END, "Dataset records to test classification model : " + str(len(X_test)) + "\n")
    fig, ax = plt.subplots()
    y_pos = np.arange(len(worm))
    plt.bar(y_pos, count)
    plt.xticks(y_pos, worm)
    ax.xaxis_date()
    fig.autofmt_xdate()
    plt.show()

def runAnomalyDetection():
    global classifier
    global X_train, X_test, y_train, y_test
    text.delete('1.0', END)
    global output
    output = '<html><body><center><table border=1><tr><th>Algorithm Name</th><th>Accuracy</th><th>Precision</th><th>Recall</th><th>FScore</th></tr>'
    accuracy.clear()

    dt = DecisionTreeClassifier()
    dt.fit(X_train, y_train)
    predict = dt.predict(X_test)
    tree_acc = accuracy_score(y_test, predict) * 100
    text.insert(END, "Decision Tree Classification Algorithm Prediction Accuracy : " + str(tree_acc) + "\n")
    accuracy.append(tree_acc)
    precision = precision_score(y_test, predict, average='macro') * 100
    recall = recall_score(y_test, predict, average='macro') * 100
    fmeasure = f1_score(y_test, predict, average='macro') * 100
    output += '<tr><td>Decision Tree</td><td>' + str(tree_acc) + '</td><td>' + str(precision) + '</td><td>' + str(recall) + '</td><td>' + str(fmeasure) + '</td></tr>'

    rf = RandomForestClassifier()
    rf.fit(X_train, y_train)
    predict = rf.predict(X_test)
    rf_acc = accuracy_score(y_test, predict) * 100
    classifier = rf
    text.insert(END, "Random Forest Classification Algorithm Prediction Accuracy : " + str(rf_acc) + "\n")
    accuracy.append(rf_acc)
    precision = precision_score(y_test, predict, average='macro') * 100
    recall = recall_score(y_test, predict, average='macro') * 100
    fmeasure = f1_score(y_test, predict, average='macro') * 100
    output += '<tr><td>Random Forest</td><td>' + str(rf_acc) + '</td><td>' + str(precision) + '</td><td>' + str(recall) + '</td><td>' + str(fmeasure) + '</td></tr>'

    bn = GaussianNB()
    bn.fit(X_train, y_train)
    predict = bn.predict(X_test)
    bn_acc = accuracy_score(y_test, predict) * 100
    text.insert(END, "Bayesian Network Classification Algorithm Prediction Accuracy : " + str(bn_acc) + "\n")
    accuracy.append(bn_acc)
    precision = precision_score(y_test, predict, average='macro') * 100
    recall = recall_score(y_test, predict, average='macro') * 100
    fmeasure = f1_score(y_test, predict, average='macro') * 100
    output += '<tr><td>Naive Bayes</td><td>' + str(bn_acc) + '</td><td>' + str(precision) + '</td><td>' + str(recall) + '</td><td>' + str(fmeasure) + '</td></tr>'

def predictAttack():
    global classifier
    filename = filedialog.askopenfilename(initialdir="IDSAttackDataset")
    testData = pd.read_csv(filename)
    testData['protocol_type'] = pd.Series(le.fit_transform(testData['protocol_type']))
    testData['service'] = pd.Series(le.fit_transform(testData['service']))
    testData['flag'] = pd.Series(le.fit_transform(testData['flag']))
    testData = testData.values
    testData = testData[:, 0:testData.shape[1] - 1]
    predict = classifier.predict(testData)
    print(predict)
    for i in range(len(predict)):
        value = int(predict[i])
        print(str(value) + " " + str(worms[value]))
        text.insert(END, str(testData[i]) + " =====> PREDICTED WORM : " + worms[value] + "\n\n")

def graph():
    height = accuracy
    bars = ('Decision Tree Accuracy', 'Random Forest Accuracy', 'Bayesian Network Accuracy')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
    plt.xticks(y_pos, bars)
    plt.show()

def compareTable():
    global output
    f = open("table.html", "w")
    f.write(output)
    f.close()
    webbrowser.open("table.html", new=2)

def close():
    main.destroy()

font = ('times', 16, 'bold')
title = Label(main, text='Two Factor Worm Detection Based on Signature & Anomaly')
title.config(bg='chocolate', fg='white')
title.config(font=font)
title.config(height=3, width=120)
title.place(x=0, y=5)

font1 = ('times', 13, 'bold')
upload = Button(main, text="Upload PCAP Signature Dataset", command=uploadPCAP)
upload.place(x=700, y=100)
upload.config(font=font1)

pathlabel = Label(main)
pathlabel.config(bg='lawn green', fg='dodger blue')
pathlabel.config(font=font1)
pathlabel.place(x=700, y=150)

predictButton = Button(main, text="Run Signature Based Worm Detection", command=runSignatureDetection)
predictButton.place(x=700, y=200)
predictButton.config(font=font1)

svmButton = Button(main, text="Upload Anomaly Dataset", command=uploadAnomaly)
svmButton.place(x=700, y=250)
svmButton.config(font=font1)

knnButton = Button(main, text="Run Machine Learning Based Anomaly Detection", command=runAnomalyDetection)
knnButton.place(x=700, y=300)
knnButton.config(font=font1)

batButton = Button(main, text="Comparison Graph", command=graph)
batButton.place(x=700, y=350)
batButton.config(font=font1)

batButton = Button(main, text="Comparison Table", command=compareTable)
batButton.place(x=700, y=400)
batButton.config(font=font1)

nbButton = Button(main, text="Predict Attack from Test Data", command=predictAttack)
nbButton.place(x=700, y=450)
nbButton.config(font=font1)

font1 = ('times', 12, 'bold')
text = Text(main, height=30, width=80)
scroll = Scrollbar(text)
text.configure(yscrollcommand=scroll.set)
text.place(x=10, y=100)
text.config(font=font1)

main.config(bg='light salmon')
main.mainloop()

Signature Based Detection

from threading import Thread
from scapy.all import *
from datetime import datetime
from tkinter import *
from tkinter import messagebox

class SignatureBasedDetection(Thread):

    __flagsTCP = {
        'F': 'FIN',
        'S': 'SYN',
        'R': 'RST',
        'P': 'PSH',
        'A': 'ACK',
        'U': 'URG',
        'E': 'ECE',
        'C': 'CWR',
    }

    __ip_cnt_TCP = {}
    malicious = 0

    def __init__(self, queue, text):
        Thread.__init__(self)
        self.stopped = False
        self.queue = queue
        self.text = text
        self.malicious = 0
        self.__ip_cnt_TCP.clear()

    def stop(self):
        self.stopped = True

    def getMalicious(self):
        return self.malicious

    def stopfilter(self, x):
        return self.stopped

    def detect_TCPflood(self, packet):
        if UDP in packet:
            print("========" + str(packet))
        if TCP in packet:
            pckt_src = packet[IP].src
            pckt_dst = packet[IP].dst
            stream = pckt_src + ':' + pckt_dst
            if stream in self.__ip_cnt_TCP:
                self.__ip_cnt_TCP[stream] += 1
            else:
                self.__ip_cnt_TCP[stream] = 1
            for stream in self.__ip_cnt_TCP:
                pckts_sent = self.__ip_cnt_TCP[stream]
                if pckts_sent > 255:
                    src = stream.split(':')[0]
                    dst = stream.split(':')[1]
                    self.malicious = self.malicious + 1
                    print("Possible Flooding Attack from %s --> %s --> %s" % (src, dst, str(pckts_sent)))
                    self.text.insert(END, "Possible Flooding Attack from %s --> %s --> %s\n" % (src, dst, str(pckts_sent)))
                else:
                    src = stream.split(':')[0]
                    dst = stream.split(':')[1]
                    print("Normal traffic from %s --> %s --> %s" % (src, dst, str(pckts_sent)))
                    # self.text.insert(END, "Normal traffic from %s --> %s --> %s" % (src, dst, packet.ttl))

    def process(self, queue):
        self.malicious = 0
        while not queue.empty():
            pkt = queue.get()
            if IP in pkt:
                pckt_src = pkt[IP].src
                pckt_dst = pkt[IP].dst
                # print("IP Packet: %s ==> %s , %s" % (pckt_src, pckt_dst, str(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))), end=' ')
                if TCP in pkt:
                    src_port = pkt.sport
                    dst_port = pkt.dport
                    # print(", Port: %s --> %s, " % (src_port, dst_port), end='')
                    # print([__flagsTCP[x] for x in pkt.sprintf('%TCP.flags%')])
                    self.detect_TCPflood(pkt)
        messagebox.showinfo("Signature Based Malicious Packet Detection",
                            "Signature Based Malicious Packet Detection : " + str(self.getMalicious()))

    def run(self):
        print("Sniffing started.")
        self.process(self.queue)

Table.html

<html><body><center><table border=1>
<tr><th>Algorithm Name</th><th>Accuracy</th><th>Precision</th><th>Recall</th><th>FScore</th></tr>
<tr><td>Decision Tree</td><td>99.45759368836292</td><td>84.69137523332905</td><td>83.8269197846557</td><td>84.23260834416631</td></tr>
<tr><td>Random Forest</td><td>99.5069033530572</td><td>91.44475426061575</td><td>90.71521118855334</td><td>91.05740964809938</td></tr>
<tr><td>Naive Bayes</td><td>45.710059171597635</td><td>24.127812124176394</td><td>30.37851964214656</td><td>21.518817745037794</td></tr>

CHAPTER-10

RESULTS/DISCUSSIONS

10.1 SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.

TYPES OF TESTS

Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, done after the completion of each individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
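As a concrete sketch for this project, a unit test could exercise the per-stream flooding rule used by SignatureBasedDetection (a source-to-destination stream is flagged once it exceeds 255 packets). The `is_flood` helper below is a stand-in for that rule, not the project's actual API:

```python
import unittest

# Stand-in for the threshold rule in SignatureBasedDetection.detect_TCPflood:
# a stream is reported as a possible flooding attack above 255 packets.
def is_flood(packets_sent, threshold=255):
    return packets_sent > threshold

class TestFloodRule(unittest.TestCase):
    def test_normal_traffic(self):
        self.assertFalse(is_flood(10))

    def test_threshold_boundary(self):
        # Exactly 255 packets is still treated as normal traffic.
        self.assertFalse(is_flood(255))

    def test_flood(self):
        self.assertTrue(is_flood(256))

if __name__ == "__main__":
    unittest.main(argv=["flood-tests"], exit=False)
```

Extracting the threshold into a small testable function like this lets the rule be verified without capturing live traffic.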

Integration testing
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

Functional test
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system documentation, and
user manuals.
Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

System Test
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

White Box Testing


White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.

Black Box Testing


Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black-box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.

Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase
of the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.

Test strategy and approach

Field testing will be performed manually and functional tests will be written
in detail.
Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.

Features to be tested
• Verify that the entries are of the correct format
• No duplicate entries should be allowed
• All links should take the user to the correct page.

Integration Testing

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform, intended to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications (e.g. components in a software system or, one step up, software applications at the company level) interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

In the proposed work we provide one-stop worm detection based on two factors: Signature and Anomaly. Internet worms are malicious programs which get downloaded to a user's computer through the internet, get installed, and then start corrupting the user's files or stealing user information and sending it to the attacker. To detect such worms, various techniques have been introduced:

1) Signature-based detection, which analyses internet traffic signatures and matches them against predefined rules to identify whether the traffic contains normal or attack signatures; to analyse these signatures we can use PCAP (traffic capture) files.
2) Detection through honeypot logs: a honeypot is a server which sits between the server and the user request; if a user sends any malicious request, the honeypot logs it, and these logs are later inspected to block the attackers' IP addresses.
3) NetFlow-based: this technique also inspects UDP and TCP signatures and then verifies whether a request contains normal or attack signatures.
4) Anomaly detection models based on traffic behaviour: here, machine learning algorithms such as Random Forest, Decision Tree and Bayesian Networks are trained on previous datasets, and the trained model is then used to classify traffic behaviour as NORMAL or ABNORMAL.

To detect worms or attacks with the above techniques, I am using a PCAP (packet capture) dataset for signature-based detection and a traffic dataset for anomaly detection; the screen below shows dataset details of the anomaly worms.
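Technique 4 (the anomaly detection model) can be sketched in a few lines. The snippet below substitutes synthetic two-feature traffic for the real KDD-style dataset but keeps the same 80/20 split and one of the same classifiers (Random Forest); it is an illustration, not the project code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the traffic dataset: label 0 = NORMAL, 1 = ABNORMAL.
rng = np.random.RandomState(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
abnormal = rng.normal(loc=4.0, scale=1.0, size=(200, 2))
X = np.vstack([normal, abnormal])
y = np.array([0] * 200 + [1] * 200)

# Same 80/20 train/test split the application performs before training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(acc)
```

On such well-separated synthetic traffic the model classifies nearly every test record correctly; on the real dataset the accuracies reported later in this chapter apply.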

10.1.1 TEST CASES:

Test case 1:

Test case for the main application window:

FUNCTION: WORM DETECTION

USER ACTIONS:

Upload PCAP signature dataset

Run signature based worm detection

Upload anomaly dataset

Run machine learning based anomaly detection

Comparison graph

Comparison table

Predict attack from test data

ACTUAL RESULTS:
Worm attacks detected and predicted from the test data

LOW PRIORITY: No

HIGH PRIORITY: Yes
10.2 SCREEN SHOTS

Fig-1
In the above worm dataset we have traffic signatures, and each signature is associated with one label such as normal, Neptune attack, warezclient, etc. Below is the PCAP signature dataset.

Fig-2

Now we will use the above two datasets to detect worms using the signature and anomaly techniques.

To run the project, double-click on the ‘run.bat’ file to get the screen below.

Fig-3

In the above screen, click on the ‘Upload PCAP Signature Dataset’ button to upload a signature file.

Fig-4

In the above screen we select and upload the ‘signature2.pcap’ file, then click on the ‘Open’ button to load the PCAP file and get the output below.

Fig-5

In the above screen the PCAP file is loaded; now click on the ‘Run Signature Based Worm Detection’ button to start analysing the PCAP file to detect worms.

Fig-6

In the above screen the loaded file contains a total of 673 packets, and in the console below the application starts inspecting each packet.

Fig-7

In the above screen the application inspects packets from different IPs and identifies whether each signature contains a normal packet or a worm attack packet; after processing all packets we get the result below.

Fig-8

The above screen shows that a total of 36 malicious worm packets were detected out of the 673 packets in the loaded dataset, along with the IP address and port number from which each worm or attack originated. Now click on the ‘Upload Anomaly Dataset’ button to upload the anomaly dataset.

Fig-9

In the above screen we select and upload the anomaly ‘dataset.txt’ file, then click on the ‘Open’ button to get the output below.

Fig-10

In the above graph we can see the various worm/attack names on the x-axis and the total packet count for each attack on the y-axis. In the text area we can see the total number of records in the dataset; 80% of the records are used to train the machine learning detection models and 20% to test their prediction accuracy. Now that the train and test datasets are ready, click on the ‘Run Machine Learning Based Anomaly Detection’ button to train the various detection models and calculate their accuracy.

Fig-11

In the above screen the decision tree and random forest accuracy is about 98% and the Bayesian network accuracy is 45%, so we can use either the decision tree or the random forest model to detect future worm attacks. Now click on the ‘Comparison Graph’ button to get the graph below.

Fig-12

In the above graph the x-axis represents the algorithm name and the y-axis represents the accuracy of each algorithm; from the graph we can conclude that Random Forest and Decision Tree give better prediction accuracy. Now close the graph and click on the ‘Comparison Table’ button to get the output below.

Fig-13

In the above screen we can see each algorithm's performance in terms of accuracy, precision, recall and F-score. Now click on the ‘Predict Attack from Test Data’ button to upload a test file; the anomaly detection model will then analyse the traffic in each test record and predict the worm.
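The four columns in the comparison table come from scikit-learn's metric functions, macro-averaged over the attack classes and scaled to percentages, as in runAnomalyDetection. A small example with toy labels (not the project's real predictions):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy ground truth and predictions over three classes.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 0, 2, 2]

accuracy = accuracy_score(y_true, y_pred) * 100
precision = precision_score(y_true, y_pred, average='macro') * 100
recall = recall_score(y_true, y_pred, average='macro') * 100
fscore = f1_score(y_true, y_pred, average='macro') * 100

# One true class-1 record was mispredicted as class 0, so macro recall
# drops below macro precision.
print(round(accuracy, 2), round(precision, 2), round(recall, 2), round(fscore, 2))
```

Macro averaging weights every attack class equally, which is why Naive Bayes scores so poorly in the table despite some classes being easy.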

Fig-14

In the above test data you can see that we have only network signature data, without any worm, attack or normal packet labels, but the machine learning algorithm will analyse the packets and give a prediction output.

Fig-15

In the above screen we select and upload ‘testData.csv’, then click on the ‘Open’ button to get the output below.

Fig-16

In the above screen, before the arrow symbol ‘=====>’ we can see the network traffic data, and after it we can see the worm prediction output from the machine learning model as normal or a worm attack.

So the above screens demonstrate signature-based worm detection using PCAP signatures and anomaly-based detection using a network dataset and machine learning algorithms.

You can further enhance the above project by using deep learning models or by applying evolutionary feature selection algorithms such as the genetic algorithm or particle swarm optimization (PSO), or any other novel technique.
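Before investing in a GA or PSO search, a quick univariate filter can serve as a feature selection baseline. The sketch below uses scikit-learn's SelectKBest on synthetic data; it is a simpler illustrative alternative, not the evolutionary approach itself:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic dataset: 10 features of which only 3 are informative,
# standing in for the traffic features of the anomaly dataset.
X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)

# Keep the 3 highest-scoring features by ANOVA F-value. A GA/PSO search
# would instead evolve feature masks scored by classifier accuracy.
selector = SelectKBest(score_func=f_classif, k=3).fit(X, y)
X_reduced = selector.transform(X)
print(X_reduced.shape)
```

Reducing the feature set this way can shrink training time before a more expensive evolutionary search is attempted.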

CHAPTER-11

CONCLUSION

11.1 CONCLUSION

In conclusion, combining signature-based and anomaly-based techniques provides a two-factor approach to internet worm detection. Signature-based detection, applied to PCAP traffic captures, reliably flags known attack patterns such as flooding, while machine-learning-based anomaly detection using Decision Tree, Random Forest and Bayesian Network classifiers extends coverage to traffic whose behaviour deviates from the norm. In our experiments the Decision Tree and Random Forest models achieved high prediction accuracy, making them suitable for detecting future worm attacks. The project thereby improves detection accuracy and adaptability over single-technique systems and opens avenues for further exploration and refinement of worm detection methods.

11.2 FUTURE SCOPE

Future advancements may involve integrating more advanced machine learning algorithms into the detection process; machine learning can enhance the accuracy of anomaly detection by learning normal network behaviour and identifying deviations more effectively. There is also potential for incorporating behavioural analysis techniques: analysing user and system behaviour patterns can help in detecting subtle changes that may indicate worm activity. Future systems could integrate real-time threat intelligence feeds to enhance signature-based detection; by leveraging up-to-date threat information, the system can quickly identify and respond to new worm signatures. Automation in detection and response processes can streamline security operations, and future advancements may focus on developing automated response mechanisms that can contain and mitigate worm outbreaks swiftly. As the Internet of Things (IoT) and cloud computing continue to expand, future developments in worm detection may address the unique security challenges posed by these technologies; tailoring detection methods to secure IoT devices and cloud environments will be crucial. Finally, advancements in zero-day threat detection capabilities can enhance the system's ability to identify and mitigate previously unknown worm attacks effectively.

CHAPTER-12

REFERENCES

1. 2016–2017 Global Cyberspace Security Roundup, 2017.

2. R. Kaur and M. Singh, "A survey on zero-day polymorphic worm detection techniques", IEEE Commun. Surveys Tuts., vol. 16, no. 3, pp. 1520-1549, 3rd Quart. 2014.

3. S. A. Aljawarneh, R. A. Moftah and A. M. Maatuk, "Investigations of automatic methods for detecting the polymorphic worms signatures", Future Gener. Comput. Syst., vol. 60, pp. 67-77, Jul. 2016.

4. B. Bayoglu and I. Sogukpınar, "Graph based signature classes for detecting polymorphic worms via content analysis", Comput. Netw., vol. 56, no. 2, pp. 832-844, Feb. 2012.

5. Y. Tang, B. Xiao and X. Lu, "Signature tree generation for polymorphic worms", IEEE Trans. Comput., vol. 60, no. 4, pp. 565-579, Apr. 2011.

6. A. Mondal, S. Paul, A. Mitra and B. Gope, "Automated signature generation for polymorphic worms using substrings extraction and principal component analysis", Proc. IEEE Int. Conf. Comput. Intell. Comput. Res. (ICCIC), pp. 1-4, Dec. 2015.

7. R. Eskandari, M. Shajari and A. Asadi, "Automatic signature generation for polymorphic worms by combination of token extraction and sequence alignment approaches", Proc. 7th Conf. Inf. Knowl. Technol. (IKT), pp. 1-6, May 2015.

8. A. Krizhevsky, I. Sutskever and G. E. Hinton, "ImageNet classification with deep convolutional neural networks", Proc. Adv. Neural Inf. Process. Syst. (NIPS), pp. 1097-1105, 2012.

9. N. Kalchbrenner, E. Grefenstette and P. Blunsom, "A convolutional neural network for modelling sentences", arXiv:1404.2188, 2014. [Online]. Available: http://arxiv.org/abs/1404.2188

10. D. Zhu, H. Jin, Y. Yang, D. Wu and W. Chen, "DeepFlow: Deep learning-based malware detection by mining Android application for abnormal usage of sensitive data", Proc. IEEE Symp. Comput. Commun. (ISCC), pp. 438-443, Jul. 2017.

11. K. Ozkan, S. Isik and Y. Kartal, "Evaluation of convolutional neural network features for malware detection", Proc. 6th Int. Symp. Digit. Forensic Secur. (ISDFS), pp. 1-5, Mar. 2018.

12. C. H. Kim, E. K. Kabanga and S.-J. Kang, "Classifying malware using convolutional gated neural network", Proc. 20th Int. Conf. Adv. Commun. Technol. (ICACT), pp. 40-44, Feb. 2018.

13. S. M. A. Sulieman and Y. A. Fadlalla, "Detecting zero-day polymorphic worm: A review", Proc. 21st Saudi Comput. Soc. Nat. Comput. Conf. (NCC), pp. 1-7, Apr. 2018.

14. D. Nahmias, A. Cohen, N. Nissim and Y. Elovici, "Deep feature transfer learning for trusted and automated malware signature generation in private cloud environments", Neural Netw., vol. 124, pp. 243-257, Apr. 2020.

15. P. Szynkiewicz and A. Kozakiewicz, "Design and evaluation of a system for network threat signatures generation", J. Comput. Sci., vol. 22, pp. 187-197, Sep. 2017.

16. F. Wang, S. Yang, D. Zhao and C. Wang, "An automatic signature-based approach for polymorphic worms in big data environment", Proc. Int. Conf. Netw. Netw. Appl., pp. 223-228, Oct. 2019.

17. Y. Afek, A. Bremler-Barr and S. L. Feibish, "Zero-day signature extraction for high-volume attacks", IEEE/ACM Trans. Netw., vol. 27, no. 2, pp. 691-706, Apr. 2019.

18. S. N. Nguyen, V. Q. Nguyen, J. Choi and K. Kim, "Design and implementation of intrusion detection system using convolutional neural network for DoS detection", Proc. 2nd Int. Conf. Mach. Learn. Soft Comput., pp. 34-38, 2018.

19. R. Vinayakumar, K. P. Soman and P. Poornachandran, "Applying convolutional neural network for network intrusion detection", Proc. Int. Conf. Adv. Comput. Commun. Informat. (ICACCI), pp. 1222-1228, Sep. 2017.

20. S. Venkatraman, M. Alazab and R. Vinayakumar, "A hybrid deep learning image-based analysis for effective malware detection", J. Inf. Secur. Appl., vol. 47, pp. 377-389, Aug. 2019.

21. W. Zhong and F. Gu, "A multi-level deep learning system for malware detection", Expert Syst. Appl., vol. 133, pp. 151-162, Nov. 2019.

22. Y. Bengio, R. Ducharme and P. Vincent, "A neural probabilistic language model", in Innovations in Machine Learning, Berlin, Germany: Springer, pp. 137-186, 2006.

23. A. M. Rush, S. Chopra and J. Weston, "A neural attention model for abstractive sentence summarization", Comput. Sci., 2015.

24. M. Amin, T. A. Tanveer, M. Tehseen, M. Khan, F. A. Khan and S. Anwar, "Static malware detection and attribution in Android byte-code through an end-to-end deep system", Future Gener. Comput. Syst., vol. 102, pp. 112-126, Jan. 2020.

25. S. Alam, S. A. Alharbi and S. Yildirim, "Mining nested flow of dominant APIs for detecting Android malware", Comput. Netw., vol. 167, Feb. 2020.

26. S. Kaur and M. Singh, "Hybrid intrusion detection and signature generation using deep recurrent neural networks", Neural Comput. Appl., vol. 32, no. 12, pp. 7859-7877, Jun. 2020.

