
Analysis and Detection of Malware in Android Applications Using Machine Learning

Abstract

The proliferation of Android devices has been accompanied by a significant increase in malicious applications, posing severe threats to users' data security and privacy. Traditional
malware detection techniques, such as signature-based methods, struggle to keep pace with the
evolving nature of malware. Consequently, there is a pressing need for more robust and adaptive
approaches to malware detection.

This paper explores the application of machine learning techniques for the analysis and detection
of malware in Android applications. By leveraging a combination of static and dynamic analysis
features, we aim to build a comprehensive feature set that captures both the inherent
characteristics of the application code and its runtime behavior.

We employ various machine learning algorithms, including decision trees, random forests,
support vector machines, and deep learning models, to classify applications as benign or
malicious. The dataset used for training and evaluation consists of a diverse collection of
applications sourced from reputable repositories and known malware databases.

Our study evaluates the performance of these models in terms of accuracy, precision, recall, and
F1-score. Additionally, we analyze the impact of feature selection and engineering on the
detection capabilities of the models. The results indicate that machine learning-based approaches
significantly outperform traditional methods, offering higher detection rates and lower false-
positive rates.

This research highlights the potential of machine learning in enhancing Android malware
detection systems. By integrating advanced detection mechanisms into existing security
frameworks, we can provide more reliable and efficient protection for Android users. Future
work will focus on real-time detection capabilities and the continuous adaptation of models to
counteract emerging malware threats.
Introduction
The widespread adoption of Android devices has revolutionized the way people access
information, communicate, and perform various tasks. However, this ubiquity has also made
Android a prime target for malicious activities. Malware, which refers to any software
intentionally designed to cause damage, disrupt, or gain unauthorized access to devices, poses
significant threats to the security and privacy of users. As Android's market share continues to
grow, so does the sophistication and volume of malware targeting its ecosystem.
Traditional malware detection techniques, predominantly based on signature matching, are
becoming increasingly ineffective against the rapid evolution of malware. These methods rely on
predefined signatures of known malware, making them ill-equipped to detect new, unknown
variants. As malware authors employ obfuscation techniques and polymorphic behavior to evade
detection, there is a clear need for more adaptive and intelligent detection mechanisms.
Machine learning offers a promising solution to this challenge. By leveraging vast amounts of
data and advanced algorithms, machine learning models can identify patterns and anomalies that
may indicate malicious behavior. Unlike signature-based methods, machine learning approaches
can generalize from known malware to detect previously unseen threats, thereby providing a
more robust defense.
This paper investigates the use of machine learning techniques for the analysis and detection of
malware in Android applications. We explore both static and dynamic analysis methods to
extract a comprehensive set of features from applications. Static analysis examines the
application's code and metadata without executing it, while dynamic analysis observes the
application's behavior at runtime. By combining these two approaches, we aim to capture a more
complete picture of the application's characteristics.
We employ various machine learning algorithms, including decision trees, random forests,
support vector machines, and deep learning models, to classify applications as either benign or
malicious. Our study includes a thorough evaluation of these models, assessing their
performance in terms of accuracy, precision, recall, and F1-score. Additionally, we explore the
impact of feature selection and engineering on the effectiveness of the models.
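To make the evaluation described above concrete, the following is a minimal sketch in Python with scikit-learn, not the authors' actual code: it trains a decision tree, a random forest, and an SVM on a placeholder feature matrix and reports accuracy, precision, recall, and F1-score. The feature matrix and labels are randomly generated stand-ins for real static/dynamic features.

# Minimal sketch (not the authors' code): train several classifiers on a
# placeholder feature matrix X and labels y (0 = benign, 1 = malicious).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(42)
X = rng.random((1000, 50))      # placeholder features (e.g. permission/API flags)
y = rng.integers(0, 2, 1000)    # placeholder benign/malicious labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name,
          "accuracy=%.3f" % accuracy_score(y_test, pred),
          "precision=%.3f" % precision_score(y_test, pred),
          "recall=%.3f" % recall_score(y_test, pred),
          "f1=%.3f" % f1_score(y_test, pred))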
Literature Survey
TITLE 1: "A Comprehensive Survey of Machine Learning for Malware Detection in
Android Apps"
Abstract: The rapid proliferation of Android devices has led to an increase in malware targeting
this platform. This paper provides a comprehensive survey of the application of machine learning
techniques in Android malware detection. It examines various algorithms, feature extraction
methods, and evaluation metrics used in existing studies.
Description: This survey paper reviews over 50 research articles on Android malware detection
using machine learning. It categorizes the studies based on the types of features used (static,
dynamic, hybrid) and the machine learning algorithms employed (e.g., decision trees, SVM, deep
learning). The paper also discusses the strengths and weaknesses of different approaches and
identifies key challenges in the field, such as the need for real-time detection and handling
imbalanced datasets.
TITLE 2: "Static Analysis-Based Android Malware Detection Using Machine Learning"
Abstract: Static analysis techniques offer a promising avenue for Android malware detection by
analyzing application code and metadata without execution. This paper explores the
effectiveness of various machine learning models in detecting malware using static features
extracted from Android applications.
Description: The study utilizes a dataset of 10,000 Android applications, comprising both
benign and malicious samples. Features such as permissions, API calls, and code structure are
extracted and used to train machine learning models, including logistic regression, random
forests, and neural networks. The results demonstrate that static analysis combined with machine
learning can achieve high detection accuracy, although it may struggle with obfuscated malware.
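As a hedged illustration of how static permission features of this kind can be turned into a binary feature matrix, the sketch below assumes the permission lists have already been extracted from each APK's manifest (for example with a tool such as androguard or aapt); the two sample apps, their permissions, and their labels are invented for illustration only.

# Minimal sketch: vectorize per-app permission lists into binary features.
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.linear_model import LogisticRegression

app_permissions = [
    ["INTERNET", "READ_SMS", "SEND_SMS"],    # hypothetical malicious sample
    ["INTERNET", "ACCESS_NETWORK_STATE"],    # hypothetical benign sample
]
labels = [1, 0]                              # 1 = malicious, 0 = benign

mlb = MultiLabelBinarizer()
X = mlb.fit_transform(app_permissions)       # one binary column per permission
clf = LogisticRegression().fit(X, labels)
print(mlb.classes_, clf.predict(X))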
TITLE 3: "Dynamic Analysis and Machine Learning for Effective Android Malware
Detection"
Abstract: Dynamic analysis, which involves monitoring an application's behavior at runtime,
provides valuable insights into malicious activities that static analysis may miss. This paper
investigates the use of dynamic features and machine learning techniques to enhance Android
malware detection.
Description: The research focuses on capturing dynamic behaviors such as network traffic,
system calls, and user interactions. A sandbox environment is used to execute Android
applications and collect behavioral data. Machine learning models, including k-nearest neighbors
(KNN), support vector machines (SVM), and deep neural networks, are trained on this data. The
study finds that dynamic analysis can detect sophisticated malware that employs evasion
techniques, achieving higher detection rates than static analysis alone.
TITLE 4: "Hybrid Analysis for Android Malware Detection Using Machine Learning"
Abstract: Combining static and dynamic analysis can provide a more comprehensive
understanding of an application's behavior. This paper presents a hybrid approach to Android
malware detection, leveraging both types of analysis and machine learning to improve detection
accuracy.
Description: The hybrid approach involves extracting static features (e.g., permissions, API
calls) and dynamic features (e.g., runtime behavior, system interactions) from a dataset of 15,000
Android applications. The study compares the performance of several machine learning models,
including ensemble methods like gradient boosting and deep learning architectures. The results
show that the hybrid model outperforms models based solely on static or dynamic analysis,
offering a more robust solution for malware detection.
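The hybrid idea can be sketched in a few lines: static and dynamic feature matrices for the same applications are concatenated column-wise before training an ensemble model. The matrices below are random placeholders rather than real extracted features.

# Minimal sketch of hybrid (static + dynamic) feature fusion.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_static = rng.random((500, 30))     # e.g. permissions, API-call counts (placeholder)
X_dynamic = rng.random((500, 20))    # e.g. system-call / network statistics (placeholder)
y = rng.integers(0, 2, 500)          # placeholder benign/malicious labels

X_hybrid = np.hstack([X_static, X_dynamic])   # column-wise concatenation
clf = GradientBoostingClassifier()
print("mean CV accuracy:", cross_val_score(clf, X_hybrid, y, cv=5).mean())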

System Analysis
Existing System
Overview: Traditional Android malware detection systems primarily rely on signature-based
methods. These systems compare application code against a database of known malware
signatures to identify malicious software. While these methods have been widely used and
implemented in antivirus solutions, they have several limitations in the context of evolving
malware threats.
Advantages:
1. Efficiency in Detection: Signature-based systems can quickly identify known malware
due to their straightforward comparison mechanism.
2. Low False Positives: These systems are highly accurate in detecting previously
identified malware, leading to a low rate of false positives.
3. Established Technology: Signature-based methods are well-understood, widely
implemented, and have a proven track record in cybersecurity.
Disadvantages:
1. Ineffective Against Unknown Malware: These systems cannot detect new or
polymorphic malware that does not match existing signatures.
2. Need for Frequent Updates: Signature databases require constant updates to include
new malware, which can be a resource-intensive process.
3. Evasion Techniques: Malware authors often use obfuscation and encryption to evade
signature-based detection, rendering these systems less effective.
Proposed System
Overview: The proposed system leverages machine learning techniques for the detection of
Android malware. It integrates both static and dynamic analysis to extract a comprehensive set of
features from applications. Static analysis examines the application's code and metadata, while
dynamic analysis observes its behavior during execution. Machine learning models are then
trained on these features to classify applications as benign or malicious.
Advantages:
1. Detection of Unknown Malware: Machine learning models can generalize from known
patterns to detect new and unknown malware, addressing a key limitation of signature-
based systems.
2. Adaptive Learning: The system can continuously improve and adapt to new malware
threats by retraining models on updated datasets.
3. Comprehensive Analysis: Combining static and dynamic analysis provides a more
thorough evaluation of applications, improving detection accuracy.
4. Scalability: Machine learning-based systems can handle large volumes of data and
applications, making them scalable for widespread use.
Disadvantages:
1. Complexity: Implementing and maintaining machine learning-based detection systems is
more complex than traditional methods, requiring specialized knowledge and resources.
2. Resource Intensive: Dynamic analysis, in particular, can be computationally expensive
and time-consuming due to the need for runtime observation.
3. False Positives: While machine learning can improve detection rates, it may also result
in higher false positives if the models are not properly trained or tuned.
4. Data Dependency: The effectiveness of the models depends heavily on the quality and
representativeness of the training data. Poor or biased data can lead to inaccurate results.
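As a hedged illustration of the feature selection step mentioned in the abstract and in the proposed system, the sketch below keeps the k most informative columns of a non-negative feature matrix (for example, binary permission flags) using a chi-squared test; the data are placeholders only.

# Minimal feature-selection sketch with SelectKBest and the chi-squared test.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(300, 100))   # placeholder binary static features
y = rng.integers(0, 2, 300)               # placeholder labels

selector = SelectKBest(chi2, k=20)        # keep the 20 most informative features
X_reduced = selector.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)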
SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

• System : Pentium IV, 2.4 GHz
• Hard Disk : 40 GB
• RAM : 512 MB

SOFTWARE REQUIREMENTS:

• Operating System : Windows
• Coding Language : Python
SYSTEM ARCHITECTURE:
UML Diagrams:

CLASS DIAGRAM:

The class diagram is used to refine the use case diagram and define a detailed design of the system. The class diagram classifies the actors defined in the use case diagram into a set of interrelated classes. The relationship or association between the classes can be either an "is-a" or "has-a" relationship. Each class in the class diagram may be capable of providing certain functionalities. These functionalities provided by the class are termed "methods" of the class. Apart from this, each class may have certain "attributes" that uniquely identify it. A minimal code skeleton mirroring the listed operations follows the list below.

User

 Upload Android Static & Dynamic Malware Dataset()
 Pre-process Dataset()
 Split Train & Test Data()
 Run ML on Static Data()
 Run ML on Dynamic Data()
 Static Comparison Graph()
 Dynamic Comparison Graph()
 Predict Malware from Test Data()
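A minimal, hypothetical Python skeleton of the class suggested by this diagram is given below; the class name is invented and the method bodies are placeholders that only mirror the operations listed above.

# Hypothetical skeleton mirroring the class diagram; bodies are placeholders.
class MalwareDetectionApp:
    def upload_dataset(self, static_path, dynamic_path):
        """Load the Android static & dynamic malware datasets from disk."""

    def preprocess_dataset(self):
        """Clean and encode the raw features."""

    def split_train_test(self, test_size=0.2):
        """Split the pre-processed data into training and test sets."""

    def run_ml_on_static_data(self):
        """Train and evaluate classifiers on the static features."""

    def run_ml_on_dynamic_data(self):
        """Train and evaluate classifiers on the dynamic features."""

    def static_comparison_graph(self):
        """Plot an accuracy comparison for the static-feature models."""

    def dynamic_comparison_graph(self):
        """Plot an accuracy comparison for the dynamic-feature models."""

    def predict_malware(self, test_data):
        """Classify unseen applications as benign or malicious."""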
Use case Diagram:

A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram
defined by and created from a Use-case analysis. Its purpose is to present a graphical overview
of the functionality provided by a system in terms of actors, their goals (represented as use
cases), and any dependencies between those use cases. The main purpose of a use case diagram
is to show what system functions are performed for which actor. Roles of the actors in the system
can be depicted.

Actor: User

Use cases:
 Upload Android Static & Dynamic Malware Dataset
 Pre-process Dataset
 Split Train & Test Data
 Run ML on Static Data
 Run ML on Dynamic Data
 Static Comparison Graph
 Dynamic Comparison Graph
 Predict Malware from Test Data
Sequence Diagram:

A sequence diagram represents the interaction between different objects in the system. The
important aspect of a sequence diagram is that it is time-ordered. This means that the exact
sequence of the interactions between the objects is represented step by step. Different objects in
the sequence diagram interact with each other by passing "messages".

Objects: User, Database

Messages (in time order):
 Upload Android Static & Dynamic Malware Dataset
 Pre-process Dataset
 Split Train & Test Data
 Run ML on Static Data
 Run ML on Dynamic Data
 Static Comparison Graph
 Dynamic Comparison Graph
 Predict Malware from Test Data


Collaborative Diagram:

A collaboration diagram groups together the interactions between different objects. The
interactions are listed as numbered interactions that help to trace the sequence of the interactions.
The collaboration diagram helps to identify all the possible interactions that each object has with
other objects.
1: Upload Android Static & Dynamic Malware Dataset
2: Pre-process Dataset
3: Split Train & Test Data
4: Run ML on Static Data
5: Run ML on Dynamic Data
6: Static Comparison Graph
7: Dynamic Comparison Graph
8: Predict Malware from Test Data
Objects: User, Database
SYSTEM STUDY

FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMIC FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMIC FEASIBILITY

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system was well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods that are employed to educate the user about the system and to make the user familiar with it. The user's level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final user of the system.

INPUT AND OUTPUT DESIGN


INPUT DESIGN :

The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, and the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
 What data should be given as input?
 How should the data be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations and steps to follow when errors occur.

OBJECTIVES:
1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all the data manipulations can be performed. It also provides record-viewing facilities.
3. When the data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed so that the user is not left in a maze at any instant. Thus the objective of input design is to create an input layout that is easy to follow.

OUTPUT DESIGN:

A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, as well as the hard-copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and supports decision-making.
1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create documents, reports, or other formats that contain information produced by the system.
The output form of an information system should accomplish one or more of the following objectives:
 Convey information about past activities, current status, or projections of the future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.
System Implementations:

1. Data Preprocessing: Prepare the textual data by removing noise, such as special characters, punctuation, and stopwords. Tokenize the text into sentences or paragraphs to facilitate sentiment analysis and summarization (a minimal preprocessing sketch follows this list).
2. Sentiment Analysis Model: Implement or utilize pre-trained sentiment analysis models
capable of accurately detecting the sentiment polarity (positive, negative, neutral) of each
sentence or paragraph in the text. Consider employing advanced techniques such as deep
learning-based models or transformer architectures for improved accuracy.
3. Summarization Model: Implement a text summarization model capable of generating
concise summaries while incorporating sentiment information. Explore both extractive
and abstractive summarization techniques, considering factors such as coherence,
informativeness, and sentiment preservation.
4. Integration: Integrate the sentiment analysis module with the summarization module to
leverage sentiment information during the summarization process. Design mechanisms to
prioritize or adjust the inclusion of sentences based on their sentiment polarity to ensure
that the generated summaries reflect the emotional context of the original text.
5. Evaluation: Evaluate the performance of the implemented system using standard metrics
such as ROUGE (Recall-Oriented Understudy for Gisting Evaluation) for summarization
quality and sentiment classification accuracy metrics for sentiment analysis. Conduct
thorough evaluations using benchmark datasets to assess the effectiveness and robustness
of the system.
6. Optimization: Optimize the system for efficiency and scalability by leveraging
techniques such as parallel processing, caching, and model compression. Consider
deploying the system on distributed computing frameworks or utilizing hardware
accelerators (e.g., GPUs) to improve processing speed and resource utilization.
7. User Interface: Develop a user-friendly interface for interacting with the system,
allowing users to input text and view the generated summaries along with sentiment
analysis results. Design the interface to be intuitive, responsive, and accessible across
different devices and platforms.
8. Deployment: Deploy the implemented system in production environments, considering
factors such as scalability, reliability, and security. Ensure proper monitoring and
maintenance procedures are in place to address potential issues and ensure continuous
performance optimization.
9. Feedback Loop: Establish a feedback loop to gather user feedback and monitor system
performance over time. Use feedback to iteratively improve the system's accuracy,
usability, and effectiveness based on user requirements and evolving needs.
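A minimal preprocessing sketch for step 1 is shown below; it uses a small hand-written stopword list purely for illustration, whereas a production system would typically rely on a library such as NLTK or spaCy.

# Minimal preprocessing sketch: lowercase, strip noise characters, drop stopwords.
import re

STOPWORDS = {"the", "a", "an", "is", "are", "and", "or", "of", "to", "in"}

def preprocess(text):
    """Lowercase, remove punctuation/special characters, and drop stopwords."""
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    return [t for t in text.split() if t not in STOPWORDS]

print(preprocess("The service sends SMS messages to premium numbers!"))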
System Environment:

What is Python :-
Below are some facts about Python.
Python is currently the most widely used multi-purpose, high-level programming language.
Python allows programming in Object-Oriented and Procedural paradigms. Python programs are generally smaller than those written in other programming languages like Java.
Programmers have to type relatively less, and the language's indentation requirement makes programs readable all the time.
Python is used by almost all tech-giant companies such as Google, Amazon, Facebook, Instagram, Dropbox, and Uber.
The biggest strength of Python is its huge collection of standard libraries, which can be used for the following:
 Machine Learning
 GUI Applications (like Kivy, Tkinter, PyQt etc. )
 Web frameworks like Django (used by YouTube, Instagram, Dropbox)
 Image processing (like Opencv, Pillow)
 Web scraping (like Scrapy, BeautifulSoup, Selenium)
 Test frameworks
 Multimedia

Advantages of Python :-
Let’s see how Python dominates over other languages.
1. Extensive Libraries
Python downloads with an extensive library, and it contains code for various purposes like regular expressions, documentation generation, unit testing, web browsers, threading, databases, CGI, email, image manipulation, and more. So, we don't have to write the complete code for these manually.
2. Extensible
As we have seen earlier, Python can be extended to other languages. You can write some of
your code in languages like C++ or C. This comes in handy, especially in projects.
3. Embeddable
Complimentary to extensibility, Python is embeddable as well. You can put your Python code in
your source code of a different language, like C++. This lets us add scripting capabilities to our
code in the other language.
4. Improved Productivity
The language's simplicity and extensive libraries render programmers more productive than languages like Java and C++ do. Also, you need to write less code to get more things done.
5. IOT Opportunities
Since Python forms the basis of new platforms like Raspberry Pi, it finds the future bright for the
Internet Of Things. This is a way to connect the language with the real world.
6. Simple and Easy
When working with Java, you may have to create a class to print ‘Hello World’. But in Python,
just a print statement will do. It is also quite easy to learn, understand, and code. This is why
when people pick up Python, they have a hard time adjusting to other more verbose languages
like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading English. This is
the reason why it is so easy to learn, understand, and code. It also does not need curly braces to
define blocks, and indentation is mandatory. This further aids the readability of the code.
8. Object-Oriented
This language supports both the procedural and object-oriented programming paradigms.
While functions help us with code reusability, classes and objects let us model the real world. A
class allows the encapsulation of data and functions into one.

9. Free and Open-Source


Like we said earlier, Python is freely available. But not only can you download Python for
free, but you can also download its source code, make changes to it, and even distribute it. It
downloads with an extensive collection of libraries to help you with your tasks.
10. Portable
When you code your project in a language like C++, you may need to make some changes to it if
you want to run it on another platform. But it isn’t the same with Python. Here, you need to code
only once, and you can run it anywhere. This is called Write Once Run Anywhere (WORA).
However, you need to be careful enough not to include any system-dependent features.
11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are executed one by
one, debugging is easier than in compiled languages.

Advantages of Python Over Other Languages

1. Less Coding
Almost all tasks done in Python require less coding than when the same tasks are done in other languages. Python also has awesome standard library support, so you don't have to search for any third-party libraries to get your job done. This is the reason many people suggest learning Python to beginners.
2. Affordable
Python is free; therefore individuals, small companies, or big organizations can leverage the freely available resources to build applications. Python is popular and widely used, so it gives you better community support.
The 2019 Github annual survey showed us that Python has overtaken Java in the most
popular programming language category.

3. Python is for Everyone


Python code can run on any machine whether it is Linux, Mac or Windows. Programmers need
to learn different languages for different jobs but with Python, you can professionally build web
apps, perform data analysis and machine learning, automate things, do web scraping and also
build games and powerful visualizations. It is an all-rounder programming language.

Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you choose it, you should
be aware of its consequences as well. Let’s now see the downsides of choosing Python over
another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, it often results in slow execution. This, however, isn't a problem unless speed is a focal point for the project. In other words, unless high speed is a requirement, the benefits offered by Python are enough to outweigh its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is rarely seen on the client side. Besides that, it is rarely ever used to implement smartphone-based applications; one such application is called Carbonnelle. The reason it is not so popular on the client side, despite the existence of Brython, is that Brython isn't that secure.
3. Design Restrictions
As you know, Python is dynamically-typed. This means that you don’t need to declare the type
of variable while writing the code. It uses duck-typing. But wait, what’s that? Well, it just
means that if it looks like a duck, it must be a duck. While this is easy on the programmers
during coding, it can raise run-time errors.
4. Underdeveloped Database Access Layers
Compared to more widely used technologies like JDBC (Java DataBase
Connectivity) and ODBC (Open DataBase Connectivity), Python’s database access layers are
a bit underdeveloped. Consequently, it is less often applied in huge enterprises.
5. Simple
No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my example. I don’t
do Java, I’m more of a Python person. To me, its syntax is so simple that the verbosity of Java
code seems unnecessary.
This was all about the Advantages and Disadvantages of Python Programming Language.

History of Python : -

What do the alphabet and the programming language Python have in common? Right, both start
with ABC. If we are talking about ABC in the Python context, it's clear that the programming
language ABC is meant. ABC is a general-purpose programming language and programming
environment, which had been developed in the Netherlands, Amsterdam, at the CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to influence the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum worked at that time on a project at the CWI called Amoeba, a distributed operating system. In an interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I worked as an implementer on a team building a language called ABC at Centrum voor Wiskunde en Informatica (CWI).
I don't know how well people know ABC's influence on Python. I try to mention ABC's
influence because I'm indebted to everything I learned during that project and to the people who
worked on it."Later on in the same Interview, Guido van Rossum continued: "I remembered all
my experience and some of my frustration with ABC. I decided to try to design a simple
scripting language that possessed some of ABC's better properties, but without its problems. So I
started typing. I created a simple virtual machine, a simple parser, and a simple runtime. I made
my own version of the various ABC parts that I liked. I created a basic syntax, used indentation
for statement grouping instead of curly braces or begin-end blocks, and developed a small
number of powerful data types: a hash table (or dictionary, as we call it), a list, strings, and
numbers."

What is Machine Learning : -


Before we take a look at the details of various machine learning methods, let's start by looking at
what machine learning is, and what it isn't. Machine learning is often categorized as a subfield of
artificial intelligence, but I find that categorization can often be misleading at first brush. The
study of machine learning certainly arose from research in this context, but in the data science
application of machine learning methods, it's more helpful to think of machine learning as a
means of building models of data.
Fundamentally, machine learning involves building mathematical models to help understand
data. "Learning" enters the fray when we give these models tunable parameters that can be
adapted to observed data; in this way the program can be considered to be "learning" from the
data.
Once these models have been fit to previously seen data, they can be used to predict and
understand aspects of newly observed data. I'll leave to the reader the more philosophical
digression regarding the extent to which this type of mathematical, model-based "learning" is
similar to the "learning" exhibited by the human brain.Understanding the problem setting in
machine learning is essential to using these tools effectively, and so we will start with some
broad categorizations of the types of approaches we'll discuss here.
Categories Of Machine Leaning :-
At the most fundamental level, machine learning can be categorized into two main types:
supervised learning and unsupervised learning.
Supervised learning involves somehow modeling the relationship between measured features of
data and some label associated with the data; once this model is determined, it can be used to
apply labels to new, unknown data. This is further subdivided into classification tasks
and regression tasks: in classification, the labels are discrete categories, while in regression, the
labels are continuous quantities. We will see examples of both types of supervised learning in the
following section.

Unsupervised learning involves modeling the features of a dataset without reference to any label,
and is often described as "letting the dataset speak for itself." These models include tasks such
as clustering and dimensionality reduction.
Clustering algorithms identify distinct groups of data, while dimensionality reduction algorithms
search for more succinct representations of the data. We will see examples of both types of
unsupervised learning in the following section.
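A tiny illustration of the two categories, assuming scikit-learn's bundled iris dataset: the supervised classifier learns from the labels, while the unsupervised clustering algorithm groups the same samples without ever seeing them.

# Supervised vs. unsupervised learning on the same data.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

clf = DecisionTreeClassifier().fit(X, y)                     # supervised: uses labels y
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)    # unsupervised: ignores y

print(clf.predict(X[:3]), clusters[:3])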
Need for Machine Learning
Human beings, at this moment, are the most intelligent and advanced species on earth because they can think, evaluate, and solve complex problems. On the other side, AI is still in its initial stage and hasn't surpassed human intelligence in many aspects. Then the question is: what is the need to make a machine learn? The most suitable reason for doing this is "to make decisions, based on data, with efficiency and scale".
Lately, organizations are investing heavily in newer technologies like Artificial Intelligence, Machine Learning and Deep Learning to get key information from data to perform several real-world tasks and solve problems. We can call these data-driven decisions taken by machines, particularly to automate the process. These data-driven decisions can be used, instead of programming logic, in problems that cannot be programmed inherently. The fact is that we can't do without human intelligence, but the other aspect is that we all need to solve real-world problems with efficiency at a huge scale. That is why the need for machine learning arises.

Challenges in Machines Learning :-

While Machine Learning is rapidly evolving, making significant strides with cybersecurity and autonomous cars, this segment of AI as a whole still has a long way to go. The reason behind this is that ML has not been able to overcome a number of challenges. The challenges that ML is facing currently are −
Quality of data − Having good-quality data for ML algorithms is one of the biggest challenges.
Use of low-quality data leads to the problems related to data preprocessing and feature
extraction.
Time-Consuming task − Another challenge faced by ML models is the consumption of time
especially for data acquisition, feature extraction and retrieval.
Lack of specialist persons − As ML technology is still in its infancy stage, availability of expert
resources is a tough job.
No clear objective for formulating business problems − Having no clear objective and well-
defined goal for business problems is another key challenge for ML because this technology is
not that mature yet.
Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot be
represented well for the problem.
Curse of dimensionality − Another challenge ML model faces is too many features of data
points. This can be a real hindrance.
Difficulty in deployment − Complexity of the ML model makes it quite difficult to be deployed
in real life.
Applications of Machines Learning :-

Machine Learning is the most rapidly growing technology, and according to researchers we are in the golden year of AI and ML. It is used to solve many real-world complex problems which cannot be solved with the traditional approach. Following are some real-world applications of ML −
 Emotion analysis
 Sentiment analysis
 Error detection and prevention
 Weather forecasting and prediction
 Stock market analysis and forecasting
 Speech synthesis
 Speech recognition
 Customer segmentation
 Object recognition
 Fraud detection
 Fraud prevention
 Recommendation of products to customer in online shopping

How to Start Learning Machine Learning?


Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of study
that gives computers the capability to learn without being explicitly programmed”.
And that was the beginning of Machine Learning! In modern times, Machine Learning is one of
the most popular (if not the most!) career choices. According to Indeed, Machine Learning
Engineer Is The Best Job of 2019 with a 344% growth and an average base salary
of $146,085 per year.
But there is still a lot of doubt about what exactly Machine Learning is and how to start learning it. So this article deals with the Basics of Machine Learning and also the path you can follow to eventually become a full-fledged Machine Learning Engineer. Now let's get started!!!
How to start learning ML?
This is a rough roadmap you can follow on your way to becoming an insanely talented Machine
Learning Engineer. Of course, you can always modify the steps according to your needs to reach
your desired end-goal!
Step 1 – Understand the Prerequisites
In case you are a genius, you could start ML directly but normally, there are some prerequisites
that you need to know which include Linear Algebra, Multivariate Calculus, Statistics, and
Python. And if you don’t know these, never fear! You don’t need a Ph.D. degree in these topics
to get started but you do need a basic understanding.
(a) Learn Linear Algebra and Multivariate Calculus
Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However,
the extent to which you need them depends on your role as a data scientist. If you are more
focused on application heavy machine learning, then you will not be that heavily focused on
maths as there are many common libraries available. But if you want to focus on R&D in
Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very important
as you will have to implement many ML algorithms from scratch.
(b) Learn Statistics
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML expert
will be spent collecting and cleaning data. And statistics is a field that handles the collection,
analysis, and presentation of data. So it is no surprise that you need to learn it!!!
Some of the key concepts in statistics that are important are Statistical Significance, Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian Thinking is also a very important part of ML, dealing with various concepts like Conditional Probability, Priors and Posteriors, Maximum Likelihood, etc.

(c) Learn Python


Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn them
as they go along with trial and error. But the one thing that you absolutely cannot skip is Python!
While there are other languages you can use for Machine Learning, like R, Scala, etc., Python is currently the most popular language for ML. In fact, there are many Python libraries that are
specifically useful for Artificial Intelligence and Machine Learning such
as Keras, TensorFlow, Scikit-learn, etc.
So if you want to learn ML, it’s best if you learn Python! You can do that using various online
resources and courses such as Fork Python available Free on GeeksforGeeks.
Step 2 – Learn Various ML Concepts
Now that you are done with the prerequisites, you can move on to actually learning ML (Which
is the fun part!!!) It’s best to start with the basics and then move on to the more complicated
stuff. Some of the basic concepts in ML are:
(a) Terminologies of Machine Learning
 Model – A model is a specific representation learned from data by applying some
machine learning algorithm. A model is also called a hypothesis.
 Feature – A feature is an individual measurable property of the data. A set of numeric
features can be conveniently described by a feature vector. Feature vectors are fed as
input to the model. For example, in order to predict a fruit, there may be features like
color, smell, taste, etc.
 Target (Label) – A target variable or label is the value to be predicted by our model. For
the fruit example discussed in the feature section, the label with each set of input would
be the name of the fruit like apple, orange, banana, etc.
 Training – The idea is to give a set of inputs (features) and their expected outputs (labels), so after training, we will have a model (hypothesis) that will then map new data to one of the categories trained on.
 Prediction – Once our model is ready, it can be fed a set of inputs to which it will
provide a predicted output(label).
(b) Types of Machine Learning
 Supervised Learning – This involves learning from a training dataset with labeled data
using classification and regression models. This learning process continues until the
required level of performance is achieved.
 Unsupervised Learning – This involves using unlabelled data and then finding the
underlying structure in the data in order to learn more and more about the data itself using
factor and cluster analysis models.
 Semi-supervised Learning – This involves using unlabelled data like Unsupervised
Learning with a small amount of labeled data. Using labeled data vastly increases the
learning accuracy and is also more cost-effective than Supervised Learning.
 Reinforcement Learning – This involves learning optimal actions through trial and
error. So the next action is decided by learning behaviors that are based on the current
state and that will maximize the reward in the future.
Advantages of Machine learning :-
1. Easily identifies trends and patterns -
Machine Learning can review large volumes of data and discover specific trends and patterns
that would not be apparent to humans. For instance, for an e-commerce website like Amazon, it
serves to understand the browsing behaviors and purchase histories of its users to help cater to
the right products, deals, and reminders relevant to them. It uses the results to reveal relevant
advertisements to them.
2. No human intervention needed (automation)
With ML, you don’t need to babysit your project every step of the way. Since it means giving
machines the ability to learn, it lets them make predictions and also improve the algorithms on
their own. A common example of this is antivirus software; it learns to filter new threats as they are recognized. ML is also good at recognizing spam.
3. Continuous Improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets
them make better decisions. Say you need to make a weather forecast model. As the amount of
data you have keeps growing, your algorithms learn to make more accurate predictions faster.
4. Handling multi-dimensional and multi-variety data
Machine Learning algorithms are good at handling data that are multi-dimensional and multi-
variety, and they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it does
apply, it holds the capability to help deliver a much more personal experience to customers while
also targeting the right customers.
Disadvantages of Machine Learning :-
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be inclusive/unbiased,
and of good quality. There can also be times where they must wait for new data to be generated.
2. Time and Resources
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose
with a considerable amount of accuracy and relevancy. It also needs massive resources to
function. This can mean additional requirements of computer power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret results generated by the algorithms.
You must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an
algorithm with data sets small enough to not be inclusive. You end up with biased predictions
coming from a biased training set. This leads to irrelevant advertisements being displayed to
customers. In the case of ML, such blunders can set off a chain of errors that can go undetected
for long periods of time. And when they do get noticed, it takes quite some time to recognize the
source of the issue, and even longer to correct it.

Python Development Steps : -


Guido Van Rossum published the first version of Python code (version 0.9.0) at alt.sources in February 1991. This release already included exception handling, functions, and the core data types of list, dict, str and others. It was also object oriented and had a module system.
Python version 1.0 was released in January 1994. The major new features included in this release were the functional programming tools lambda, map, filter and reduce, which Guido Van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was introduced. This release included list comprehensions, a full garbage collector, and support for Unicode. Python flourished for another 8 years in the versions 2.x before the next major release, Python 3.0 (also known as "Python 3000" and "Py3K"), was released. Python 3 is not backwards compatible with Python 2.x.

The emphasis in Python 3 had been on the removal of duplicate programming constructs and modules, thus fulfilling or coming close to fulfilling the 13th law of the Zen of Python: "There should be one -- and preferably only one -- obvious way to do it." Some changes in Python 3.0:
 Print is now a function
 Views and iterators instead of lists
 The rules for ordering comparisons have been simplified. E.g. a heterogeneous list cannot be sorted, because all the elements of a list must be comparable to each other.
 There is only one integer type left, i.e. int; long has been merged into int.
 The division of two integers returns a float instead of an integer. "//" can be used to get the "old" behaviour.
 Text vs. Data instead of Unicode vs. 8-bit
Purpose :-
We demonstrated that our approach enables successful segmentation of intra-retinal layers—
even with low-quality images containing speckle noise, low contrast, and different intensity
ranges throughout—with the assistance of the ANIS feature.

Python
Python is an interpreted high-level programming language for general-purpose programming.
Created by Guido van Rossum and first released in 1991, Python has a design philosophy that
emphasizes code readability, notably using significant whitespace.
Python features a dynamic type system and automatic memory management. It supports multiple
programming paradigms, including object-oriented, imperative, functional and procedural, and
has a large and comprehensive standard library.
 Python is Interpreted − Python is processed at runtime by the interpreter. You do not need
to compile your program before executing it. This is similar to PERL and PHP.
 Python is Interactive − you can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
Python also acknowledges that speed of development is important. Readable and terse code is part of this, and so is access to powerful constructs that avoid tedious repetition of code. Maintainability also ties into this. It may be an all but useless metric, but it does say something about how much code you have to scan, read and/or understand to troubleshoot problems or tweak behaviors. This speed of development, the ease with which a programmer of other languages can pick up basic Python skills, and the huge standard library are key to another area where Python excels. All its tools have been quick to implement, have saved a lot of time, and several of them have later been patched and updated by people with no Python background - without breaking.

Modules Used in Project :-


Tensorflow
TensorFlow is a free and open-source software library for dataflow and differentiable
programming across a range of tasks. It is a symbolic math library, and is also used for machine
learning applications such as neural networks. It is used for both research and production
at Google.‍
TensorFlow was developed by the Google Brain team for internal Google use. It was released
under the Apache 2.0 open-source license on November 9, 2015.
Numpy
Numpy is a general-purpose array-processing package. It provides a high-performance
multidimensional array object, and tools for working with these arrays.
It is the fundamental package for scientific computing with Python. It contains various features
including these important ones:
 A powerful N-dimensional array object
 Sophisticated (broadcasting) functions
 Tools for integrating C/C++ and Fortran code
 Useful linear algebra, Fourier transform, and random number capabilities
Besides its obvious scientific uses, Numpy can also be used as an efficient multi-dimensional
container of generic data. Arbitrary data-types can be defined using Numpy which allows
Numpy to seamlessly and speedily integrate with a wide variety of databases.
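A short example of NumPy's N-dimensional arrays and broadcasting:

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])   # a 2x3 array
print(a.shape)                         # (2, 3)
print(a * 10)                          # broadcasting a scalar over the array
print(a.mean(axis=0))                  # column-wise mean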

Pandas
Pandas is an open-source Python library providing high-performance data manipulation and analysis tools using its powerful data structures. Python was majorly used for data munging and preparation, but it had very little contribution towards data analysis; Pandas solved this problem. Using Pandas, we can accomplish five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze. Python with Pandas is used in a wide range of fields, including academic and commercial domains such as finance, economics, statistics, analytics, etc.
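A short example of the load-prepare-analyze workflow with Pandas, using a small invented table:

import pandas as pd

df = pd.DataFrame({
    "app": ["app_a", "app_b", "app_c"],
    "permissions": [12, 3, 25],
    "label": ["malicious", "benign", "malicious"],
})
print(df.groupby("label")["permissions"].mean())   # average permission count per class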
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety
of hardcopy formats and interactive environments across platforms. Matplotlib can be used in
Python scripts, the Python and IPython shells, the Jupyter Notebook, web application servers,
and four graphical user interface toolkits. Matplotlib tries to make easy things easy and hard
things possible. You can generate plots, histograms, power spectra, bar charts, error charts,
scatter plots, etc., with just a few lines of code. For examples, see the sample plots and thumbnail
gallery.
For simple plotting the pyplot module provides a MATLAB-like interface, particularly when
combined with IPython. For the power user, you have full control of line styles, font properties,
axes properties, etc, via an object oriented interface or via a set of functions familiar to
MATLAB users.
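A short pyplot example that produces a simple bar chart, for instance a model comparison graph like the ones described in the UML section; the accuracy values are placeholders, not real results.

import matplotlib.pyplot as plt

models = ["Decision Tree", "Random Forest", "SVM"]
accuracy = [0.91, 0.95, 0.93]        # placeholder values, not real results

plt.bar(models, accuracy)
plt.ylabel("Accuracy")
plt.title("Example model comparison")
plt.savefig("comparison.png")        # or plt.show() in an interactive session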
Scikit – learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent
interface in Python. It is licensed under a permissive simplified BSD license and is distributed
under many Linux distributions, encouraging academic and commercial use.
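A short example of scikit-learn's consistent fit/predict interface, using one of its bundled datasets:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))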
Install Python Step-by-Step in Windows and Mac :
Python, a versatile programming language, doesn't come pre-installed on your computer. Python was first released in the year 1991, and to this day it is a very popular high-level programming language. Its design philosophy emphasizes code readability, with its notable use of significant whitespace.
The object-oriented approach and language constructs provided by Python enable programmers to write both clear and logical code for projects. This software does not come pre-packaged with Windows.
How to Install Python on Windows and Mac :

There have been several updates to Python over the years. The question is: how do you install Python? It might be confusing for a beginner who is willing to start learning Python, but this tutorial will solve your query. The latest version of Python at the time of writing is 3.7.4; in other words, it is Python 3.
Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices.

Before you start with the installation process of Python, you first need to know your system requirements. Based on your system type, i.e. operating system and processor, you must download the appropriate Python version. My system type is a Windows 64-bit operating system, so the steps below are to install Python version 3.7.4 on a Windows 7 device, or in general to install Python 3. The steps on how to install Python on Windows 10, 8 and 7 are divided into 4 parts to help you understand better.

Download the Correct version into the system

Step 1: Go to the official site to download and install Python using Google Chrome or any other web browser, or click on the following link: https://www.python.org
Now, check for the latest and the correct version for your operating system.
Step 2: Click on the Download Tab.

Step 3: You can either select the Download Python 3.7.4 for Windows button in yellow, or you can scroll further down and click on the download corresponding to your version. Here, we are downloading the most recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.

Step 5: Here you see a different version of python along with the operating system.

• To download Windows 32-bit python, you can select any one from the three options: Windows
x86 embeddable zip file, Windows x86 executable installer or Windows x86 web-based
installer.
• To download Windows 64-bit Python, you can select any one from the three options: Windows x86-64 embeddable zip file, Windows x86-64 executable installer, or Windows x86-64 web-based installer.
Here we will install the Windows x86-64 web-based installer. With this, the first part, regarding which version of Python is to be downloaded, is completed. Now we move ahead with the second part of installing Python, i.e. the installation.
Note: To know the changes or updates that are made in the version you can click on the Release
Note Option.
Installation of Python
Step 1: Go to Download and Open the downloaded python version to carry out the installation
process.

Step 2: Before you click on Install Now, make sure to put a tick on Add Python 3.7 to PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.

With these above three steps on python installation, you have successfully and correctly installed
Python. Now is the time to verify the installation.
Note: The installation process might take a couple of minutes.

Verify the Python Installation


Step 1: Click on Start
Step 2: In the Windows Run command, type "cmd".

Step 3: Open the Command Prompt.

Step 4: Let us test whether Python is installed correctly. Type python -V and press Enter.
Step 5: You should see the version printed as Python 3.7.4.
Note: If an earlier version of Python is already installed, you must first uninstall it and then install the new one.
Check how the Python IDLE works
Step 1: Click on Start
Step 2: In the Windows Run command, type “python idle”.
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program.
Step 4: To start working in IDLE, you must first save the file. Click File > Save.

Step 5: Name the file and set "Save as type" to Python files, then click Save. Here the file is named Hey World.
Step 6: Now enter a simple statement, for example a print call, and run it to confirm that IDLE executes Python code; a minimal example is shown below.
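As an illustration only (the original step stops at "enter print"), the following one-line program could be typed into the saved IDLE file and run with Run > Run Module (F5); the message text is just an example matching the file name used above.

print("Hey World")   # prints the greeting in the IDLE shell, confirming Python runs correctly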
6. SYSTEM TEST
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies, and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.
TYPES OF TESTS

Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results. A small illustrative unit test is sketched below.
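The project's own source code is not reproduced here, so the following is only a minimal sketch of a component-level unit test, assuming a hypothetical helper normalize_features that min-max scales a list of numbers into the 0-1 range (both the helper name and its behaviour are assumptions for illustration, not the project's actual API).

import unittest

def normalize_features(values):
    # Hypothetical helper: min-max scales a list of numbers into the range [0, 1].
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

class TestNormalizeFeatures(unittest.TestCase):
    def test_values_scaled_into_unit_range(self):
        result = normalize_features([2, 4, 6])
        self.assertEqual(result[0], 0.0)   # smallest value maps to 0
        self.assertEqual(result[-1], 1.0)  # largest value maps to 1

    def test_constant_input_maps_to_zero(self):
        self.assertEqual(normalize_features([5, 5, 5]), [0.0, 0.0, 0.0])

if __name__ == "__main__":
    unittest.main()

Each such test exercises one path through a unit with clearly defined inputs and expected results, matching the description above.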
Integration testing:

Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that, although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available
as specified by the business and technical requirements, system documentation, and user
manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An example of
system testing is the configuration oriented system integration test. System testing is based on
process descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing
White box testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.
Black Box Testing
Black box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase
of the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests will be written in
detail.
Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
Features to be tested
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Test case 1:
Test case for Login form:

FUNCTION: LOGIN
EXPECTED RESULTS: Should validate the user and check his existence in the database.
ACTUAL RESULTS: Validates the user and checks the user against the database.
LOW PRIORITY: No
HIGH PRIORITY: Yes
Test case 2:
Test case for User Registration form:

FUNCTION: USER REGISTRATION
EXPECTED RESULTS: Should check that all the fields are filled in by the user and save the user to the database.
ACTUAL RESULTS: Checks through validations whether all the fields are filled in by the user and saves the user.
LOW PRIORITY: No
HIGH PRIORITY: Yes
Test case 3:
Test case for Change Password:
When the old password does not match the new password, an error message is displayed: "OLD PASSWORD DOES NOT MATCH WITH THE NEW PASSWORD".
FUNCTION: Change Password
EXPECTED RESULTS: Should check that the old password and new password fields are filled in by the user and save the change to the database.
ACTUAL RESULTS: Checks through validations whether all the fields are filled in by the user and saves the change.
LOW PRIORITY: No
HIGH PRIORITY: Yes
Analysis and Detection of Malware in Android Applications Using Machine Learning
Android phones are used everywhere for various online and offline activities, and this ubiquity makes them a soft target for attacks. Android security is controlled via permissions: a malicious app can request device-level permissions from the user, and if the user grants them, the malicious app can start reading sensitive data such as credit card details, photos, and other private information and send it to attackers.
To prevent such attacks, the Google Play Store has built-in malware detection, but its performance is not up to the mark, so experts suggest employing machine learning algorithms, whose attack detection accuracy is higher.
The author of the base paper therefore applies machine learning models to both static and dynamic data from Android devices to detect malware attacks. Static detection depends on a permission dataset, while dynamic detection is based on app behaviour during execution. Attacks that cannot be detected, or are missed, during static analysis can be detected using dynamic analysis.
In the proposed paper, the author used the GENOME Android malware dataset from Kaggle for static analysis and built his own dataset from Bangladeshi apps for dynamic analysis. Since we do not have that dataset, we used a dynamic-analysis dataset from a GitHub repository; this dataset contains malware records from different apps generated during execution.
For detection, the author employed various machine learning algorithms, namely SVM, Random Forest, Decision Tree, KNN, and Logistic Regression, and all of these algorithms are trained on both the static and dynamic datasets. Each algorithm's performance is evaluated in terms of accuracy, precision, recall, and F-score. Among all the algorithms, SVM, Random Forest, and Decision Tree give the highest performance.
To train and test all the algorithms we have used both the static and dynamic analysis datasets, which are shown below.
In the above dynamic dataset, the first row contains the column names and the remaining rows contain the values; the last column holds the class label, where 0 means Benign and 1 means Malware. The screen below shows the static dataset, which is based on permissions.

In the above static dataset, the first row contains the column names and the remaining rows contain the values; all of these values are permission values.
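As a rough illustration of how these CSV files can be inspected, the snippet below loads both datasets with pandas and prints their shapes and label distribution; the file names static_dataset.csv and dynamic_dataset.csv are placeholders, not necessarily the exact names in the Dataset folder.

import pandas as pd

# Hypothetical file names; substitute the actual CSV files from the Dataset folder.
static_df = pd.read_csv("Dataset/static_dataset.csv")
dynamic_df = pd.read_csv("Dataset/dynamic_dataset.csv")

print(static_df.shape, dynamic_df.shape)       # rows x columns of each dataset
# The last column holds the class label: 0 = Benign, 1 = Malware.
print(dynamic_df.iloc[:, -1].value_counts())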
To implement this project we have designed the following modules (a code sketch of the core pipeline follows this list):
1) Upload Android Static & Dynamic Malware Dataset: using this module we upload both datasets to the application and then read and display the dataset values.
2) Pre-process Dataset: using this module we apply processing techniques such as shuffling and normalization and then display the normalized values.
3) Split Train & Test Data: using this module we split each dataset into train and test sets, with the application using 80% of the data for training and 20% for testing.
4) Run ML on Static Data: the 80% static training data is input to this module to train all 5 algorithms, and the trained models are then applied to the 20% test data to calculate prediction accuracy.
5) Run ML on Dynamic Data: the 80% dynamic training data is input to this module to train all 5 algorithms, and the trained models are then applied to the 20% test data to calculate prediction accuracy.
6) Static Comparison Graph: this module plots a comparison graph of all algorithms on the static dataset.
7) Dynamic Comparison Graph: this module plots a comparison graph of all algorithms on the dynamic dataset.
8) Predict Malware from Test Data: this module uploads test data, and the ML model then predicts whether each test record is Benign or Malware.
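The block below is only a minimal sketch, under stated assumptions, of how modules 2 to 8 could be realised with scikit-learn and matplotlib; df stands for one of the loaded datasets, the features are assumed to be numeric with the class label in the last column, and the exact pre-processing, parameters, and plotting style used by the actual application may differ.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def run_pipeline(df, title):
    # Module 2: shuffle the records and separate features from the class label (assumed last column).
    data = df.sample(frac=1, random_state=42).values
    X, y = data[:, :-1].astype(float), data[:, -1].astype(int)

    # Module 2 (continued): normalise all feature values into the 0-1 range.
    X = MinMaxScaler().fit_transform(X)

    # Module 3: 80% training data, 20% test data.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Modules 4/5: train the five algorithms and evaluate them on the test split.
    models = {
        "SVM": SVC(),
        "Random Forest": RandomForestClassifier(),
        "Decision Tree": DecisionTreeClassifier(),
        "KNN": KNeighborsClassifier(),
        "Logistic Regression": LogisticRegression(max_iter=1000),
    }
    scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        scores[name] = [accuracy_score(y_test, pred), precision_score(y_test, pred),
                        recall_score(y_test, pred), f1_score(y_test, pred)]
        print(name, scores[name])

    # Modules 6/7: comparison graph with one group of metric bars per algorithm.
    metrics = ["Accuracy", "Precision", "Recall", "F-score"]
    x = np.arange(len(models))
    for i, metric in enumerate(metrics):
        plt.bar(x + i * 0.2, [scores[m][i] for m in models], width=0.2, label=metric)
    plt.xticks(x + 0.3, list(models), rotation=20)
    plt.title(title)
    plt.legend()
    plt.show()

    # Module 8: predict on records and map the 0/1 output back to readable labels.
    labels = {0: "Benign", 1: "Malware"}
    for row, p in zip(X_test[:3], models["Random Forest"].predict(X_test[:3])):
        print(row, "=>", labels[p])

Calling run_pipeline(static_df, "Static dataset") and run_pipeline(dynamic_df, "Dynamic dataset") would then correspond to modules 4/6 and 5/7 respectively; in the application itself these steps sit behind the GUI buttons shown in the screenshots below.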
SCREENSHOTS

To run the project, double-click the 'run.bat' file to get the screen below.

In the above screen, click the "Upload Android Static & Dynamic Malware Dataset" button to upload the datasets and then read and display both datasets' values.

In the above screen, the entire 'Dataset' folder containing the static and dynamic data is selected and displayed; click the 'Select Folder' button to load the datasets and get the output below.

In the above screen, both the static and dynamic datasets are loaded; now click the 'Pre-process Dataset' button to process the datasets and get the output below.

In the above screen, the values of both datasets are normalized to the range 0 to 1; then click the 'Split Train & Test Data' button to split the datasets into train and test sets and get the output below.

In the above screen, you can see the size of both datasets along with the 80% train and 20% test splits; now click the 'Run ML on Static Data' button to train all 5 algorithms on the static dataset and get the output below.
In the above screen, you can see each algorithm's performance on the static dataset; Decision Tree, Random Forest, and SVM achieve the highest accuracy, and the other metrics are also shown. Now click the 'Run ML on Dynamic Data' button to train on the dynamic dataset and get the output below.

In the above screen, you can see each algorithm's performance on the dynamic dataset; Random Forest and Decision Tree achieve the highest performance. Now click the 'Static Comparison Graph' button to get the graph below.

In the above graph, you can see the performance of all algorithms on the static dataset, where the x-axis represents the algorithm names and the y-axis represents accuracy and the other metrics as different coloured bars; Random Forest and Decision Tree achieve the highest performance. Now click the 'Dynamic Comparison Graph' button to get the dynamic graph below.

In the above graph, you can see the performance of all algorithms on the dynamic dataset. Now close the graph and click the 'Predict Malware from Test Data' button to upload test data and perform prediction.
In the above screen, the 'testData.csv' file is selected and uploaded; click the 'Open' button to get the prediction output below.

In the above screen, the test data values appear in square brackets, and after the arrow symbol the predicted output is shown as Benign or Malware.
Conclusion
The study on the analysis and detection of malware in Android applications using machine
learning has demonstrated significant advancements in the field of mobile security. The
conclusions derived from the research are as follows:
1. Effectiveness of Machine Learning Models:
o Machine learning models, particularly those using supervised learning techniques,
have shown high accuracy in detecting malware in Android applications.
Algorithms such as Random Forest, Support Vector Machines, and Neural
Networks have been effective in identifying malicious patterns.
2. Feature Selection and Extraction:
o The success of malware detection heavily depends on the selection and extraction
of relevant features. Static features (e.g., permissions, API calls) and dynamic
features (e.g., behavior during runtime) are critical for accurate detection.
Combining these features has improved the performance of detection systems.
3. Dataset Quality:
o The quality and diversity of the dataset used for training significantly impact the
effectiveness of the machine learning model. Large and well-labeled datasets that
include a variety of malware samples are essential for robust model training and
evaluation.
4. Real-time Detection Capabilities:
o Implementing real-time malware detection poses challenges due to the need for
low latency and high accuracy. However, advancements in lightweight machine
learning models and on-device processing have made real-time detection more
feasible.
5. Adversarial Attacks and Robustness:
o Machine learning models are vulnerable to adversarial attacks where malware
authors attempt to evade detection by manipulating features. Ensuring the
robustness of models against such attacks is crucial for maintaining the integrity
of malware detection systems.
ENHANCEMENTS
To further improve the detection and analysis of malware in Android applications using machine
learning, several enhancements can be considered:
1. Hybrid Models:
o Combining multiple machine learning approaches, such as integrating both static
and dynamic analysis, can enhance the detection accuracy. Hybrid models can
leverage the strengths of different methods to provide a more comprehensive
analysis (a toy sketch of this idea appears after this list).
2. Incremental Learning:
o Implementing incremental learning techniques can help models adapt to new and
emerging malware threats. By continuously updating the model with new data,
the detection system can remain effective over time.
3. Explainable AI:
o Developing explainable AI models can provide insights into how and why certain
applications are classified as malware. This transparency can help in
understanding the decision-making process and building trust in the detection
system.
4. Enhanced Feature Engineering:
o Exploring new features and improving feature engineering techniques can lead to
better model performance. This includes investigating advanced behavioral
features and contextual information that can aid in distinguishing between benign
and malicious activities.
5. Collaboration and Data Sharing:
o Encouraging collaboration among researchers, security companies, and the
Android community can lead to the development of more robust datasets and
sharing of knowledge. Open-source platforms and shared repositories can
facilitate this collaboration.
6. User Education and Awareness:
o Educating users about the risks of malware and promoting safe practices can
complement technical solutions. User awareness programs can help reduce the
likelihood of malware infections.
7. Performance Optimization:
o Optimizing the performance of machine learning models for mobile devices is
essential. Techniques such as model compression, pruning, and efficient
algorithms can ensure that detection systems operate effectively without draining
device resources.
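As a purely illustrative sketch of the hybrid-model enhancement above, the snippet below concatenates a static permission vector with a dynamic behaviour vector for the same apps before training a single classifier; the feature values and their meanings are invented for the example and are not taken from the project's datasets.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented example rows for two apps: permission flags (static) and runtime counters (dynamic).
static_features = np.array([[1, 0, 1, 1, 0],     # e.g. SEND_SMS, READ_CONTACTS, ... (assumed meaning)
                            [0, 0, 1, 0, 0]])
dynamic_features = np.array([[12, 3, 0.7],       # e.g. network calls, file writes, CPU ratio (assumed meaning)
                             [1, 0, 0.1]])
labels = np.array([1, 0])                        # 1 = Malware, 0 = Benign

# Hybrid model: one classifier trained on the concatenated static + dynamic feature vector.
X_hybrid = np.hstack([static_features, dynamic_features])
clf = RandomForestClassifier().fit(X_hybrid, labels)
print(clf.predict(X_hybrid))

In practice the same idea would be applied to the full static and dynamic feature matrices, provided both views are available for each app.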
References
1. Books and Academic Papers:

o Arp, D., Spreitzenbarth, M., Hubner, M., Gascon, H., Rieck, K., & Siemens, C.
(2014). DREBIN: Effective and explainable detection of Android malware in
your pocket. Network and Distributed System Security Symposium (NDSS).
o Sanz, B., Santos, I., Laorden, C., Ugarte-Pedrero, X., & Bringas, P. G. (2013). On
the automatic categorisation of Android applications. 2013 IEEE Consumer
Communications and Networking Conference (CCNC), 914-919.
o Tam, K., Khan, S. J., Fattori, A., & Cavallaro, L. (2015). CopperDroid:
Automatic Reconstruction of Android Malware Behaviors. NDSS.
2. Journals:
o Aafer, Y., Du, W., & Yin, H. (2013). DroidAPIMiner: Mining API-Level
Features for Robust Malware Detection in Android. Security and Privacy in
Communication Networks.
o Sahs, J., & Khan, L. (2012). A Machine Learning Approach to Android Malware
Detection. 2012 European Intelligence and Security Informatics Conference, 141-
147.
o Wu, D. J., Mao, C. H., Wei, T. E., Lee, H. M., & Wu, K. P. (2012). DroidMat:
Android Malware Detection through Manifest and API Calls Tracing. 2012
Seventh Asia Joint Conference on Information Security (AsiaJCIS), 62-69.
3. Conference Proceedings:
o Zhang, Y., Du, W., & Yin, H. (2014). Semantics-aware Android Malware
Classification Using Weighted Contextual API Dependency Graphs. 2014 ACM
SIGSAC Conference on Computer and Communications Security (CCS), 1105-
1116.
o Canfora, G., Mercaldo, F., & Visaggio, C. A. (2015). A classifier of malicious
Android applications. 2015 8th International Conference on Malicious and
Unwanted Software (MALWARE), 87-90.
o Yerima, S. Y., & Sezer, S. (2014). DroidFusion: A Novel Multilevel Classifier
Fusion Approach for Android Malware Detection. IEEE Transactions on
Cybernetics, 44(10), 2356-2369.
4. Technical Reports and Theses:
o Zhou, Y., & Jiang, X. (2012). Dissecting Android Malware: Characterization and
Evolution. 2012 IEEE Symposium on Security and Privacy (SP), 95-109.
o Enck, W., Ongtang, M., & McDaniel, P. (2009). On lightweight mobile phone
application certification. Proceedings of the 16th ACM conference on Computer
and communications security (CCS), 235-245.
5. Online Resources and Tutorials:
o Google. (n.d.). Android Developers. Retrieved from https://developer.android.com/
o TensorFlow. (n.d.). Machine Learning for Mobile and Edge Devices. Retrieved from https://www.tensorflow.org/lite
o OWASP. (n.d.). Mobile Security Testing Guide. Retrieved from https://owasp.org/www-project-mobile-security-testing-guide/
6. Datasets:
o AndroZoo: A Growing Collection of Android Applications. Available at https://androzoo.uni.lu/
o Drebin Dataset: A Dataset for Android Malware Analysis. Available at https://www.sec.cs.tu-bs.de/~danarp/drebin/