
AI Based Threat Detection System

Appikonda Shyam Sai Venkata Agastya, Bandi Rishikesh Kumar, Dandu Sasi Sathvik Varma, Gangu Chirudeep,
Kavitha C. R.*
Department of Computer Science & Engineering
Amrita School of Computing, Bengaluru
Amrita Vishwa Vidyapeetham, India
*Corresponding Author: [email protected],
[email protected], [email protected],
[email protected], [email protected].

Abstract— With the rapid growth of cyber threats, there is a growing need for advanced, scalable and highly accurate mechanisms for threat detection. This paper presents an AI based threat detection system that uses machine learning and feature engineering techniques to classify network traffic as either normal or malicious. The system incorporates state-of-the-art algorithms, including Gradient Boosted Trees and a Multi-Layer Perceptron, achieving high accuracy through optimized preprocessing steps such as Principal Component Analysis and Chi-Square feature selection. A Flask application and Python GUI provide user friendly interfaces for testing the system with real time prediction and validation.

Keywords— Network Threat Detection, Machine Learning, Gradient Boosted Trees, Multi-Layer Perceptron, Feature Engineering, Principal Component Analysis, Cybersecurity.

I. INTRODUCTION

Digital networks and interconnected systems have expanded rapidly, advancing both communications and devices. Unfortunately, this growth has also resulted in an alarming rise in the sophistication of cyber threats, from malware and phishing to large scale Distributed Denial of Service (DDoS) attacks and Advanced Persistent Threats (APTs). These attacks are potentially very damaging to organizations, governments and individuals, and they undermine the confidentiality, integrity and availability of critical data and systems.

Current rule based intrusion detection systems (IDS) suffer from an inability to adapt to rapidly changing attack patterns and the sheer breadth of network traffic they must process. Owing to the growing ingenuity of attackers, these systems struggle to detect novel threats, handle data at scale and respond quickly. In response to these limitations, Artificial Intelligence (AI) and Machine Learning (ML) offer powerful tools that can analyse complex patterns, spot anomalies and adapt to new attack strategies.

In this paper, we present an AI Based Threat Detection System that inspects network traffic and classifies it as normal or malicious. A range of machine learning algorithms, including ensemble models and neural networks, is used to identify known and unknown threats. To improve model performance and computational efficiency, advanced feature engineering techniques are applied, including Principal Component Analysis (PCA) for dimensionality reduction and Chi-Square feature selection. These methods allow the system to process large datasets effectively and identify the feature characteristics that are critical to threat identification.

The system is built to be both user friendly and scalable, integrating easily with testing interfaces that include a Flask based web application and a Python GUI. These interfaces are useful in practical cybersecurity scenarios because they enable the user to input feature values manually and obtain real time predictions.

By utilizing AI and ML techniques, this work seeks to address key challenges of the modern cybersecurity landscape by introducing a new, robust and scalable threat detection system.
II. LITERATURE SURVEY

Growing cyber threats have led to extensive research on AI driven and integrated systems for cyber threat detection. These works consider diverse areas such as financial networks and smart infrastructures, and are novel in the methodologies and technologies they apply to enhance cybersecurity. This section summarizes a detailed review of important works in this domain and discusses the research gaps that motivate the proposed project.

Kuldeep Singh and Lakshmi Sevukamoorthy [1] suggested strengthening the security of financial networks through blockchain technology combined with AI. The authors addressed the challenge of increasing cyber threats to financial institutions and pointed out that cybersecurity frameworks must be robust. One of their findings is the absence of comprehensive frameworks that combine the advantages of blockchain and AI for threat detection. Their research fills this gap by demonstrating that secure and resilient systems can be built using immutable blockchain ledgers and intelligent AI based threat detection mechanisms.

Marc Schmitt [2] investigated AI based malware and intrusion detection in smart infrastructures and digital industries, highlighting the urgent need to protect increasingly interlinked environments against sophisticated cyber threats. Schmitt pointed out the difficulties of bringing AI/ML models into complex internal digital ecosystems. The identified gap suggests that solutions which improve detection accuracy should interface seamlessly with existing infrastructures without disrupting operations.

Yisroel Mirsky et al. [3] examined the threat of offensive AI, highlighting how AI capable adversaries might exploit vulnerabilities in organizational systems. This research presents a structured perspective on offensive AI tactics in terms of the cyber kill chain and its implications for security. The most glaring gap found in their study is that this emerging threat still lacks effective detection solutions; the work offers strategic insights into how to defend against offensive AI, proposing ways to assess and mitigate these threats when they emerge.

Bo-Xiang Wang and Jiann-Liang Chen [4] built an AI powered network threat detection system with 52 features derived from network interactions, including message based, host based and geography based data. The aim was to prevent command line based threats at the remote network connection and to achieve better detection accuracy and effectiveness. Although this comprehensive system was successful, the authors noted that greater advancement would be possible with further optimization and refinement of the detection algorithms.

In Software-Defined Networking (SDN), Francesco Salatino et al. [5] proposed an intrusion detection system based on artificial intelligence techniques for detecting Distributed Denial of Service (DDoS) attacks. The authors combine advanced Machine Learning (ML) and Deep Learning (DL) methods to improve detection accuracy without increasing computational complexity. The indicated research gap is the need for better feature analysis and selection to further reduce computational requirements and improve the scalability of the system.

Android malware detection was the main focus of Shamsher Ullah et al. [6], who drew attention to the fact that cyber threats against Android devices are increasing exponentially. They pointed out that the deficiencies of current machine learning models with regard to transparency and interpretability are especially glaring when viewed through the lens of Explainable AI (XAI). According to their study, XAI techniques help demystify the decision-making processes of ML models and supply actionable insights for end users and stakeholders.

Sonu Preetam et al. [7] proposed a behaviour based threat modelling approach with explanations for intelligent decision making. To overcome the various issues associated with traditional intrusion detection systems, the authors sought to make them scalable and real time. A gap was demonstrated in developing models that integrate diverse data sources, correlate tactics, techniques and procedures (TTPs) with advanced AI techniques, and ultimately deliver real time, explainable threat detection.

In the context of 5G networks, Thulitha Senevirathna et al. [8] investigated the vulnerabilities of Explainable AI (XAI) methods in Network Intrusion Detection Systems (NIDS). Their study showed the vulnerability of XAI methods to scaffolding attacks and their lack of robustness against sophisticated adversarial attacks.

Jonghoon Lee et al. [9] proposed a cyber threat detection system based on artificial neural networks using event profiles. They addressed the challenge of analyzing vast amounts of security event data, in which false positives are high and real threats are difficult to isolate. The identified gap shows that existing methods generally fail to generalize across multiple data sets and do not adequately reduce false alarms.

Viraj Rathod et al. [10] applied an AI and ML based anomaly detection system to detect adversarial behaviors using the EMBER dataset. They identified the need for systems that couple real time response mechanisms with AI driven anomaly detection, a space that, if properly filled, would greatly improve system responsiveness and accuracy in a dynamic threat environment.

The reviewed literature shows excellent progress in the development of AI based threat detection systems, but critical gaps remain. The limitations include a lack of integrated blockchain and AI frameworks [1], difficulty in seamlessly deploying AI/ML models in complex ecosystems [2], and the demand for transparency and interpretability in AI models [6, 8]. Additional work is needed on feature analysis and selection processes [5], generalization across different datasets [9], and real-time response mechanisms [10] to alleviate current limitations. The proposed system addresses these gaps in order to contribute to the science of cybersecurity through the development of an AI based threat detection system specializing in scalability, interpretability, and real time response. The presented system takes advantage of the strengths of, and lessons learned from, existing works, enhancing existing tools with new ideas to overcome the limitations of present approaches.
III. METHODOLOGY

The proposed AI Based Threat Detection System is based on a systematic methodology comprising data preprocessing, feature engineering, and machine learning model training and evaluation. Figure 1 illustrates the architecture of this methodology, with its modular design and the flow between components. In this section we describe each step individually and the role it plays in achieving accurate and efficient threat detection.

Implementation Flow

The architecture of the proposed methodology is shown in Figure 1. It starts with raw data collection, follows through preprocessing and feature engineering, and ends with model training. The best-performing model is integrated into the testing interfaces, maintaining a workflow from data input to threat prediction.

Figure 1 Architecture

Dataset Description

The dataset contains network traffic data with both categorical and numerical features, such as protocol types, service ports, and connection flags. Each instance is labelled as normal or malicious so that supervised machine learning models can classify threats. Challenges include high dimensional data, unbalanced class distributions, and heterogeneous feature types.
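As a concrete illustration, the following sketch shows how such a labelled traffic dataset might be loaded and inspected. The file name, column names, and label values are assumptions for illustration only; the paper does not specify them.

```python
import pandas as pd

# Hypothetical file and column names; this sketch assumes a CSV of
# connection records resembling the features described above.
df = pd.read_csv("network_traffic.csv")

categorical_cols = ["protocol_type", "service", "flag"]  # assumed names
label_col = "label"                                      # "normal" / "malicious"

# Inspect the class balance -- the imbalance motivates SMOTE later on --
# and the mix of categorical and numerical feature types.
print(df[label_col].value_counts(normalize=True))
print(df.dtypes)
```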
Data Preprocessing

Preprocessing transforms raw or unstructured data into a structured format suitable for machine learning algorithms. Tasks include:

1. String Indexing: Transforms categorical attributes (e.g. protocol type, service flags) into numerical indices that machine learning models can handle.

2. Handling Missing Values: Data integrity is maintained by imputing or removing missing entries.

3. Scaling and Normalization: To help models converge, Min-Max scaling or Z-score normalization is used to bring numerical features into a uniform range.
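A minimal sketch of these three steps, assuming the pandas DataFrame and hypothetical column names from the loading sketch above, might look as follows:

```python
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

# 1. String indexing: map each categorical attribute to integer indices.
for col in categorical_cols:
    df[col] = LabelEncoder().fit_transform(df[col].astype(str))

# 2. Handling missing values: impute numerical gaps with the column median.
df = df.fillna(df.median(numeric_only=True))

# 3. Scaling: Min-Max scaling brings all features into [0, 1], which also
#    keeps them non-negative -- a requirement for Chi-Square selection later.
feature_cols = [c for c in df.columns if c != label_col]
df[feature_cols] = MinMaxScaler().fit_transform(df[feature_cols])

X = df[feature_cols].values
y = (df[label_col] == "malicious").astype(int).values
```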

Feature Engineering

Feature engineering improves the quality of the dataset while reducing computational overhead. The following techniques are applied:

1. Principal Component Analysis (PCA): Reduces dimensionality by finding principal components that retain the maximum variance in fewer features.

2. Chi-Square Feature Selection: Identifies the statistically significant features that contribute most to the classification task and removes irrelevant or redundant attributes.

3. SMOTE (Synthetic Minority Over-sampling Technique): Addresses class imbalance by generating synthetic samples for the minority class, balancing the dataset.
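These techniques are available in scikit-learn and imbalanced-learn; a hedged sketch of how they might be applied to the preprocessed arrays is shown below. The component and feature counts are illustrative, not the paper's settings.

```python
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, chi2
from imblearn.over_sampling import SMOTE

# PCA: keep enough components to explain 95% of the variance (illustrative).
X_pca = PCA(n_components=0.95).fit_transform(X)

# Chi-Square selection: keep the 20 most informative original features.
# chi2 requires non-negative inputs, satisfied by the Min-Max scaling above.
X_chi = SelectKBest(chi2, k=20).fit_transform(X, y)

# SMOTE: synthesize minority-class samples to balance the training data.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_chi, y)
```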

Machine Learning Models

A variety of machine learning models are implemented to classify network traffic accurately:

1. Gradient Boosted Trees (GBT): Iteratively combines weak learners into a robust ensemble model that achieves high performance even on complex datasets.

2. Multi-Layer Perceptron (MLP): A neural network that learns non linear relationships, making it effective for anomaly detection.

3. Support Vector Machines (SVM): Constructs hyperplanes for binary classification in high dimensional data.

4. Random Forest: An ensemble of multiple decision trees for robust classification.

5. Decision Tree: Splits the data on the most important feature at each step, yielding a simple and interpretable model.

6. Naive Bayes: A probabilistic model that serves as a quick baseline classifier, effective for categorical data.

7. Logistic Regression: A linear baseline model for binary classification, valued for its interpretability and simplicity.

Model Training and Evaluation

The dataset is split into training (70%) and testing (30%) subsets for model development and evaluation.

1. Training Phase: The machine learning models are trained on the processed dataset and tuned for best performance.

2. Evaluation Metrics: Models are assessed with metrics such as accuracy, precision, recall, F1-score and the confusion matrix. These metrics ensure the models strike a fine balance between correctly flagging true threats and avoiding false positives, without neglecting any threat.

3. Cross-Validation: The robustness and generality of the models is validated with k-fold cross validation using different subsets of the data.
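A compact sketch of this training-and-evaluation loop for a few of the seven models, assuming the preprocessed arrays from the earlier sketches, could look like this. Hyperparameters are scikit-learn defaults, not the paper's tuned values.

```python
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

# 70/30 split, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X_bal, y_bal, test_size=0.30, random_state=42)

models = {
    "GBT": GradientBoostingClassifier(),
    "MLP": MLPClassifier(max_iter=500),
    "Random Forest": RandomForestClassifier(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    # Accuracy, precision, recall and F1 on the held-out test set.
    print(name, classification_report(y_test, model.predict(X_test)))
    # 5-fold cross-validation for robustness.
    print(name, "CV accuracy:", cross_val_score(model, X_bal, y_bal, cv=5).mean())
```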
Testing Interfaces

To validate the system, two user-friendly testing interfaces are developed:

1. Flask Application: A web interface through which a user types in feature values to obtain immediate predictions from the trained model.

2. Python GUI: A graphical user interface for a local application that can be used for interactive testing, where the user inputs a feature vector and gets the classification immediately.

Flask Application for Testing

The Flask web framework is used to create a web application through which the system can be tested. The feature values entered through the interface are passed to the trained model for a prediction.

Key Features of the Flask Application:

1. Input Form: The user is given a form in which to fill in all the required features. Each field maps one to one to an attribute used by the machine learning model.

2. Prediction Output: Once the user submits the form, the Flask application posts the data to the backend, where it is passed to the ML model. The system's response, either 'Normal (No Threat)' or 'Anomaly (Intrusion Threat)', is displayed on the screen.
Figure 2 Flask Application to test the system

Figure 2 shows a screenshot of the Flask application interface, highlighting the input fields and the prediction result.

Functionality Workflow:

1. The user connects to the Flask application through a web browser.

2. The feature values are entered by the user manually.

3. The frontend collects the inputs and sends them to the backend model for classification.

4. The result is shown instantly in the same interface.
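A minimal sketch of such a Flask endpoint is given below. The route, form fields, and model file are hypothetical, as the paper does not publish its code; only a small illustrative subset of features is used.

```python
from flask import Flask, request, render_template_string
import joblib  # assumed persistence format for the trained model
import numpy as np

app = Flask(__name__)
model = joblib.load("threat_model.joblib")  # hypothetical file name

FIELDS = ["duration", "src_bytes", "dst_bytes"]  # illustrative subset

FORM = """<form method="post">
  {% for f in fields %}{{ f }}: <input name="{{ f }}"><br>{% endfor %}
  <input type="submit" value="Predict"></form><p>{{ result }}</p>"""

@app.route("/", methods=["GET", "POST"])
def predict():
    result = ""
    if request.method == "POST":
        # Collect the manually entered feature values in model order.
        x = np.array([[float(request.form[f]) for f in FIELDS]])
        pred = model.predict(x)[0]
        result = "Anomaly (Intrusion Threat)" if pred == 1 else "Normal (No Threat)"
    return render_template_string(FORM, fields=FIELDS, result=result)

if __name__ == "__main__":
    app.run(debug=True)
```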

Python GUI for Testing

An alternative testing interface is built as a Python GUI, which is more convenient for desktop environments. It mirrors the Flask application but runs as an interactive, standalone desktop application.

Key Features of the Python GUI:

1. Graphical Input Fields: Users can conveniently enter feature values through a clean interface.

2. Real-Time Prediction: The prediction is produced as soon as the user submits the input.

3. Reset Option: A reset button is included in the GUI to clear the inputs and test new feature values without restarting the application.

Figure 3 Python GUI to test the system

Figure 3 displays a screenshot of the Python GUI interface, showcasing the input fields and the prediction result area.

Functionality Workflow:

1. The Python GUI appears on the user's desktop.

2. The user enters the feature values by hand.

3. On submission, the input is passed to the trained model and the result is shown in the graphical user interface.
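A compact tkinter sketch of such a desktop tester is shown below; as with the Flask example, the field names and model file are assumptions, not the paper's implementation.

```python
import tkinter as tk
import joblib
import numpy as np

model = joblib.load("threat_model.joblib")       # hypothetical file name
FIELDS = ["duration", "src_bytes", "dst_bytes"]  # illustrative subset

root = tk.Tk()
root.title("AI Based Threat Detection")

entries = {}
for i, f in enumerate(FIELDS):
    tk.Label(root, text=f).grid(row=i, column=0)
    entries[f] = tk.Entry(root)
    entries[f].grid(row=i, column=1)

result = tk.Label(root, text="")
result.grid(row=len(FIELDS) + 1, columnspan=2)

def predict():
    # Real-time prediction from the manually entered feature vector.
    x = np.array([[float(entries[f].get()) for f in FIELDS]])
    pred = model.predict(x)[0]
    result.config(text="Anomaly (Intrusion Threat)" if pred == 1
                  else "Normal (No Threat)")

def reset():
    # Reset option: clear all inputs without restarting the application.
    for e in entries.values():
        e.delete(0, tk.END)
    result.config(text="")

tk.Button(root, text="Predict", command=predict).grid(row=len(FIELDS), column=0)
tk.Button(root, text="Reset", command=reset).grid(row=len(FIELDS), column=1)
root.mainloop()
```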

Both interfaces are employed to evaluate the performance of the system under development in live conditions.

Together, careful preprocessing, sound feature selection and a range of machine learning models yield a scalable and robust threat detection system. The modular design facilitates future enhancement to meet emerging cybersecurity challenges, and the testing interfaces demonstrate the design's usability and practical applicability.

IV. RESULTS AND DISCUSSION

Results of Machine Learning Algorithms

The results offer a comprehensive analysis of the proposed AI Based Threat Detection System, evaluating the different machine learning models trained on the preprocessed dataset. Accuracy, precision, recall and F1 score are measured after training to gauge the efficiency of the models. Tables 1 through 4 and Figures 4 through 11 present the assessment criteria and comparison diagrams for each configuration, together giving a broad view of the system's effectiveness.

Table 1 Evaluation Metrics of Machine Learning Algorithms

Algorithms            | Accuracy  | Precision | Recall    | F1 Score
Naive Bayes           | 72.353420 | 76.441927 | 72.353420 | 71.729593
SVM                   | 95.697611 | 95.702568 | 95.697611 | 95.695478
Decision Tree         | 98.439197 | 98.444107 | 98.439197 | 98.438586
Random Forest         | 98.276330 | 98.289491 | 98.276330 | 98.275212
MLP                   | 96.647666 | 96.706550 | 96.647666 | 96.642367
Logistic Regression   | 95.317590 | 95.327916 | 95.317590 | 95.314340
Gradient-Boosted Tree | 99.619978 | 99.621049 | 99.619978 | 99.619916

The results of evaluating the applied machine learning algorithms on the raw dataset are given in Table 1. GBT produced the highest accuracy, 99.6%, making it the best model in this configuration.

Accuracy: GBT emerged as the most accurate model.

Precision, Recall, and F1-Score: GBT consistently outperformed other models across these metrics.

Figures 4 through 7 present comparison graphs for accuracy, precision, recall, and F1-scores, respectively, demonstrating the relative performance of all algorithms on the raw dataset.

Figure 4 Comparison graph for accuracies on machine learning algorithms

Figure 5 Comparison graph for Precision scores on machine learning algorithms

Figure 6 Comparison graph for Recall scores on machine learning algorithms

Figure 7 Comparison graph for F1 scores on machine learning algorithms
Results when PCA is performed

Table 2 PCA Results

Algorithms            | Accuracy  | Precision | Recall    | F1 Score
SVM                   | 94.536489 | 94.536688 | 94.536489 | 94.536585
Decision Tree         | 96.844181 | 96.850596 | 96.844181 | 96.841893
Random Forest         | 96.469428 | 96.583230 | 96.469428 | 96.458673
MLP                   | 99.408284 | 99.408357 | 99.408284 | 99.408305
Logistic Regression   | 94.902066 | 94.902875 | 94.902066 | 94.900487
Gradient-Boosted Tree | 98.121814 | 98.133233 | 98.121814 | 98.120693

When Principal Component Analysis (PCA) was applied to the dataset for dimensionality reduction, the Multi-Layer Perceptron (MLP) model performed best, achieving an accuracy of 99.4%. The evaluation metrics for all models under this configuration are shown in Table 2.

MLP: Achieved high scores across all metrics, including precision, recall, and F1-score, demonstrating its ability to generalize well with reduced dimensionality.

Results when Chi-Square Selection is performed

Table 3 Chi-Square Selection Results

Algorithms            | Accuracy  | Precision | Recall    | F1 Score
Naive Bayes           | 72.268245 | 76.226531 | 72.268245 | 71.825852
SVM                   | 95.226824 | 95.227188 | 95.226824 | 95.226991
Decision Tree         | 98.836292 | 98.837667 | 98.836292 | 98.836511
Random Forest         | 98.796844 | 98.817761 | 98.796844 | 98.795536
MLP                   | 96.824458 | 96.86172  | 96.824458 | 96.819258
Logistic Regression   | 95.266272 | 95.26553  | 95.266272 | 95.265766
Gradient-Boosted Tree | 99.526627 | 99.527006 | 99.526627 | 99.526676

The application of Chi-Square Feature Selection led to Gradient Boosted Trees (GBT) again achieving the best performance, with an accuracy of 99.5%. The evaluation metrics for this configuration are detailed in Table 3.

GBT: Demonstrated improved precision, recall, and F1-scores, reinforcing its ability to effectively classify threats with optimized features.

Results when PCA followed by Chi-Square Feature Selection is performed

Table 4 Results of PCA followed by Chi-Square Selection

Algorithms            | Accuracy  | Precision | Recall    | F1 Score
SVM                   | 94.930966 | 94.936025 | 94.930966 | 94.932422
Decision Tree         | 97.100592 | 97.160715 | 97.100592 | 97.094559
Random Forest         | 96.646943 | 96.749411 | 96.646943 | 96.637386
MLP                   | 99.447732 | 99.448701 | 99.447732 | 99.447827
Logistic Regression   | 94.990138 | 94.994401 | 94.990138 | 94.986298
Gradient-Boosted Tree | 98.560158 | 98.560378 | 98.560158 | 98.560233

As with the previous analysis, the best result was achieved by the Multi-Layer Perceptron (MLP) model, which obtained 99.4% accuracy when PCA was applied and then followed by Chi-Square Feature Selection. Table 4 shows the evaluation metrics for all models under this configuration.

MLP: Achieved comparable performance in terms of precision, recall, and F1-score, demonstrating efficiency with the reduced feature dimensionality.

Comparison graphs for accuracy, precision, recall and F1 score when applying PCA followed by Chi-Square selection are shown in Figures 8 through 11.

Figure 8 Comparison graph for accuracies on machine learning algorithms when PCA followed by Chi-Square selection is performed

Figure 9 Comparison graph for Precision scores on machine learning algorithms when PCA followed by Chi-Square selection is performed

Figure 10 Comparison graph for Recall scores on machine learning algorithms when PCA followed by Chi-Square selection is performed

Figure 11 Comparison graph for F1 scores on machine learning algorithms when PCA followed by Chi-Square selection is performed
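For concreteness, the combined configuration might be wired as a single pipeline along the following lines, reusing the preprocessed arrays from the methodology sketches. Note that chi-square scoring requires non-negative inputs, so the PCA output is rescaled before selection; this detail, like the component and feature counts, is an assumption the paper does not specify.

```python
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neural_network import MLPClassifier

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=42)

# Hedged sketch of the PCA -> Chi-Square -> MLP configuration.
combined = Pipeline([
    ("pca", PCA(n_components=0.95)),      # illustrative variance target
    ("rescale", MinMaxScaler()),          # chi2 needs non-negative values
    ("chi2", SelectKBest(chi2, k=10)),    # illustrative feature count
    ("mlp", MLPClassifier(max_iter=500)),
])
combined.fit(X_tr, y_tr)
print("Test accuracy:", combined.score(X_te, y_te))
```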
The evaluation identifies GBT and MLP as the models with the best performance across the various configurations. Applying the two feature selection techniques, PCA and Chi-Square selection, increases the performance of the system because noise is removed and effort is concentrated on the most significant features. The results confirm the applicability of the proposed methodology for detecting anomalies and intrusion threats with improved accuracy. The evaluation metrics and the comparisons for each configuration are illustrated in Figures 4 through 11.
V. CONCLUSION

The AI Based Threat Detection System is developed to address the problem of building advanced, scalable and accurate mechanisms for detecting and classifying network traffic anomalies in the face of evolving cyber threats. The system employs state of the art machine learning algorithms and robust feature engineering methods, which yield high accuracy across numerous configurations and show the system to be an effective and flexible tool for real world operation.

Key results include Gradient Boosted Trees (GBT) performing best on the raw data with 99.6% accuracy, and the Multi-Layer Perceptron (MLP) model proving robust when Principal Component Analysis (PCA) and Chi-Square feature selection are applied. These results demonstrate the improvement in model accuracy, precision, recall and F1 scores achieved by feature optimization.

The usability of the system was further validated through the development of the Flask and Python GUI interfaces, which allowed for immediate live predictions with manual testing. These interfaces ensure practical applicability and improve the user experience for cybersecurity professionals.

Finally, the study concludes that a robust and scalable solution to the problem of detecting network threats can be formed by combining machine learning with feature engineering and user friendly interfaces. This work creates a solid foundation for future improvements that integrate with real time monitoring systems to help overcome the cybersecurity issues of today.

VI. REFERENCES

[1] Singh, Kuldeep, and Lakshmi Sevukamoorthy. "Blockchain and AI-Based Threat Detection for Enhanced Security in Financial Networks." 2023 IEEE Technology & Engineering Management Conference - Asia Pacific (TEMSCON-ASPAC). IEEE, 2023.

[2] Schmitt, Marc. "Securing the Digital World: Protecting smart infrastructures and digital industries with Artificial Intelligence (AI)-enabled malware and intrusion detection." Journal of Industrial Information Integration 36 (2023): 100520.

[3] Mirsky, Yisroel, et al. "The threat of offensive AI to organizations." Computers & Security 124 (2023): 103006.

[4] Wang, Bo-Xiang, Jiann-Liang Chen, and Chiao-Lin Yu. "An AI-powered network threat detection system." IEEE Access 10 (2022): 54029-54037.

[5] Salatino, Francesco, et al. "Detecting DDoS Attacks Through AI driven SDN Intrusion Detection System." 2024 IEEE 21st Consumer Communications & Networking Conference (CCNC). IEEE, 2024.

[6] Ullah, Shamsher, et al. "The revolution and vision of explainable AI for Android malware detection and protection." Internet of Things (2024): 101320.

[7] Preetam, Sonu, et al. "An Approach for Intelligent Behaviour-Based Threat Modelling with Explanations." 2023 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN). IEEE, 2023.

[8] Senevirathna, Thulitha, et al. "Deceiving Post-hoc Explainable AI (XAI) Methods in Network Intrusion Detection." 2024 IEEE 21st Consumer Communications & Networking Conference (CCNC). IEEE, 2024.

[9] Lee, Jonghoon, et al. "Cyber threat detection based on artificial neural networks using event profiles." IEEE Access 7 (2019): 165607-165626.

[10] Rathod, Viraj, Chandresh Parekh, and Dharati Dholariya. "AI & ML Based Anomaly Detection and Response Using Ember Dataset." 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). IEEE, 2021.

[11] Sidarth V. and Kavitha C. R. "Network Intrusion Detection System Using Stacking and Boosting Ensemble Methods." Proceedings of the 3rd International Conference on Inventive Research in Computing Applications (ICIRCA 2021), 2021, pp. 357-363.

[12] Shanmukha Aditya G., Kruthika B., Shinu M. Rajagopal, and C. R. Kavitha. "Homomorphic Encryption for Secure Data Analysis: A Hybrid Approach using PKCS1_OAEP Padding." 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT 2024), January 2024.
