A Novel IoT-Based Explainable Deep Learning Framework For Intrusion Detection Systems
Abstract
The growth of the Internet of Things (IoT) is accompanied by serious cybersecurity risks, especially with the emergence of IoT botnets. In this context, intrusion detection systems (IDSs) have proven their efficiency in detecting various attacks that may target IoT networks, especially when leveraging machine/deep learning (ML/DL) techniques. ML/DL-based solutions make “machine-centric” decisions about intrusion detection in the IoT network, which are then executed by humans (i.e., executive cyber-security staff). However, these solutions do not provide any explanation of why such decisions were made, and thus their results cannot be properly understood or exploited by humans. To address this issue, explainable artificial intelligence (XAI) is a promising paradigm that helps explain the decisions of ML/DL-based IDSs and make them understandable to cyber-security experts. In this article, we design a novel XAI-powered framework that not only detects intrusions/attacks in IoT networks, but also interprets the critical decisions made by ML/DL-based IDSs. To this end, we first build an ML/DL-based IDS using a deep neural network (DNN) to detect and predict IoT attacks in real time. We then develop multiple XAI models (i.e., RuleFit and SHapley Additive exPlanations (SHAP)) on top of our DNN architecture to give cyber-security experts more trust in, transparency into, and explanation of the decisions made by our ML/DL-based IDS. In-depth experimental results with well-known IoT attacks show the efficiency and explainability of our proposed framework.
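To make the first stage of the framework concrete, the sketch below builds a small binary DNN classifier for IoT traffic in Keras. It is a minimal illustration, not the article's exact architecture: the layer sizes and hyperparameters are assumptions, and X_train/y_train stand for an already preprocessed feature matrix and attack/normal labels (e.g., from UNSW-NB15).

```python
# Minimal sketch of the DNN-based IDS stage (hypothetical layer sizes,
# not the article's exact architecture). Assumes X_train, y_train are an
# already preprocessed, numeric feature matrix and 0/1 attack labels.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_ids_dnn(n_features: int) -> keras.Model:
    """Small fully connected binary classifier: attack vs. normal flow."""
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(attack)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Example usage (X_train, y_train assumed available):
# model = build_ids_dnn(X_train.shape[1])
# model.fit(X_train, y_train, epochs=10, batch_size=256, validation_split=0.2)
```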
several XAI approaches, including SHAP, the contrastive explanations method, LIME, and ProtoDash. Similarly, another framework used the SHAP approach to improve the transparency of any ML/DL-based IDS [9]. The authors also used the NSL-KDD dataset to test the performance of the framework. In [10], XAI is integrated with an ML-based IDS.

[Figure: Overview of the XAI pipeline. Data feed an IDS deep learning model that produces predictions; an XAI component produces explanations, delivered through an explanation interface, that lead to decisions or recommendations. Users of the models aim to understand and trust the model itself; executive staff aim to understand and execute the model's decisions.]
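To illustrate how such post hoc explainers attach to a trained detector, the following sketch applies LIME (one of the approaches mentioned above, not the method adopted in our framework) to a tabular IDS model. It assumes the lime package and reuses the hypothetical model, X_train, X_test, and feature_names from the previous sketch; the two-column wrapper is needed because LIME expects one probability per class.

```python
# Sketch: attaching a post hoc explainer (LIME) to the trained IDS.
# Assumes the `lime` package and reuses the hypothetical model, X_train,
# X_test, and feature_names from the previous sketch.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

def predict_proba(batch: np.ndarray) -> np.ndarray:
    """LIME expects one probability per class; expand the sigmoid output."""
    p_attack = model.predict(batch).ravel()
    return np.column_stack([1.0 - p_attack, p_attack])

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,       # e.g., UNSW-NB15 column names
    class_names=["normal", "attack"],
    mode="classification",
)

# Explain one flagged flow: which features pushed it toward "attack"?
exp = explainer.explain_instance(X_test[0], predict_proba, num_features=10)
print(exp.as_list())                   # [(feature condition, weight), ...]
```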
Figure 2. Feature importance scores on the UNSW-NB15 dataset for: a) RuleFit; b) SHAP.
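Importance scores such as those in Fig. 2b can be reproduced, under the same assumptions as the earlier sketches, by averaging absolute SHAP values per feature. The model-agnostic KernelExplainer is used here for illustration; the article does not specify which SHAP explainer was applied.

```python
# Sketch: global feature importance via SHAP, analogous to Fig. 2b.
# KernelExplainer is chosen because it is model-agnostic; reuses the
# hypothetical model, X_train, X_test, feature_names from earlier sketches.
import numpy as np
import shap

background = shap.sample(X_train, 100)            # summarize training data
shap_explainer = shap.KernelExplainer(
    lambda batch: model.predict(batch).ravel(), background)
shap_values = shap_explainer.shap_values(X_test[:200])

# Mean absolute SHAP value per feature = global importance score.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance),
                          key=lambda t: -t[1])[:10]:
    print(f"{name}: {score:.4f}")
```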
Figure 4. Interpretation of our DNN model on the UNSW-NB15 dataset with: a) Sload of 4.5 × 10⁴, a stcpb of 1.43 × 10⁹, and a dtcpb of 3.5 × 10⁹; b) Sload of 4.9 × 10⁵, a stcpb of 0, and a dtcpb of 0; c) Sload of 1.8 × 10⁹, a stcpb of 5.8 × 10⁹, and a dtcpb of 2.7 × 10⁹.
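A per-flow view like Fig. 4 corresponds to a local SHAP explanation of a single instance. Reusing the hypothetical explainer above, a force plot shows which feature values (e.g., Sload, stcpb, dtcpb) push the predicted attack probability above or below the base value.

```python
# Sketch: local explanation of a single flow, mirroring the cases in Fig. 4.
# Feature values (Sload, stcpb, dtcpb, ...) come from the test instance
# itself; shap_explainer is the hypothetical one built above.
import shap

i = 0                                    # index of the flow to explain
sv = shap_explainer.shap_values(X_test[i:i + 1])[0]

# Force plot: features pushing P(attack) above or below the base value.
shap.force_plot(shap_explainer.expected_value, sv, X_test[i],
                feature_names=feature_names, matplotlib=True)
```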