
Information

Review
Review of Modern Forest Fire Detection Techniques:
Innovations in Image Processing and Deep Learning
Berk Özel 1,†, Muhammad Shahab Alam 2 and Muhammad Umer Khan 1,*,†

1 Department of Mechatronics Engineering, Atilim University, Ankara 06830, Turkey; [email protected]


2 Defense Technologies Institute, Gebze Technical University, Gebze 41400, Turkey; [email protected]
* Correspondence: [email protected]
† These authors contributed equally to this work.

Abstract: Fire detection and extinguishing systems are critical for safeguarding lives and minimizing
property damage. These systems are especially vital in combating forest fires. In recent years, several
forest fires have set records for their size, duration, and level of destruction. Traditional fire detection
methods, such as smoke and heat sensors, have limitations, prompting the development of innovative
approaches using advanced technologies. Utilizing image processing, computer vision, and deep
learning algorithms, we can now detect fires with exceptional accuracy and respond promptly to
mitigate their impact. In this article, we conduct a comprehensive review of articles from 2013 to
2023, exploring how these technologies are applied in fire detection and extinguishing. We delve into
modern techniques enabling real-time analysis of the visual data captured by cameras or satellites,
facilitating the detection of smoke, flames, and other fire-related cues. Furthermore, we explore the
utilization of deep learning and machine learning in training intelligent algorithms to recognize fire
patterns and features. Through a comprehensive examination of current research and development,
this review aims to provide insights into the potential and future directions of fire detection and
extinguishing using image processing, computer vision, and deep learning.

Keywords: artificial intelligence; deep learning; detection; fire; flame; forest fire; smoke; wildfire

Citation: Özel, B.; Alam, M.S.; Khan, M.U. Review of Modern Forest Fire Detection Techniques: Innovations in Image Processing and Deep Learning. Information 2024, 15, 538. https://doi.org/10.3390/info15090538

Academic Editor: Marco Leo

Received: 3 July 2024; Revised: 28 August 2024; Accepted: 29 August 2024; Published: 3 September 2024

Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Forests cover approximately 4 billion hectares of the world's landmass, roughly equivalent to 30% of the total land [1]. The preservation of forests is essential for maintaining biodiversity on a global scale. Wildfires are destructive events that could adversely change the balance of our planet and threaten our future [2]. Wildfires have long-term devastating effects on ecosystems, such as disrupted vegetation dynamics, greenhouse gas emissions, loss of wildlife habitat, and destruction of land cover. The early detection and rapid extinguishing of fires are crucial in minimizing the loss of life and property [3]. Traditional fire detection systems that rely on smoke or heat detectors suffer from low accuracy and long response times [4]. However, advancements in image processing (IP), computer vision (CV), and deep learning (DL) have opened up new possibilities for more effective and efficient fire detection and extinguishing systems [5]. These systems utilize cameras and sophisticated algorithms to analyze visual data in real time, enabling early fire detection and efficient fire suppression strategies.

In most of the literature, researchers have mainly posed their problem under the paradigm of fire detection [6–8]. But some researchers have also explored different aspects of the phenomenon of combustion, i.e., smoke [9,10], flame [11], and fire [12], with the intent to effectively determine the threats due to fire. In summary, fire is the overall phenomenon of combustion involving the rapid oxidation of a fuel source, while flame represents the visible, gaseous part of a fire that emits light and heat. Smoke, on the other hand, is the collection of particles and gases released during a fire, which can be toxic and pose health

Information 2024, 15, 538. https://doi.org/10.3390/info15090538 https://www.mdpi.com/journal/information



hazards [13]. In this paper, we review automatic fire, flame, and smoke detection over the last eleven years, i.e., from 2013 to 2023, using deep learning and image processing.
Image processing techniques enable the extraction of relevant features from images or
video streams that are captured by cameras [14]. This includes analyzing color, texture, and
spatial information to identify potentially fire-related patterns [15]. By applying algorithms
such as edge detection, segmentation, and object recognition, fire can be detected and
differentiated from non-fire elements with a high degree of accuracy [16,17].
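As a concrete illustration of such rule-based color analysis, the sketch below flags candidate fire pixels in the YCbCr color space, a common choice in this literature. The conversion constants are standard BT.601 full-range values, but the specific rules, thresholds, and toy image are illustrative assumptions, not the method of any particular paper reviewed here.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB uint8 image (H, W, 3) to Y, Cb, Cr planes
    (ITU-R BT.601 full-range constants)."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def fire_pixel_mask(img):
    """Rule-based candidate fire mask: flame pixels tend to be bright
    (high Y), red-dominant (high Cr), and blue-deficient (low Cb),
    both in chrominance ordering and relative to scene statistics."""
    y, cb, cr = rgb_to_ycbcr(img)
    return ((y > cb) & (cr > cb) &
            (y > y.mean()) & (cr > cr.mean()) & (cb < cb.mean()))

# toy 2x2 image: one flame-like orange pixel, one blue, one gray, one green
img = np.array([[[255, 120, 0], [0, 0, 255]],
                [[30, 30, 30], [40, 80, 40]]], dtype=np.uint8)
print(fire_pixel_mask(img))   # only the orange pixel is flagged
```

Real systems typically follow such a color mask with texture, motion, or learned classifiers to suppress false positives such as sunsets or artificial lights.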
Computer vision can play a crucial role in early fire detection by utilizing image and
video processing techniques to analyze visual data and identify signs of fire [18]. CV algo-
rithms can identify patterns based on features such as color, shape, and motion [19,20]. CV
with thermal imaging technology can detect fires based on temperature variations [21,22].
It is important to note that CV conjugated with other fire safety measures, such as smoke
detectors, heat sensors, and human intervention, enhances early fire detection. DL com-
bined with CV can also effectively recognize various fire characteristics, including flames,
smoke patterns, and heat signatures [23]. It enables more precise and reliable fire detection,
even in challenging environments with variable lighting conditions or occlusions.
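To make the motion cue concrete, here is a minimal frame-differencing sketch in pure NumPy; the threshold and toy frames are illustrative assumptions. Flames flicker between frames, so temporally unstable bright pixels are better fire candidates than static bright objects.

```python
import numpy as np

def motion_candidates(frames, diff_thresh=25):
    """Flag pixels whose intensity changes strongly between consecutive
    frames. Flames flicker, so temporally unstable pixels are better fire
    candidates than static bright objects (lamps, sun glare)."""
    frames = np.asarray(frames, dtype=float)      # (T, H, W) grayscale stack
    diffs = np.abs(np.diff(frames, axis=0))       # |frame[t+1] - frame[t]|
    return (diffs > diff_thresh).any(axis=0)      # changed in any frame pair

# toy sequence: a static bright pixel (left) vs. a flickering one (right)
f0 = np.array([[200.0, 200.0]])
f1 = np.array([[200.0, 120.0]])
f2 = np.array([[200.0, 210.0]])
mask = motion_candidates([f0, f1, f2])
print(mask)   # only the flickering pixel is flagged
```

In practice such a motion mask is intersected with a color or thermal mask, so that a steady lamp (bright but static) and swaying foliage (moving but not flame-colored) are both rejected.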
Deep learning, a subset of machine learning (ML), has revolutionized the field of CV
by enabling the training of highly complex and accurate models [24]. Deep learning models,
such as convolutional neural networks (CNNs), can be trained on vast amounts of labeled
fire-related images and videos, learning to automatically extract relevant features and
classify fire instances with remarkable precision [25,26]. These models can continuously
improve their performance through iterative training, enhancing their ability to detect fires
and reduce false alarms [27].
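The feature extraction that CNNs automate can be illustrated with a single hand-written convolutional layer. The kernel below is a fixed edge detector; a trained CNN would instead learn many such kernels from labeled fire images. This is a didactic sketch, not any model reviewed here.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 2-D convolution with valid padding (implemented as
    cross-correlation, as in CNN frameworks): the core local-feature
    extraction operation of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A fixed vertical-edge kernel responds at the boundary of a bright region,
# the kind of low-level feature that early CNN layers learn automatically
# instead of being hand-designed.
img = np.zeros((5, 5))
img[:, 3:] = 1.0                                 # bright region on the right
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)   # 3x3 vertical-edge detector
response = np.maximum(conv2d_valid(img, edge_kernel), 0.0)   # ReLU activation
print(response.shape)   # → (3, 3)
```

Stacking many such learned kernels, nonlinearities, and pooling layers is what lets CNNs progress from edges to flame shapes and smoke textures.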
This work provides a systematic review of the most representative fire and/or smoke
detection and extinguishing systems, highlighting the potential of image processing, com-
puter vision, and deep learning. Based on three types of inputs, i.e., camera images, videos,
and satellite images, the widely used methods for identifying active fire, flame, and smoke
are discussed. As research and development continue to advance these technologies, future
fire extinguishing systems promise to provide robust protection against the devastating
effects of fires, ultimately saving lives and minimizing property damage.
The remainder of this paper is structured as follows: Section 2 presents the search strategy and selection criteria. Section 3 details the broadly defined classes for fire and smoke detection. Section 4 presents an analysis of the selected topic areas, discussing representative publications from each area in detail. In Section 5, we discuss the factors critical for forest fires, followed by recommendations for future research in Section 6. Lastly, Section 7 concludes this study.

2. Methodology: Search Strategy and Selection Criteria


The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) [28]
framework defined the methodology for this systematic review. PRISMA provides a
standardized approach for conducting and reporting systematic reviews, ensuring that
all relevant studies are identified and assessed comprehensively and transparently. This
review aims to understand the approaches used to detect or extinguish forest fires. The
required data for this systematic review were gathered from two renowned sources, Web of
Science™ and IEEE Xplore®, and the review was limited to peer-reviewed journal articles
published from 2013 to 2023. Web of Science™ is a research database that offers a wide
range of scholarly articles across many disciplines. It includes citation indexing, which
helps track the impact of research. IEEE Xplore® is a digital library focused on electrical
engineering, electronics, computer science, and other related fields. It provides access to
technical literature like journal articles, conference proceedings, and technical standards.
We used the EndNote 20.6 reference manager, a software tool by Clarivate, to organize
and manage the references collected during the review process. EndNote helped us to
classify the references, filter relevant studies, and screen for duplicates, as well as ensure a
comprehensive and systematic review of the literature. This tool is widely used in academic research to streamline the process of citation management and bibliography creation. “Fire
Detection” was used in conjunction with “Computer Vision”, “Machine Learning”, “Image
Processing”, and “Deep Learning” to define the primary search string. To identify the
applications of fire detection, “Fire Extinguishing” conjugated with “UAV” and “UGV”
was used to define the secondary search string. The pictorial view of the selected areas of
the research along with their distribution is depicted in Figure 1.

Figure 1. Selected areas for research. Abbreviations: FD: Fire Detection; CV: Computer Vision; ML: Machine Learning; DL: Deep Learning; IP: Image Processing; FE: Fire Extinguishing; UAV: Unmanned Aerial Vehicle; UGV: Unmanned Ground Vehicle.

Figure 2 illustrates the PRISMA framework used to identify and select the most relevant literature. The search conducted using the primary keywords retrieved 1872 records from Web of Science™ and 288 records from IEEE Xplore®. Data from both sources were merged, and after duplicate removal, 1823 records remained. By excluding all records published before 2013 or after 2023, and by applying the search string (“Forest Fire” || “Wildfire”) & (“detection” || “recognition” || “extinguish”) to the abstract, title, and keyword fields, only 270 records were retained. Another screening was applied to obtain the most relevant data aligned with our interest, and after excluding publications for which the full text was not accessible, a total of 155 journal papers from the most relevant journals were retained for detailed review.
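The abstract, title, and keyword screening step can be expressed as a simple predicate. The function below is an illustrative re-implementation of the search string, not the exact query syntax used by either database.

```python
def matches(record_text):
    """Illustrative re-implementation of the screening string
    ("Forest Fire" || "Wildfire") & ("detection" || "recognition" ||
    "extinguish"), applied case-insensitively to a record's combined
    abstract, title, and keywords."""
    text = record_text.lower()
    topic = "forest fire" in text or "wildfire" in text
    task = any(t in text for t in ("detection", "recognition", "extinguish"))
    return topic and task

print(matches("Real-time wildfire detection from UAV imagery"))     # → True
print(matches("Forest fire risk mapping with climate indicators"))  # → False
```

Substring matching also catches inflected forms such as "extinguishing", which mirrors the wildcard behavior of most database search engines.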
To analyze these publications, Figure 3 illustrates the number of journal publications
from 2013–2023. The increasing trend after 2018 is an indicator of growing interest in
this area of study. The top five journals publishing the most papers on this topic are Fire
Technology (9), Forests (14), IEEE Access (9), Remote Sensing (21), and Sensors (13). These
journals account for almost 43% of all publications.
Figure 2. PRISMA framework. Identification: records identified through database searching (n = 2160; Web of Science = 1872, IEEE Xplore = 288); records after duplicates removed (n = 1823). Screening: records screened (n = 1823); records excluded (n = 1553), comprising those published before 2013 or after 2023 (n = 575) and those excluded by abstract, title, and keywords (n = 978). Eligibility: full-text articles assessed for eligibility (n = 270); full-text articles excluded, with reasons (n = 115): study focus irrelevant (n = 95), systematic review (n = 8), full text not available (n = 12). Included: studies included in qualitative synthesis (n = 155).

Figure 3. Distribution of the number of publications over the period of 2013 to 2023 (2013: 1; 2014: 2; 2015: 2; 2016: 2; 2017: 5; 2018: 4; 2019: 17; 2020: 20; 2021: 24; 2022: 34; 2023: 44).

3. Research Topics
While conducting our literature search, we tried to cover all aspects contributing to the overall topic. Though these can be considered distinct research topics, from the perspective of deep learning they work in concert.
• Image Processing: Research that focuses on fire detection based on the features ex-
tracted after processing the image [29,30].
• Computer Vision: Research focusing on the algorithms to understand and interpret
the visual data to identify fire [31].
• Deep Learning: Research associated with the models that can continuously enhance
their ability to detect fires [32].
Based on the literature search, four main groups were formulated to classify the
publication results. This classification is mainly based on the research topic, theme, title,
practical implication, and keywords. Each publication in our search fell broadly into one of
these categories:
1. Fire: Research that addresses the methods capable of identifying the forest fire in
real-time or based on datasets [33,34].
2. Smoke: Research focusing on the methods to identify smoke with its different color
variations [35,36].
3. Fire and Flame: Research associated with the methods that can identify fire and
flame [37].
4. Fire and Smoke: Research that explores the methods focusing on the accurate deter-
mination of fire and smoke [38].
A fifth category has been introduced that forms part of the above-defined categories but is application-oriented, involving the use of robots.
5. Applications: Research that addresses a robot’s ability not only to detect fire but also
to extinguish it [39–41].

4. Analysis
The distribution of publications across the selected categories is illustrated in Figure 4. Of the defined categories, fire detection was the most dominant class, containing 68 (44%) of the 155 total publications, followed by smoke detection with 33 (21%), fire and smoke with 23 (15%), applications with 18 (12%), and fire and flame with 13 (8%). The data highlight that fire detection and monitoring are foundational areas in the field, while practical applications for fire extinguishing, particularly those involving unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), remain less developed. Only seven articles focused on UGVs and eleven on UAVs for fire extinguishing, indicating that on-field utilization in this area is still in its early stages.
Deep learning has been successfully applied to fire, flame, and smoke detection tasks, where its ability to learn complex patterns and features from large amounts of data has been exploited [42,43]. The primary task in fire detection is dataset collection: assembling a large dataset of images or videos containing both fire and non-fire scenes [44]. The collected
data need to be preprocessed to ensure consistency and quality. This may involve resizing
images, normalizing pixel values, removing noise, and augmenting the dataset by applying
transformations like rotation, scaling, or flipping [45]. Afterward, a deep learning model
needs to be designed and trained to perform fire, smoke, or flame detection. CNNs are
commonly used for this purpose due to their effectiveness in image-processing tasks [46].
The architecture can be customized based on the specific requirements and complexity of
the detection task [47].
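The preprocessing and augmentation steps described above can be sketched as follows (NumPy only). The transformations shown are the normalization, flips, and rotations mentioned in the text; resizing and denoising are omitted, and the random toy image is an illustrative stand-in for real training data.

```python
import numpy as np

def preprocess(img):
    """Normalize uint8 pixel values to [0, 1], one of the preprocessing
    steps described above (resizing and denoising are omitted here)."""
    return img.astype(np.float32) / 255.0

def augment(img):
    """Generate simple label-preserving variants of a training image:
    horizontal/vertical flips and 90-degree rotations."""
    return [img, np.fliplr(img), np.flipud(img),
            np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3)]

# one raw image expands into a small normalized mini-batch of six variants
raw = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
batch = [preprocess(v) for v in augment(raw)]
print(len(batch))   # → 6
```

Because a fire scene remains a fire scene under these geometric transforms, augmentation multiplies the effective dataset size without any additional labeling effort.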
For all publications, we extracted key information such as dataset, data type, method, objective, and achievement. One or two representative publications were picked from each category based on the annual citation count (ACC), a metric that indicates the average number of citations per year since publication. The citation counts were retrieved from the Web of Science™ up to July 2024. To qualify as a representative publication, a publication's ACC should deviate positively from the category mean in units of the standard deviation, Std(ACC).
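The ACC-based selection can be made concrete with a short sketch. The citation counts and years below are invented for illustration, and reading the selection rule as "positive standardized ACC" is our interpretation of the criterion stated above.

```python
from statistics import mean, stdev

def acc(citations, pub_year, current_year=2024):
    """Annual citation count: average citations per year since publication."""
    return citations / max(current_year - pub_year, 1)

# hypothetical citation counts for three papers in one category
papers = {"A": acc(120, 2019), "B": acc(10, 2021), "C": acc(20, 2022)}
mu, sigma = mean(papers.values()), stdev(papers.values())

# representative papers: standardized ACC (z-score) is positive,
# i.e., the paper's ACC lies above the category mean
representative = [p for p, a in papers.items() if (a - mu) / sigma > 0]
print(representative)   # → ['A']
```

Normalizing by years since publication prevents older papers from dominating purely through accumulated citations.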

Figure 4. Publications in selected categories.

4.1. Fire
It is important to note that deep learning models for fire detection rely heavily on the
quality and diversity of the training data. Obtaining a comprehensive and representative
dataset is crucial for achieving accurate and robust fire detection performance. Past research
efforts related to fire detection are listed in Table 1 in terms of the dataset, method, objectives,
and achievements.
Table 1. List of the past work related to fire detection (reference; dataset and data type; method; objective; achievement).

[48] 47,992 images (images); transfer learning. Objective: achieving early prevention and control of large-scale forest fires. Achievement: recognition accuracy of 79.48% through the FTResNet50 model.
[49] 2976 images (images); YOLOv5 and EfficientDet. Objective: overcoming the shortcomings of manual feature extraction and achieving higher accuracy in forest fire recognition by weighted fusion. Achievement: the average accuracy of the proposed model for forest fire identification reached 87%.
[50] 11 videos (videos); YCbCr and correlation coefficient. Objective: achieving efficient forest fire detection using a rule-based multi-color space and a correlation coefficient. Achievement: achieved an F-score of 95.87% and an accuracy of 97.89% on fire detection.
[51] 11,456 images (images); SqueezeNet. Objective: identifying the existence of fire by first segmenting all fire-like areas and then processing them through the classification module. Achievement: attained 93% accuracy.
[52] 2100 images (images); CNN. Objective: extracting and classifying image features for fire recognition based on a CNN. Achievement: achieved a classification accuracy of around 95%.
[53] * data obtained from the USGS website (satellite images); SVM. Objective: performing forest fire detection on LANDSAT images using SVM. Achievement: obtained 99.21% accuracy and a high precision of 98.41% on fire detection.
[54] 12,000 frames (thermal images); automatic gain control algorithm. Objective: utilizing thermal infrared sensing for near real-time, data-driven fire detection and monitoring. Achievement: the proposed approach achieved better situation awareness when compared to existing methods.
[55] 37 images (satellite images); simple linear iterative clustering. Objective: building an unsupervised change detection framework that uses post-fire VHR images with pre-fire PS data to facilitate the assessment of wildfire damage. Achievement: achieved an overall accuracy of over 99% on wildfire damage assessments.
[56] 500 images (images); YCbCr color space and CNN. Objective: introducing conventional image processing techniques, CNNs, and an adaptive pooling approach. Achievement: achieved an accuracy of 90.7% on fire detection.
[57] 52 images (images); MWIR. Objective: detecting forest fires by middle infrared channel measurement. Achievement: achieved 77.63% accuracy on fire detection.
[58] * (images); Horn–Schunck optical flow. Objective: performing aerial-image-based forest fire detection for firefighting using optical remote-sensing techniques. Achievement: experimental results verified that the proposed forest fire detection method can achieve good performance.
[59] 175 videos (videos); SVM. Objective: performing multi-feature analysis in YUV color space for early forest fire detection. Achievement: attained an average detection rate of 96.29%.
[60] VIIRS (satellite images); FILDA. Objective: developing FILDA, which characterizes fire pixels based on both visible-light and IR signatures at night. Achievement: compared to existing algorithms, the proposed algorithm produced a much more accurate detection of fire.
[61] 13 images (images); spatio-temporal model. Objective: developing a spatio-temporal model for forest fire detection using HJ-IRS satellite data. Achievement: achieved a 94.45% detection rate on fire detection.
[62] 5 images (images); GMM. Objective: building an early detection system for forest fire smoke signatures using GMM. Achievement: the developed system detected fire in all of the test videos in less than 2 min.
[63] 3320 images (images); YOLOv5. Objective: performing small-target forest fire detection. Achievement: achieved an 82.1 mAP@0.5 in forest fire detection and a 70.3 mAP@0.5 in small-target forest fire detection.
[64] 22 tiles of Landsat-8 images (satellite images); deep CNN. Objective: determining the starting point of the fire for the early detection of forest fires. Achievement: achieved a 97.35% overall accuracy under different scenarios.
[65] 11,681 images (images); FCOS. Objective: detecting forest fires in real time and providing firefighting assistance. Achievement: attained 89.34% accuracy in forest fire detection.
[66] 6595 images (images); MTL. Objective: solving the problems of poor small-target recognition and many missed and false detections in complex forest scenes. Achievement: achieved 98.3% accuracy through segmentation and classification.
[67] 8000 images (images); R-CNN. Objective: classifying video frames into two classes (fire, no-fire) according to the presence or absence of fire, with a segmentation method used for incipient forest-fire detection and segmentation. Achievement: an accuracy of 93.65% and a precision of 91.85% were achieved on forest-fire detection and segmentation.
[68] * (images); non-subsampled contourlet transform and visual saliency. Objective: building a machine-vision-based network monitoring system for solar-blind ultraviolet signals. Achievement: it was claimed that the fusion results of the proposed method had higher clarity and contrast and retained more image features.
Table 1. Cont.

[69] 81,810 images (images); R-CNN, Bayesian network, and LSTM. Objective: improving fire detection accuracy compared with other video-based methods. Achievement: achieved an accuracy of 97.68% for affected areas.
[70] 500 images (RGB and NIR images); vision transformer. Objective: achieving early detection and segmentation of wildfires to predict their spread and help with firefighting. Achievement: obtained a 97.7% F1-score on wildfire segmentation.
[71] 2000 images (images); artificial bee colony algorithm-based color space. Objective: detecting forest fires using color space. Achievement: obtained a mean Jaccard index of 0.76 and a mean Dice index of 0.85.
[72] 4000 images (images); deep CNN. Objective: detecting fire as early as possible. Achievement: achieved a 94.6% F-score fire detection rate.
[73] 48,010 images (images); CNN and vision transformers. Objective: detecting wildfire at early stages. Achievement: obtained an 85.12% accuracy on wildfire classification and a 99.9% F1-score on semantic segmentation.
[74] 37,016 images (satellite images); CNN. Objective: building an automated active fire detection framework using Sentinel-2 imagery. Achievement: obtained an average IoU higher than 70% on active fire detection.
[75] 38,897 images (satellite images); CNN. Objective: accurately detecting fire-affected areas from satellite imagery. Achievement: achieved a 92% detection rate under cloud-free weather conditions.
[76] 8194 images (satellite images); CNN. Objective: performing active fire detection using deep learning techniques. Achievement: achieved a precision of 87.2% and a recall of 92.4% on active fire detection.
[77] 10,000 images (images); RNN, LSTM, and GRU. Objective: performing early detection of forest fires with higher accuracy. Achievement: an accuracy of 99.89% and a loss function value of 0.0088 were achieved on fire detection.
[78] * (satellite images); GRU network. Objective: building an early fire detection system. Achievement: performed GRU-based detection of wildfire earlier than the VIIRS active fire products in most of the study area.
[79] 5469 images (satellite images); CNN. Objective: building an accurate monitoring system for wildfires. Achievement: achieved an accuracy of 99.9% on fire detection.
[80] 10,581 images (images); EfficientDet and YOLOv5. Objective: detecting forest fires in different scenarios by an ensemble learning method. Achievement: obtained 99.6% accuracy on fire detection.
[81] 4000 images (images); CNN. Objective: introducing an additive neural network for forest fire detection. Achievement: attained 96% accuracy on fire detection.
[82] 1500 images (images); DCNN. Objective: performing saliency detection and DL-based wildfire identification in UAV imagery. Achievement: achieved an overall accuracy of 98% on fire classification.
[83] 6137 images (images); CNN. Objective: building a system that can spot wildfire in real time with high accuracy. Achievement: achieved a detection precision of 98% for fire detection.
[84] 2425 images (images); GMM-EM. Objective: detecting fire by combining color-motion-shape features with machine learning. Achievement: a TPR of 89.97% and an FNR of 10.03% were achieved for detection.
[85] * (images); CEP. Objective: performing real-time wildfire detection with semantic explanations. Achievement: through experimental results based on four real datasets and one synthetic dataset, the superiority of the proposed method was established.
[86] 12 images and 7 videos (images and videos); kNN. Objective: performing pixel-level automatic annotation for forest fire images. Achievement: achieved a higher fire detection rate and a lower false alarm rate in comparison to existing algorithms.
[87] 39,375 frames (videos); ANN. Objective: developing a dataset of aerial images of fire and performing fire detection and segmentation on this dataset. Achievement: achieved a precision of 92% and a recall of 84% for detection.
[88] 2000 images (images); CNN and SVM. Objective: developing a robust algorithm to deal with the problems of a complex background, the weak generalization ability of image recognition, and low accuracy. Achievement: accomplished fire detection with a recognition rate of 97.6%, a false alarm rate of 1.4%, and a missed alarm rate of 1%.
[89] 2 Landsat-7 images (satellite images); ELM. Objective: utilizing an adaptive ensemble of ELMs for the classification of RS images into change/no-change classes. Achievement: achieved an accuracy of 90.5% in detecting the change.
[90] 30 images (videos and images); SVM. Objective: identifying fires and providing fire warnings with excellent noise suppression. Achievement: obtained a 97% TPR on classification.
[91] 8500 images (images); data fusion. Objective: detecting smoke from fires, usually within 15 min of ignition. Achievement: achieved an accuracy of 91% on the test set and an F1-score of 89%.
[92] WSN transmission data; AAPF. Objective: utilizing auto-organization and adaptive frame periods for forest fire detection. Achievement: developed a comprehensive model to evaluate the communication delay and energy consumption.
Table 1. Cont.

[93] 20,250 pixels (satellite images); random forest. Objective: building a three-step forest fire detection algorithm using Himawari-8 geostationary satellite data. Achievement: achieved an overall accuracy of 99.16%, a POD of 93.08%, and a POFD of 0.07%.
[94] 1194 images (images); multi-channel CNN. Objective: performing fire detection using a multichannel CNN. Achievement: obtained 98% or higher classification accuracy, a claimed improvement of 2% over traditional feature-based methods.
[95] 7690 images (images); DCNN and BPNN. Objective: developing an improved DCNN model for forest fire risk prediction; implementing the BPNN fire algorithm to calculate video image processing speed and delay rate. Achievement: achieved an 84.37% accuracy in real-time forest fire recognition.
[96] * (images); DeepLabV3+. Objective: presenting Defog DeepLabV3+ for collaborative defogging and precise flame segmentation; proposing DARA to enhance flame-related feature extraction. Achievement: achieved a 94.26% accuracy, 94.04% recall, and 89.51% mIoU.
[97] 1452 images (images); transfer learning. Objective: exploring several CNN models, applying transfer learning, using SVM and RF for detection, and training/testing networks with random and ImageNet weights on a forest fire dataset. Achievement: achieved a 99.32% accuracy.
[98] 14,094 images (images); FuF-Det (encoder–decoder transformer). Objective: designing AAFRM to preserve positional features, constructing RECAB to retain fine-grained fire point details, and introducing CA in the detection head to improve localization accuracy. Achievement: achieved an mAP@0.5 of 86.52% and a fire spot detection rate of 78.69%.
[99] 3000 images (images); YOLOv5. Objective: integrating the transformer module into YOLOv5's feature extraction network, inserting the CA mechanism before the YOLOv5 head, and using ASFF in the model's head to enhance multi-scale feature fusion. Achievement: achieved an mAP@0.5 of 84.56%.
[100] 1900 images (images); ensemble learning. Objective: proposing a stacking ensemble model using pre-trained models as base learners for feature extraction and initial classification, followed by a Bi-LSTM network as a meta-learner for final classification. Achievement: achieved 97.37%, 95.79%, and 95.79% accuracy with hold-out validation, five-fold cross-validation, and ten-fold cross-validation, respectively.
[101] 5250 infrared images (images); YOLOv5s. Objective: proposing FFDSM based on YOLOv5s-seg and incorporating ECA and SPPFCSPC modules to enhance fire detection accuracy and feature extraction. Achievement: achieved an mAP@0.5 of 0.907.
[102] 204,300 images (images); deep ensemble learning. Objective: presenting a deep ensemble neural network model using Faster R-CNN, RetinaNet, YOLOv2, and YOLOv3. Achievement: the proposed approach significantly improved detection accuracy for potential fire incidents in the input data.
[103] 1900 images (images); CNN. Objective: proposing a forest fire detection method using a CNN architecture employing separable convolution layers for immediate fire detection, reducing computational resources and enabling real-time applications. Achievement: achieved an accuracy of 97.63% and an F1-score of 98.00%.
[104] 51,906 images (images); ensemble learning. Objective: proposing CT-Fire, combining the deep CNN RegNetY and the vision transformer EfficientFormer v2 to detect forest fires in ground and aerial images. Achievement: attained accuracy rates of 99.62% for ground images and 87.77% for aerial images.
[105] 348,600 images (images); Detectron2. Objective: detecting forest fires using different deep-learning models, preparing a dataset, comparing the proposed method with existing ones, and implementing it on a Raspberry Pi for CPU and GPU utilization. Achievement: achieved a precision of 99.3%.
[106] 1900 images (images); FL and PSO. Objective: integrating PSO with FL to optimize communication time; developing a CNN model incorporating FL and PSO to set basic parameters based on local client data, enhancing FL performance and reducing latency in disaster response. Achievement: achieved a prediction accuracy of 94.47%.
[107] * data obtained from Landsat-8 (satellite images); U-Net. Objective: introducing FU-NetCastV2; collecting historic GeoMac fire perimeters, elevation, and satellite maps; retrieving 24-hour weather data; implementing and optimizing U-Nets; and generating a burned-area map. Achievement: achieved an accuracy rate of 94.6% and an AUC score of 97.7%.
[108] 5060 images and 14,320 s of audio (images and audio); CNN. Objective: proposing a VSU prototype with embedded ML algorithms for timely forest fire detection, collecting and utilizing two datasets (audio and picture data) for training the ML algorithm. Achievement: achieved a 96.15% accuracy.
[109] 210 images (360-degree images); multi-scale vision transformer. Objective: introducing the FIRE-mDT model, combining ResNet-50 and a multi-scale deformable transformer for early fire detection, location, and propagation estimation; creating a dataset from real fire events in the Seich Sou Forest. Achievement: achieved an F-score of 91.6%.
Information 2024, 15, 538 10 of 32

Table 1. Cont.
Ref Dataset Data Type Method Objective Achievement
Proposing EdgeFireSmoke++, based on EdgeFireSmoke, using ANN in the first level and CNN in the
[110] 55,746 images Images ANN and CNN Achieved over 95% accuracy.
second level.
Proposing a two-step recognition method combining FireYOLO and ESRGAN Net. Using GhostNet
23,982 images Images FireYOLO and
[111] with dynamic convolution in FireYOLO’s backbone to eliminate redundant features. Enhance suspected Achieved a 94.22% average precision when implemented on embedded devices.
Real-ESRGAN
small fire images with Real-ESRGAN before re-identifying them with FireYOLO.
Proposing FFS-UNet, a spatio-temporal architecture combining a transformer with a modified
Vision transformers lightweight UNet. Extracting keyframe and reference frames using three encoder paths for feature Achieved a 95.1% F1-score and 86.8% IoU on the UAV-collected videos, as well as a 91.4% F1-score and
[112] 48 videos Videos
(ViTs) and CNNs fusion, and then using a transformer for deep temporal-feature extraction. Finally, segmenting the fire 84.8% IoU on the Corsican Fire dataset.
using shallow keyframe features with skip connections in the decoder path.
Proposing FireXnet, a lightweight model for wildfire detection that is suitable for resource-constrained
[113] 3800 images Images CNN devices. Incorporating SHAP to make the model’s decisions interpretable. Compare FireXnet’s Achieved an accuracy of 98.42%.
performance against five pre-trained models.
Utilizing four detection heads in FireDetn. Integrating transformer encoder blocks with multi-head
[114] 4674 images Images YOLOv5 attention. Fusing the spatial pyramid pooling fast structure in detecting multi-scale flame objects at a Achieved an AP50 of 82.6%.
lower computational cost.
2 active fire Temporal patterns Comparing various MODIS fire products with ground wildfire investigation records in southwest China
products and
[115] Satellite images and kernel density to identify differences in the spatio-temporal patterns of regional wildfires detected and exploring the Detected at least twice as many wildfire events as that in the ground records.
1 burned area
product estimation (KDE) influence of instantaneous and local environmental factors on MODIS wildfire detection probability.

* Information not available.



Representative Publications:
The annual citation count for all the papers listed in this category was calculated
and is illustrated in Figure 5. The paper entitled “A Forest Fire Detection System Based
on Ensemble Learning” was selected from this category as a representative publication,
published in 2021, due to its highest ACC score [80]. In this work, the authors developed
a forest fire detection system based on ensemble learning. First, two individual learners,
YOLOv5 and EfficientDet, were integrated to accomplish fire detection. Second, another
individual learner, EfficientNet, was introduced to learn global information and avoid
false positives. The dataset used contains 2976 forest fire images and 7605 non-fire images.
Sufficient training data enabled EfficientNet to discriminate well between fire objects and
fire-like objects, achieving 99.6% accuracy on 476 fire images and 99.7% accuracy on
676 fire-like images.
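The two-stage logic of such an ensemble, in which a detector proposes candidate regions and a second classifier re-scores them to reject fire-like objects, can be sketched as follows. This is illustrative only; `classify_crop` stands in for a trained classifier such as EfficientNet and is not code from [80]:

```python
def verify_detections(detections, classify_crop, threshold=0.5):
    """Keep only detector boxes that a second-stage classifier confirms.

    detections: list of (box, detector_score) pairs, box = (x1, y1, x2, y2).
    classify_crop: callable returning P(fire) for a box, standing in for a
    trained image classifier in the reviewed ensemble.
    """
    confirmed = []
    for box, det_score in detections:
        p_fire = classify_crop(box)
        # Require both stages to agree, which suppresses fire-like false
        # positives (sunsets, red foliage) that fool a detector alone.
        if det_score >= threshold and p_fire >= threshold:
            confirmed.append((box, det_score * p_fire))
    return confirmed
```

In practice the classifier would run on the image crop inside each box; here the box itself is passed for brevity.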

[Bar chart omitted: ACC per Ref. No (48-112), with Std(ACC) shown on a secondary axis.]
Figure 5. ACC and its standard deviation (- - -) for fire.

4.2. Smoke
Deep learning models learn to extract relevant features from input data automatically.
During training, the model can learn discriminative features from smoke images that are
independent of color. By focusing on shape, texture, and spatial patterns rather than color-
specific cues, the model becomes less sensitive to color variations and can detect smoke
effectively. Table 2 highlights the research focused on smoke detection.
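One common way to encourage such color-independent representations during training is color augmentation, for example randomly replacing an RGB image with its grayscale version so the network cannot rely on hue alone. The following NumPy sketch is illustrative and not taken from any of the cited works:

```python
import numpy as np

def random_grayscale(img, p=0.5, rng=None):
    """With probability p, replace an RGB image (H, W, 3) by its grayscale
    version replicated over the three channels, pushing the model to rely
    on shape and texture rather than color."""
    rng = rng or np.random.default_rng()
    if rng.random() < p:
        gray = img @ np.array([0.299, 0.587, 0.114])  # luminance weights
        img = np.repeat(gray[..., None], 3, axis=2)
    return img
```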

Table 2. List of the past works related to smoke detection.

Ref | Dataset | Data Type | Method | Objective | Achievement
[116] | 6 videos | Videos | Fusion deep network | Enhancing the detection accuracy of smoke objects through video sequences. | Achieved a 94.57% accuracy on smoke detection.
[117] | 2977 images | Images | GIS and augmented reality | Improving the detection range and the rate of correct detection and reducing false alarm rates. | Managed to reduce the false alarm rate to 0.001.
[118] | 6225 images | Images | Class activation map and ResNet-50 | Building a class activation map-based data augmentation system for smoke scene detection. | Achieved the best accuracy of 94.95%.
[119] | 90 videos | Videos | 3D convolution-based encoder/decoder network | Building a 3D convolution-based encoder–decoder network architecture for video semantic segmentation. | Achieved a 99.31% accuracy on wildfire smoke segmentation.
[120] | 90 videos | Videos | CNN | Building a 3D fully convolutional network for segmenting smoke regions. | Achieved a 0.7618 mAP on smoke detection.
[121] | 50,000 images | Images | CNN | Performing real-time forest smoke detection using hand-designed features and DL. | The detection model achieved 97.124% accuracy on the test set.
[122] | 38 smoke videos and 20 non-smoke videos | Videos | CNN | Detection of wildfire smoke based on Faster R-CNN and 3D CNN. | Achieved a 95.23% accuracy on smoke detection.
[123] | 22 videos | Videos | ViBe algorithm | Detecting forest fire smoke based on a visual smoke root and diffusion model. | Achieved an accuracy higher than 90% on smoke detection.
[124] | 37,712 images | Images | Stereo vision triangulation | Achieving wildfire smoke detection using stereo vision. | Obtained results with an over 0.95 TPR on smoke detection.
[125] | 11 videos | Videos | Saliency maps | Building a saliency-based method for early smoke detection through video sequences. | Achieved an average smoke segmentation precision of 93.0% and a precision as high as 99.0% for forest fires.
[126] | 3225 images | Images | TECNN | Classification of smoke-like scenes in remote sensing images. | Obtained a 98.39% accuracy on smoke classification.
[127] | 3645 images | Images | R-CNN | Detecting smoke columns that are visible below or above the horizon. | Produced an F1-score of 80%, a G-mean of 80%, and a detection rate of 90%.
[128] | 1073 videos | Videos | DETR | Developing an open-source transformer-supercharged benchmark for fine-grained wildfire smoke detection. | Detected 97.9% of the fires in the incipient stage and 80% within 5 min from the start.
[129] | 240 videos | Videos | CNN | Developing an intelligent smoke detection algorithm for wildfire monitoring cameras. | The overall fire risk of the test region was reduced to just 36.28% of its original value.
[130] | 460 custom images | Images | GLCM, LBP, and ANN | Achieving forest fire flame and smoke detection from UAV-captured images using fire-specific color features and multi-color space local binary patterns. | Achieved an F1-score of 90% for smoke detection.
[131] | 4595 images | Images | CNN | Detecting wildfire smoke images based on a densely dilated CNN. | Achieved a 99.2% accuracy on smoke detection.
[132] | 2000 images | Images | LSTM | Utilizing an enhanced bidirectional LSTM for early forest fire smoke recognition. | Obtained an accuracy of 97.8% on smoke detection.
[133] | 240 videos | Videos | HDLBP, CoLBP, and ELM | Achieving a lower rate of incorrect alarms by identifying the smoke and examining its distinctive texture attributes. | Results obtained with a 95% F1-score on fire detection.
[134] | 500 images | Images | Multi-spectral fusion algorithm | Developing a wildfire image dataset and performing analysis on that dataset. | A tool was built for researchers and professionals through which they can access the dataset and also contribute.
[135] | 6500 images | Images | YOLOv7 | Collecting forest fire smoke photos, utilizing YOLOv7, incorporating the CBAM attention mechanism, and applying SPPF+ and BiFPN modules to focus on small-scale forest fire smoke. | Achieved an AP50 of 86.4% and an APL of 91.5%.
[136] | 2554 images | Images | YOLOv5 and transfer learning | Improving YOLOv5s using K-means++ for anchor box clustering, adding a prediction head for small-scale smoke detection, replacing the backbone with PConv for efficiency, and incorporating coordinate attention for region focus. | Achieved an AP50 of 96% and an AP50:95 of 57.3%.
[137] | 10,250 images | Images | Deformable DETR | Proposing an improved deformable DETR model with MCCL and DPPM modules to enhance low-contrast smoke detection. Implementing an iterative bounding box combination method for precise localization and bounding of semi-transparent smoke. | Achieved an improvement in mAP (mean average precision) of 4.2% and in APS (AP for small objects) of 5.1%.
[138] | 6000 images | Images | YOLOv8 | Incorporating WIoUv3 into the bounding box regression loss, integrating BiFormer into the backbone network, and using GSConv as a substitute for conventional convolution within the neck layer. | Achieved an average precision (AP) of 79.4%, an average precision small (APS) of 71.3%, and an average precision large (APL) of 92.6%.

Table 2. Cont.

Ref | Dataset | Data Type | Method | Objective | Achievement
[139] | 5311 images | Images | YOLOv7 | Proposing a lightweight model. Using GSConv in the neck layer, embedding multilayer coordinate attention in the backbone, utilizing the CARAFE up-sampling operator, and applying the SIoU loss function. | Achieved an accuracy of 80.2%.
[140] | 1664 images | Images | Transformer | Proposing the FireFormer model. Using a shifted window self-attention module to extract patch similarities in images. Applying GradCAM to analyze and visualize the contribution of image patches. | Achieved an OA, Recall, and F1-score of 82.21%, 86.635%, and 74.68%, respectively.
[141] | 35,328 images | Images | EfficientDet | Detecting distant smoke plumes several kilometers away using EfficientDet. | Achieved an 80.4% true detection rate and a 1.13% false-positive rate.
[142] | 43,060 images | Images | LMINet | Proposing a deformable convolution module. Introducing a multi-direction feature interaction module. Implementing an adversarial learning-based loss term. | Achieved an mIoU and pixel-level F-measure of 79.31% and 84.61%, respectively.
[143] | 77,910 images | Images | PSNet | Utilizing non-binary pixel-level supervision to guide model training. Introducing DDAM to distinguish smoke and smoke-like targets, AFSM to enhance smoke-relevant features, and MCAM for enhanced feature representation. | Achieved a detection rate of 96.95%.
[144] | 614 images | Images | CNN | Optimizing a CNN model. Training MobileNet to classify satellite images using a cloud-based development studio and transfer learning. Assessing the effects of input image resolution, depth multiplier, dense layer neurons, and dropout rate. | Achieved a 95% accuracy.
[145] | 6225 images | Satellite images | CNN | Introducing SmokeNet, a new model using spatial and channel-wise attention for smoke scene detection, including a unique bottleneck gating mechanism for spatial attention. | Achieved a 92.75% accuracy.
[146] | 975 images | Satellite images | FCN | Presenting a deep FCN for near-real-time prediction of fire smoke in satellite imagery. | Achieved a 99.5% classification accuracy.
[147] | 24,217 images | Images | Deep multi-scale CNN | Designing a multi-scale basic block with parallel convolutional layers of different kernel sizes and merging outputs via addition to reduce dimension. Proposing a deep multi-scale CNN using a cascade of these basic blocks. | Achieved a 95% accuracy.
[148] | 20,000 images | Images | DCNN | Presenting a smoke detection method using a dual DCNN. The first framework extracts image-based features like smoke color, texture, and edge detection. The second framework extracts motion-based features, such as moving, growing, and rising smoke regions. | Achieved an average accuracy of 97.49%.

Representative Publications:
The ACC score for all the publications falling in this category was determined and
is illustrated in Figure 6. Based on the plot, the two best performers were chosen from
this category. A notable publication [143], titled ‘Learning Discriminative Feature Repre-
sentation with Pixel-Level Supervision for Forest Smoke Recognition’, focuses on forest
smoke recognition using a pixel-level supervision neural network. The research
employed non-binary pixel-level supervision to enhance model training, introducing a
dataset of 77,910 images. To improve the accuracy of smoke detection, the study integrated
the Detail-Difference-Aware Module to differentiate between smoke and smoke-like targets,
the Attention-based Feature Separation Module to amplify smoke-relevant features, and the
Multi-Connection Aggregation Method to enhance feature representation. The proposed
model achieved a remarkable detection rate of 96.95%.
The second representative publication, titled ‘SmokeNet: Satellite Smoke Scene Detec-
tion Using Convolutional Neural Network with Spatial and Channel-Wise Attention’ [145]
and published in 2019, aimed to detect wildfire smoke using a large-scale satellite imagery
dataset. It proposed a new CNN model, SmokeNet, which incorporates spatial and channel-
wise attention for enhanced feature representation. The USTC_SmokeRS dataset, consisting
of 6225 images across six classes (cloud, dust, haze, land, seaside, and smoke), served as the
benchmark. The SmokeNet model achieved the best accuracy rate of 92.75% and a Kappa
coefficient of 0.9130, outperforming other state-of-the-art models.
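Channel-wise attention of the kind SmokeNet employs reweights feature channels using gates computed from globally pooled statistics. The following squeeze-and-excitation-style sketch illustrates the mechanism in NumPy; it is a generic formulation, not SmokeNet's exact architecture, and the weight matrices are supplied rather than learned:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map.
    w1: (C//r, C) and w2: (C, C//r) play the role of learned weights."""
    squeeze = feat.mean(axis=(1, 2))                      # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # per-channel gates in (0, 1)
    return feat * excite[:, None, None]                   # reweight each channel
```

Spatial attention works analogously but produces an (H, W) gate map instead of a per-channel vector.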

[Bar chart omitted: ACC per Ref. No (116-148), with Std(ACC) shown on a secondary axis.]
Figure 6. ACC and its standard deviation (- - -) for smoke.

4.3. Fire and Flame


Deep learning models can integrate multiple data sources to improve fire and flame
detection. In addition to visual data, other sources such as thermal imaging, infrared
sensors, or gas sensors can be used to provide complementary information. By fusing these
multi-modal inputs, the model can enhance its ability to detect fire and flame accurately,
even in challenging conditions. The existing work related to fire and flame detection is
presented in Table 3.
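A simple way to realize such fusion is late fusion, where each modality produces its own fire probability and the scores are combined with confidence weights; an unavailable sensor can simply be skipped. The sketch below is illustrative and not drawn from any cited system:

```python
def late_fusion(scores, weights):
    """Combine per-modality fire probabilities into one decision score.

    scores: dict of modality name -> P(fire), or None when the sensor is
    unavailable (e.g., {"rgb": 0.9, "thermal": 0.8, "gas": None}).
    weights: dict of modality name -> non-negative confidence weight.
    """
    num = den = 0.0
    for name, p in scores.items():
        if p is None:          # sensor dropped out: skip instead of failing
            continue
        w = weights[name]
        num += w * p
        den += w
    return num / den           # weighted average over available modalities
```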

Table 3. List of the past works related to fire and flame detection.

Ref | Dataset | Data Type | Method | Objective | Achievement
[149] | 338 images | Images | FSCN and ISSA | Improving the accuracy of fire recognition with a fast stochastic configuration network. | Achieved a 94.87% accuracy on fire detection.
[150] | 5 videos | Videos | Unsupervised method | Achieving the early detection of wildfires and flames from still images by a new unsupervised method based on the RGB color space. | Achieved a 93% accuracy on flame detection.
[151] | 14 videos | Videos | K-SVD | Detecting wildfire flame using videos from pixel to semantic levels. | Obtained a 94.1% accuracy on flame detection.
[152] | 85 videos | Videos | ELM | Performing a static and dynamic texture analysis of flame in forest fire detection. | Attained an average detection rate of 95.65%.
[153] | 101 images | Images | SVM | Devising a new fire detection and identification method using a visual attention mechanism. | Accomplished an accuracy of 82% for flame recognition.
[154] | 51,998 images and 6 videos | Images and videos | YOLOv5n | Applying YOLOv5 to detect forest fires from images captured by UAV and analyzing the flame detection performance of YOLOv5. | Achieved a detection speed of 1.4 ms/frame and an average accuracy of 91.4%.
[155] | 1900 images | Images | CNN | Proposing wildfire image classification with Reduce-VGGnet and region detection using an optimized CNN, combining spatial and temporal features. | Achieved an accuracy of 97.35%.
[156] | 2603 images | Images | ADE-Net | Introducing a dual-encoding path with semantic and spatial units, integrating AFM, using an MAF module, proposing an AGE module, and finally employing a GCF module. | Achieved a 90.69% and 80.25% Dice coefficient, as well as a 91.42% and 83.80% mIOU, on the FLAME and Fire_Seg datasets, respectively.
[157] | 20 videos | Videos | Optic flow | Proposing the following four-step algorithm: preprocessing input data, detecting flame regions using the HSV color space, modeling motion information with optimal mass transport optical flow vectors, and measuring the area of detected regions. | Achieved a 96.6% accuracy.
[158] | 1000 images | Images | Encoder–decoder architecture | Proposing FlameTransNet. Implementing an encoder–decoder architecture. Selecting MobileNetV2 for the encoder and DeepLabV3+ for the decoder. | Achieved an IoU, Precision, and Recall of 83.72%, 91.88%, and 90.41%, respectively.
[159] | Live data from cameras, thermopile-type sensors, and anemometers | Images, infrared, and ultrasonic | Segmentation and reconstruction | Developing an image-based diagnostic system to enhance the understanding of wildfire spread and providing tools for fire management through a 3D reconstruction of turbulent flames. | Demonstrated that the flame volume measured through image processing can reliably substitute fire thermal property measurements.
[160] | * | Images | SVM | Proposing a fire image recognition method by integrating color space information into the SIFT algorithm. Extracting fire feature descriptors using SIFT, filtering noisy features using a fire color space, and transforming descriptors into feature vectors. Using an Incremental Vector SVM classifier to develop the recognition model. | Achieved a 97.16% testing accuracy.
[161] | 37 videos | Videos | SVM | Proposing a fire-flame detection model by defining candidate fire regions through background subtraction and color analysis. Modeling fire behavior using spatio-temporal features and dynamic texture analysis. Classifying candidate regions using a two-class SVM classifier. | Achieved detection rates of approximately 99%.

* Information not available.

Representative Publications:
Based on the ACC graph for this category, shown in Figure 7, only the best performer
was chosen as a representative publication [160]. Entitled ‘The fire recognition algorithm
using dynamic feature fusion and IV-SVM classifier’ and published in 2019, this work
aimed to identify flame areas using a flame recognition model based on an Incremental
Vector SVM classifier. It introduces flame characteristics in color space and employs
dynamic feature fusion to remove image noise from SIFT features, enhancing feature
extraction accuracy. The SIFT feature extraction method incorporates flame-specific color
spatial characteristics, achieving a testing accuracy of 97.16%.
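The idea of filtering noisy features with a fire color space can be illustrated with a simple rule-based mask. The heuristic below (R > G > B with a sufficiently bright red channel) is a generic fire-color rule commonly used in the literature, not the exact color model of [160]:

```python
import numpy as np

def fire_color_mask(img, r_thresh=140):
    """Rule-of-thumb fire-color mask for an RGB uint8 image (H, W, 3):
    flame pixels tend to satisfy R > G > B with a bright red channel."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > r_thresh) & (r > g) & (g > b)

def filter_keypoints(keypoints, mask):
    """Keep only (row, col) keypoints that fall on fire-colored pixels,
    discarding descriptors extracted from non-flame regions."""
    return [(y, x) for y, x in keypoints if mask[y, x]]
```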

[Bar chart omitted: ACC per Ref. No (149-161), with Std(ACC) shown on a secondary axis.]
Figure 7. ACC and its standard deviation (- - -) for fire and flame.

4.4. Fire and Smoke


Deep learning models excel at learning hierarchical representations of data. They can
learn features at different levels of abstraction, enabling them to capture both local and
global patterns associated with fire and smoke. This enhances their ability to detect fire and
smoke under various environmental conditions and appearances. A total of twenty-three
publications have been identified in this category, as listed in Table 4.
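The growing context captured by deeper layers can be quantified by the receptive field of stacked convolutions, which expands with each layer according to a standard recurrence. A small sketch (illustrative; the recurrence is the textbook formula, not tied to any cited model):

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of one output unit of a CNN, given
    a list of (kernel_size, stride) per layer, via the standard recurrence:
    rf += (k - 1) * jump; jump *= s."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the window seen by a unit
        jump *= s             # striding spreads subsequent growth further
    return rf
```

Three stacked 3x3 stride-1 convolutions, for instance, already see a 7x7 input patch, which is how local edge-like features compose into global fire and smoke patterns.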

Table 4. List of the past works related to fire and smoke detection.

Ref | Dataset | Data Type | Method | Objective | Achievement
[162] | 17,840 images | Images | CNN | Detecting forest fire smoke in real time using deep convolutional neural networks. | Achieved an accuracy of 95.7% on real-time forest fire smoke detection.
[163] | 3000 images | Images | R-CNN | Classifying smoke columns with object detection and a DL-based approach. | Dropped the FPR to 88.7% (from 93.0%).
[164] | 35,328 images | Images | Transfer learning | Improving fire and smoke recognition in still images by utilizing advanced convolutional techniques to balance accuracy and complexity. | Obtained an AUROC value of 0.949 on the test set, corresponding to a TPR and FPR of 85.3% and 3.1%, respectively.
[165] | 1900 images | Images | GA-CNN | Detecting fire occurrences with high accuracy in the environment. | Achieved a 95% accuracy and 92% TPR.
[166] | 3630 images | Images | CNN | Segmenting fire and smoke regions in high-resolution images based on a multi-resolution iterative quad-tree search algorithm. | Obtained a 95.9% accuracy on fire and smoke segmentation.
[167] | 4326 images | Images | CNN | Building an adaptive linear feature-reuse network for rapid forest fire smoke detection. | Achieved an 87.26% mAP50 on fire and smoke detection.
[168] | 15,909 images | Images | MVMNet | Detecting fire based on a value conversion attention mechanism module. | Obtained an mAP50 of 88.05% on fire detection.
[169] | 14,402 images | Videos | CNN | Wildfire detection through RGB images by the CNN model. | Achieved an accuracy of 98.97% and an F1-score of 95.77% on fire and smoke detection, respectively.
[170] | 7652 images | Images | R-CNN | Forest fire and smoke recognition based on an anchor box adaptive generation method. | Achieved an accuracy rate of 96.72% and an IOU of 78.96%.
[171] | 1323 fire or smoke images and 3533 non-fire images | Images | R-CNN | Performing collaborative region detection and developing a grading framework for forest fire smoke using weakly supervised fine segmentation and a lightweight Faster R-CNN. | Achieved a 99.6% detection accuracy and 70.2% segmentation accuracy.
[172] | 400,000 images | Images | BNN and RCNN | Constructing a model for early fire detection and damage area estimation for response systems. | Achieved an mAP of 27.9 for smoke and fire.
[173] | 23,500 images | Images | CNN and RNN | Detecting forest fire using a hybrid DL model. | Accomplished fire detection with 99.62% accuracy.
[174] | 16,140 images | Images | CNN | Enhancing fire and smoke detection in still images through advanced convolutional methods to optimize accuracy and complexity. | Achieved 84.36% and 81.53% mean test accuracy for the fire and fire-and-smoke recognition tasks, respectively.
[175] | 14 fire and 17 non-fire videos | Videos | R-CNN | Reducing FP detection by a smoke detection algorithm. | Attained a 99.9% accuracy in performing smoke and fire detection.
[176] | 49 large images | Images | CNN | Performing active fire mapping using a CNN. | Achieved a 0.84 F1-score on fire detection.
[177] | 5682 images | Images | Wavelet decomposition | Detecting forest fire smoke using videos in a wavelet domain. | Achieved a 94.04% accuracy on fire detection.
[178] | 1844 images | Images | MobileNetV3 | Building a lightweight deep learning fire recognition algorithm that can be employed on embedded hardware. | Experimental results showed a significant reduction in the number of model parameters and inference time when compared to YOLOv4.
[179] | 999 images | Satellite images | Transfer learning | Using learning without forgetting (LwF) to train the network on a new task while keeping the network's preexisting abilities intact. | An accuracy of 91.41% was achieved by Xception with LwF on the BowFire dataset and 96.89% on the original dataset.
[180] | * | Images and videos | GS-YOLOv5 | Replacing the convolutional blocks in Super-SPPF with GhostConv and using the C3Ghost module instead of the C3 module in YOLOv5 to increase speed and reduce computational complexity. | Achieved a detection accuracy of 95.9%.
[181] | 3000 images | Images | YOLOv6 | Enhancing model performance by integrating the Convolutional Block Attention Module (CBAM), employing the CIoU loss function, and utilizing AMP automatic mixed-precision training. | Achieved an mAP of 0.619.
[182] | 450 images | Images | YOLOv5s | Integrating CA into YOLOv5, replacing YOLOv5's SPPF module with an RFB module, and enhancing the neck structure by upgrading PANet to Bi-FPN. | Improved the forest fire and smoke detection model in terms of mAP@0.5 by 5.1% compared with YOLOv5.
[183] | 18,217 images | Images | YOLOv4 | Proposing AERNet, a real-time fire detection network optimized for both accuracy and speed. Utilizing SE-GhostNet for lightweight feature extraction and an MSD module for enhanced feature emphasis. Employing decoupled heads for class and location prediction. | Achieved a 69.42% mAP50, 18.75 ms inference time, and 48 fps.
[184] | 39,375 images | Images | Ensemble CNN | Using an ensemble of XceptionNet, MobileNetV2, and ResNet-50 CNN architectures for early fire prediction. Implementing fire and smoke detection using the YOLO architecture, known for low latency and high fps. | The smoke detection model achieved an mAP@0.5 of 0.85, while the combined model achieved an mAP@0.5 of 0.76.

* Information not available.

Representative Publications:
Based on the ACC graph shown in Figure 8, the top performer in terms of
ACC in this category was the paper titled ’Forest fire and smoke detection using deep
learning-based learning without forgetting’ [179]. The authors utilized transfer learning to
enhance the analysis of forest smoke in satellite images. Their study introduced a dataset
of 999 satellite images and employed learning without forgetting (LwF) to train the network
on a new task while preserving its pre-existing capabilities. Using the Xception model
with LwF, the research achieved an accuracy of 91.41% on the BowFire dataset and 96.89%
on the original dataset, demonstrating significant improvements in forest fire and smoke
detection accuracy.
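The LwF objective can be summarized as a cross-entropy loss on the new task plus a distillation term that keeps the network's outputs on the old task close to those recorded before fine-tuning. The NumPy sketch below is a generic formulation of this idea, not the exact loss of [179]:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T   # T > 1 softens the distribution
    z -= z.max()                         # numerical stability
    e = np.exp(z)
    return e / e.sum()

def lwf_loss(new_logits, new_label, old_logits, recorded_old_logits,
             T=2.0, lam=1.0):
    """Learning-without-forgetting objective (illustrative):
    cross-entropy on the new task + lam * distillation on the old task."""
    ce = -np.log(softmax(new_logits)[new_label] + 1e-12)
    p_old = softmax(recorded_old_logits, T)   # frozen "teacher" outputs
    q_old = softmax(old_logits, T)            # current outputs on old head
    distill = -(p_old * np.log(q_old + 1e-12)).sum()
    return ce + lam * distill
```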
Based on the plot, Ref. [168] was the second-best performer, with an ACC score of
almost thirty-five. This publication, entitled ‘Fast forest fire smoke detection using
MVMNet’, was published in 2022. The paper proposed multi-oriented detection based on a
value conversion-attention mechanism module and mixed-NMS for smoke detection. The
authors compiled the forest fire multi-oriented detection dataset, which includes 15,909 images.
The mAP and mAP50 achieved were 78.92% and 88.05%, respectively.

[Bar chart omitted: ACC per Ref. No (162-184), with Std(ACC) shown on a secondary axis.]
Figure 8. ACC and its standard deviation (- - -) for fire and smoke.

4.5. Applications of Robots in Fire Detection and Extinguishing


Robots equipped with cameras or vision sensors can capture images or video footage
of their surroundings. Deep learning models trained on fire datasets can analyze this visual
input, enabling the robot to detect the presence of fire. CNNs are commonly used for
image-based fire detection in robot systems.
Deep learning models can be employed to enhance the robot’s decision-making
capabilities during fire extinguishing operations. By training the model on datasets that
include fire dynamics, robot behavior, and firefighting strategies, the robot can learn to
make informed decisions on approaches such as selecting the appropriate firefighting
equipment, assessing the fire’s intensity, or planning extinguishing maneuvers. There exist
very few examples where robots are utilized in actual fields for forest fire detection. To
highlight the potential of robots in fire detection and extinguishing, indoor and outdoor
scenarios, in addition to wildfires, are also included. Past research efforts related to fire
detection and extinguishing with the help of robots are listed in Table 5.
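A minimal perception-and-decision loop for such a robot might debounce detections over several consecutive frames before committing to an action. The sketch below is purely illustrative, with `detect_fire` standing in for a trained CNN and the action names being hypothetical:

```python
def patrol_step(frame, detect_fire, state, confirm_frames=3):
    """One control-loop step for a (hypothetical) firefighting robot.

    detect_fire: callable returning (p_fire, intensity) for a camera frame,
    standing in for a CNN detector. Detections are debounced over several
    consecutive frames before the robot commits to an action.
    """
    p_fire, intensity = detect_fire(frame)
    state["hits"] = state.get("hits", 0) + 1 if p_fire >= 0.5 else 0
    if state["hits"] < confirm_frames:
        return "patrol"                  # not yet confirmed: keep moving
    if intensity < 0.3:
        return "extinguish_onboard"      # small fire: use onboard suppressant
    return "alert_and_retreat"           # large fire: call for human crews
```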

Table 5. List of the past works related to the utilization of robots in fire detection and extinguishing.

Ref | Environment | Robot Type | Objectives | Achievements
[185] | Outdoor | UGV | Building a four-drive articulated tracked fire extinguishing robot that can flexibly perform fire detection and fire extinguishing. | Designed a firefighting robot that can be operated remotely to control its movements and can spray through its cannon.
[186] | Indoor/outdoor | UGV | Building a firefighter intervention architecture that consists of several sensing devices, a navigation platform (an autonomous ground wheeled robot), and a communication/localization network. | Achieved an accuracy of 73% and a precision of 99% in detecting fire points.
[187] | Indoor/outdoor | UGV | Building a smart sensor network-based autonomous fire extinguishing robot using IoT. | Successfully demonstrated the robot working on nine different occasions.
[188] | Indoor/outdoor | UGV | Developing a small wheel-foot hybrid firefighting robot for infrared visual fire recognition. | Achieved an average recognition rate of 97.8% with the help of a flame recognition algorithm.
[189] | Buildings | UGV | Building an autonomous firefighter robot with a localized fire extinguisher. | The robot, which is equipped with six flame sensors, can detect flame instantly and can extinguish fire with the help of sand.
[190] | Outdoor | UGV | Building an autonomous system for wildfire and forest fire early detection and control. | The autonomous firefighting robot, equipped with a far-infrared sensor and turret, can detect and extinguish small fires within range.
[191] | Indoor/outdoor | UGV | Performing fire extinguishing without the need for firefighters. | Extinguished fire at a maximum distance of 40 cm from the fire.
[192] | Forest | UAV | Building a wildfire detection solution based on unmanned aerial vehicle-assisted Internet of Things (UAV-IoT) networks. | The rate of detecting a 2.5 km² fire was more than 90%.
[193] | Forest | UAV | Detecting forest fires through the use of a new color index. | A detection precision of 96.82% was achieved.
[194] | Outdoor | UAV | Exploring the potential of DL models, such as YOLO and R-CNN, for forest fire detection using drones. | An mAP@0.5 of 90.57% and 89.45% was achieved by Faster R-CNN and YOLOv8n, respectively.
[195] | Outdoor | UAV | Proposing a low-cost UAV with extended MobileNet deep learning for classifying forest fires. Sharing fire detection and GPS location with state forest departments for a timely response. | Achieved an accuracy of 97.26%.
[196] | Outdoor | UAV | Proposing a novel wildfire identification framework that adaptively learns modality-specific and shared features. Utilizing parallel encoders to extract multiscale RGB and TIR features, integrating them into a fusion feature layer. | The proposed method achieved an average improvement of 6.41% and 3.39% in IoU and F1-score, respectively, compared to the second-best RGB-T semantic segmentation method.
[197] | Outdoor | UAV | Proposing a two-stage framework for fire detection and geo-localization. Compiling a large dataset from several sources to capture the various visual contexts related to fire scenes. Investigating YOLO models. | Achieved an mAP50 of 0.71 and an F1-score of 0.68.
[198] | Outdoor | UAV | Introducing the UAV platform “WILD HOPPER,” a 600-liter capacity system designed specifically for forest firefighting. | Achieved a payload capacity that addresses the common limitations of electrically powered drones, which are typically restricted to fire monitoring due to insufficient lifting power.
[199] | Outdoor | UAV | Exploring the integration of fire extinguishing balls with drone and remote-sensing technologies as a complementary system to traditional firefighting methods. | Controlled experiments were conducted to assess the effectiveness and efficiency of fire extinguishing balls.
[200] | Outdoor | UAV | Promoting the use of UAVs in firefighting by introducing a metal alloy rotary-wing UAV equipped with a payload drop mechanism for delivering fire-extinguishing balls to inaccessible areas. | Examined the potential of UAVs equipped with a payload drop mechanism for fire-fighting operations.
[201] | Outdoor | UAV | Proposing a concept of deploying drone swarms in fire prevention, surveillance, and extinguishing tasks. | Developed a concept for utilizing drone swarms in firefighting, addressing issues reported by firefighters and enhancing both operational efficiency and safety.
[202] | Outdoor | UAV | Improving the Near-Field Computer Vision system for an intelligent fire robot to accurately predict the falling position of jet trajectories during fire extinguishing. | The system for intelligent fire extinguishing achieved a reduction in the average prediction error from 1.36 m to 0.1 m and a reduction in error variance from 1.58 m to 0.13 m in terms of predicting jet-trajectory falling positions.
Information 2024, 15, 538 20 of 32

Representative Publications:
The ACC for papers in this category is illustrated in Figure 9. Two papers were
chosen as representative publications from this category. One of the selected papers is
entitled ‘The Role of UAV-IoT Networks in Future Wildfire Detection’. In this paper, a
novel wildfire detection solution based on unmanned aerial vehicle-assisted Internet of
Things (UAV-IoT) networks was proposed [192]. The main objectives were to study the
performance and reliability of the UAV-IoT networks for wildfire detection and to present a
guideline to optimize the UAV-IoT network to improve fire detection probability under
limited system cost budgets. Discrete-time Markov chain analysis was utilized to compute
the fire detection and false-alarm probabilities. Numerical results suggested that, given
enough system cost, UAV-IoT-based fire detection can offer a faster and more reliable
wildfire detection solution than state-of-the-art satellite imaging techniques.
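The discrete-time Markov chain analysis in [192] propagates a state distribution slot by slot to obtain detection probabilities. The paper's actual state space and transition probabilities are not reproduced here; the following is only a minimal sketch of the mechanics with a hypothetical three-state chain (undetected, detected, missed):

```python
# Illustrative DTMC sketch of the kind of analysis used in [192].
# All states and transition probabilities below are hypothetical.

# States: 0 = fire burning, undetected; 1 = detected (absorbing);
#         2 = missed / fire no longer detectable (absorbing).
P = [
    [0.90, 0.08, 0.02],  # per monitoring slot, from "undetected"
    [0.00, 1.00, 0.00],  # "detected" is absorbing
    [0.00, 0.00, 1.00],  # "missed" is absorbing
]

def step(dist, P):
    """One DTMC transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]  # the fire starts undetected
for _ in range(20):     # 20 monitoring slots
    dist = step(dist, P)

print(f"P(detected within 20 slots) = {dist[1]:.3f}")
```

With such a chain, the trade-off studied in [192] corresponds to how the per-slot detection probability (here 0.08) changes with the number of IoT devices and UAV patrol frequency affordable under a given cost budget.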
The second paper that was chosen is titled ‘A Survey on Robotic Technologies for Forest
Firefighting: Applying Drone Swarms to Improve Firefighters’ Efficiency and Safety’ [201].
In this paper, a concept for deploying drone swarms in fire prevention, surveillance, and
extinguishing tasks was proposed. The objectives included evaluating the effectiveness
of drone swarms in enhancing operational efficiency and safety in firefighting missions,
as well as in addressing the challenges reported by firefighters. The system utilizes a
fleet of homogeneous quad-copters equipped for tasks such as surveillance, mapping, and
monitoring. Moreover, the paper discussed the potential of this drone swarm system to
improve firefighting operations and outlined challenges related to scalability, operator
training, and drone autonomy.

[Figure 9: bar chart of ACC per reference (Ref. No 185–202 on the x-axis), with ACC on the left axis (0–20) and Std(ACC) on the right axis (6.4–7.0).]
Figure 9. ACC and its standard deviation (- - -) for applications of robots in fire detection and extinguishing.

5. Discussion
Fire, smoke, and flame detection and their extinguishing are considered challenging
problems due to the complex behavior and dynamics of fire, which makes them difficult to
predict and control. Based on the literature, we identified the following important factors.

5.1. Variability in Fire, Smoke, and Flame Types and Appearances


In our analysis, almost all articles were found to have utilized modern resources and
technologies to make the proposed approaches as effective as possible. We found several
articles in the literature that focused on handling variation in type, color, size, and intensity
(Table 6).

Table 6. Methods of handling variations in fire, flame, and smoke.

Nature Methods
Infrared [57,188], convex hulls [86], deep learning [67,76,83,94,175], color probabilities and motion features [84],
Fire multi-task learning [66], ensemble learning [73], semantic [85], optimization [165], Markov chain [192], support vector
machine [53,59], visible infrared imaging [60], visible-NIR [159]
Deep learning [49,94], support vector machine [160], spatio-temporal features and SVM [161], infrared [190],
Flame
visible-NIR [159], spatio-temporal features and deep learning [175]
Smoke Deep learning [147,148,172], stereo camera [124], transformer [128]

Our analysis found that forest fire detection and extinguishing systems underscore the
significant advancements made in this field, particularly in leveraging modern resources
and technologies such as deep neural networks (DNNs). These technologies have proven
essential in addressing the variability in fire, smoke, and flame types; appearances; and
intensities, enabling more accurate detection and response.

5.2. Response Time


The ability to detect fires early is crucial for prompt intervention and minimizing
potential damage. Many studies have emphasized early detection, but there is often a lack
of concrete evidence regarding the computational efficiency and real-world effectiveness
of these methods, particularly in forest fire scenarios. A common issue is the lack of
practical testing and transparency. For instance, [62] tested a GMM to detect the smoke
signatures of plumes, achieving a processing rate of 18–20 fps, but they did not test it in
real forest fire scenarios, limiting practical evidence. Similarly, [78] conducted tests with
a controlled small fire but did not provide time metrics for real-time applicability. The
authors in [164] utilized a dataset collected over 274 days from nine real surveillance
cameras, mentioning “early detection” without specific metrics, making it difficult to assess
practical effectiveness. In [78], the authors claimed to detect 78% of wildfires earlier
than the VIIRS active fire product, but they did not include explicit time measurements,
hindering a thorough evaluation of its early-detection capabilities.
Some studies provided more concrete data on the speed and efficiency of their de-
tection methods. For example, [73] used aerial image analysis with ensemble learning
to achieve an inference time of 0.018 s, showcasing rapid detection potential. The
multi-oriented detection method in [168] ran at 122 fps versus 156 fps for YOLOv5,
though with a lower mean average precision (mAP). Another study
used a dataset of 1135 images, reporting an inference time of 2 s for forest fire segmentation
using vision transformers [70]. The deep neural network-based approach (AddNet) saved
12.4% time compared to a regular CNN, and it was tested on a dataset of 4000 images [81].
The performance of EfficientDet, YOLOv3, SSD, and Faster R-CNN was evaluated on
a dataset of 17,840 images, with YOLOv3 being the fastest at 27 fps [162]. The method
in [174], evaluated with a dataset of 16,140 images, achieved a processing time per image
of 0.42 s, which was claimed to be four times faster than the compared models.
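Figures such as “0.018 s inference” or “27 fps” in the studies above all derive from the same simple measurement: wall-clock time over a batch of frames divided by the frame count. A minimal sketch, with a stand-in dummy_detector in place of a real model's forward pass:

```python
import time

def dummy_detector(frame):
    # Placeholder: a real detector would run model inference here.
    return sum(frame) % 2 == 0  # pretend "fire / no fire" decision

frames = [list(range(64)) for _ in range(200)]  # synthetic frames

t0 = time.perf_counter()
results = [dummy_detector(f) for f in frames]
elapsed = time.perf_counter() - t0

latency = elapsed / len(frames)  # seconds per frame
fps = len(frames) / elapsed      # frames per second
print(f"{latency * 1e3:.3f} ms/frame, {fps:.1f} fps")
```

Reporting both the per-frame latency and the hardware it was measured on, as some of the reviewed papers do, is what makes such speed claims comparable across studies.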
Although “early detection” is a frequently used term, specific, quantifiable metrics to
support these claims are often lacking. The reviewed studies highlight various methods
and technologies, but the need for comprehensive, real-world testing and transparent
reporting remains.

5.3. Environmental Context and Adaptability


The effectiveness of fire detection systems under various environmental conditions is
critical for their accuracy and reliability. Environmental factors such as weather, terrain,
and other influences can significantly impact performance, leading to false positives or
missed detections.
Environmental factors like cloud cover and weather conditions pose significant chal-
lenges for fire detection systems. For example, [75] achieved a 92% detection rate in clear
weather but only 56% in cloudy conditions using multi-sensor satellite imagery from
Sentinel-1 and Sentinel-2. Similarly, [78] utilized geostationary weather satellite data and
proposed max aggregation to reduce cloud and smoke interference, enhancing detection
accuracy. Not all studies addressed varying weather conditions comprehensively. Ref. [150]
used an unsupervised method without specific solutions for different forecast conditions,
demonstrating a lack of robustness in dynamic environments. Additionally, [115] high-
lighted that wildfire detection probability by MODIS is significantly influenced by factors
such as daily relative humidity, wind speed, and altitude, underscoring the need for
adaptable detection systems.
False positives are a critical issue in fire detection systems as they can lead to unneces-
sary alarms and resource deployment. Various strategies have been employed to mitigate
this issue. For instance, [72] proposed dividing detected regions into blocks and using
multidimensional texture features with a clustering approach to filter out false positives
accurately. This method focuses on improving the specificity of the detection system. Other
approaches include threshold optimization, as seen in [57], where fires with more than a
30% confidence level were selected to reduce false alarms in the MODIS14 dataset. Ref. [62]
attempted to discriminate between smoke, fog, and clouds by converting the RGB color
space to hue, saturation, and luminance, though the study lacked a thorough evaluation
and comparison of results.
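The color-space step described for [62] can be illustrated with Python's standard colorsys module: smoke tends to appear as low-saturation, mid-luminance pixels. The threshold values below are illustrative assumptions for the sketch, not those used in the paper:

```python
import colorsys

def looks_like_smoke(r, g, b, sat_max=0.15, lum_min=0.35, lum_max=0.85):
    """RGB in [0, 255]; True for low-saturation, mid-luminance pixels.

    Thresholds are hypothetical, chosen only for illustration.
    """
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return s <= sat_max and lum_min <= l <= lum_max

print(looks_like_smoke(150, 150, 155))  # greyish pixel -> True
print(looks_like_smoke(30, 120, 200))   # saturated blue sky -> False
```

As the discussion above notes, such fixed thresholds cannot by themselves separate smoke from fog or clouds, which is why later work combines them with motion, texture, or learned features.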
Combining traditional and deep learning methods has shown promise in improv-
ing detection accuracy. Ref. [121] integrated a hand-designed smoke detection model
with a deep learning model, successfully reducing the false negative and false positive
rates, thereby enhancing smoke recognition accuracy. The authors in [147] addressed
the challenge of non-smoke images containing features similar to smoke, such as colors,
shapes, and textures, by proposing multiscale convolutional layers for scale-invariant
smoke recognition.
Detection in fog or dust conditions presents additional challenges. The authors in [151]
compared their approach with other methods, including SVM, Bayes classifier, fuzzy c-
means, and Back Propagation Neural Network, and they demonstrated the lowest false
alarm rate in wildfire smoke detection under heavy fog. Further advancements include the
use of quasi-dynamic features and dual tree-complex wavelet transform with elastic net
processing, as proposed by [177], to handle disturbances like fog and haze. Similarly, [148]
developed a deep convolutional neural network to address variations in image datasets,
such as clouds, fog, and sandstorms, achieving an average accuracy of 97%. However, they
noted a performance degradation when testing on wildfire smoke compared to nearby
smoke, indicating the need for more specialized training datasets.

5.4. Extinguishing Efficiency


The development of firefighting robots has mainly focused on indoor and smooth
outdoor environments, limiting their use in rugged terrains like forests. These
robots are designed to assist in firefighting, but their effectiveness in actual forest environ-
ments is largely untested. Most existing firefighting UGVs are suited for smooth surfaces
and controlled conditions, such as urban areas, and are equipped with fire suppression
systems and sensors. However, they are not optimized for the unpredictable conditions
of forests.
Some pioneering efforts are being made to develop technologies specifically for forest
environments. For instance, a UAV platform with a 600-L payload capacity and equipped
with thermographic cameras and navigation systems has been proposed, but it has not
been fully tested in real-world conditions [198]. Another study explored the use of fire
extinguishing balls deployed from unmanned aircraft systems, though their practical
effectiveness remained uncertain due to limited integration evidence [199,200]. Research has also
focused on robotized surveillance with conventional, multi-spectral, and thermal cameras,
primarily for situational awareness and early detection [201]. However, there is a gap in
integrating autonomous systems for direct fire suppression, with most efforts centered on
surveillance rather than active firefighting.
While there are promising developments, forest firefighting robots are still in the early
stages of research and development. Most current technologies are designed for controlled
environments and have not been extensively tested in forest conditions. Therefore, their
efficiency and practical effectiveness cannot be validated due to a lack of evidence and
comprehensive testing.

5.5. Compliance and Standards


The use of UAVs for forest fire detection and extinguishing offers advantages like
rapid deployment, real-time data acquisition, and access to hard-to-reach areas. How-
ever, integrating UAVs into these applications presents challenges, particularly regarding
compliance with regional regulations and safety standards. For instance, in Canada, UAV
operators must obtain a pilot license, maintain a line of sight with the UAV, and avoid
flying near forest fires [130]. These regulations, while essential for safety, can limit the
effectiveness and operational scope of UAV-based systems. Our review found a lack of
focus on developing UAV hardware that complies with these regulatory frameworks, high-
lighting the need for compliant technologies that can operate safely and legally across
different regions.

6. Recommendations for Future Research


A review of the current literature on forest fire detection and extinguishing systems
revealed several key areas where further research and development are needed. Addressing
these gaps will not only enhance the effectiveness of these systems but also ensure their
safe and compliant integration into existing fire management frameworks. Below are
three primary gaps that were identified, along with corresponding recommendations for
future research.

6.1. Recommendation 1: Integration of Real-Time Data Processing and Decision-Making Algorithms


Gap: Current research often focuses on the capabilities of UAV systems for data collection
but there is a lack of emphasis on the integration of real-time data processing and decision-
making algorithms [82,130]. This integration is crucial for enabling UAVs to respond promptly
and accurately to detected fires, especially in rapidly changing environments.
Recommendation: Future research should concentrate on developing and integrating
advanced algorithms capable of real-time data processing [174] and decision making [202].
This includes machine learning and AI techniques that can analyze sensor data on-the-fly,
identify potential fire hazards, and make autonomous decisions regarding navigation
and intervention. Researchers should explore how these algorithms can be implemented
efficiently on UAV platforms, considering constraints like computational power and energy
consumption [110,169].

6.2. Recommendation 2: Effectiveness and Autonomy in Real-World Conditions


Gap: Although numerous UAV systems have been proposed for forest fire detection
and extinguishing, many have not been extensively tested or validated in real-world condi-
tions [65,73,198]. This lack of field testing raises concerns about the practical effectiveness,
functionality, and autonomy of these systems in the diverse and challenging environments
typical of forest fires.
Recommendation: There is a need for comprehensive field trials and simulations that
replicate the conditions of actual forest fires. Future research should focus on developing
and testing UAV systems in varied and dynamic environments to assess their perfor-
mance in detecting and responding to fires. This includes testing the systems’ navigation
capabilities, sensor accuracy, and overall operational reliability.

6.3. Recommendation 3: Human–Robot Interactions and Collaboration


Gap: While UAVs offer advanced surveillance and early detection capabilities, there
is limited research on how these systems can effectively interact and collaborate with
human firefighters. Our analysis found no article that discusses HRI in the context of
forest fires. Ensuring seamless HRI is crucial for optimizing the use of UAVs in firefighting,
including coordinating actions with ground teams and ensuring the safety and efficiency
of operations.
Recommendation: Future research should explore the development of systems and
protocols that facilitate effective HRI in the context of forest firefighting. This includes
designing intuitive interfaces and communication systems that allow human operators to
easily control and monitor UAVs. Additionally, research should focus on developing col-
laborative frameworks where UAVs and human firefighters can work together, leveraging
each other’s strengths. For example, UAVs can provide real-time aerial data to ground
teams, enhancing situational awareness and guiding decision-making processes [58]. Stud-
ies should also address the psychological and ergonomic aspects of HRI, ensuring that
the introduction of UAVs does not overwhelm or distract human operators but rather
complements their efforts.

7. Conclusions
Automatic fire detection in forests is a critical aspect of modern wildfire management
and prevention. In this paper, through the PRISMA framework, we surveyed a total of
155 journal papers that concentrated on fire detection using image processing, computer
vision, deep learning, and machine learning for the time span of 2013–2023. The literature
review was mainly classified into four categories: fire, smoke, fire and flame, and fire and
smoke. We also categorized the literature based on their applications in real fields for
fire detection, fire extinguishing, or a combination of both. We observed an exponential
increase in the number of publications from 2018 onward; however, very limited research
has been conducted on the utilization of robots for the detection and extinguishing of fire in
hazardous environments. We predict that, with the increasing number of fire incidents in
the forests and with the increased popularity of robots, the trend of autonomous systems
for fire detection and extinguishing will thrive. We hope that this research work can be
used as a guidebook for researchers who are looking for recent developments in forest
fire detection using deep learning and image processing to perform further research in
this domain.

Author Contributions: B.Ö.: conceptualization, methodology, formal analysis, and writing–original


draft preparation; M.S.A.: methodology, investigation, visualization, and writing–review and editing;
M.U.K.: conceptualization, methodology, investigation, visualization, writing—review and editing,
and supervision. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflicts of interest.

Abbreviations
The following abbreviations are used in this study:

AAFLM Attention-Based Adaptive Fusion Residual Module


AAPF Auto-Organization, Adaptive Frame Periods
ADE-Net Attention-based Dual-Encoding Network
AERNet An Effective Real-time Fire Detection Network
AFM Attention Fusion Module
AFSM Attention-Based Feature Separation Module
AGE Attention-Guided Enhancement
AMP Automatic Mixed Precision
ANN Artificial Neural Network
ASFF Adaptively Spatial Feature Fusion
AUROC Area Under the Receiver Operating Characteristic


BNN Bayesian Neural Network
BiFPN Bidirectional Feature Pyramid Network
BPNN Back Propagation Neural Network
CA Coordinate Attention
CARAFE Content-Aware Reassembly of Features
CBAM Convolutional Block Attention Module
CCDC Continuous Change Detection and Classification
CEP Complex Event Processing
CIoU Complete Intersection over Union
CoLBP Co-Occurrence of Local Binary Pattern
DARA Dual Fusion Attention Residual Feature Attention
DBN Deep Belief Network
DCNN Deep Convolutional Neural Network
DDAM Detail-Difference-Aware Module
DETR Detection Transformer
DPPM Dense Pyramid Pooling Module
DTMC Discrete-Time Markov Chain
ECA Efficient Channel Attention
ELM Extreme Learning Machine
ESRGAN Enhanced Super-Resolution Generative Adversarial Network
FCN Fully Convolutional Network
FCOS Fully Convolutional One-Stage
FFDI Forest Fire Detection Index
FFDSM Forest Fire Detection and Segmentation Model
FILDA Firelight Detection Algorithm
FL Federated Learning
FLAME Fire Luminosity Airborne-based Machine Learning Evaluation
FSCN Fully Symmetric Convolutional–Deconvolutional Neural Network
GCF Global Context Fusion
GIS Geographic Information System
GLCM Gray Level Co-Occurrence Matrix
GMM Gaussian Mixture Model
GRU Gated Recurrent Unit
GSConv Ghost Shuffle Convolution
HRI Human–Robot Interaction
HDLBP Hamming Distance Based Local Binary Pattern
ISSA Improved Sparrow Search Algorithm
KNN K-Nearest Neighbor
K-SVD K-Singular Value Decomposition
LBP Local Binary Pattern
LMINet Label-Relevance Multi-Direction Interaction Network
LSTM Long Short-Term Memory Networks
LwF Learning without Forgetting
MAE-Net Multi-Attention Fusion
MCCL Multi-scale Context Contrasted Local Feature Module
MCAM Multi-Connection Aggregation Method
MQTT Message Queuing Telemetry Transport
MSD Multi-Scale Detection
MTL Multi-Task Learning
MWIR Middle Wavelength Infrared
NBR Normalized Burned Ratio
NDVI Normalized Difference Vegetation Index
PANet Path Aggregation Network
PConv Partial Convolution
POD Probability of Detection
POFD Probability of False Detection
PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PSNet Pixel-level Supervision Neural Network
PSO Particle Swarm Optimization
R-CNN Region-Based Convolutional Neural Network


RECAB Residual Efficient Channel Attention Block
RFB Receptive Field Block
ROI Region of Interest
RNN Recurrent Neural Network
RS Remote Sensing
SE-GhostNet Squeeze and Excitation–GhostNet
SHAP Shapley Additive Explanations
SIFT Scale Invariant Feature Transform
SIoU SCYLLA–Intersection Over Union
SPPF Spatial Pyramid Pooling Fast
SPPF+ Spatial Pyramid Pooling Fast+
SVM Support Vector Machine
TECNN Transformer-Enhanced Convolutional Neural Network
TWSVM Twin Support Vector Machine
USGS United States Geological Survey
ViT Vision Transformer
VHR Very High Resolution
VIIRS Visible Infrared Imaging Radiometer Suite
VSU Video Surveillance Unit
WIoU Wise–IoU
YOLO You Only Look Once

References
1. Brunner, I.; Godbold, D.L. Tree roots in a changing world. J. For. Res. 2007, 12, 78–82. [CrossRef]
2. Ball, G.; Regier, P.; González-Pinzón, R.; Reale, J.; Van Horn, D. Wildfires increasingly impact western US fluvial networks.
Nat. Commun. 2021, 12, 2484. [CrossRef] [PubMed]
3. Barmpoutis, P.; Papaioannou, P.; Dimitropoulos, K.; Grammalidis, N. A review on early forest fire detection systems using optical
remote sensing. Sensors 2020, 20, 6442. [CrossRef] [PubMed]
4. Truong, C.T.; Nguyen, T.H.; Vu, V.Q.; Do, V.H.; Nguyen, D.T. Enhancing fire detection technology: A UV-based system utilizing
Fourier spectrum analysis for reliable and accurate fire detection. Appl. Sci. 2023, 13, 7845. [CrossRef]
5. Geetha, S.; Abhishek, C.; Akshayanat, C. Machine vision based fire detection techniques: A survey. Fire Technol. 2021, 57, 591–623.
[CrossRef]
6. Alkhatib, A.A. A review on forest fire detection techniques. Int. J. Distrib. Sens. Netw. 2014, 10, 597368. [CrossRef]
7. Yuan, C.; Liu, Z.; Zhang, Y. Fire detection using infrared images for UAV-based forest fire surveillance. In Proceedings of the 2017
International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 567–572.
8. Yang, X.; Hua, Z.; Zhang, L.; Fan, X.; Zhang, F.; Ye, Q.; Fu, L. Preferred vector machine for forest fire detection. Pattern Recognit.
2023, 143, 109722. [CrossRef]
9. Yuan, C.; Liu, Z.; Zhang, Y. Learning-based smoke detection for unmanned aerial vehicles applied to forest fire surveillance.
J. Intell. Robot. Syst. 2019, 93, 337–349. [CrossRef]
10. Li, X.; Song, W.; Lian, L.; Wei, X. Forest fire smoke detection using back-propagation neural network based on MODIS data.
Remote Sens. 2015, 7, 4473–4498. [CrossRef]
11. Mahmoud, M.A.; Ren, H. Forest Fire Detection Using a Rule-Based Image Processing Algorithm and Temporal Variation.
Math. Probl. Eng. 2018, 2018, 7612487. [CrossRef]
12. Khan, A.; Hassan, B.; Khan, S.; Ahmed, R.; Abuassba, A. DeepFire: A novel dataset and deep transfer learning benchmark for
forest fire detection. Mob. Inf. Syst. 2022, 2022, 5358359. [CrossRef]
13. Rangwala, A.S.; Raghavan, V. Mechanism of Fires: Chemistry and Physical Aspects; Springer Nature: Berlin/Heidelberg, Germany,
2022.
14. Wu, D.; Zhang, C.; Ji, L.; Ran, R.; Wu, H.; Xu, Y. Forest fire recognition based on feature extraction from multi-view images.
Trait. Signal 2021, 38, 775–783. [CrossRef]
15. Qiu, X.; Xi, T.; Sun, D.; Zhang, E.; Li, C.; Peng, Y.; Wei, J.; Wang, G. Fire detection algorithm combined with image processing and
flame emission spectroscopy. Fire Technol. 2018, 54, 1249–1263. [CrossRef]
16. Dzigal, D.; Akagic, A.; Buza, E.; Brdjanin, A.; Dardagan, N. Forest fire detection based on color spaces combination. In
Proceedings of the 2019 11th International Conference on Electrical and Electronics Engineering (ELECO), Bursa, Turkey, 28–30
November 2019; pp. 595–599. [CrossRef]
17. Khalil, A.; Rahman, S.U.; Alam, F.; Ahmad, I.; Khalil, I. Fire detection using multi color space and background modeling.
Fire Technol. 2021, 57, 1221–1239. [CrossRef]
18. Gaur, A.; Singh, A.; Kumar, A.; Kumar, A.; Kapoor, K. Video flame and smoke based fire detection algorithms: A literature review.
Fire Technol. 2020, 56, 1943–1980. [CrossRef]
19. Wu, H.; Wu, D.; Zhao, J. An intelligent fire detection approach through cameras based on computer vision methods. Process. Saf.
Environ. Prot. 2019, 127, 245–256. [CrossRef]
20. Khondaker, A.; Khandaker, A.; Uddin, J. Computer Vision-based Early Fire Detection Using Enhanced Chromatic Segmentation
and Optical Flow Analysis Technique. Int. Arab. J. Inf. Technol. 2020, 17, 947–953. [CrossRef]
21. He, Y. Smart detection of indoor occupant thermal state via infrared thermography, computer vision, and machine learning.
Build. Environ. 2023, 228, 109811. [CrossRef]
22. Mazur-Milecka, M.; Głowacka, N.; Kaczmarek, M.; Bujnowski, A.; Kaszyński, M.; Rumiński, J. Smart city and fire detection using
thermal imaging. In Proceedings of the 2021 14th International Conference on Human System Interaction (HSI), Gdańsk, Poland,
8–10 July 2021; pp. 1–7. [CrossRef]
23. Bouguettaya, A.; Zarzour, H.; Taberkit, A.M.; Kechida, A. A review on early wildfire detection from unmanned aerial vehicles
using deep learning-based computer vision algorithms. Signal Process. 2022, 190, 108309. [CrossRef]
24. Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379. [CrossRef]
25. Saponara, S.; Elhanashi, A.; Gagliardi, A. Real-time video fire/smoke detection based on CNN in antifire surveillance systems.
J. Real-Time Image Process. 2021, 18, 889–900. [CrossRef]
26. Florath, J.; Keller, S. Supervised Machine Learning Approaches on Multispectral Remote Sensing Data for a Combined Detection
of Fire and Burned Area. Remote Sens. 2022, 14, 657. [CrossRef]
27. Mohammed, R. A real-time forest fire and smoke detection system using deep learning. Int. J. Nonlinear Anal. Appl. 2022,
13, 2053–2063. [CrossRef]
28. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA
statement. Ann. Intern. Med. 2009, 151, 264–269. [CrossRef] [PubMed]
29. Mahmoud, M.A.I.; Ren, H. Forest fire detection and identification using image processing and SVM. J. Inf. Process. Syst. 2019,
15, 159–168. [CrossRef]
30. Yuan, C.; Ghamry, K.A.; Liu, Z.; Zhang, Y. Unmanned aerial vehicle based forest fire monitoring and detection using image
processing technique. In Proceedings of the 2016 IEEE Chinese Guidance, Navigation and Control Conference (CGNCC), Miami,
FL, USA, 13–16 June 2016; pp. 1870–1875. [CrossRef]
31. Rahman, E.U.; Khan, M.A.; Algarni, F.; Zhang, Y.; Irfan Uddin, M.; Ullah, I.; Ahmad, H.I. Computer Vision-Based Wildfire Smoke
Detection Using UAVs. Math. Probl. Eng. 2021, 2021, 9977939. [CrossRef]
32. Almasoud, A.S. Intelligent Deep Learning Enabled Wild Forest Fire Detection System. Comput. Syst. Sci. Eng. 2023, 44. [CrossRef]
33. Chen, X.; Hopkins, B.; Wang, H.; O’Neill, L.; Afghah, F.; Razi, A.; Fulé, P.; Coen, J.; Rowell, E.; Watts, A. Wildland fire detection
and monitoring using a drone-collected RGB/IR image dataset. IEEE Access 2022, 10, 121301–121317. [CrossRef]
34. Dewangan, A.; Pande, Y.; Braun, H.W.; Vernon, F.; Perez, I.; Altintas, I.; Cottrell, G.W.; Nguyen, M.H. FIgLib & SmokeyNet:
Dataset and deep learning model for real-time wildland fire smoke detection. Remote Sens. 2022, 14, 1007. [CrossRef]
35. Zhou, Z.; Shi, Y.; Gao, Z.; Li, S. Wildfire smoke detection based on local extremal region segmentation and surveillance. Fire Saf. J.
2016, 85, 50–58. [CrossRef]
36. Zhang, Q.X.; Lin, G.H.; Zhang, Y.M.; Xu, G.; Wang, J.J. Wildland forest fire smoke detection based on faster R-CNN using synthetic
smoke images. Procedia Eng. 2018, 211, 441–446. [CrossRef]
37. Sudhakar, S.; Vijayakumar, V.; Kumar, C.S.; Priya, V.; Ravi, L.; Subramaniyaswamy, V. Unmanned Aerial Vehicle (UAV) based
Forest Fire Detection and monitoring for reducing false alarms in forest-fires. Comput. Commun. 2020, 149, 1–16. [CrossRef]
38. Hossain, F.A.; Zhang, Y.; Yuan, C.; Su, C.Y. Wildfire flame and smoke detection using static image features and artificial neural
network. In Proceedings of the 2019 1st International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China,
23–27 July 2019. [CrossRef]
39. Ghamry, K.A.; Kamel, M.A.; Zhang, Y. Cooperative forest monitoring and fire detection using a team of UAVs-UGVs. In
Proceedings of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 7–10 June 2016;
pp. 1206–1211. [CrossRef]
40. Akhloufi, M.A.; Couturier, A.; Castro, N.A. Unmanned aerial vehicles for wildland fires: Sensing, perception, cooperation and
assistance. Drones 2021, 5, 15. [CrossRef]
41. Battistoni, P.; Cantone, A.A.; Martino, G.; Passamano, V.; Romano, M.; Sebillo, M.; Vitiello, G. A cyber-physical system for wildfire
detection and firefighting. Future Internet 2023, 15, 237. [CrossRef]
42. Jiao, Z.; Zhang, Y.; Xin, J.; Mu, L.; Yi, Y.; Liu, H.; Liu, D. A deep learning based forest fire detection approach using UAV and
YOLOv3. In Proceedings of the 2019 1st International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China,
23–27 July 2019. [CrossRef]
43. Ghali, R.; Akhloufi, M.A. Deep learning approaches for wildland fires using satellite remote sensing data: Detection, mapping,
and prediction. Fire 2023, 6, 192. [CrossRef]
44. Artés, T.; Oom, D.; De Rigo, D.; Durrant, T.H.; Maianti, P.; Libertà, G.; San-Miguel-Ayanz, J. A global wildfire dataset for the
analysis of fire regimes and fire behaviour. Sci. Data 2019, 6, 296. [CrossRef] [PubMed]
45. Sayad, Y.O.; Mousannif, H.; Al Moatassime, H. Predictive modeling of wildfires: A new dataset and machine learning approach.
Fire Saf. J. 2019, 104, 130–146. [CrossRef]
46. Zhang, G.; Wang, M.; Liu, K. Deep neural networks for global wildfire susceptibility modelling. Ecol. Indic. 2021, 127, 107735.
[CrossRef]
47. Zheng, S.; Zou, X.; Gao, P.; Zhang, Q.; Hu, F.; Zhou, Y.; Wu, Z.; Wang, W.; Chen, S. A forest fire recognition method based on
modified deep CNN model. Forests 2024, 15, 111. [CrossRef]
48. Zhang, L.; Wang, M.; Fu, Y.; Ding, Y. A Forest Fire Recognition Method Using UAV Images Based on Transfer Learning. Forests
2022, 13, 975. [CrossRef]
49. Qian, J.; Lin, H. A Forest Fire Identification System Based on Weighted Fusion Algorithm. Forests 2022, 13, 1301. [CrossRef]
50. Anh, N. Efficient Forest Fire Detection using Rule-Based Multi-color Space and Correlation Coefficient for Application in
Unmanned Aerial Vehicles. Ksii Trans. Internet Inf. Syst. 2022, 16, 381–404. [CrossRef]
51. Zhang, J.; Zhu, H.; Wang, P.; Ling, X. ATT Squeeze U-Net: A lightweight Network for Forest Fire Detection and Recognition.
IEEE Access 2021, 9, 10858–10870. [CrossRef]
52. Qi, R.; Liu, Z. Extraction and Classification of Image Features for Fire Recognition Based on Convolutional Neural Network.
Trait. Signal 2021, 38, 895–902. [CrossRef]
53. Chanthiya, P.; Kalaivani, V. Forest fire detection on LANDSAT images using support vector machine. Concurr. -Comput.-Pract.
Exp. 2021, 33, e6280. [CrossRef]
54. Sousa, M.; Moutinho, A.; Almeida, M. Thermal Infrared Sensing for Near Real-Time Data-Driven Fire Detection and Monitoring
Systems. Sensors 2020, 20, 6803. [CrossRef] [PubMed]
55. Chung, M.; Han, Y.; Kim, Y. A Framework for Unsupervised Wildfire Damage Assessment Using VHR Satellite Images with
PlanetScope Data. Remote Sens. 2020, 12, 3835. [CrossRef]
56. Wang, Y.; Dang, L.; Ren, J. Forest fire image recognition based on convolutional neural network. J. Algorithm. Comput. Technol.
2019, 13, 1748302619887689. [CrossRef]
57. Park, W.; Park, S.; Jung, H.; Won, J. An Extraction of Solar-contaminated Energy Part from MODIS Middle Infrared Channel
Measurement to Detect Forest Fires. Korean J. Remote Sens. 2019, 35, 39–55. [CrossRef]
58. Yuan, C.; Liu, Z.; Zhang, Y. Aerial Images-Based Forest Fire Detection for Firefighting Using Optical Remote Sensing Techniques
and Unmanned Aerial Vehicles. J. Intell. Robot. Syst. 2017, 88, 635–654. [CrossRef]
59. Prema, C.; Vinsley, S.; Suresh, S. Multi Feature Analysis of Smoke in YUV Color Space for Early Forest Fire Detection. Fire Technol.
2016, 52, 1319–1342. [CrossRef]
60. Polivka, T.; Wang, J.; Ellison, L.; Hyer, E.; Ichoku, C. Improving Nocturnal Fire Detection With the VIIRS Day-Night Band.
IEEE Trans. Geosci. Remote Sens. 2016, 54, 5503–5519. [CrossRef]
61. Lin, L. A Spatio-Temporal Model for Forest Fire Detection Using HJ-IRS Satellite Data. Remote Sens. 2016, 8, 403. [CrossRef]
62. Yoon, S.; Min, J. An Intelligent Automatic Early Detection System of Forest Fire Smoke Signatures using Gaussian Mixture Model.
J. Inf. Process. Syst. 2013, 9, 621–632. [CrossRef]
63. Xue, Z.; Lin, H.; Wang, F. A Small Target Forest Fire Detection Model Based on YOLOv5 Improvement. Forests 2022, 13, 1332.
[CrossRef]
64. Seydi, S.T.; Saeidi, V.; Kalantar, B.; Ueda, N.; Halin, A.A. Fire-Net: A Deep Learning Framework for Active Forest Fire Detection.
J. Sensors 2022, 2022, 8044390. [CrossRef]
65. Lu, K.; Xu, R.; Li, J.; Lv, Y.; Lin, H.; Li, Y. A Vision-Based Detection and Spatial Localization Scheme for Forest Fire Inspection
from UAV. Forests 2022, 13, 383. [CrossRef]
66. Lu, K.; Huang, J.; Li, J.; Zhou, J.; Chen, X.; Liu, Y. MTL-FFDET: A Multi-Task Learning-Based Model for Forest Fire Detection.
Forests 2022, 13, 1448. [CrossRef]
67. Guan, Z.; Miao, X.; Mu, Y.; Sun, Q.; Ye, Q.; Gao, D. Forest Fire Segmentation from Aerial Imagery Data Using an Improved
Instance Segmentation Model. Remote Sens. 2022, 14, 3159. [CrossRef]
68. Li, W.; Lin, Q.; Wang, K.; Cai, K. Machine vision-based network monitoring system for solar-blind ultraviolet signal. Comput.
Commun. 2021, 171, 157–162. [CrossRef]
69. Kim, B.; Lee, J. A Bayesian Network-Based Information Fusion Combined with DNNs for Robust Video Fire Detection. Appl. Sci.
2021, 11, 7624. [CrossRef]
70. Ghali, R.; Akhloufi, M.; Jmal, M.; Mseddi, W.; Attia, R. Wildfire Segmentation Using Deep Vision Transformers. Remote Sens.
2021, 13, 3527. [CrossRef]
71. Toptas, B.; Hanbay, D. A new artificial bee colony algorithm-based color space for fire/flame detection. Soft Comput. 2020,
24, 10481–10492. [CrossRef]
72. Barmpoutis, P.; Stathaki, T.; Dimitropoulos, K.; Grammalidis, N. Early fire detection based on aerial 360-degree sensors, deep
convolution neural networks and exploitation of fire dynamic textures. Remote Sens. 2020, 12, 3177. [CrossRef]
73. Ghali, R.; Akhloufi, M.; Mseddi, W. Deep Learning and Transformer Approaches for UAV-Based Wildfire Detection and
Segmentation. Sensors 2022, 22, 1977. [CrossRef]
74. Zhang, Q.; Ge, L.; Zhang, R.; Metternicht, G.; Liu, C.; Du, Z. Towards a Deep-Learning-Based Framework of Sentinel-2 Imagery
for Automated Active Fire Detection. Remote Sens. 2021, 13, 4790. [CrossRef]
75. Rashkovetsky, D.; Mauracher, F.; Langer, M.; Schmitt, M. Wildfire Detection From Multisensor Satellite Imagery Using Deep
Semantic Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7001–7016. [CrossRef]
76. Pereira, G.; Fusioka, A.; Nassu, B.; Minetto, R. Active fire detection in Landsat-8 imagery: A large-scale dataset and a deep-learning
study. ISPRS J. Photogramm. Remote Sens. 2021, 178, 171–186. [CrossRef]
77. Benzekri, W.; Moussati, A.; Moussaoui, O.; Berrajaa, M. Early Forest Fire Detection System using Wireless Sensor Network and
Deep Learning. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 496–503. [CrossRef]
78. Zhao, Y.; Ban, Y. GOES-R Time Series for Early Detection of Wildfires with Deep GRU-Network. Remote Sens. 2022, 14, 4347.
[CrossRef]
79. Hong, Z. Active Fire Detection Using a Novel Convolutional Neural Network Based on Himawari-8 Satellite Images. Front.
Environ. Sci. 2022, 10, 794028. [CrossRef]
80. Xu, R.; Lin, H.; Lu, K.; Cao, L.; Liu, Y. A Forest Fire Detection System Based on Ensemble Learning. Forests 2021, 12, 217.
[CrossRef]
81. Pan, H.; Badawi, D.; Zhang, X.; Cetin, A. Additive neural network for forest fire detection. Signal Image Video Process. 2020,
14, 675–682. [CrossRef]
82. Zhao, Y.; Ma, J.; Li, X.; Zhang, J. Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery. Sensors
2018, 18, 712. [CrossRef] [PubMed]
83. Zhang, A.; Zhang, A. Real-Time Wildfire Detection and Alerting with a Novel Machine Learning Approach: A New Systematic
Approach of Using Convolutional Neural Network (CNN) to Achieve Higher Accuracy in Automation. Int. J. Adv. Comput. Sci.
Appl. 2022, 13, 1–6.
84. Wahyono; Harjoko, A.; Dharmawan, A.; Adhinata, F.D.; Kosala, G.; Jo, K.H. Real-time forest fire detection framework based on
artificial intelligence using color probability model and motion feature analysis. Fire 2022, 5, 23. [CrossRef]
85. Phan, T.; Quach, N.; Nguyen, T.; Nguyen, T.; Jo, J.; Nguyen, Q. Real-time wildfire detection with semantic explanations. Expert
Syst. Appl. 2022, 201, 117007. [CrossRef]
86. Yang, X. Pixel-level automatic annotation for forest fire image. Eng. Appl. Artif. Intell. 2021, 104, 104353. [CrossRef]
87. Shamsoshoara, A.; Afghah, F.; Razi, A.; Zheng, L.; Fule, P.; Blasch, E. Aerial imagery pile burn detection using deep learning: The
FLAME dataset. Comput. Netw. 2021, 193, 108001. [CrossRef]
88. Liu, Z.; Zhang, K.; Wang, C.; Huang, S. Research on the identification method for the forest fire based on deep learning. Optik
2020, 223, 165491. [CrossRef]
89. Khurana, M.; Saxena, V. A Unified Approach to Change Detection Using an Adaptive Ensemble of Extreme Learning Machines.
IEEE Geosci. Remote Sens. Lett. 2020, 17, 794–798. [CrossRef]
90. Huang, X.; Du, L. Fire Detection and Recognition Optimization Based on Virtual Reality Video Image. IEEE Access 2020,
8, 77951–77961. [CrossRef]
91. Govil, K.; Welch, M.L.; Ball, J.T.; Pennypacker, C.R. Preliminary results from a wildfire detection system using deep learning on
remote camera images. Remote Sens. 2020, 12, 166. [CrossRef]
92. Ouni, S.; Ayoub, Z.; Kamoun, F. Auto-organization approach with adaptive frame periods for IEEE 802.15.4/zigbee forest fire
detection system. Wirel. Netw. 2019, 25, 4059–4076. [CrossRef]
93. Jang, E.; Kang, Y.; Im, J.; Lee, D.; Yoon, J.; Kim, S. Detection and Monitoring of Forest Fires Using Himawari-8 Geostationary
Satellite Data in South Korea. Remote Sens. 2019, 11, 271. [CrossRef]
94. Mao, W.; Wang, W.; Dou, Z.; Li, Y. Fire Recognition Based On Multi-Channel Convolutional Neural Network. Fire Technol. 2018,
54, 531–554. [CrossRef]
95. Zheng, S.; Gao, P.; Zhou, Y.; Wu, Z.; Wan, L.; Hu, F.; Wang, W.; Zou, X.; Chen, S. An accurate forest fire recognition method based
on improved BPNN and IoT. Remote Sens. 2023, 15, 2365. [CrossRef]
96. Liu, T.; Chen, W.; Lin, X.; Mu, Y.; Huang, J.; Gao, D.; Xu, J. Defogging Learning Based on an Improved DeepLabV3+ Model for
Accurate Foggy Forest Fire Segmentation. Forests 2023, 14, 1859. [CrossRef]
97. Reis, H.C.; Turk, V. Detection of forest fire using deep convolutional neural networks with transfer learning approach. Appl. Soft
Comput. 2023, 143, 110362. [CrossRef]
98. Pang, Y.; Wu, Y.; Yuan, Y. FuF-Det: An Early Forest Fire Detection Method under Fog. Remote Sens. 2023, 15, 5435. [CrossRef]
99. Lin, J.; Lin, H.; Wang, F. A semi-supervised method for real-time forest fire detection algorithm based on adaptively spatial
feature fusion. Forests 2023, 14, 361. [CrossRef]
100. Akyol, K. Robust stacking-based ensemble learning model for forest fire detection. Int. J. Environ. Sci. Technol. 2023,
20, 13245–13258. [CrossRef]
101. Niu, K.; Wang, C.; Xu, J.; Yang, C.; Zhou, X.; Yang, X. An Improved YOLOv5s-Seg Detection and Segmentation Model for the
Accurate Identification of Forest Fires Based on UAV Infrared Image. Remote Sens. 2023, 15, 4694. [CrossRef]
102. Sarikaya Basturk, N. Forest fire detection in aerial vehicle videos using a deep ensemble neural network model. Aircr. Eng.
Aerosp. Technol. 2023, 95, 1257–1267. [CrossRef]
103. Rahman, A.; Sakif, S.; Sikder, N.; Masud, M.; Aljuaid, H.; Bairagi, A.K. Unmanned aerial vehicle assisted forest fire detection
using deep convolutional neural network. Intell. Autom. Soft Comput 2023, 35, 3259–3277. [CrossRef]
104. Ghali, R.; Akhloufi, M.A. CT-Fire: A CNN-Transformer for wildfire classification on ground and aerial images. Int. J. Remote Sens.
2023, 44, 7390–7415. [CrossRef]
105. Abdusalomov, A.B.; Islam, B.M.S.; Nasimov, R.; Mukhiddinov, M.; Whangbo, T.K. An improved forest fire detection method
based on the detectron2 model and a deep learning approach. Sensors 2023, 23, 1512. [CrossRef]
106. Supriya, Y.; Gadekallu, T.R. Particle swarm-based federated learning approach for early detection of forest fires.
Sustainability 2023, 15, 964. [CrossRef]
107. Khennou, F.; Akhloufi, M.A. Improving wildland fire spread prediction using deep U-Nets. Sci. Remote Sens. 2023, 8, 100101.
[CrossRef]
108. Peruzzi, G.; Pozzebon, A.; Van Der Meer, M. Fight fire with fire: Detecting forest fires with embedded machine learning models
dealing with audio and images on low power iot devices. Sensors 2023, 23, 783. [CrossRef]
109. Barmpoutis, P.; Kastridis, A.; Stathaki, T.; Yuan, J.; Shi, M.; Grammalidis, N. Suburban Forest Fire Risk Assessment and Forest
Surveillance Using 360-Degree Cameras and a Multiscale Deformable Transformer. Remote Sens. 2023, 15, 1995. [CrossRef]
110. Almeida, J.S.; Jagatheesaperumal, S.K.; Nogueira, F.G.; de Albuquerque, V.H.C. EdgeFireSmoke++: A novel lightweight algorithm
for real-time forest fire detection and visualization using internet of things-human machine interface. Expert Syst. Appl. 2023,
221, 119747. [CrossRef]
111. Zheng, H.; Dembele, S.; Wu, Y.; Liu, Y.; Chen, H.; Zhang, Q. A lightweight algorithm capable of accurately identifying forest fires
from UAV remote sensing imagery. Front. For. Glob. Chang. 2023, 6, 1134942. [CrossRef]
112. Shahid, M.; Chen, S.F.; Hsu, Y.L.; Chen, Y.Y.; Chen, Y.L.; Hua, K.L. Forest fire segmentation via temporal transformer from aerial
images. Forests 2023, 14, 563. [CrossRef]
113. Ahmad, K.; Khan, M.S.; Ahmed, F.; Driss, M.; Boulila, W.; Alazeb, A.; Alsulami, M.; Alshehri, M.S.; Ghadi, Y.Y.; Ahmad, J.
FireXnet: An explainable AI-based tailored deep learning model for wildfire detection on resource-constrained devices. Fire Ecol.
2023, 19, 54. [CrossRef]
114. Wang, X.; Pan, Z.; Gao, H.; He, N.; Gao, T. An efficient model for real-time wildfire detection in complex scenarios based on
multi-head attention mechanism. J. Real-Time Image Process. 2023, 20, 66. [CrossRef]
115. Ying, L.X.; Shen, Z.H.; Yang, M.Z.; Piao, S.L. Wildfire Detection Probability of MODIS Fire Products under the Constraint of
Environmental Factors: A Study Based on Confirmed Ground Wildfire Records. Remote Sens. 2019, 11, 31. [CrossRef]
116. Liu, T. Video Smoke Detection Method Based on Change-Cumulative Image and Fusion Deep Network. Sensors 2019, 19, 5060.
[CrossRef] [PubMed]
117. Bugaric, M.; Jakovcevic, T.; Stipanicev, D. Adaptive estimation of visual smoke detection parameters based on spatial data and
fire risk index. Comput. Vis. Image Underst. 2014, 118, 184–196. [CrossRef]
118. Xie, J.; Yu, F.; Wang, H.; Zheng, H. Class Activation Map-Based Data Augmentation for Satellite Smoke Scene Detection.
IEEE Geosci. Remote Sens. Lett. 2022, 19, 6510905. [CrossRef]
119. Zhu, G.; Chen, Z.; Liu, C.; Rong, X.; He, W. 3D video semantic segmentation for wildfire smoke. Mach. Vis. Appl. 2020, 31, 50.
[CrossRef]
120. Li, X.; Chen, Z.; Wu, Q.; Liu, C. 3D Parallel Fully Convolutional Networks for Real-Time Video Wildfire Smoke Detection.
IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 89–103. [CrossRef]
121. Peng, Y.; Wang, Y. Real-time forest smoke detection using hand-designed features and deep learning. Comput. Electron. Agric.
2019, 167, 105029. [CrossRef]
122. Lin, G.; Zhang, Y.; Xu, G.; Zhang, Q. Smoke detection on video sequences using 3D convolutional neural networks. Fire Technol.
2019, 55, 1827–1847. [CrossRef]
123. Gao, Y.; Cheng, P. Forest Fire Smoke Detection Based on Visual Smoke Root and Diffusion Model. Fire Technol. 2019, 55, 1801–1826.
[CrossRef]
124. Jakovcevic, T.; Bugaric, M.; Stipanicev, D. A Stereo Approach to Wildfire Smoke Detection: The Improvement of the Existing
Methods by Adding a New Dimension. Comput. Inform. 2018, 37, 476–508. [CrossRef]
125. Jia, Y.; Yuan, J.; Wang, J.; Fang, J.; Zhang, Y.; Zhang, Q. A Saliency-Based Method for Early Smoke Detection in Video Sequences.
Fire Technol. 2016, 52, 1271–1292. [CrossRef]
126. Chen, S.; Li, W.; Cao, Y.; Lu, X. Combining the Convolution and Transformer for Classification of Smoke-Like Scenes in Remote
Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4512519. [CrossRef]
127. Guede-Fernandez, F.; Martins, L.; Almeida, R.; Gamboa, H.; Vieira, P. A Deep Learning Based Object Identification System for
Forest Fire Detection. Fire 2021, 4, 75. [CrossRef]
128. Yazdi, A.; Qin, H.; Jordan, C.; Yang, L.; Yan, F. Nemo: An Open-Source Transformer-Supercharged Benchmark for Fine-Grained
Wildfire Smoke Detection. Remote Sens. 2022, 14, 3979. [CrossRef]
129. Shi, J.; Wang, W.; Gao, Y.; Yu, N. Optimal Placement and Intelligent Smoke Detection Algorithm for Wildfire-Monitoring Cameras.
IEEE Access 2020, 8, 72326–72339. [CrossRef]
130. Hossain, F.A.; Zhang, Y.M.; Tonima, M.A. Forest fire flame and smoke detection from UAV-captured images using fire-specific
color features and multi-color space local binary pattern. J. Unmanned Veh. Syst. 2020, 8, 285–309. [CrossRef]
131. Li, T.; Zhao, E.; Zhang, J.; Hu, C. Detection of Wildfire Smoke Images Based on a Densely Dilated Convolutional Network.
Electronics 2019, 8, 1131. [CrossRef]
132. Cao, Y.; Yang, F.; Tang, Q.; Lu, X. An Attention Enhanced Bidirectional LSTM for Early Forest Fire Smoke Recognition.
IEEE Access 2019, 7, 154732–154742. [CrossRef]
133. Prema, C.; Suresh, S.; Krishnan, M.; Leema, N. A Novel Efficient Video Smoke Detection Algorithm Using Co-occurrence of Local
Binary Pattern Variants. Fire Technol. 2022, 58, 3139–3165. [CrossRef]
134. Toulouse, T.; Rossi, L.; Campana, A.; Celik, T.; Akhloufi, M.A. Computer vision for wildfire research: An evolving image dataset
for processing and analysis. Fire Saf. J. 2017, 92, 188–194. [CrossRef]
135. Kim, S.Y.; Muminov, A. Forest fire smoke detection based on deep learning approaches and unmanned aerial vehicle images.
Sensors 2023, 23, 5702. [CrossRef]
136. Yang, H.; Wang, J.; Wang, J. Efficient Detection of Forest Fire Smoke in UAV Aerial Imagery Based on an Improved Yolov5 Model
and Transfer Learning. Remote Sens. 2023, 15, 5527. [CrossRef]
137. Huang, J.; Zhou, J.; Yang, H.; Liu, Y.; Liu, H. A small-target forest fire smoke detection model based on deformable transformer
for end-to-end object detection. Forests 2023, 14, 162. [CrossRef]
138. Saydirasulovich, S.N.; Mukhiddinov, M.; Djuraev, O.; Abdusalomov, A.; Cho, Y.I. An improved wildfire smoke detection based
on YOLOv8 and UAV images. Sensors 2023, 23, 8374. [CrossRef]
139. Chen, G.; Cheng, R.; Lin, X.; Jiao, W.; Bai, D.; Lin, H. LMDFS: A lightweight model for detecting forest fire smoke in UAV images
based on YOLOv7. Remote Sens. 2023, 15, 3790. [CrossRef]
140. Qiao, Y.; Jiang, W.; Wang, F.; Su, G.; Li, X.; Jiang, J. FireFormer: An efficient Transformer to identify forest fire from surveillance
cameras. Int. J. Wildland Fire 2023, 32, 1364–1380. [CrossRef]
141. Fernandes, A.M.; Utkin, A.B.; Chaves, P. Automatic early detection of wildfire smoke with visible-light cameras and EfficientDet.
J. Fire Sci. 2023, 41, 122–135. [CrossRef]
142. Tao, H. A label-relevance multi-direction interaction network with enhanced deformable convolution for forest smoke recognition.
Expert Syst. Appl. 2024, 236, 121383. [CrossRef]
143. Tao, H.; Duan, Q.; Lu, M.; Hu, Z. Learning discriminative feature representation with pixel-level supervision for forest smoke
recognition. Pattern Recognit. 2023, 143, 109761. [CrossRef]
144. James, G.L.; Ansaf, R.B.; Al Samahi, S.S.; Parker, R.D.; Cutler, J.M.; Gachette, R.V.; Ansaf, B.I. An Efficient Wildfire Detection
System for AI-Embedded Applications Using Satellite Imagery. Fire 2023, 6, 169. [CrossRef]
145. Ba, R.; Chen, C.; Yuan, J.; Song, W.; Lo, S. SmokeNet: Satellite smoke scene detection using convolutional neural network with
spatial and channel-wise attention. Remote Sens. 2019, 11, 1702. [CrossRef]
146. Larsen, A.; Hanigan, I.; Reich, B.J.; Qin, Y.; Cope, M.; Morgan, G.; Rappold, A.G. A deep learning approach to identify smoke
plumes in satellite imagery in near-real time for health risk communication. J. Expo. Sci. Environ. Epidemiol. 2021, 31, 170–176.
[CrossRef]
147. Yuan, F.; Zhang, L.; Wan, B.; Xia, X.; Shi, J. Convolutional neural networks based on multi-scale additive merging layers for visual
smoke recognition. Mach. Vis. Appl. 2019, 30, 345–358. [CrossRef]
148. Pundir, A.S.; Raman, B. Dual deep learning model for image based smoke detection. Fire Technol. 2019, 55, 2419–2442. [CrossRef]
149. Wu, H.; Zhang, A.; Han, Y.; Nan, J.; Li, K. Fast stochastic configuration network based on an improved sparrow search algorithm
for fire flame recognition. Knowl.-Based Syst. 2022, 245, 108626. [CrossRef]
150. Buza, E.; Akagic, A. Unsupervised method for wildfire flame segmentation and detection. IEEE Access 2022, 10, 55213–55225.
[CrossRef]
151. Zhao, Y.; Tang, G.; Xu, M. Hierarchical detection of wildfire flame video from pixel level to semantic level. Expert Syst. Appl. 2015,
42, 4097–4104. [CrossRef]
152. Prema, C.; Vinsley, S.; Suresh, S. Efficient Flame Detection Based on Static and Dynamic Texture Analysis in Forest Fire Detection.
Fire Technol. 2018, 54, 255–288. [CrossRef]
153. Zhang, H.; Zhang, N.; Xiao, N. Fire detection and identification method based on visual attention mechanism. Optik 2015,
126, 5011–5018. [CrossRef]
154. Liu, H.; Hu, H.; Zhou, F.; Yuan, H. Forest flame detection in unmanned aerial vehicle imagery based on YOLOv5. Fire 2023,
6, 279. [CrossRef]
155. Wang, L.; Zhang, H.; Zhang, Y.; Hu, K.; An, K. A deep learning-based experiment on forest wildfire detection in machine vision
course. IEEE Access 2023, 11, 32671–32681. [CrossRef]
156. Kong, S.; Deng, J.; Yang, L.; Liu, Y. An attention-based dual-encoding network for fire flame detection using optical remote
sensing. Eng. Appl. Artif. Intell. 2024, 127, 107238. [CrossRef]
157. Kaliyev, D.; Shvets, O.; Györök, G. Computer Vision-based Fire Detection using Enhanced Chromatic Segmentation and Optical
Flow Model. Acta Polytech. Hung. 2023, 20, 27–45. [CrossRef]
158. Chen, B.; Bai, D.; Lin, H.; Jiao, W. Flametransnet: Advancing forest flame segmentation with fusion and augmentation techniques.
Forests 2023, 14, 1887. [CrossRef]
159. Morandini, F.; Toulouse, T.; Silvani, X.; Pieri, A.; Rossi, L. Image-based diagnostic system for the measurement of flame properties
and radiation. Fire Technol. 2019, 55, 2443–2463. [CrossRef]
160. Chen, Y.; Xu, W.; Zuo, J.; Yang, K. The fire recognition algorithm using dynamic feature fusion and IV-SVM classifier.
Clust. Comput. 2019, 22, 7665–7675. [CrossRef]
161. Dimitropoulos, K.; Barmpoutis, P.; Grammalidis, N. Spatio-temporal flame modeling and dynamic texture analysis for automatic
video-based fire detection. IEEE Trans. Circuits Syst. Video Technol. 2014, 25, 339–351. [CrossRef]
162. Zheng, X.; Chen, F.; Lou, L.; Cheng, P.; Huang, Y. Real-time detection of full-scale forest fire smoke based on deep convolution
neural network. Remote Sens. 2022, 14, 536. [CrossRef]
163. Martins, L.; Guede-Fernandez, F.; Almeida, R.; Gamboa, H.; Vieira, P. Real-Time Integration of Segmentation Techniques for
Reduction of False Positive Rates in Fire Plume Detection Systems during Forest Fires. Remote Sens. 2022, 14, 2701. [CrossRef]
164. Fernandes, A.; Utkin, A.; Chaves, P. Automatic Early Detection of Wildfire Smoke With Visible light Cameras Using Deep
Learning and Visual Explanation. IEEE Access 2022, 10, 12814–12828. [CrossRef]
165. Jiang, Y.; Wei, R.; Chen, J.; Wang, G. Deep Learning of Qinling Forest Fire Anomaly Detection Based on Genetic Algorithm
Optimization. Univ. Politeh. Buchar. Sci. Bull. Ser. Electr. Eng. Comput. Sci. 2021, 83, 75–84.
166. Perrolas, G.; Niknejad, M.; Ribeiro, R.; Bernardino, A. Scalable Fire and Smoke Segmentation from Aerial Images Using
Convolutional Neural Networks and Quad-Tree Search. Sensors 2022, 22, 1701. [CrossRef]
167. Li, J. Adaptive linear feature-reuse network for rapid forest fire smoke detection model. Ecol. Inform. 2022, 68, 101584. [CrossRef]
168. Hu, Y.; Zhan, J.; Zhou, G.; Chen, A.; Cai, W.; Guo, K.; Hu, Y.; Li, L. Fast forest fire smoke detection using MVMNet. Knowl.-Based
Syst. 2022, 241, 108219. [CrossRef]
169. Almeida, J.; Huang, C.; Nogueira, F.; Bhatia, S.; Albuquerque, V. EdgeFireSmoke: A Novel lightweight CNN Model for Real-Time
Video Fire-Smoke Detection. IEEE Trans. Ind. Inform. 2022, 18, 7889–7898. [CrossRef]
170. Zhao, E.; Liu, Y.; Zhang, J.; Tian, Y. Forest Fire Smoke Recognition Based on Anchor Box Adaptive Generation Method. Electronics
2021, 10, 566. [CrossRef]
171. Pan, J.; Ou, X.; Xu, L. A Collaborative Region Detection and Grading Framework for Forest Fire Smoke Using Weakly Supervised
Fine Segmentation and lightweight Faster-RCNN. Forests 2021, 12, 768. [CrossRef]
172. Tran, D.; Park, M.; Jeon, Y.; Bak, J.; Park, S. Forest-Fire Response System Using Deep-Learning-Based Approaches With CCTV
Images and Weather Data. IEEE Access 2022, 10, 66061–66071. [CrossRef]
173. Ghosh, R.; Kumar, A. A hybrid deep learning model by combining convolutional neural network and recurrent neural network
to detect forest fire. Multimed. Tools Appl. 2022, 81, 38643–38660. [CrossRef]
174. Ayala, A.; Fernandes, B.; Cruz, F.; Macedo, D.; Zanchettin, C. Convolution Optimization in Fire Classification. IEEE Access 2022,
10, 23642–23658. [CrossRef]
175. Lee, Y.; Shim, J. False Positive Decremented Research for Fire and Smoke Detection in Surveillance Camera using Spatial and
Temporal Features Based on Deep Learning. Electronics 2019, 8, 1167. [CrossRef]
176. Higa, L. Active Fire Mapping on Brazilian Pantanal Based on Deep Learning and CBERS 04A Imagery. Remote Sens. 2022, 14, 688.
[CrossRef]
177. Wu, X.; Cao, Y.; Lu, X.; Leung, H. Patchwise dictionary learning for video forest fire smoke detection in wavelet domain. Neural
Comput. Appl. 2021, 33, 7965–7977. [CrossRef]
178. Wang, S.; Zhao, J.; Ta, N.; Zhao, X.; Xiao, M.; Wei, H. A real-time deep learning forest fire monitoring algorithm based on an
improved Pruned plus KD model. J. Real-Time Image Process. 2021, 18, 2319–2329. [CrossRef]
179. Sathishkumar, V.E.; Cho, J.; Subramanian, M.; Naren, O.S. Forest fire and smoke detection using deep learning-based learning
without forgetting. Fire Ecol. 2023, 19, 9. [CrossRef]
180. Chen, Y.; Li, J.; Sun, K.; Zhang, Y. A lightweight early forest fire and smoke detection method. J. Supercomput. 2024, 80, 9870–9893.
[CrossRef]
181. Wang, A.; Liang, G.; Wang, X.; Song, Y. Application of the YOLOv6 Combining CBAM and CIoU in Forest Fire and Smoke
Detection. Forests 2023, 14, 2261. [CrossRef]
182. Li, J.; Xu, R.; Liu, Y. An improved forest fire and smoke detection model based on yolov5. Forests 2023, 14, 833. [CrossRef]
183. Sun, B.; Wang, Y.; Wu, S. An efficient lightweight CNN model for real-time fire smoke detection. J. Real-Time Image Process.
2023, 20, 74. [CrossRef]
184. Bahhar, C.; Ksibi, A.; Ayadi, M.; Jamjoom, M.M.; Ullah, Z.; Soufiene, B.O.; Sakli, H. Wildfire and smoke detection using staged
YOLO model and ensemble CNN. Electronics 2023, 12, 228. [CrossRef]
185. Zhao, J.; Zhang, Z.; Liu, S.; Tao, Y.; Liu, Y. Design and Research of an Articulated Tracked Firefighting Robot. Sensors 2022,
22, 5086. [CrossRef]
186. Rodriguez-Sanchez, M.; Fernandez-Jimenez, L.; Jimenez, A.; Vaquero, J.; Borromeo, S.; Lazaro-Galilea, J. HelpResponder-System
for the Security of First Responder Interventions. Sensors 2021, 21, 2614. [CrossRef]
187. Radha, D.; Kumar, M.; Telagam, N.; Sabarimuthu, M. Smart Sensor Network-Based Autonomous Fire Extinguish Robot Using
IoT. Int. J. Online Biomed. Eng. 2021, 17, 101–110. [CrossRef]
188. Guo, A.; Jiang, T.; Li, J.; Cui, Y.; Li, J.; Chen, Z. Design of a small wheel-foot hybrid firefighting robot for infrared visual fire
recognition. Mech. Based Des. Struct. Mach. 2021, 51, 4432–4451. [CrossRef]
189. Yahaya, I.; Yeong, G.; Zhang, L.; Raghavan, V.; Mahyuddin, M. Autonomous Safety Mechanism for Building: Fire Fighter Robot
with Localized Fire Extinguisher. Int. J. Integr. Eng. 2020, 12, 304–313.
190. Ferreira, L.; Coimbra, A.; Almeida, A. Autonomous System for Wildfire and Forest Fire Early Detection and Control. Inventions
2020, 5, 41. [CrossRef]
191. Aliff, M.; Yusof, M.; Sani, N.; Zainal, A. Development of Fire Fighting Robot (QRob). Int. J. Adv. Comput. Sci. Appl. 2019,
10, 142–147. [CrossRef]
192. Bushnaq, O.; Chaaban, A.; Al-Naffouri, T. The Role of UAV-IoT Networks in Future Wildfire Detection. IEEE Internet Things J.
2021, 8, 16984–16999. [CrossRef]
193. Cruz, H.; Eckert, M.; Meneses, J.; Martinez, J. Efficient Forest Fire Detection Index for Application in Unmanned Aerial Systems
(UASs). Sensors 2016, 16, 893. [CrossRef]
194. Yandouzi, M.; Grari, M.; Berrahal, M.; Idrissi, I.; Moussaoui, O.; Azizi, M.; Ghoumid, K.; Elmiad, A.K. Investigation of combining
deep learning object recognition with drones for forest fire detection and monitoring. Int. J. Adv. Comput. Sci. Appl 2023,
14, 377–384. [CrossRef]
195. Namburu, A.; Selvaraj, P.; Mohan, S.; Ragavanantham, S.; Eldin, E.T. Forest fire identification in uav imagery using x-mobilenet.
Electronics 2023, 12, 733. [CrossRef]
196. Rui, X.; Li, Z.; Zhang, X.; Li, Z.; Song, W. A RGB-Thermal based adaptive modality learning network for day–night wildfire
identification. Int. J. Appl. Earth Obs. Geoinf. 2023, 125, 103554. [CrossRef]
197. Choutri, K.; Lagha, M.; Meshoul, S.; Batouche, M.; Bouzidi, F.; Charef, W. Fire Detection and Geo-Localization Using UAV’s
Aerial Images and Yolo-Based Models. Appl. Sci. 2023, 13, 11548. [CrossRef]
198. Pena, P.F.; Ragab, A.R.; Luna, M.A.; Ale Isaac, M.S.; Campoy, P. WILD HOPPER: A heavy-duty UAV for day and night firefighting
operations. Heliyon 2022, 8, e09588. [CrossRef]
199. Aydin, B.; Selvi, E.; Tao, J.; Starek, M.J. Use of fire-extinguishing balls for a conceptual system of drone-assisted wildfire fighting.
Drones 2019, 3, 17. [CrossRef]
200. Soliman, A.M.S.; Cagan, S.C.; Buldum, B.B. The design of a rotary-wing unmanned aerial vehicles-payload drop mechanism for
fire-fighting services using fire-extinguishing balls. Appl. Sci. 2019, 9, 1259. [CrossRef]
201. Roldán-Gómez, J.J.; González-Gironda, E.; Barrientos, A. A survey on robotic technologies for forest firefighting: Applying drone
swarms to improve firefighters’ efficiency and safety. Appl. Sci. 2021, 11, 363. [CrossRef]
202. Zhu, J.; Pan, L.; Zhao, G. An Improved Near-Field Computer Vision for Jet Trajectory Falling Position Prediction of Intelligent
Fire Robot. Sensors 2020, 20, 7029. [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.