A Novel Pairwise Based Convolutional Neural Network For Image Preprocessing Enhancement
Corresponding Author:
Chaitra Ravi
Department of Computer Science and Engineering, Ramaiah Institute of Technology
Belagavi, India
Email: [email protected]
1. INTRODUCTION
Wildfire is one of the most critical and widespread natural disasters globally, causing severe environmental and economic losses to forest resources and human society [1]. Wildfires spread quickly and are hard to control, causing damage to the surroundings, humans, and resources [2]. Due to the escalating frequency of wildfires attributed to climate change, fire management grapples with substantial annual expenses [3]. Wildfires occur worldwide, with forest fires posing a considerable risk and the potential for extensive destruction that impacts social and economic progress [4]. Unlike other types of fires, forest fires inflict significant damage due to their unique environmental context [5]. In an open environment with sufficient oxygen, fires are much more likely to start and spread in forests, leading to serious safety risks and economic loss [6]. Early fire detection is an efficient way to minimize harmful forest fires; hence, research on early forest fire detection and warning attracts wide attention [7].
Wildfire detection techniques are classified into three types: sensor-based detection methods, image processing techniques, and object detection techniques [8], [9]. Fire alarm sensors such as temperature, optical, infrared, and gas sensors attain superior performance; however, the high cost of wide-area coverage and their vulnerability to external factors such as wind and ambient temperature limit their use in outdoor environments [10], [11]. Image processing techniques detect fires using manually chosen fire features [12]. Although widely used, these techniques extract only simple features and lack scalability, making them insufficient in critical wildfire detection scenes [13], [14]. Manually selecting every fire feature demands considerable time and expertise, limiting practical application in wildfire detection [15], [16].
Wang et al. [17] implemented a Reduce VGGNet for wildfire image classification and an optimized convolutional neural network (CNN) for wildfire detection using the fire luminosity airborne-based machine learning evaluation (FLAME) dataset. The method is divided into two stages: the Reduce VGGNet was utilized for wildfire image classification, and the optimized CNN was utilized for wildfire detection through the integration of temporal and spatial features. The method converged quickly; even so, it struggled with noisy data such as smoke and clouds.
Zhang et al. [18] introduced FBC-ANet, a lightweight encoder-decoder structure that combines boundary enhancement and context-aware modules for wildfire detection using the FLAME dataset. The method extracted deep semantic features from the dataset and improved shallow edge features. An Xception network was used in the encoder to extract features at various scales from the dataset. Then, by passing the extracted features through the CIA module, the method's feature-learning capacity for fire pixels was improved, making feature extraction more robust. The decoder was combined with a boundary enhancement module (BEM) to improve shallow edge feature extraction. The method found small fire areas and segmented complex backgrounds more accurately. Even so, the technique was not directly applicable to unmanned aerial vehicle (UAV) images.
Zhang et al. [19] developed a fine-tuned ResNet-50 (FT-ResNet50) method based on transfer learning for wildfire detection on the FLAME dataset. The method transferred pretrained weights and initialized the parameters for wildfire identification. Combined with the features of the target dataset, the Adam optimizer was utilized to fine-tune three convolutional blocks of ResNet, and the focal loss function and network architecture were updated to optimize the ResNet network for extracting efficient semantic information from the dataset. However, the method needed a huge number of high-quality images to obtain good detection results.
Jain et al. [20] suggested a color-based technique for wildfire detection using the CIELAB color space on the FLAME dataset. The technique detected fire based on fire color in the CIELAB color space, and a CNN was trained to detect fire. CNNs and image processing have complementary strengths, which were integrated to develop an ensemble model. The method utilized two CNNs and the CIELAB method and performed majority voting to predict whether or not fire was present. Since the method performed pixel-wise classification of fire, it could be used for segmentation of fire flames, and it reduced training time. However, the method ignored certain images in the detection process.
Chen et al. [21] presented a CNN for wildfire detection on the FLAME dataset. The method developed a hybrid model comprising a deep learning classifier and fire localization for frames labeled as fire, using paired aerial imagery. The deep learning-based methods were implemented with red, green, and blue (RGB)-thermal fusion for efficient fire detection. The method had fewer parameters and required less training time. However, it had a higher chance of misclassification because it was only suitable for certain weather and lighting conditions.
Ghali et al. [22] implemented an ensemble method that integrated EfficientNet-B5 and DenseNet-201 for the detection and classification of wildfires on the FLAME dataset. The method optimized the deep learning models for detecting wildfire in the early phase. Furthermore, the vision transformers TransUNet and TransFire, along with EfficientSeg, were applied for wildfire region segmentation to identify exact fire regions. The method alleviated the vanishing-gradient problem, enhanced feature propagation, and reduced the number of parameters. However, it lacked fire-annotated image samples.
Wang et al. [23] introduced a novel method for wildfire detection in complex scenes named FireDetn. FireDetn utilized four different detection heads to detect flame objects of various sizes. Next, transformer encoder blocks with multi-head attention were combined into FireDetn to improve its capacity to capture global features and contextual information, which enhanced the average precision in complex scenes. Lastly, integrating the spatial pyramid pooling architecture was advantageous for detecting multi-scale flame objects. The technique attained a balance between average precision and detection speed. Yet, it was constrained by limited energy resources and data-processing requirements.
Wang et al. [24] developed a class activation map (CAM)-based non-local attention method for wildfire detection. The method explored weakly supervised fire detection (WSFD) that requires only image-level annotations. Specifically, a deep neural network with non-local attention was trained as a classifier to distinguish fire and non-fire images. Next, the classifier was utilized to generate a CAM for every fire image in the inference phase, and a corresponding bounding box was produced for each connected CAM region. The method accurately detected wildfires in images. However, the CAM was noisy and lost certain spatial information.
In recent times, deep learning-based detection techniques have enhanced the capacity to automatically learn and extract complex features from image datasets. The existing methods have limitations such as struggling with noisy data, requiring a huge quantity of high-quality images to obtain high detection accuracy, and ignoring certain images in the detection process. Moreover, existing methods do not detect small regions of fire in dataset images.
In this research, a new region-based method is proposed for wildfire detection. The dataset used for this research is the FLAME dataset, and normalization and hue, saturation, and value (HSV) color space techniques are applied for data pre-processing. The pairwise region-based convolutional neural network (PR-CNN) is proposed to detect wildfire, and soft non-maximum suppression (soft-NMS), a post-processing technique, is utilized to eliminate duplicate detections from the PR-CNN output. The contributions of this research are as follows: i) the normalization and HSV techniques are used for image pre-processing, which improves image quality and converts RGB images to HSV images; ii) a pairwise region proposal method based on the selective search algorithm is used for the segmentation process in PR-CNN to detect wildfires; and iii) a post-processing technique, soft-NMS, is utilized to eliminate duplicate detections and choose the most relevant bounding boxes corresponding to the detected fires.
This research paper is organized as follows: section 2 describes the proposed method, section 3 explains the research methodology, section 4 provides the results and discussion, and section 5 concludes this research.
2. PROPOSED METHODOLOGY
In this research, a new region-based method is proposed for wildfire detection. The FLAME dataset is used for this research and is given as input to the normalization and HSV color space techniques for data pre-processing. After that, PR-CNN is applied to detect wildfire, and the detected image is passed to soft-NMS, a post-processing technique that eliminates duplicate detections from the PR-CNN method. Figure 1 depicts the process of the proposed wildfire detection method.
2.1. Dataset
The dataset used for wildfire detection in this research is the FLAME dataset [25]. It is a fire video dataset gathered by various types of cameras mounted on UAVs during pile burning in an Arizona pine forest. The dataset contains raw videos, image files, and mask data. Table 1 describes the various data in the FLAME dataset, and sample images from the FLAME dataset are shown in Figure 2.
2.2. Pre-processing
Pre-processing is an essential phase in the image processing field through which the quality of images in the dataset is improved. The pre-processing methods used are normalization and HSV color space conversion. Normalization is a procedure that rescales the range of pixel intensity values [26]. The HSV representation is less affected by variations in lighting conditions, and its separation of color components gives more reliable and accurate color-based image detection [27].
2.2.1. Normalization
Normalization is a data pre-processing technique that is helpful for training and testing the region-based convolutional neural network (R-CNN). In the dataset, every image pixel has RGB color intensities ranging from 0 to 255. Each input image is scaled to the range of 0 to 1. The numerical expression for normalization is given in (1).

$x_i' = \frac{x_i - x_{min}}{x_{max} - x_{min}}$ (1)

where $x_i$ denotes the input data, $x_i'$ denotes the output data, $x_{min}$ refers to the minimum possible value (0), and $x_{max}$ refers to the maximum possible value (255).
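As a minimal illustration of (1) (not the authors' code; the function name and the use of NumPy are assumptions), the min-max normalization of an 8-bit image can be sketched as follows:

```python
import numpy as np

def normalize_image(image: np.ndarray) -> np.ndarray:
    """Scale 8-bit pixel intensities (0-255) to the range [0, 1], as in (1)."""
    x = image.astype(np.float32)
    x_min, x_max = 0.0, 255.0  # minimum and maximum possible pixel values
    return (x - x_min) / (x_max - x_min)
```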
2.2.2. HSV color space
After normalization, each RGB image is converted to the HSV color space; the conversion is given in (2).

$H = \begin{cases} 0^{\circ} & \text{if } max = min \\ \left(60^{\circ} \times \frac{g-b}{max-min} + 0^{\circ}\right) \bmod 360^{\circ} & \text{if } max = r \\ 60^{\circ} \times \frac{b-r}{max-min} + 120^{\circ} & \text{if } max = g \\ 60^{\circ} \times \frac{r-g}{max-min} + 240^{\circ} & \text{if } max = b \end{cases}$

$S = \begin{cases} 0 & \text{if } max = 0 \\ \frac{max-min}{max} = 1 - \frac{min}{max} & \text{otherwise} \end{cases}$

$V = max$ (2)

where $r$, $g$, and $b$ are the red, green, and blue channel values, and $max$ and $min$ denote the largest and smallest of the three channel values.
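For illustration only (the paper does not specify an implementation), a per-pixel sketch of the conversion in (2) is given below, assuming channel values already normalized to [0, 1]; in practice a library routine such as OpenCV's cvtColor can be used instead.

```python
def rgb_to_hsv(r: float, g: float, b: float):
    """Convert one RGB pixel with channels in [0, 1] to (H, S, V) per (2)."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:                         # grey pixel: hue undefined, set to 0
        h = 0.0
    elif mx == r:
        h = (60.0 * (g - b) / (mx - mn)) % 360.0
    elif mx == g:
        h = 60.0 * (b - r) / (mx - mn) + 120.0
    else:                                # mx == b
        h = 60.0 * (r - g) / (mx - mn) + 240.0
    s = 0.0 if mx == 0 else 1.0 - mn / mx
    v = mx
    return h, s, v
```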
Figure 3. PR-CNN
The pairwise predicate measures the dissimilarity between elements along the boundary of two components relative to the dissimilarity among neighboring elements within each of the two components. The predicate therefore compares inter-component variation to intra-component variation and is adaptive with respect to the local characteristics of the data.
The internal difference of a component $C \subseteq V$ is the largest edge weight in the minimum spanning tree of the component, $MST(C, E)$. The internal difference is defined in (3), and the difference between two components is defined in (4).

$Int(C) = \max_{e \in MST(C, E)} w(e)$ (3)

$Dif(C_1, C_2) = \min_{v_i \in C_1, v_j \in C_2, (v_i, v_j) \in E} w((v_i, v_j))$ (4)
where $Dif(C_1, C_2) = \infty$ when there is no edge connecting $C_1$ and $C_2$. This measure of difference is problematic in principle, since it reflects only the smallest edge weight between the two components. The definition could be modified to use the median weight to make it more robust to outliers, but even a small modification changes the behavior of the segmentation technique considerably. The region comparison predicate evaluates whether there is evidence for a boundary between a pair of components by checking whether the difference between the components, $Dif(C_1, C_2)$, is large relative to their internal differences. A threshold function is utilized for controlling the degree to which the difference must be larger. The pairwise comparison predicate is given in (5), where $MInt$ represents the minimum internal difference, defined in (6).

$D(C_1, C_2) = \begin{cases} true & \text{if } Dif(C_1, C_2) > MInt(C_1, C_2) \\ false & \text{otherwise} \end{cases}$ (5)

$MInt(C_1, C_2) = \min\left(Int(C_1) + \tau(C_1), Int(C_2) + \tau(C_2)\right)$ (6)

Here, $\tau$ is the threshold function that controls the degree to which the difference between two components must be greater than their internal differences for a boundary to be declared between them. For small components, $Int(C)$ is not a good estimate of the local characteristics of the data; in the extreme case, when $|C| = 1$, $Int(C) = 0$. Therefore, a threshold function based on the component size is used, as given in (7).

$\tau(C) = k / |C|$ (7)
where $|C|$ represents the size of $C$, $k$ is a constant parameter, and $\tau$ is the threshold function. Small components are then allowed only when there is a sufficiently large difference between neighboring components. Any non-negative function of a single component can be used for $\tau$ without changing the algorithmic results. For example, the segmentation can be made to prefer components of certain shapes by defining a $\tau$ that is large for components that do not fit the desired shape. The pairwise region proposal algorithm accordingly merges the boxes that are small or of non-desired shape: smaller regions are combined into larger regions, and the method considers three types of similarity, namely color, texture, and size, for combining the regions. This selective search produces approximately 2,000 region proposals, which are fed into the CNN architecture to compute the CNN features.
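To make the merging criterion in (5) to (7) concrete, the following sketch (an illustrative assumption, not the authors' implementation) evaluates the pairwise boundary predicate for two components given their internal differences, sizes, and the minimum weight of an edge connecting them; two components are merged when no boundary is found.

```python
def tau(size: int, k: float = 300.0) -> float:
    """Size-based threshold function of (7); k is a tunable scale parameter."""
    return k / size

def has_boundary(dif_c1_c2: float,
                 int_c1: float, size_c1: int,
                 int_c2: float, size_c2: int,
                 k: float = 300.0) -> bool:
    """Pairwise comparison predicate of (5): declare a boundary when the
    smallest connecting edge weight exceeds the minimum internal
    difference MInt of (6)."""
    m_int = min(int_c1 + tau(size_c1, k), int_c2 + tau(size_c2, k))
    return dif_c1_c2 > m_int

# Example: the two components below are merged, since no boundary is found.
merge = not has_boundary(dif_c1_c2=5.0, int_c1=10.0, size_c1=80,
                         int_c2=9.0, size_c2=120)
```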
To eliminate duplicate detections, the NMS technique, which sets an intersection over union (IoU) threshold for a particular class, is utilized: the bounding box M with the highest score is chosen from the generated series of bounding boxes B and kept in the final detection output R. The NMS technique repeats this process until B is empty and finally returns R. The mathematical formula of NMS is given in (8).

$s_i = \begin{cases} s_i & IoU(M, B_i) < p \\ 0 & IoU(M, B_i) \geq p \end{cases}$ (8)
where $p$ represents the IoU threshold, $s_i$ represents the score after NMS, and $M$ and $B_i$ represent bounding boxes. As shown in (8), the NMS technique directly sets the score of adjacent bounding boxes of the same class to 0, which causes certain overlapping objects to be missed. Therefore, the soft-NMS technique, an enhancement of NMS that rescores bounding boxes, is utilized. If a bounding box overlaps heavily with M, it is given a lower score; if the degree of overlap is low, the score remains unchanged. The mathematical formula of soft-NMS is given in (9).
$s_i = \begin{cases} s_i & IoU(M, B_i) < p \\ s_i \times (1 - IoU(M, B_i)) & IoU(M, B_i) \geq p \end{cases}$ (9)
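A minimal sketch of the soft-NMS rescoring in (9) is shown below; the box layout ([x1, y1, x2, y2] with one confidence score each) is an assumption for illustration, and for brevity the ranking is not re-sorted after each decay step, as a full implementation would do.

```python
import numpy as np

def iou(box_a: np.ndarray, box_b: np.ndarray) -> float:
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = np.maximum(box_a[:2], box_b[:2])
    x2, y2 = np.minimum(box_a[2:], box_b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes: np.ndarray, scores: np.ndarray, p: float = 0.5) -> np.ndarray:
    """Rescore boxes per (9): decay the scores of boxes overlapping the
    current highest-scoring box M instead of discarding them outright."""
    scores = scores.copy()
    order = np.argsort(scores)[::-1]          # highest score first
    for idx, m in enumerate(order):           # m plays the role of M
        for b in order[idx + 1:]:
            overlap = iou(boxes[m], boxes[b])
            if overlap >= p:
                scores[b] *= (1.0 - overlap)  # linear decay of (9)
    return scores
```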
Meanwhile, the bounding boxes with high scores that overlap the highest-scoring box M are re-merged: the positions of the re-merged bounding boxes are averaged, weighted by their respective scores, with the original bounding box. Figure 4 represents the results of the detected wildfire images in the FLAME dataset.
The performance of the proposed method is evaluated using accuracy, precision, recall, and F1 measure, defined in (10) to (13).

$Accuracy = \frac{TruePositive + TrueNegative}{Total\ Instances}$ (10)

$Precision = \frac{TruePositive}{Instances\ predicted\ as\ True}$ (11)

$Recall = \frac{TruePositive}{Actual\ number\ of\ Instances\ that\ are\ True}$ (12)

$F1\ Measure = \frac{2 \times Precision \times Recall}{Precision + Recall}$ (13)
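As a brief illustration (the paper does not specify its evaluation tooling; scikit-learn is an assumption), the four metrics in (10) to (13) can be computed from binary labels as follows:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical binary labels: 1 = fire, 0 = no fire.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("Accuracy :", accuracy_score(y_true, y_pred))   # (10)
print("Precision:", precision_score(y_true, y_pred))  # (11)
print("Recall   :", recall_score(y_true, y_pred))     # (12)
print("F1       :", f1_score(y_true, y_pred))         # (13)
```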
This research utilized the soft-NMS post-processing technique, and its performance is evaluated against other post-processing techniques, namely robust and efficient post-processing (REPP), NMS, and adaptive non-maximum suppression (ANMS). The soft-NMS technique obtains an accuracy of 92.03%, precision of 91.83%, recall of 91.58%, and F1-score of 90.98%, which is superior to the existing techniques. Figure 6 represents the performance of the post-processing techniques.
The performance of the proposed method with post-processing is evaluated against other classification methods that utilize post-processing techniques. The existing methods R-CNN-REPP, R-CNN-NMS, and R-CNN-ANMS are used for evaluating the proposed method. The proposed PR-CNN with soft-NMS attains an accuracy of 97.44%, precision of 97.32%, recall of 97.31%, and F1-score of 96.67%, which is superior to the existing techniques. Figure 7 illustrates the performance of the proposed method with the post-processing technique.
From Figure 7, it is evident that the proposed PR-CNN method outperforms the other existing methods by accomplishing a higher accuracy of 97.44%. The post-processing technique utilized in this research also performs well, attaining an accuracy of 92.03%, which is higher than that of the existing techniques. From the above experimental analysis, it is evident that the proposed method outperforms other methods in terms of accuracy, precision, recall, and F1-score.
4.3. Discussion
In recent times, deep learning-based detection techniques have enhanced the capacity to automatically learn and extract complex features from image datasets. The existing methods have limitations such as struggling with noisy data, requiring a huge quantity of high-quality images to obtain high detection accuracy, and ignoring certain images in the detection process. Moreover, they do not find small regions of fire in dataset images. This section illustrates the proposed method's advantages and the existing methods' limitations. The existing Reduce VGGNet model [17] struggled with noisy data. The FBC-ANet [18] method needed a huge quantity of high-quality images to obtain good detection results. The ensemble method [22] did not identify small regions of fire in the image. To overcome these limitations, PR-CNN is proposed, with a pairwise algorithm in the region proposal layer of R-CNN. The proposed method effectively detects wildfire in images and merges the small regions which are of non-desired shape. It obtains a superior accuracy of 97.44%, precision
of 97.32%, recall of 97.31%, and F1-score of 96.67%, thereby being comparatively superior to existing algorithms like Reduce VGGNet [17], FBC-ANet [18], and the ensemble method [22].
5. CONCLUSION
Wildfire detection is the process of identifying fire pixels according to the temperature difference between the energy emitted by the surface and the ambient temperature. In this manuscript, a PR-CNN is proposed for wildfire detection. The dataset used for this research is the FLAME dataset, and normalization and HSV color space techniques are utilized as pre-processing methods to improve image quality. The detection process is performed by R-CNN with a pairwise algorithm in the region proposal layer. After detecting the wildfire in the dataset images, the PR-CNN output is given as input to the soft-NMS post-processing technique to eliminate duplicate detections in the PR-CNN results. The proposed method efficaciously detects wildfire in images more accurately, including small fire regions. In the future, various CNN-based methods and pre-processing methods can be developed to improve the accuracy and speed of wildfire region detection.
REFERENCES
[1] F. Khennou and M. A. Akhloufi, “Improving wildland fire spread prediction using deep U-Nets,” Science of Remote Sensing, vol.
8, 2023, doi: 10.1016/j.srs.2023.100101.
[2] A. Zhang and A. S. Zhang, “Real-time wildfire detection and alerting with a novel machine learning approach: a new systematic
approach of using convolutional neural network (CNN) to achieve higher accuracy in automation,” International Journal of
Advanced Computer Science and Applications, vol. 13, no. 8, pp. 1–6, 2022, doi: 10.14569/IJACSA.2022.0130801.
[3] M. Grari, I. Idrissi, M. Boukabous, O. Moussaoui, M. Azizi, and M. Moussaoui, “Early wildfire detection using machine learning
model deployed in the fog/edge layers of IoT,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 27, no. 2,
pp. 1062–1073, 2022, doi: 10.11591/ijeecs.v27.i2.pp1062-1073.
[4] A. Khan, B. Hassan, S. Khan, R. Ahmed, and A. Abuassba, “Deepfire: a novel dataset and deep transfer learning benchmark for
forest fire detection,” Mobile Information Systems, vol. 2022, 2022, doi: 10.1155/2022/5358359.
[5] M. Prakash, S. Neelakandan, M. Tamilselvi, S. Velmurugan, S. B. Priya, and E. O. Martinson, “Deep learning-based wildfire image
detection and classification systems for controlling biomass,” International Journal of Intelligent Systems, vol. 2023, 2023, doi:
10.1155/2023/7939516.
[6] S. T. Seydi, V. Saeidi, B. Kalantar, N. Ueda, and A. A. Halin, “Fire-net: a deep learning framework for active forest fire detection,”
Journal of Sensors, vol. 2022, 2022, doi: 10.1155/2022/8044390.
[7] J. S. Almeida, C. Huang, F. G. Nogueira, S. Bhatia, and V. H. C. D. Albuquerque, “Edgefiresmoke: a novel lightweight CNN model
for real-time video fire-smoke detection,” IEEE Transactions on Industrial Informatics, vol. 18, no. 11, pp. 7889–7898, 2022, doi:
10.1109/TII.2021.3138752.
[8] R. A. Aral, C. Zalluhoglu, and E. Akcapinar Sezer, “Lightweight and attention-based CNN architecture for wildfire detection using
UAV vision data,” International Journal of Remote Sensing, vol. 44, no. 18, pp. 5768–5787, 2023, doi:
10.1080/01431161.2023.2255349.
[9] H. Yar, T. Hussain, M. Agarwal, Z. A. Khan, S. K. Gupta, and S. W. Baik, “Optimized dual fire attention network and medium-
scale fire classification benchmark,” IEEE Transactions on Image Processing, vol. 31, pp. 6331–6343, 2022, doi:
10.1109/TIP.2022.3207006.
[10] F. M. Talaat and H. ZainEldin, “An improved fire detection approach based on YOLO-v8 for smart cities,” Neural Computing and
Applications, vol. 35, no. 28, pp. 20939–20954, 2023, doi: 10.1007/s00521-023-08809-1.
[11] S. Jana and S. K. Shome, “Hybrid ensemble-based machine learning for smart building fire detection using multi modal sensor
data,” Fire Technology, vol. 59, no. 2, pp. 473–496, 2023, doi: 10.1007/s10694-022-01347-7.
[12] R. Zhang, W. Zhang, Y. Liu, P. Li, and J. Zhao, “An efficient deep neural network with color-weighted loss for fire detection,”
Multimedia Tools and Applications, vol. 81, no. 27, pp. 39695–39713, 2022, doi: 10.1007/s11042-022-12861-9.
[13] G. Tzoumas, L. Pitonakova, L. Salinas, C. Scales, T. Richardson, and S. Hauert, “Wildfire detection in large-scale environments
using force-based control for swarms of UAVs,” Swarm Intelligence, vol. 17, no. 1–2, pp. 89–115, 2023, doi: 10.1007/s11721-022-
00218-9.
[14] K. Ahmad et al., “FireXnet: an explainable AI-based tailored deep learning model for wildfire detection on resource-constrained
devices,” Fire Ecology, vol. 19, no. 1, 2023, doi: 10.1186/s42408-023-00216-0.
[15] M. Mukhiddinov, A. B. Abdusalomov, and J. Cho, “A wildfire smoke detection system using unmanned aerial vehicle images based
on the optimized YOLOv5,” Sensors, vol. 22, no. 23, 2022, doi: 10.3390/s22239384.
[16] T. Bhandarkar, V. K, N. Satish, S. Sridhar, R. Sivakumar, and S. Ghosh, “Earthquake trend prediction using long short-term memory
RNN,” International Journal of Electrical and Computer Engineering (IJECE), vol. 9, no. 2, pp. 1304-1312, 2019, doi:
10.11591/ijece.v9i2.pp1304-1312.
[17] L. Wang, H. Zhang, Y. Zhang, K. Hu, and K. An, “A deep learning-based experiment on forest wildfire detection in machine vision
course,” IEEE Access, vol. 11, pp. 32671–32681, 2023, doi: 10.1109/ACCESS.2023.3262701.
[18] L. Zhang, M. Wang, Y. Ding, T. Wan, B. Qi, and Y. Pang, “FBC-ANet: a semantic segmentation model for UAV forest fire images
combining boundary enhancement and context awareness,” Drones, vol. 7, no. 7, 2023, doi: 10.3390/drones7070456.
[19] L. Zhang, M. Wang, Y. Fu, and Y. Ding, “A forest fire recognition method using UAV images based on transfer learning,” Forests,
vol. 13, no. 7, 2022, doi: 10.3390/f13070975.
[20] Y. Jain, V. Saxena, and S. Mittal, “Ensembling deep learning and CIELAB color space model for fire detection from UAV images,”
ACM International Conference Proceeding Series, 2022, doi: 10.1145/3564121.3564130.
[21] X. Chen et al., “Wildland fire detection and monitoring using a drone-collected RGB/IR image dataset,” IEEE Access, vol. 10, pp.
121301–121317, 2022, doi: 10.1109/ACCESS.2022.3222805.
[22] R. Ghali, M. A. Akhloufi, and W. S. Mseddi, “Deep learning and transformer approaches for UAV-based wildfire detection and
segmentation,” Sensors, vol. 22, no. 5, 2022, doi: 10.3390/s22051977.
[23] X. Wang, Z. Pan, H. Gao, N. He, and T. Gao, “An efficient model for real-time wildfire detection in complex scenarios based on
multi-head attention mechanism,” Journal of Real-Time Image Processing, vol. 20, no. 4, 2023, doi: 10.1007/s11554-023-01321-8.
[24] W. Wang, L. Lai, J. Chen, and Q. Wu, “CAM-based non-local attention network for weakly supervised fire detection,” Service
Oriented Computing and Applications, vol. 16, no. 2, pp. 133–142, 2022, doi: 10.1007/s11761-022-00336-6.
[25] A. Shamsoshoara, F. Afghah, A. Razi, L. Zheng, P. Fulé, and E. Blasch, “The FLAME dataset: aerial imagery pile burn detection
using drones (UAVs),” IEEE Dataport, 2020, doi: 10.21227/qad6-r683.
[26] A. K. Z. R. Rahman, S. M. N. Sakif, N. Sikder, M. Masud, H. Aljuaid, and A. K. Bairagi, “Unmanned aerial vehicle assisted forest
fire detection using deep convolutional neural network,” Intelligent Automation and Soft Computing, vol. 35, no. 3, pp. 3259–3277,
2023, doi: 10.32604/iasc.2023.030142.
[27] J. Ryu and D. Kwak, “A method of detecting candidate regions and flames based on deep learning using color-based pre-
processing,” Fire, vol. 5, no. 6, 2022, doi: 10.3390/fire5060194.
BIOGRAPHIES OF AUTHORS
Siddesh Gaddadevara Matt has completed his Ph.D. in Computer Science and
Engineering. Presently, he is a Professor and Head in the Department of Computer Science
and Engineering (Artificial Intelligence and Machine Learning) at MS Ramaiah Institute of
Technology in Bangalore. His significant research output is evident in the numerous papers
he has published at both international conferences and journals. He is an expert in the fields
of Data Science and Distributed Computing. His research work has been published in more
than 50 peer-reviewed international journals and conferences. He has authored several books in
his areas of expertise that have gained widespread recognition and appreciation. His
contributions to the field of computer science have been significant, and he continues to
inspire and mentor students and researchers in the pursuit of excellence. He can be contacted
at email: [email protected].