A Deep Learning-Based Experiment On Forest Wildfire Detection in Machine Vision Course
ABSTRACT As an interdisciplinary course, Machine Vision combines AI and digital image processing methods. This paper develops a comprehensive experiment on forest wildfire detection that organically integrates digital image processing, machine learning and deep learning technologies. Although research on wildfire detection has made great progress, many experiments are not suitable for students to operate, and detection with high accuracy remains a big challenge. In this paper, we divide the task of forest wildfire detection into two modules: wildfire image classification and wildfire region detection. We propose a novel wildfire image classification algorithm based on Reduce-VGGNet, and a wildfire region detection algorithm based on an optimized CNN that combines spatial and temporal features. The experimental results show that the proposed Reduce-VGGNet model can reach 91.20% in accuracy, and the optimized CNN model with the combination of spatial and temporal features can reach 97.35% in accuracy. Our framework is a novel way to combine research and teaching: it achieves good detection performance and can be used as a comprehensive experiment for the Machine Vision course, providing support for talent cultivation in the machine vision area.
INDEX TERMS Machine vision, computer science education, wildfire detection, comprehensive
experiment, CNN.
The associate editor coordinating the review of this manuscript and approving it for publication was Senthil Kumar.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (https://creativecommons.org/licenses/by-nc-nd/4.0/). VOLUME 11, 2023.

I. INTRODUCTION
With the rapid development of computer technology and the popularity of cameras, machine vision technology based on artificial intelligence (AI) and digital image processing has been applied to an increasing number of fields, such as face detection [1], wildfire detection [2], object measurement [3] and surface defect detection [4]. As an interdisciplinary course, Machine Vision combines AI and digital image processing. With the development of AI, machine vision can replace human beings with intelligent programs for some automated operations and measurements [5]. A complete machine vision system includes a camera and an image processing device. The camera first obtains the images, then the target object is recognized through the computer's visual recognition algorithm, and finally the image processing device outputs the target recognition result through the terminal [3]. At present, machine vision has become one of the essential skills of image and video processing practitioners, and is also an important professional course in intelligent manufacturing, computer science and technology, and other majors. With the rapid development of AI in recent years, there is an increasing demand for talents in two main application fields: natural language processing and digital image processing.
In recent years, experts at home and abroad have been exploring the reform of the Machine Vision course. For example, Min and Lu [6] focused on the production practice
and proposed multimedia teaching and guided interactive teaching. They also suggested that experiments should not only be closely related to classroom teaching, but also accord with practical application needs and arouse students' interest. Wang et al. [7], aiming at the principle and application of machine vision in the postgraduate curriculum, integrated scientific research, teaching and practical projects into the classroom. In this way, students can associate project research with the development of products. Shao et al. [8] designed cocoon sorting in the field of machine vision in order to cultivate intelligent manufacturing talents under the background of new engineering subjects. Han and Liu [9] designed a machine vision experiment platform with multiple modules using the TensorFlow and OpenCV libraries to solve the problems of insufficient experiments related to machine vision, unreasonable experimental design and lack of practical data. The reform of the Machine Vision course in foreign countries focuses on social awareness education and the reform of teaching methods of basic technology. For example, Sigut et al. [10] believed that the teaching theme of machine vision depends on the use of new technologies; to enable students to better understand the concepts related to image processing, they developed an application for the Android operating system that performs real-time presentation of OpenCV image processing technology. Cote and Albu [11] advocated integrating a social awareness module into the Machine Vision course, so as to study both the social impact of technology and the technology itself. Spurlock and Duvall [12], in order to expand the educational audience of machine vision beyond postgraduates and doctoral students, increased the development of practical cases in the field of machine vision applications and reduced the derivation of mathematical formulas to better adapt to undergraduate teaching. From the above reform research, it can be found that most teaching methods focus on the combination of theory and practice, and in practice they focus on how to design experiments that have industrial practicality and can arouse students' interest.

Machine Vision is one of the courses that closely links theory with practice. However, at present, most universities' comprehensive experiments for undergraduates/postgraduates have problems such as outdated design and lack of practicality, and most of them only use traditional machine learning [13], [14]. Although research on wildfire detection has made great progress, detection with high accuracy is still a big challenge. To solve the above problems, this paper designs an automatic forest wildfire detection framework that can also be used as a comprehensive experiment for the Computer Vision course. The framework uses image processing, machine learning, and deep learning technology to achieve automatic detection and annotation of forest wildfire regions, which is a novel way to combine research and teaching. To the best of our knowledge, no previous work has explored the preceding problems to such an extent. The main contributions of this paper can be summarized as follows:

(1) We propose a novel wildfire image classification algorithm based on Reduce-VGGNet, which reduces the training parameters of VGGNet and achieves 91.20% in accuracy.

(2) We propose a novel wildfire region detection algorithm based on an optimized CNN with the combination of spatial and temporal features. The experimental results on the FLAME dataset show the effectiveness of our method.

(3) We combine the wildfire image classification module and the wildfire region detection module into a comprehensive experiment for the Computer Vision course. It is a novel way to effectively combine teaching and scientific research, and incorporates teachers' research into the teaching process.

The rest of our paper is structured as follows. In Section II, we describe the background of the task of wildfire region detection. The design of our framework is presented in Section III. The results and analysis are presented in Section IV. Finally, the conclusion is provided in Section V.

II. EXPERIMENT REQUIREMENTS AND BACKGROUND
A. EXPERIMENT REQUIREMENTS
The comprehensive experiment designed in this paper can be applied to undergraduate/postgraduate Machine Vision courses, so that students can understand the popular technology of machine vision and cultivate their ability to develop novel algorithms for image processing and automatic recognition. Students need to understand the following knowledge: ① Digital image processing. For example, video reading technology is used to read image frames, image enhancement technology is used to expand image contrast, and image segmentation technology is used to segment the target object. ② Machine learning. This includes the basic principles and experimental evaluation of traditional machine learning algorithms, such as SVM (Support Vector Machine) and AdaBoost. ③ Deep learning. This includes the fundamentals, principles and popular deep network models of deep learning, such as VGGNet (Visual Geometry Group Network), CNN (Convolutional Neural Network) and RNN (Recurrent Neural Network).

B. EXPERIMENT BACKGROUND
The occurrence of forest wildfires often causes great damage to national economic property. In recent years, the number of trees near electricity transmission areas has increased dramatically. In addition, extreme weather such as drought has greatly increased the frequency and intensity of forest fires [15]. If machine vision technology can be used to monitor wildfires in real time, it can effectively avoid the serious damage caused by fire spreading, thus effectively reducing the direct losses caused thereby.

Because the efficiency of manual fire detection is extremely low, an increasing number of researchers use modern means to detect fire, such as UAV-IoT [16], satellite remote sensing, wireless sensors, and image fire detectors.
C. RELATED WORKS
There are two ways of flame detection: static and dynamic
flame detection. Static flame detection aims at a single image
and detects the flame region through image processing and
machine learning. The dynamic flame detection aims at a
video image sequence, which uses the static spatial features
and time sequence information of the images to detect the
fire target. The current research about these two detection
methods is introduced below.
(1) Static flame detection. It often detects the flame by
extracting the color and texture features of the image. For
example, Jia and Jiong [19] proposed a method combining
saturation and Otsu threshold segmentation for flame detec-
tion, which was judged based on SVM by combining the
shape features of the flame area with LBP texture analysis.
Tan et al. [20] used RGB and HSI dual color spaces to detect
flame objects. Hossain et al. [21] proposed a novel algorithm capable of detecting both flame and smoke from a single image using block-based color features and texture features. The static detection methods mainly rely on color, texture, or shape features, but there are background noise and interference signals in many images, such as the sun and sunset glow. Therefore, these features cannot effectively detect flame objects, and the detection accuracy is unsatisfactory.

(2) Dynamic flame detection. This method combines the temporal information of video with the static features of the image for detection. Schultze et al. [22] proposed to use spectrograms and acoustic spectrograms to obtain flame features according to flame flicker and movement direction; this method can also monitor the movement direction of the flame. Xie et al. [23] used dynamic features and deep image features for recognition. Shahid et al. [24] obtained candidate flame regions by combining the shape features and the motion stroboscopic features of the flame and then used a classifier to identify them. Zhang et al. [25] improved the target detection network YOLOv5 by combining the static and dynamic features of the flame, and solved the problem of unbalanced positive and negative samples. Yuan et al. [26] detected forest wildfire by combining deep static spatial features with deep dynamic features, and achieved good detection results. Wang et al. [27] proposed to integrate the bottom color features and motion features of the flame in a multistage flame detection method, but failed to detect the flame object in real time. As shown above, most research on flame-based detection focuses on dynamic detection, which can achieve better results than static detection [28].

Besides the above research, lots of studies use machine learning and deep learning algorithms to detect wildfire. Most of the recent detection algorithms use Convolutional Neural Networks (CNNs), including different versions of YOLO, U-Net, and DeepLab [33]. For example, Rashkovetsky et al. [34] proposed a single-input convolutional neural network based on the well-known U-Net architecture to detect wildfire areas in satellite images. Sousa et al. [2] proposed a transfer learning-based method with data augmentation for wildfire early warning. Treneska and Stojkoska [36] also utilized transfer learning by fine-tuning ResNet50 to detect wildfire on UAV-collected images, obtaining 88% accuracy. Jindal et al. [35] utilized algorithms based on YOLOv3 and YOLOv4 to detect forest smoke; the results show that YOLOv3 outperforms YOLOv4 in all evaluation metrics. However, the above methods cannot obtain satisfactory detection results, which may be further improved by parameter optimization and different data augmentation techniques.

FIGURE 1. The framework of forest wildfire region detection.

III. DESIGN OF THE EXPERIMENT
To enable students to master both traditional machine learning and deep learning and to enhance the accuracy of wildfire
detection, this paper divides the forest wildfire detection task into two modules, namely, wildfire image classification and wildfire region detection (see Figure 1). At the same time, we propose a novel wildfire image classification algorithm based on Reduce-VGGNet, and a wildfire region detection algorithm based on an optimized CNN with the combination of spatial and temporal features. The wildfire image classification module extracts the video image frames, and then extracts the shape, texture and color features of the images and normalizes them. Then, we design a wildfire image classification algorithm based on traditional machine learning (SVM) and Reduce-VGGNet. Finally, the wildfire region detection module further annotates the fire regions on the classified wildfire images: it uses the ViBe algorithm to detect candidate fire regions, and an optimized CNN is designed to extract temporal and spatial features for wildfire region detection. This comprehensive experiment is designed hierarchically from different angles, which is consistent with the students' gradual understanding and cognitive process of image processing and image recognition.

A. WILDFIRE IMAGE CLASSIFICATION
The purpose of this module is to let students understand and master image preprocessing, image feature extraction, and image classification. Before classification, we need to extract the image frames and extract features from the images. In this experiment, we propose the Reduce-VGGNet module to classify forest wildfire images, and require students to compare this method with the traditional machine learning algorithm SVM.

Preprocessing. First, we extract the video image frames with the OpenCV module. OpenCV has powerful video editing capabilities and encapsulates many image processing API functions, including image reading, scanning and face recognition. To extract meaningful information from a video or image, the VideoCapture(File_path), read() and imwrite(filename, img[, params]) functions can be used for video reading, and the image frames can be saved to a specified file.

Feature extraction. Next, we extract image features based on color, texture and shape. To extract the color features, we transform the RGB image to a gray image and extract the gray histogram features, including the mean and standard deviation of brightness and the probability of each gray value. To extract the texture features, we compute the gray co-occurrence matrix and extract seven invariant moments based on it. For the shape features, the area, roundness, boundary circumference and boundary roughness of the fire region are extracted. The above feature extraction belongs to the digital image processing part of Machine Vision. Students can further deepen their understanding of the basic knowledge of digital image processing through this module.

Normalization. Due to the different ranges of different features, it is necessary to normalize them to the range of [0, 1] to accelerate the convergence speed of the algorithm:

x′ = (x − x_min) / (x_max − x_min)    (1)

where x′ denotes the data after normalization, x denotes the extracted feature, x_min denotes the minimum value of one feature, and x_max denotes the maximum value of one feature.

Finally, we input the normalized features to SVM, and compare the performance of SVM and Reduce-VGGNet.

1) SVM
SVM has been a very popular classifier in recent years. It can realize nonlinear segmentation of feature vectors. Kernel functions in SVM can simplify the number of inner product calculations, reduce the running time, and convert the inner product of a high-dimensional space to a low-dimensional space. The performance of a support vector machine mainly depends on the selection of the kernel function, and the selection of the kernel function depends on the actual dataset. In addition, students are required to determine the penalty factor c, which affects the generalization ability of the classifier, and the kernel function parameter g through cross-validation. The parameters with the best performance on the training set are selected as the final parameters of the model.

The kernel functions in our experiment mainly include the Radial Basis Function (RBF), polynomial kernel and sigmoid kernel. We use the package LIBSVM to establish the classification model. The process of SVM classification is shown in Figure 2.

FIGURE 2. The flow chart of the SVM classification.

2) REDUCE-VGGNET
The Reduce-VGGNet model proposed in this experiment takes VGG-16 as the basic network structure. The VGG-16 network consists of 13 convolutional layers and 3 fully connected layers. After each group of convolutional layers, a max pooling layer is connected, followed by the ReLU activation function to alleviate the gradient dispersion problem.

As a deep convolutional neural network, VGGNet has been widely used in image classification tasks. Based on the idea of transfer learning, this experiment transfers the weight parameters obtained from the training set of the network to the wildfire image set. As shown in Figure 3, the weight coefficients of the first 13 layers are transferred, the original three fully connected layers are removed, and two fully connected layers and softmax are used instead. The numbers of neurons of the two fully connected layers are set to 1024 and 2 respectively. We use the forest wildfire image dataset to train the fully connected layers and the softmax classifier, and fine-tune VGG-16 for classification. The purpose of this design is to reduce the training parameters and training time of the VGGNet model.
In this experiment, stochastic gradient descent (SGD) and momentum are combined to train the model. We set the epoch to 100, the batch_size to 64, the momentum parameters to β1 = 0.9 and β2 = 0.999, and the initial learning rate to 0.001. We use the cross entropy loss function to train the model:

Logloss = −(1/T) Σ_{t=1}^{T} [y_t log(ŷ_t) + (1 − y_t) log(1 − ŷ_t)]    (2)

where T represents the number of training samples, y_t is the expected category and ŷ_t is the predicted category.

The learning rate of this experiment is updated by the exponential decay method:

η = η_0 · α^⌊l/d⌋    (3)

where η represents the updated learning rate, η_0 is the initial learning rate, α is the decay coefficient, and ⌊l/d⌋ denotes the downward rounding of the quotient of the number of iterations l and the decay step size d. In the training process, the loss value is calculated after each learning rate is set. When the loss value is stable, the learning rate can be reduced, so that the minimum learning rate can be obtained by repeated experiments.

B. WILDFIRE REGION DETECTION
This module is designed to enable students to understand and master how to build a deep neural network model to extract spatial and temporal features, which we then use to detect wildfire regions.

The wildfire region detection contains two stages: the first stage detects moving objects with the ViBe algorithm, and the second stage uses 16×16 blocks to traverse the moving objects and classifies each block as a wildfire block or a non-wildfire block. The detection of wildfire regions can thus be considered a binary classification problem in our work. Considering that deep CNN architectures are the best choice for classification due to their ability to extract highly representative features, and that forest wildfires occur at different spatial and temporal scales, we design new CNNs to extract both spatial and temporal features in this paper. The specific steps of this module are designed as follows: we set the moving objects detected by the ViBe algorithm as candidate wildfire regions, then we use 16×16 blocks to traverse these regions, and optimize the CNN network through an appropriate network depth and multiple convolution kernel sizes to extract the spatial features of each block and classify them. If a block is classified as a flame block, the region is annotated. Furthermore, we extract the temporal features of candidate flame blocks by an optimized CNN model to annotate the wildfire region. We finally detect the wildfire region through the combination of temporal and spatial features.

1) VIBE ALGORITHM
ViBe is an effective object detection algorithm proposed by Barnich and Van Droogenbroeck [29]. First, we initialize the background model. The ViBe algorithm initializes the background model with the first frame of the video: for each pixel, considering that its adjacent pixels may have similar pixel values, a pixel value from its neighborhood is randomly selected as a sample value. Then, background modeling and foreground detection are carried out. The main idea of this algorithm is to determine whether a pixel is a background point. Background modeling stores a sample set M(x) = {v_1, v_2, …, v_n} for each background point x, where n is the size of the sample set; the pixel value of a neighboring point is randomly selected as a sample value. For each new pixel, we calculate the distance between the pixel and each value in the sample set and count how many samples lie within a radius R. When this count is not less than the threshold T, the new pixel is considered background; otherwise it is considered foreground:

#{d(N_R(x), {v_1, v_2, …, v_n})} ≥ T    (4)
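A minimal sketch of the ViBe background test of Eq. (4) for a single gray-level pixel (numpy-based; the function name and the default values for R and T are ours for illustration, not the paper's settings):

```python
import numpy as np

def is_background(pixel: float, samples: np.ndarray, R: float = 20.0, T: int = 2) -> bool:
    """ViBe-style test: a pixel is background when at least T samples of its
    background model {v_1, ..., v_n} lie within distance R of the new value."""
    close = np.abs(samples - pixel) < R  # which sample distances fall inside radius R
    return int(close.sum()) >= T

# A pixel whose model samples cluster around 100 stays background at 104,
# but a sudden jump to 200 (e.g. a flame appearing) is flagged as foreground.
model = np.array([98.0, 101.0, 103.0, 97.0, 150.0])
print(is_background(104.0, model))  # True
print(is_background(200.0, model))  # False
```

In the full algorithm this test runs per pixel per frame, and background pixels randomly refresh their own (and a neighbor's) sample set.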
the softmax layer. The "size" column in the table gives the size of the convolutional kernel, the "step" column gives the stride, the "input" column gives the shape of the input tensor, and the "output" column gives the shape of the output tensor.

3) TEMPORAL FEATURE EXTRACTION
This module extracts optical flow sequence features as the input of a CNN model to extract dynamic temporal features. The optical flow represents the instantaneous speed of a moving object. The optical flow is defined as the displacement vector field d_t between anterior and posterior frames, where d_t(m, n) represents the displacement vector of the pixel (m, n) from time t to time t + 1, d_t^x represents its horizontal component and d_t^y represents its vertical component. To capture the temporal feature, for a group of L continuous optical flows, the horizontal components d_t^x and the vertical components d_t^y are concatenated into a 2L-channel optical flow sequence, which is denoted as d_t^{x,y} ∈ R^{w×h×2L}.

Figure 5 shows a wildfire image from two continuous frames and its corresponding optical flow fields. Figure 5(b) represents the optical flow field in the horizontal direction, and Figure 5(c) represents the optical flow field in the vertical direction. It can be seen that the optical flow field in the flame region varies greatly, while the optical flow field in the other regions, which change little, is relatively smooth. The flame movement is generally reflected in the vertical direction, so the change of the optical flow field in the vertical direction is more significant than that in the horizontal direction.

FIGURE 5. An example of the optical flow field.

We input d_t^{x,y} ∈ R^{w×h×2L} to an optimized CNN network to extract the temporal features and classify them. The value of L is set to 5 in our experiment. The CNN network structure is similar to that in Table 1. As shown in Table 2, only the structures of layers 1 to 6 are different, and the parameters in the other layers are the same.

IV. RESULTS AND ANALYSIS
For the experiment of wildfire image classification, accuracy is used to evaluate our model [26]:

accuracy = N_pos / N_total    (5)

where N_pos represents the number of images that are classified correctly and N_total represents the total number of images. In the case of wildfire region detection, we use precision, recall and accuracy for evaluation. The calculation of accuracy is the same as Eq. (5), which can also be denoted as:

accuracy = (TP + TN) / (TP + TN + FP + FN)    (6)

The precision and recall can be denoted as follows:

precision = TP / (TP + FP)    (7)

recall = TP / (TP + FN)    (8)

where TP is True Positive, TN is True Negative, FP is False Positive, and FN is False Negative.

In order to verify our experiment, we used the FLAME dataset, a forest wildfire dataset opened by Northern Arizona University [30].
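As a quick illustration of Eqs. (6)–(8), the region-level metrics can be computed directly from the four counts (plain Python; the counts below are made-up numbers for demonstration, not results from this paper):

```python
def detection_metrics(TP: int, TN: int, FP: int, FN: int):
    # Eq. (6): accuracy over all classified blocks.
    accuracy = (TP + TN) / (TP + TN + FP + FN)
    # Eq. (7): precision -- fraction of predicted fire blocks that are truly fire.
    precision = TP / (TP + FP)
    # Eq. (8): recall -- fraction of true fire blocks that are detected.
    recall = TP / (TP + FN)
    return accuracy, precision, recall

acc, prec, rec = detection_metrics(TP=90, TN=95, FP=5, FN=10)
# accuracy = 0.925, precision ≈ 0.947, recall = 0.9
```

Precision penalizes false alarms (e.g. sunset glow marked as fire), while recall penalizes missed fire blocks, which is why both are reported alongside accuracy.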
2) RESULTS OF REDUCE-VGGNET
Figure 8 shows the experimental results of the Reduce-VGGNet network. When the number of iterations is close to 100, the network reaches the convergence state, and the accuracy on the test set reaches 91.20%. It can be seen that the experimental result of this method is superior to that of the SVM algorithm. Figure 9 shows the loss curves in model training and testing. The larger the number of iterations, the more the model tends to converge, and the convergence speed of this model is very fast.

TABLE 8. The results of the combination of spatial and temporal features.
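The loss curves discussed above track the cross entropy loss of Eq. (2); a minimal numpy version (the function name is ours, and the inputs below are toy values):

```python
import numpy as np

def log_loss(y_true, y_pred) -> float:
    """Binary cross entropy of Eq. (2): the mean over T training samples of
    -[y_t*log(yhat_t) + (1 - y_t)*log(1 - yhat_t)]."""
    y_true = np.asarray(y_true, dtype=float)
    # Clip predictions away from 0/1 so the logarithms stay finite.
    y_pred = np.clip(np.asarray(y_pred, dtype=float), 1e-12, 1 - 1e-12)
    return float(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

# Confident correct predictions give a small loss; a falling loss curve over
# iterations is exactly what Figure 9 visualizes.
print(log_loss([1, 0], [0.9, 0.1]))  # ≈ 0.105
```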
REFERENCES
[16] O. M. Bushnaq, A. Chaaban, and T. Y. Al-Naffouri, "The role of UAV-IoT networks in future wildfire detection," IEEE Internet Things J., vol. 8, no. 23, pp. 16984–16999, Dec. 2021.
[17] G. Saldamli, S. Deshpande, K. Jawalekar, P. Gholap, L. Tawalbeh, and L. Ertaul, "Wildfire detection using wireless mesh network," in Proc. 4th Int. Conf. Fog Mobile Edge Comput. (FMEC), Jun. 2019, pp. 229–234.
[18] N. T. Toan, P. Thanh Cong, N. Q. V. Hung, and J. Jo, "A deep learning approach for early wildfire detection from hyperspectral satellite images," in Proc. 7th Int. Conf. Robot Intell. Technol. Appl. (RiTA), Nov. 2019, pp. 38–45.
[19] L. Jia and Y. Jiong, "Multi-information fusion flame detection based on Ohta color space," J. East China Univ. Sci. Technol., vol. 45, no. 6, pp. 962–969, 2019.
[20] Y. Tan, L. Xie, H. Feng, L. Peng, and Z. Zhang, "Flame detection algorithm based on image processing technology," Laser Optoelectron. Prog., vol. 56, no. 16, 2019, Art. no. 161012.
[21] F. M. A. Hossain, Y. Zhang, C. Yuan, and C.-Y. Su, "Wildfire flame and smoke detection using static image features and artificial neural network," in Proc. 1st Int. Conf. Ind. Artif. Intell. (IAI), Jul. 2019, pp. 1–6.
[22] T. Schultze, T. Kempka, and I. Willms, "Audio-video fire-detection of open fires," Fire Saf. J., vol. 41, no. 4, pp. 311–314, Jun. 2006.
[23] Y. Xie, J. Zhu, Y. Cao, Y. Zhang, D. Feng, Y. Zhang, and M. Chen, "Efficient video fire detection exploiting motion-flicker-based dynamic features and deep static features," IEEE Access, vol. 8, pp. 81904–81917, 2020.
[24] M. Shahid, I.-F. Chien, W. Sarapugdi, L. Miao, and K.-L. Hua, "Deep spatial–temporal networks for flame detection," Multimedia Tools Appl., vol. 80, nos. 28–29, pp. 35297–35318, Nov. 2021.
[25] D. Zhang, H. Xiao, J. Wen, and Y. Xu, "Real-time fire detection method with multi-feature fusion on YOLOv5," Pattern Recognit. Artif. Intell., vol. 35, no. 6, pp. 548–561, 2022.
[26] J. Yuan, L. Wang, P. Wu, C. Gao, and L. Sun, "Detection of wildfires along transmission lines using deep time and space features," Pattern Recognit. Image Anal., vol. 28, no. 4, pp. 805–812, Oct. 2018.
[27] Z. Wang, D. Wei, and X. Hu, "Research on two stage flame detection algorithm based on fire feature and machine learning," in Proc. Int. Conf. Robot., Intell. Control Artif. Intell., Sep. 2019, pp. 574–578.
[28] J. Ryu and D. Kwak, "Flame detection using appearance-based pre-processing and convolutional neural network," Appl. Sci., vol. 11, no. 11, p. 5138, May 2021.
[29] O. Barnich and M. Van Droogenbroeck, "ViBe: A universal background subtraction algorithm for video sequences," IEEE Trans. Image Process., vol. 20, no. 6, pp. 1709–1724, Jun. 2011.
[30] A. Shamsoshoara, F. Afghah, A. Razi, L. Zheng, P. Z. Fulé, and E. Blasch, "Aerial imagery pile burn detection using deep learning: The FLAME dataset," Comput. Netw., vol. 193, Jul. 2021, Art. no. 108001.
[31] Y. H. Habiboğlu, O. Günay, and A. E. Çetin, "Covariance matrix-based fire and flame detection method in video," Mach. Vis. Appl., vol. 23, no. 6, pp. 1103–1113, 2012.
[32] S. H. Oh, S. W. Ghyme, S. K. Jung, and G. W. Kim, "Early wildfire detection using convolutional neural network," in Proc. Int. Workshop Frontiers Comput. Vis., Singapore: Springer, 2020, pp. 18–30.
[33] A. Bouguettaya, H. Zarzour, A. M. Taberkit, and A. Kechida, "A review on early wildfire detection from unmanned aerial vehicles using deep learning-based computer vision algorithms," Signal Process., vol. 190, Jan. 2022, Art. no. 108309.
[34] D. Rashkovetsky, F. Mauracher, M. Langer, and M. Schmitt, "Wildfire detection from multisensor satellite imagery using deep semantic segmentation," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 7001–7016, 2021.
[35] P. Jindal, H. Gupta, N. Pachauri, V. Sharma, and O. P. Verma, "Real-time wildfire detection via image-based deep learning algorithm," in Soft Computing: Theories and Applications (Advances in Intelligent Systems and Computing), vol. 2, Singapore: Springer, 2021, pp. 539–550.
[36] S. Treneska and B. R. Stojkoska, "Wildfire detection from UAV collected images using transfer learning," in Proc. 18th Int. Conf. Informat. Inf. Technol., Skopje, North Macedonia, 2021, pp. 6–7.
[37] A. A. Abdelhamid, E.-S.-M. El-Kenawy, B. Alotaibi, G. M. Amer, M. Y. Abdelkader, A. Ibrahim, and M. M. Eid, "Robust speech emotion recognition using CNN+LSTM based on stochastic fractal search optimization algorithm," IEEE Access, vol. 10, pp. 49265–49284, 2022.
[38] N. Lopac, F. Hrzic, I. P. Vuksanovic, and J. Lerga, "Detection of non-stationary GW signals in high noise from Cohen's class of time-frequency representations using deep learning," IEEE Access, vol. 10, pp. 2408–2428, 2022.
[39] L. Wang, Y. Zhang, and K. Hu, "FEUI: Fusion embedding for user identification across social networks," Int. J. Speech Technol., vol. 52, no. 7, pp. 8209–8225, May 2022.
[40] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in Proc. Eur. Conf. Comput. Vis., Cham, Switzerland: Springer, Sep. 2016, pp. 21–37.

LIDONG WANG (Member, IEEE) was born in Wenzhou, Zhejiang, China, in 1982. She received the Ph.D. degree from the College of Computer Science and Technology, Zhejiang University, in 2013. She was a Visiting Scholar with Tongji University, in 2022. She is currently an Associate Professor with Hangzhou Normal University. She has published more than 30 articles on social network analysis, data mining, pattern recognition, and computer education. Her current research interests include image processing, machine learning, and text mining.

HUIXI ZHANG received the master's degree in circuit and system from Zhejiang University, in 2005. She joined Hangzhou Normal University as a Lecturer. Her research interests include signal processing, system design, and the Internet of Things technology.

YIN ZHANG was born in Lanzhou. She received the Ph.D. degree in computer science from Zhejiang University. She is currently an Assistant Professor with the College of Computer Science and Technology, Zhejiang University. Her current research interests include knowledge discovery, machine learning, digital library, and information and knowledge management.

KEYONG HU (Member, IEEE) received the Ph.D. degree in mechatronic engineering from the Zhejiang University of Technology, Hangzhou, China, in 2016. He is currently a Teacher of electronic information engineering with Qianjiang College, Hangzhou Normal University. His research interests include artificial intelligence and new energy technology.

KANG AN (Member, IEEE) received the master's degree in circuit and systems from the Guilin University of Electronic Technology, in 2007. He joined Hangzhou Normal University as an Associate Professor. His research interests include the Internet of Things technology and machine learning.