
Journal of Physics: Conference Series

PAPER • OPEN ACCESS

Image Processing Based Forest Fire Detection using Infrared Camera


To cite this article: Norsuzila Ya’acob et al 2021 J. Phys.: Conf. Ser. 1768 012014



ICeSSAT 2020 IOP Publishing
Journal of Physics: Conference Series 1768 (2021) 012014 doi:10.1088/1742-6596/1768/1/012014

Image Processing Based Forest Fire Detection using Infrared Camera

Norsuzila Ya’acob, Mohammad Syamirza Mohd Najib, Noraisyah Tajudin, Azita Laily Yusof and Murizah Kassim

Faculty of Electrical Engineering, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia.
Wireless Communication Technology (WiCoT), Faculty of Electrical Engineering, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia.

[email protected],[email protected],[email protected],
[email protected],[email protected]

Abstract. As time goes by and human beings advance in technology, artificial and natural disasters are increasing drastically, and forest fire is one such hazard. Forest fires incinerate trees that provide us with oxygen, and if a fire is not detected early it is very difficult to stop it from continuing to burn. The objectives of this project are to capture infrared images for forest fire detection using a suitable camera; to detect fire with the RGB and YCbCr colour models, which isolate fire pixels from the background and separate luminance from chrominance in the original image; and to filter and process the images in MATLAB. The method is tested on selected images, captured by the camera, that contain fire. A further step calculates and analyses the fire image in order to differentiate true fire detection from false detection. Finally, the fire image is processed with Wavelet Analyzer 5.0, which computes and displays terminal nodes and graphs. The results of this system are that fire detection is achieved and data are obtained for the fire images.

1. Introduction
Forest fires are unexpected fires that happen in nature and cause heavy damage to human resources and the balance of nature. Forest fires destroy forests, wipe out accommodation, and can lead to high rates of human death near populated regions. Human carelessness, lightning and full exposure to extreme heat are the main causes of forest fires. In some cases, fires are known to be part of the forest ecosystem and are important for the life cycle of forest habitats [1]. Forest fires cause considerable damage to the forest, which leads to economic loss. Nowadays, fire detection systems use a range of sensors, and the sensors' quality, precision, and position distribution determine the system's performance. For outdoor applications, an abundance of sensors is required for a high-accuracy fire detection system, and every sensor requires a large battery capacity to keep it operating in a large open space. Sensors detect fire only when they are close to it, but that proximity will damage the sensor instead [2].
Detection mechanisms used today include watch towers, satellite imagery, long-range video recording, and so on. However, these do not provide a solution that improves the effectiveness of forest fire detection. Video-type detection is a low-cost approach, but due to environmental factors such as fog, rain, dust and human activities it is likely to create false alarms [3]. The satellite imagery concept has a lot in common with this project: the image is captured by an infrared camera that works like a satellite camera, capturing an image of the fire region from above. The infrared camera

Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution
of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Published under licence by IOP Publishing Ltd

is able to capture images during night-time, when forest fires happen unexpectedly. Thunga Saikumar [4] studied IoT Enabled Forest Fire Detection and Altering the Authorities, a tracking and alarming system for protecting trees against forest fire, in which IoT devices and sensors allow different environmental variables to be monitored. That system was built with a fire detector using an Arduino UNO interfaced with a temperature sensor, a smoke sensor, and a buzzer; it detects fire by monitoring carbon dioxide level and temperature, sends alarms to a mobile phone via an application, and shows the presence of fire on an LCD display. The present project differs: it uses a visual camera equipped with infrared lighting, able to capture images during night-time, to take snapshots of the fire region. The images obtained with this infrared camera are processed and stored, then processed with MATLAB code. These fire images are used to seek fire pixels and intensity by comparing them with raw images and with night images from internet sources. The advantages of this method are that the system can be programmed to record information and data about the environmental conditions, and that the effect of smoke plumes can be removed. The significant disadvantage is that it may sometimes fail to anticipate the fire, because the images depend on environmental conditions.
The standardized RGB colour model overcomes illumination differences to some extent; by using the YCbCr colour model, which separates chrominance from luminance, further improvement can be achieved [5]. This method uses the YCbCr colour model for fire pixel classification, with fire image statistical features such as mean and standard deviation, because the relationship between pixels is stronger in the YCbCr colour model than in other colour models. The centre of the flame is as white as a cloud [6]. The segmentation of the fire region and fire centre applies Rules I, II, III and IV to decide whether the image contains fire or not. The Wavelet Analyzer analyses and processes the selected original images that contain fire and smoke; this application produces figures such as terminal nodes and graphs for the selected fire images and outputs the percentage of retained energy and the number of zeros. The signals displayed are 2-D colour or grey-scale images, for which the time domain is a pixel's spatial location and the frequency domain is a pixel's intensity or variety of colour [7]. A significant transformation compacts the intensity or colour variation into a small part of the transform coefficients, so a picture can be stored more systematically and analysed more easily, with fewer components representing the image.
The objective of this research is to capture images using a suitable camera with infrared lighting. The fire image is described by the properties of its colour. The RGB colour model is used to differentiate red image data. The RGB colour model is converted into the YCbCr colour model with formulated equations, and analysis is carried out with a specific output to differentiate fire pixels. The 2-dimensional wavelet analyses the image characteristics influenced by the fire pixels.

2. Methodology

2.1. Framework
Figure 1 below shows the overall flow chart of the system. The hardware is a Raspberry Pi Zero W (Wireless). The Pi connects to a compatible camera, the Raspberry Pi Camera Module V2, which comes with an 8-megapixel sensor and captures the image based on the angle it faces. The image is saved to the on-board microSD card, from which the images can be transferred or removed. The software is divided into two parts: the RGB image processing and the Wavelet Analyzer. The raw image is enhanced in the RGB colour model and then converted into luminance and chrominance components. The fire region is segmented using the RGB and YCbCr colour model method, implementing Rules (I) and (II); these two rules segment the fire region based on the intensities of the RGB and YCbCr components. Next, Rules (III) and (IV) are implemented to fulfil the requirement of fire detection; they segment the fire centre using mathematical concepts such as the mean and standard deviation, calculated over the segmented region of the YCbCr colour model. The fire detection process then decides whether fire is detected or not; if not, the system returns to inserting a raw image. Finally, the Wavelet Analyzer, using Wavelet Packet 2-D, analyses and compresses the image and produces statistics, graphs, and histogram figures.


2.2. Hardware
The system operates with the connection board shown in Figure 2. To build the system, the connection board is first simulated; it is designed with the Fritzing software to show the connection between the Pi and the camera. As shown in Figure 2, a Pi Camera Module V2 and a Raspberry Pi Zero W are needed. The camera, a Raspberry Pi NoIR Camera Board V2, is connected with a flexible flat cable (FFC) to the camera port on the Pi. This camera is a high-quality, custom-designed 8-megapixel Sony IMX219 image sensor for the Raspberry Pi with a fixed-focus lens. Because it is sensitive to infrared illumination, it can also capture pictures clearly at night. It connects to the Pi through one of the small sockets on the upper surface of the board and uses the specially designed CSI interface for camera interfacing. The Pi is also connected to a 5 V power supply at the 'PWR' port as its source, and a power bank can be used for mobility. The SD card slot is equipped with a 16 GB microSD card carrying NOOBS (New Out of Box Software), an operating system package for the Raspberry Pi that can boot multiple OSes and gives access to the files saved on the card. Figure 3 shows the schematic of the Pi Camera Module V2, which contains around 65 pins. This system does not use all the pins, because they are replaced by the flexible flat cable. The camera's pad open size is around 75 µm x 75 µm and the pad pitch is about 120 µm. The GPO (General Purpose Output) must be connected to GND when the GPO function is not enabled [8].

Figure 1. Flowchart of Forest Fire Detection System.


Figure 2. Connection Board. Figure 3. IR Camera Module Schematic

2.3. Software

2.3.1 Proteus
The software used for this schematic is Proteus 8 Professional, a proprietary tool suite primarily used for electronic design automation. The software is used to create schematics and electronic prints for producing printed circuit boards. The image above was designed in the schematic capture; all the pins are already integrated with the Pi camera, and the default program file was used to simulate it [9].

2.3.2 MATLAB Source Code


The RGB colour model algorithm is based on the flame colour values of R, G and B. The red component value extracted is higher than the other colour components, which improves the system's fire pixel classification rate [10]. In contrast to other colour spaces, the YCbCr colour model can detect chrominance and luminance data in more detail, so it is used here. To produce the Y, Cb and Cr components from the obtained and processed RGB image, a colour model transformation is used to convert each RGB component into a Y, Cb or Cr component, compared across the Y, Cb and Cr channels [11].

2.3.2.1 Segmentation of Fire Region


The conversion of colour space from RGB to YCbCr is shown below.

The mean values of the three components, Y, Cb and Cr, can then be computed over the received image in the YCbCr colour model.
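The conversion equations themselves did not survive in this copy of the paper. As a stand-in, a widely used full-range BT.601-style transform, together with the component means that the rules below rely on, is sketched here in Python; the exact coefficients the authors used are not confirmed by this text and should be treated as an assumption.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 RGB image to Y, Cb, Cr planes.

    Full-range BT.601/JPEG coefficients are assumed here; the paper's
    own equations are not reproduced in this copy.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b              # luminance
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b  # blue chrominance
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b  # red chrominance
    return y, cb, cr

def channel_means(y, cb, cr):
    """Image-wide means of the three components, as used by the rules."""
    return y.mean(), cb.mean(), cr.mean()
```

With this convention a pure white pixel maps to Y = 255 and Cb = Cr = 128, while a saturated red pixel has Cr well above Cb, which is the contrast the fire rules exploit.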

Rule I:
This rule is established in the RGB colour model as R > G > B and translated into the YCbCr colour model as Y > Cb; that is, the Y component intensity is greater than the Cb component intensity [12]. Rule I can therefore be stated in those terms.


Rule II:
The brightest region in the image lies inside the fire region because there is more red chrominance in the fire region than any other component, so the means of the Y and Cr values of the image carry useful information [13]. From these observations it is possible to formulate Rule II, which applies to the pixels that already comply with Rule I.

2.3.2.2 Segmentation of Fire Centre


The mean and standard deviation of the image can be found for each of the Y, Cb and Cr components; the method uses the standard deviation of the Cr component. The following rules can then be determined.

Rule III:
At the fire centre, the luminance Y component is much higher than the red chrominance component, while the blue chrominance component is higher than the luminance Y component [14]. Rule III is formulated on this basis from the test images.

Rule IV:
Clouds and smoke are white-coloured regions that are wrongly segmented from the raw image when the fire centre is segmented on luminance and chrominance alone. To overcome this problem, the surface of the fire region is also considered: fire regions and fire-coloured clouds without fire have different surfaces [15]. Rule IV is laid down on this basis.
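The equations for Rules I-IV are not reproduced in this copy of the paper, so the sketch below is a hedged reading based on the cited YCbCr fire-detection literature [12, 13]: Rule I as Y > Cb, Rule II as exceeding the image means, Rule III as a brightness test, and Rule IV as a local texture (Cr standard deviation) test to reject smooth cloud and smoke. Every threshold and test form here is an assumption, not the authors' exact formulation.

```python
import numpy as np

def local_std(a, k=5):
    """Standard deviation inside a k x k sliding window (edge-padded)."""
    pad = k // 2
    ap = np.pad(a, pad, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(ap, (k, k))
    return win.std(axis=(-1, -2))

def segment_fire(y, cb, cr):
    """Hedged sketch of Rules I-IV over Y/Cb/Cr planes.

    Returns (fire region mask, fire centre mask); the exact rule
    forms are assumptions standing in for the paper's equations.
    """
    rule1 = y > cb                                      # Rule I: Y exceeds Cb
    rule2 = rule1 & (y > y.mean()) & (cr > cr.mean())   # Rule II: above image means
    rule3 = rule2 & (y > cr)                            # Rule III: bright fire centre
    rule4 = rule3 & (local_std(cr) > cr.std())          # Rule IV: textured, not cloud
    return rule2, rule4
```

A pixel in the returned region mask satisfies Rules I and II; a centre pixel additionally satisfies the stand-ins for Rules III and IV.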

2.3.3 Wavelet Analyzer


Wavelets represent and interpret multi-resolution images in a general way. They can be applied to 1-D signals and are very useful for image compression and noise removal [16]. The 2-D wavelet can alter and compute transforms for tasks such as de-noising and compression, while the 3-D wavelet is used for analysing volumetric data using redundant and critically sampled wavelet transforms.
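The paper runs its analysis in MATLAB's Wavelet Analyzer with a Haar wavelet. As a rough illustration of what one decomposition level does, a single-level 2-D Haar transform can be written in plain NumPy; the block-average normalisation used here is a convention of this sketch, not necessarily the toolbox's.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet decomposition.

    Returns the approximation (LL) and the horizontal, vertical and
    diagonal detail sub-bands (LH, HL, HH), each half the input size.
    Assumes an even-sized image; uses block-average scaling.
    """
    a = img.astype(np.float64)
    # pairwise combine rows
    lo_r = (a[0::2, :] + a[1::2, :]) / 2.0
    hi_r = (a[0::2, :] - a[1::2, :]) / 2.0
    # pairwise combine columns
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

On a constant image all detail sub-bands are zero, while sharp intensity changes, such as flame edges, concentrate energy in the detail sub-bands; a wavelet packet tree repeats this split on every sub-band.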

3. Result

3.1 Hardware Implementation


For the hardware implementation, as shown in Figure 4, a raw image of fire burning the forest was taken with the Pi Camera Module V2. The test must be carried out in an open area in order to obtain a proper image for the image processing in the next procedure. The image must contain enough fire pixels, which is why it must be taken properly and centred on the fire region; to achieve better results, these aspects must be taken into consideration.


Figure 4. Camera Prototype

3.2 Filtering Implementation


3.2.1 RGB Colour Model
In the algorithm, sub-images of the different colour components are obtained from sample image frames containing fire. In Figure 5(i) the raw image is inserted and passed through an enhancement technique that amplifies the fire pixels and the quality of the image. Figure 5(i) has a red component, a blue component, and a green component, extracted with the source code.



Figure 5. (i) Raw image segmented with green line indicates fire, Red component, Green component,
Blue component with infrared camera (ii) Raw image, Red component, Green component, Blue
component from internet source.

Mean values of the R, G, and B components for the labelled fire regions are given in Table 1.

Table 1. Mean values of R, G, and B components

Row Index   Mean R   Mean G   Mean B
1           113.54   41.33    41.33
2           113.85   58.26    58.26

Table 1 above shows the means of the input images shown in Figure 5(i) and (ii), comparing an image taken with the camera against one taken from the internet. For both images the output, produced by the MATLAB source code, shows that the red component is higher than the other components. The mean value is calculated for the segmented fire region of the original images, and the results in Table 1 show that, on average, the fire pixels have an R intensity greater than their G and B intensities. This colour model is used to classify fire pixels, although it is not always correct, because the concentration of the colour model changes the output of the mean RGB.
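The per-region channel means reported in Table 1 can be reproduced with a few lines of NumPy; the function and mask below are illustrative stand-ins for the paper's MATLAB routine, not its actual source code.

```python
import numpy as np

def region_channel_means(rgb, mask):
    """Mean R, G and B over the pixels selected by a boolean mask
    (a hypothetical stand-in for the paper's MATLAB computation)."""
    sel = rgb[mask].astype(np.float64)   # N x 3 array of masked pixels
    return sel[:, 0].mean(), sel[:, 1].mean(), sel[:, 2].mean()
```

For a true fire region the expected ordering, as in Table 1, is mean R greater than both mean G and mean B.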

3.2.2 YCbCr Colour Model


Figure 6 shows the YCbCr colour model used for the classification of fire pixels. The conversion from the RGB to the YCbCr colour model is linear. This model is chosen for its ability to separate illumination data from chrominance better than other colour spaces. The luminance data, which relates to intensity, is expected to be significant for a fire pixel: the greater the contrast between the Y and Cb values of a pixel, the higher the probability that it is a fire pixel.


Figure 6. (i) Y component, Chrominance of Blue, Chrominance of Red with camera (ii) Y component,
Chrominance of Blue, Chrominance of Red from internet source

Mean values of the Y, Cb, and Cr components for the labelled fire regions are given in Table 2.

Table 2. Mean values of Y, Cb, and Cr components

Row Index   Mean Y   Mean Cb   Mean Cr
(i)         70.05    117.29    159.72
(ii)        80.03    119.76    152.41

Table 2 shows the mean values of each chrominance component in the image. This method detects fire with a high true-detection rate and a low false-detection rate. The two rows compare the raw image from the camera with the image taken from the internet. Both show the highest mean value in the Cr component, indicating that the flame region is the brightest region in the scene: the Cr component is larger in the fire region, so it is the highest of the components. The value in each cell gives the concentration of that chrominance, calculated by the MATLAB source code.

3.2.3 Segmentation of Fire Region


Based on Figure 7, Rules I and II are involved in the segmentation of the fire region. As the formulas stated before, the unwanted components are discarded while the fire region of the raw image is maintained. Figure 7 shows the RGB input image and the Y, Cb and Cr components. In the fire areas, the concept of Rule I can easily be observed: the Y component intensity is higher than the Cb component intensity. Figure 7(i) and (ii), row 4, show the pixels of the input picture that satisfy Rule I and Rule II. A pixel of the input picture that satisfies both Rule I and Rule II is considered a flame pixel; by applying Rules I and II, row 4 of Figure 7(i) and (ii) shows the segmented fire pixels generated.



Figure 7. (i) Raw fire image, Segmented fire region by using Rule I, Segmented fire region by
using Rule II, Segmented fire region by using Rule I and Rule II with infrared camera (ii) Raw
fire image, Segmented fire region by using Rule I, Segmented fire region by using Rule II,
Segmented fire region by using Rule I and Rule II from internet.

3.2.4 Segmentation of Fire Centre


Referring to Figure 8, Rules III and IV are involved in the segmentation of the fire centre. The centre of the fire region is labelled in white at high temperatures, and smoke plumes in the picture can also be detected. Pixels that are very bright and near the fire are eligible to satisfy Rules III and IV. Various images were tested with different lighting and illumination. A fire image taken far from the camera will not satisfy Rule IV; for example, a fire-centre-like colour can be created by a cloud in the image. Figure 8(i), row 4, illustrates fire centre segmentation using Rules III and IV: a pixel is labelled as a fire centre pixel if it satisfies both Rule III and Rule IV. The fire centre region and the fire area must then be combined to achieve the true fire region; the true fire image is obtained by merging the images that satisfy Rules III and IV. Figure 8(i), row 4, uses the two rules together to show the segmentation of the true fire region from the obtained image.



Figure 8. (i) Raw image, Segmented fire region by satisfying Rule III, Segmented fire region
satisfying Rule IV, Segmented fire region by adding Rule III and Rule IV by using infrared
camera (ii) Raw image, Segmented fire region by satisfying Rule III, Segmented fire region
satisfying Rule IV, Segmented fire region by adding Rule III and Rule IV from internet source

3.3 Image Analysis

3.3.1 Wavelet Analysis Simulation


Figure 9 shows the analysed image extracted from the original fire image. The Wavelet Analysis Simulation is run with the MATLAB Wavelet Analyzer 5.0, using Wavelet Packet 2-D to analyse 2-D images. In the image, the x-axis represents the height of the image and the y-axis its width; the image is 2464 x 3280 pixels. Figure 10 shows the decomposition tree. Image decomposition means dividing the image, separating it into its components, which can be done by wavelet decomposition [17]. The tree shows the image separated into 15 parts.


Figure 9. Analysed Image Figure 10. Decomposition Tree


Figure 11 shows the coloured coefficients for the terminal nodes. The original image is separated into 15 parts, and each part has a different coefficient showing a different output. Going down to the lower terminal nodes, the brightest pixels turn dark and the rest change to black and white. This analysis used a Haar wavelet at level 2, with threshold entropy at level 2.

Figure 11. Coloured Coefficients for Terminal Figure 12. Compression Graph
Nodes

Figure 12 shows the compression graph of the 2-D wavelet, based on the extracted image uploaded from the source. The x-axis of this graph indicates the global threshold of the image; the maximum threshold, about 1008, is calculated by this compression method. The y-axis gives the percentage of retained energy and the number of zeros calculated from the image. As shown, the retained energy decreases as the threshold increases; this happens because the intensity of the pixel colour decreases as one moves away from the fire region. The data were not collected by analysing only one image: about five images were analysed, recorded, and tabulated for comparison. The data were read from the compression graph, which shows the values (in percent) of retained energy and number of zeros; the turquoise line indicates the retained energy, whereas the pink line gives the number of zeros. The global threshold, which can be set to any value up to 1008, acts as an indicator on the graph.
Figure 13 shows the parameters of the thresholding method; these parameters relate to Figure 10. The thresholding method used is balanced sparsity-norm, a denoising algorithm demonstrated to be effective at removing quantum noise for certain classes of structures with high-frequency component features [18]. The images are compared with the denoising result achieved with MATLAB Analyzer 5.0 from the MATLAB toolbox. All computations were performed using a Haar wavelet decomposition. For denoising in MATLAB, one must use a


balanced sparsity-norm thresholding method with a non-white noise model. This method requires denoising tests between fire images to show its performance, focusing on model accuracy and inference time [19]. The balanced sparsity-norm pattern is used to achieve both high model accuracy and high efficiency. The global threshold can be set to any value in the range 0 to 1008, and every setting shows the corresponding percentage of retained energy and number of zeros according to the compression graph.
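The two quantities read off the compression graph can be computed directly from the wavelet coefficients. The sketch below applies a global hard threshold and reports both curves; the definitions (retained energy as an energy ratio, zeros as a fraction of coefficients) follow the usual conventions of such tools and are assumed here rather than taken from the paper.

```python
import numpy as np

def compress_stats(coeffs, threshold):
    """Hard-threshold wavelet coefficients and report the two curves
    of the compression graph: retained energy (%) and zeros (%)."""
    c = np.asarray(coeffs, dtype=np.float64)
    kept = np.where(np.abs(c) >= threshold, c, 0.0)  # global hard threshold
    energy0 = np.sum(c ** 2)
    retained = 100.0 * np.sum(kept ** 2) / energy0 if energy0 else 100.0
    zeros = 100.0 * np.count_nonzero(kept == 0.0) / kept.size
    return retained, zeros
```

Sweeping `threshold` from 0 up to the maximum coefficient magnitude traces out the turquoise (retained energy) and pink (number of zeros) curves of Figure 12.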

Figure 13. Parameters of Thresholding Method

4. Conclusion
For the identification, mapping and tracking of forest fires and burn scars, cameras are needed. The strengths and disadvantages of each depend on the lens, and the choice of camera depends on the intended application. Forest fires also occur at night, when all the fire monitoring and mapping requirements must still be served; the best strategy is to use a camera fitted with infrared lighting, capable of capturing through infrared filters during the night. At night it is possible to detect active fires from the light and smoke plumes associated with them. Hot spots observed in low-resolution satellite imagery provide data on general locations, spatial distributions and temporal fire evolution. The RGB and YCbCr colour models help to enhance the raw image so that it can be decided whether fire is detected or not. Formulas constructed in the MATLAB source code give an output according to the colour intensity of the image. The segmentation of the fire region and fire centre introduces the application of four rules, known as Rules I, II, III and IV; each rule derives from a formula that can be calculated with the MATLAB source code. Rules I and II help to filter the fire region and to find the brightest region of the scene, whereas Rules III and IV address the presence of cloud and smoke and the relation between the luminance components. For the analysis, the Wavelet Analyzer helped greatly: the Haar analysis decomposes and separates the image into 15 parts. The aim of all the applied formulas is to find, even for a small fire or smoke, the small changes in total wavelet energy. On the other hand, not all changes in wavelet energy are produced by a moving object with the same colour as fire when the quantity of fire pixels has not changed.

5. Acknowledgement
The authors would like to thank the Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM) for their valuable support. This research was partly funded by the Malaysian Government through UiTM under 600-IRMI/5/3/LESTARI (0035/2019).


References
[1] Bousack, H, Towards Improved Airborne Fire Detection Systems Using Beetle Inspired
Infrared Detection and Fire Searching Strategies. Micromachines 6(6), pp. 718–746, 2015.
[2] W. Phillips III, M. Shah, and N. V. Lobo, Flame recognition in video, Pattern Recognition
Letters, vol. 23(1-3), pp. 319–327, 2015.
[3] Yan, X, Real-Time Identification of Smouldering and Flaming Combustion Phases in Forest
Using a Wireless Sensor Network-Based Multi-Sensor System and Artificial Neural Network.
Sensors 16(8), pp. 1228, 2016.
[4] Saikumar, T. and Sriramya, P. IoT Enabled Forest Fire Detection and Altering the Authorities.
International Journal of Recent Technology and Engineering (IJRTE), ISSN: 2277-3878,
Volume-7, Issue-6S4, pp. 100–105, 2019.
[5] Premal, C.E. and S. Vinsley. Image processing-based forest fire detection using YCbCr colour
mode. in Circuit, Power and Computing Technologies (ICCPCT), 2014 International
Conference. IEEE, pp. 255–260, 2014.
[6] Poobalan, K. and S.-C. Liew. Fire detection algorithm using image processing techniques. in
Proceedings of the 3rd International Conference on Artificial Intelligence and Computer
Science (AICS2015), pp. 130–137, 2015.
[7] A. Gutierrez-Giles and M. A. Arteaga-Perez, GPI based velocity/force observer design for robot
manipulators, ISA Transactions, vol. 53, no. 4, pp. 929–938, 2014.
[8] T. Chen, and Y. Yin, Shi-Feng Huang, and Yan- Ting Ye, The smoke detection for early fire-
alarming system based on video processing, International Conference on Intelligent Information
Hiding and Multimedia Signal Processing, pp. 427–430, 2016.
[9] Millan-Garcia, L, An early fire detection algorithm using IP cameras, Sensors 12(5): pp. 5670–
5686, 2014.
[10] B. U. Toreyin, Y. Dedeoglu, U. Gudukbay and A. E. Cetin, Computer Vision based method for
real time fire and flame detection, Pattern Recognition Lett.27(1) pp. 49–58, 2016.
[11] S. Rasouli, O. C. Granmo, and J. Radianti, A methodology for fire data analysis based on pattern
recognition towards the disaster management, 2nd International Conference on Information and
Communication Technologies for Disaster Management (ICT-DM), pp. 130–137, 2015.
[12] T. Celik, H. Ozkaramanlt, and H. Demirel, Fire Pixel Classification using Fuzzy Logic and
Statistical Color Model, IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), pp. 1205–1208, 2014.
[13] W. Homg, J. Peng and C. Chen, A new image based real time flame detection method using
colour analysis, Proc. of IEEE Network sensing and Control (ICNSC), pp. 100–105, 2015.
[14] W. Phillips, M. Shah, N. Viktoria, Flame recognition in video, Pattern Recognition Letters 23(1-
3), pp. 319–327, 2014.
[15] Chen Z. B, Hu L. H, Huo R, Zhu S, Flame Oscillation Frequency Based on Image Correlation,
Journal of Combustion Science and Technology 14(4), pp. 367–371, 2015.
[16] Kingsbury, N. G, Complex wavelets for shift invariant analysis and filtering of signals, Journal
of Applied and Computational Harmonic Analysis, pp. 234–253, 2017.
[17] H. Ibrahim and N. Kong, “Brightness preserving dynamic histogram equalization for image
contrast enhancement,” IEEE Trans. Consum. Electron., vol. 53, no. 4, pp. 1752–1758, Nov.
2014.
[18] Fgee, E.B., Phillips, W.J. and Robertson, W.” Comparing Audio Compression using Wavelets
with other Audio Compression Schemes”, IEEE Canadian Conference on Electrical and
Computer Engineering, IEEE, Edmonton, Canada, 2009, pp. 698–701.
[19] Wen, W. Wu, C. Wang, Y. and Li, H. Learning structured sparsity in deep neural networks. In
Advances in Neural Information Processing Systems, pp. 2074–2082, 2016.
