ETRI Journal - 2010 - Celik - Fast and Efficient Method For Fire Detection Using Image Processing
Fast and Efficient Method for
Fire Detection Using Image Processing
Turgay Celik
ETRI Journal, Volume 32, Number 6, December 2010 © 2010 Turgay Celik 881
22337326, 2010, 6, Downloaded from https://onlinelibrary.wiley.com/doi/10.4218/etrij.10.0109.0695 by Indian Institute Of Technology, Kharagpur Central Library, Wiley Online Library on [20/03/2023]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
using the rule-based color model as in Chen and others, Toreyin and others [2] used a mixture of Gaussian models in RGB space obtained from a training set of fire pixels. In a recent paper, the authors employed Chen's fire pixel classification method along with motion information and Markov field modeling of the fire flicker process [3]. Celik and others [4] used background subtraction to segment changed foreground objects and three rules on the RGB color components to detect fire pixels. The overall system can result in very high false alarm rates when intensity changes are considered, and it is very sensitive to the tuning parameters employed in background subtraction.

Celik and others [5] used normalized RGB (rgb) values for a generic color model for fire. The normalized RGB is proposed in order to alleviate the effects of changing illumination. The generic model is obtained using statistical analysis carried out in the r-g, r-b, and g-b color planes. Owing to the distribution of the sample fire pixels in each plane, three lines are used to specify a triangular region representing the region of interest for the fire pixels. Triangular regions in the respective r-g, r-b, and g-b planes are then used to classify a pixel: a pixel is declared to be a fire pixel if it falls into all three of the triangular regions in the r-g, r-b, and g-b planes. Krull and others [6] used low-cost CCD cameras to detect fires in the cargo bay of long-range passenger aircraft. This method uses statistical features based on grayscale video frames, including mean pixel intensity, standard deviation, and second-order moments, as well as non-image features such as humidity and temperature, to detect fire in the cargo compartment. The system is used commercially in parallel with standard smoke detectors to reduce the number of false alarms caused by the smoke detectors, and it also provides a visual inspection capability which helps the aircraft crew confirm the presence or absence of fire. However, the statistical image features are not considered usable as part of a standalone fire detection system.

Marbach and others [7] used the YUV color model for the representation of video data, where the time derivative of the luminance component Y was used to declare candidate fire pixels, and the chrominance components U and V were used to classify whether or not the candidate pixels were in the fire sector. In addition to luminance and chrominance, they also incorporated motion into their work. They report that their algorithm detects less than one false alarm per week; however, there is no mention of the number of tests conducted. Horng and others [8] used the HSI color model to roughly segment fire-like regions for brighter and darker environments. The initial segmentation is followed by removing the lower-intensity and lower-saturation pixels to eliminate spurious fire-like regions, such as smoke. A metric based on binary contour difference images is also introduced to measure the degree of burning of fire into classes, such as 'no fire,' 'small,' 'medium,' and 'big' fires. They report a 96.94% detection rate including false positives and false negatives for their algorithms. However, there is no attempt to reduce the false positives and false negatives by changing their threshold values. Phillips and others [9] proposed a sophisticated method for recognizing fires in color video. They used both motion and color information. However, their method requires look-up table generation at system start-up, which adds the drawback of dependency on an operator to detect fire in video sequences. Moreover, the approach is too complicated to process in real time.

A good color model for fire and robust moving-pixel segmentation are essential because of their critical role in computer vision-based fire detection systems. In this paper, we propose an algorithm that models fire pixels using the CIE L*a*b* color space. The motivation for using the CIE L*a*b* color space is that it is a perceptually uniform color space, making it possible to represent the color information of fire better than other color spaces. The moving pixels are detected by applying a background subtraction algorithm together with a frame differencing algorithm on a frame buffer filled with consecutive frames of the input video, separating moving pixels from non-moving pixels. The moving pixels which are also detected as fire pixels are further analyzed in consecutive frames to raise a fire alarm.

This paper is organized as follows. Section II presents the essentials of color modeling for fire detection, introduces different concepts to reduce the false alarm rate, and also presents the moving pixel detection algorithm. Section III provides experimental results and comparisons with the state-of-the-art fire detection algorithm. The paper concludes in section IV.

II. Fire Detection

This section covers the details of the fire detection algorithm. Figure 1 shows the flow chart of the proposed algorithm for fire detection in a video. It is assumed that the image acquisition device produces its output in RGB format. The algorithm consists of three main stages: fire pixel detection using color information, detecting moving pixels, and analyzing the dynamics of moving fire pixels in consecutive frames. In the following, each part is described in detail.

1. RGB to CIE L*a*b* Color Space Conversion

The first stage in our algorithm is the conversion from RGB to the CIE L*a*b* color space. Most existing CCTV video cameras provide output in RGB color space, but there are also
Fig. 1. Flow chart of proposed algorithm for fire detection in image sequences. (The stages shown are: input RGB image; RGB to CIE L*a*b* conversion; color-based fire pixel detection in parallel with background subtraction for moving pixel detection and background model update; detection of moving fire pixel regions; and analysis of region dynamics to raise the fire alarm.)

The L*, a*, and b* components are computed from the tri-stimulus values X, Y, and Z as

L* = 116 × f(Y/Yn) − 16,
a* = 500 × (f(X/Xn) − f(Y/Yn)),
b* = 200 × (f(Y/Yn) − f(Z/Zn)),
f(t) = t^(1/3) if t > 0.008856, and 7.787 × t + 16/116 otherwise, (1)

where Xn, Yn, and Zn are the tri-stimulus values of the reference color white. The data range of the RGB color channels is between 0 and 255 for 8-bit data representation. Meanwhile, the data ranges of the L*, a*, and b* components are [0, 100], [–110, 110], and [–110, 110], respectively.

2. Color Modeling for Fire Detection

A fire in an image can be described by using its visual properties. These visual properties can be expressed using simple mathematical formulations. In Fig. 2, we show sample images which contain fire and their CIE L*a*b* color channels (L*, a*, b*). Figure 2 gives some clues about the way CIE L*a*b* color channel values characterize fire pixels. Using such visual properties, we develop rules to detect fire using the CIE L*a*b* color space.

The range of fire color can be defined as an interval of color values between red and yellow. Since the color of fire is generally close to red and has high illumination, we can use this property to define measures to detect the existence of fire in an image. For a given image in CIE L*a*b* color space, the following statistical measures for each color channel are defined as

L*m = (1/N) Σx Σy L*(x, y),
a*m = (1/N) Σx Σy a*(x, y), (2)
b*m = (1/N) Σx Σy b*(x, y),

where N is the total number of pixels in the image. Using these mean values, four binary rules are defined:

R1(x, y) = 1 if L*(x, y) ≥ L*m, and 0 otherwise, (3)
R2(x, y) = 1 if a*(x, y) ≥ a*m, and 0 otherwise, (4)
R3(x, y) = 1 if b*(x, y) ≥ b*m, and 0 otherwise, (5)
R4(x, y) = 1 if b*(x, y) ≥ a*(x, y), and 0 otherwise, (6)

where R1, R2, R3, and R4 are binary images which represent the existence of fire at a spatial pixel location (x, y) by 1 and the non-existence of fire by 0. R1(x, y), R2(x, y), and R3(x, y) are calculated from global properties of the input image. R4(x, y) represents the color information of fire; for example, fire has a reddish color. Figure 3 shows sample images from Fig. 2(a) and binary images created using (3)-(6). Figure 3(f) shows a combination of these binary images with the binary AND operator. Figure 3(g) displays the segmented fire image.

In order to find the correlation between the L*, a*, and b* values of fire pixels, the following strategy was applied. A set of 500 RGB images was collected from the Internet. Then, each image was manually segmented to identify all fire regions. Segmented fire regions were converted to L*, a*, and b* color space. A histogram of fire pixels was created for each of the 3
Fig. 2. Sample RGB images containing fire and their CIE L*a*b* color channels: (a) RGB image, (b) L* color channel, (c) a* color channel, and (d) b* color channel. For visualization purposes, responses in different color channels are normalized into the interval [0, 1].

Fig. 3. Applying (3)-(6) to input images: (a) original RGB image, (b) binary image using (3), (c) binary image using (4), (d) binary image using (5), (e) binary image using (6), (f) combining results of (b)-(e) by binary AND operator, and (g) segmented fire region.

different color planes, that is, (L*-a*), (L*-b*), and (a*-b*). Figure 4 shows the histograms of the three different color planes, where the L*, a*, and b* channels are quantized into 24 levels, and 6,223,467 pixels are used to create each histogram.
Fig. 4. Distributions of labeled fire pixels in fire images: (a) (L*, a*) color channels, (b) (L*, b*) color channels, and (c) (a*, b*) color channels.

The likelihood that a pixel with values (L*(x, y), a*(x, y), b*(x, y)) belongs to a fire is computed from the three pairwise histograms as

P(L*(x, y), a*(x, y), b*(x, y)) = P(L*(x, y), a*(x, y)) × P(L*(x, y), b*(x, y)) × P(a*(x, y), b*(x, y)), (7)
where P(L*, a*), P(L*, b*), and P(a*, b*) are the likelihoods that (L*, a*), (L*, b*), and (a*, b*) belong to a fire, respectively. The likelihood of being fire as defined by (7) can be used to detect a fire pixel by simple thresholding:

R5(x, y) = 1 if P(L*(x, y), a*(x, y), b*(x, y)) ≥ α, and 0 otherwise, (8)

where α is a threshold value. Figure 5 shows input RGB images, the corresponding likelihood images P(L*, a*, b*) resulting from (7), and the corresponding R5 images resulting from (8) for α = 0.005. The pixel value P(L*(x, y), a*(x, y), b*(x, y)) of the likelihood image P(L*, a*, b*) is a measure in the range [0, 1]; a higher value of P(L*(x, y), a*(x, y), b*(x, y)) means a higher likelihood that the corresponding pixel belongs to a fire.

Fig. 5. Calculating P(L*, a*, b*) and thresholding it with α = 0.005: (a) RGB input image which contains fire, (b) corresponding likelihood image P(L*, a*, b*) computed according to (7), and (c) thresholded P(L*, a*, b*) computed according to (8).

The optimum value of α can be estimated using receiver operating characteristic (ROC) analysis. The labeled image set is used in estimating the value of α along with the following evaluation criterion. For each value of α, the likelihood in (8) is calculated and binarized for each image in the dataset. Using the ground-truth regions which were manually labeled as fire in the training images, the numbers of correct detections and false detections are calculated for the whole image set. A correct detection is defined as any pixel detected as a fire pixel using (8) which is also manually labeled as a fire pixel in the original image. Similarly, a false detection is defined as any pixel detected as a fire pixel using (8) but not in the manually labeled fire regions. For each value of α, the average rates of correct detection and false detection are evaluated on a training image set and used in the ROC curve. Figure 6 shows the ROC curve. Using the ROC curve, a threshold value for α can be easily selected for the fire detection algorithm with predefined correct detection versus false detection rates. Different values of α result in different system performances.
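Under the assumption that (7) multiplies the three pairwise histogram likelihoods, the training and lookup steps might be sketched as follows; the function names are illustrative, not from the paper, and `numpy.histogram2d` builds the 24-level quantized histograms.

```python
import numpy as np

BINS = 24
RANGES = {"L": (0.0, 100.0), "a": (-110.0, 110.0), "b": (-110.0, 110.0)}

def pairwise_hists(L, a, b, bins=BINS):
    """Build normalized 2-D histograms P(L*,a*), P(L*,b*), P(a*,b*) from
    1-D arrays of labeled fire-pixel values, quantized into `bins` levels."""
    def h(u, v, ur, vr):
        H, _, _ = np.histogram2d(u, v, bins=bins, range=[ur, vr])
        return H / H.sum()
    Lr, ar, br = RANGES["L"], RANGES["a"], RANGES["b"]
    return h(L, a, Lr, ar), h(L, b, Lr, br), h(a, b, ar, br)

def fire_likelihood(L, a, b, hists, bins=BINS):
    """P(L*,a*,b*) approximated as the product of the three pairwise
    likelihoods (an assumed reading of (7))."""
    Hla, Hlb, Hab = hists
    def q(v, rng):
        lo, hi = rng
        return np.clip(((v - lo) / (hi - lo) * bins).astype(int), 0, bins - 1)
    iL, ia, ib = q(L, RANGES["L"]), q(a, RANGES["a"]), q(b, RANGES["b"])
    return Hla[iL, ia] * Hlb[iL, ib] * Hab[ia, ib]
```

Thresholding the returned likelihood at α then gives the binary map R5 of (8).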
Fig. 6. ROC curve for variable α ranging in [0, 0.01]. (Correct detection rate versus false detection rate; the marked operating points include α = 5×10–6 and α = 0.01.)

Using (3)-(6) and (8), a final fire pixel detection equation can be defined as

F(x, y) = 1 if Σ(i=1..5) Ri(x, y) = 5, and 0 otherwise, (9)

where F(x, y) is the final decision on whether a pixel located at spatial location (x, y) results from fire or not. Equation (9) means that if the inequalities defined in (3)-(6) and (8) all give 1 as their output for spatial location (x, y), then there is a fire at that spatial location.

Figure 7 shows the performance of fire segmentation using (9) on sample RGB images. Figure 7(a) is the original image, Fig. 7(b) is the result of applying (9), and Fig. 7(c) is the segmented image using the binary map in Fig. 7(b). It is clear that the proposed fire color model can adequately detect fire pixels under different conditions. For instance, the illumination shows high diversity between the input images (see Fig. 7(a)), and the proposed fire color model can still detect fire regions.
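The rule combination in (9) and the per-α evaluation behind the ROC curve can be sketched as follows. This is a simplified illustration: `final_fire_map` and `roc_points` are hypothetical names, and the rates follow the correct/false detection definitions given above.

```python
import numpy as np

def final_fire_map(R1, R2, R3, R4, R5):
    """Equation (9): a pixel is declared fire only when all five binary
    rules are 1, i.e. their sum equals 5 (a logical AND)."""
    return np.stack([R1, R2, R3, R4, R5]).astype(int).sum(axis=0) == 5

def roc_points(P, gt, alphas):
    """For each alpha, threshold the likelihood image P as in (8) and
    measure correct-detection and false-detection rates against the
    manually labeled ground-truth mask gt."""
    pts = []
    for alpha in alphas:
        det = P >= alpha
        cd = (det & gt).sum() / max(gt.sum(), 1)    # correct detection rate
        fd = (det & ~gt).sum() / max((~gt).sum(), 1)  # false detection rate
        pts.append((alpha, cd, fd))
    return pts
```

Sweeping `alphas` over, say, [0, 0.01] and plotting the (fd, cd) pairs reproduces the shape of the ROC analysis described above, from which an operating point for α can be read off.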
time t and the previous frame L*(x, y, t–1) at time t–1 is computed and thresholded to create a binary frame difference map, FD, at time t, that is,

FD(x, y, t) = 1 if │L*(x, y, t) – L*(x, y, t–1)│ ≥ TFD, and 0 otherwise, (10)

where TFD is a threshold value. Similar to the FD, the background difference is applied between the current frame and the background image to generate a binary background difference map, BD, that is,

BD(x, y, t) = 1 if │L*(x, y, t) – BG(x, y, t–1)│ ≥ TBD, and 0 otherwise, (11)

where BG(x, y, t–1) is the background image pixel value at spatial location (x, y) at time t–1, and TBD is a threshold value.

The values of TFD and TBD should be selected carefully to generate adequate binary maps. A fixed-threshold setting provides a computationally inexpensive solution to the threshold selection problem. However, it generally suffers from changes in illumination as well as from the noise that inherently exists in image acquisition systems. In order to overcome these drawbacks, a dynamic threshold selection method is developed. Let DI be the difference image representing either the interframe difference or the background difference image. Using the difference image DI, the dynamic threshold value τ(DI) is computed as follows:

τ(DI) = μ(DI) + σ(DI) if μ(DI) + σ(DI) ≥ 10, and 10 otherwise, (12)

where μ(DI) and σ(DI) are the mean and the standard deviation of the difference image pixels, respectively, that is,

μ(DI) = (1/N) Σx Σy DI(x, y),
σ(DI) = sqrt( (1/N) Σx Σy (DI(x, y) – μ(DI))² ).

Equation (12) automatically determines a dynamic threshold value according to the mean and standard deviation of the pixel values in the difference image DI. In order to reduce false detections resulting from low values of the dynamic threshold, which occur when no motion takes place between consecutive video frames, a lower limit value of 10 is used. The lower limit value can be tuned according to the system dynamics or other external effects under consideration. For example, if the level of noise is high, then the lower limit value should be selected high enough to reduce the false alarm rate, and vice versa. Using (12), the threshold values TFD and TBD are computed as TFD = τ(Ft,t–1) and TBD = τ(Bt,t–1), where Ft,t–1 = │L*(x, y, t) – L*(x, y, t–1)│ is the interframe difference, and Bt,t–1 = │L*(x, y, t) – BG(x, y, t–1)│ is the background difference.

In order to construct reliable background information from the video sequence, a background registration step is applied using the binary frame difference map and the binary background difference map. Using the FD, pixels not moving for a long time are considered reliable background pixels. For each pixel (x, y), a stationary index (SI) is kept to count how long it has been detected as a non-moving pixel, that is,

SI(x, y, t) = SI(x, y, t–1) + 1 if FD(x, y, t) = 0, and 0 otherwise. (13)

Using the SI, the background image BG is updated as follows:

BG(x, y, t) = L*(x, y, t) if SI(x, y, t) = TSI, and BG(x, y, t–1) otherwise, (14)

where TSI is a threshold which identifies how long it takes for each pixel to be considered a non-moving pixel. The value of the threshold TSI is equal to the number of frames that can be processed by the system per second. The initial values of SI and BG are all set to 0.

The FD and the BD are considered together to compute a resultant binary moving pixel map M according to

M(x, y) = FD(x, y, t) ⊕ BD(x, y, t), (15)

where ⊕ is the binary OR operator.

4. Analyzing Fire Regions in Consecutive Frames

The pixels which are detected both as fire using (9) and as moving using (15) are employed to detect candidate fire pixels (CF) as

CF(x, y, t) = F(x, y) ⊗ M(x, y), (16)

where ⊗ is the binary AND operator. The connected components of the binary image CF are further analyzed in consecutive frames. Connected components of size one pixel are labeled as noise and are not considered for further analysis. Let O(t) and NO(t) be the set of all pixels which compose one of the connected components of CF(t) and the number of pixels of the connected component O(t) at time t, respectively. The connected component O(t) is tracked in time to make a further decision by considering the behavior of fire. A fire grows spatially at its early stage and shows flickering behavior. In order to quantify this behavior, O(t) is observed in consecutive frames.
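A compact sketch of the moving-pixel stage in (10)-(15) might look like the following. The class and variable names are hypothetical; `t_si` plays the role of T_SI, and the dynamic threshold of (12) is applied separately to each difference image.

```python
import numpy as np

def dynamic_threshold(DI, floor=10.0):
    """Equation (12): mu + sigma of the difference image, with a lower limit."""
    t = DI.mean() + DI.std()
    return t if t >= floor else floor

class MovingPixelDetector:
    """Sketch of the frame/background differencing of section II.3."""
    def __init__(self, shape, t_si):
        self.bg = np.zeros(shape)        # background image BG, initialized to 0
        self.si = np.zeros(shape, int)   # stationary index SI, initialized to 0
        self.prev = None
        self.t_si = t_si                 # T_SI: frames processed per second

    def step(self, L):
        """Consume one L* frame and return the moving pixel map M of (15)."""
        if self.prev is None:
            self.prev = L
        fdiff = np.abs(L - self.prev)            # interframe difference F_{t,t-1}
        bdiff = np.abs(L - self.bg)              # background difference B_{t,t-1}
        FD = fdiff >= dynamic_threshold(fdiff)   # (10) with T_FD = tau(F)
        BD = bdiff >= dynamic_threshold(bdiff)   # (11) with T_BD = tau(B)
        self.si = np.where(FD, 0, self.si + 1)   # (13): count non-moving frames
        self.bg = np.where(self.si == self.t_si, L, self.bg)  # (14)
        self.prev = L
        return FD | BD                           # (15): binary OR of FD and BD
```

Combining the returned map with the color decision F of (9) by a logical AND then yields the candidate fire map CF of (16).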
The counter CGO(t) is generated for O(t) as

CGO(t) = CGO(t–1) + 1 if NO(t) ≥ NO(t–1), and CGO(t–1) otherwise (17)

to count the number of times that the connected component O(t) at time t has a higher number of pixels than the same connected component O(t–1) at time t–1. The number of detected fire pixels at the initial stage of a fire is less than the number of detected fire pixels at later stages when the fire grows spatially. The counter CGO(t) is designed by considering this behavior of fire. The counter increases its value at time t when the number of detected fire pixels is higher than the number of detected pixels at time t–1. However, because of the flickering behavior, it is possible to detect fewer fire pixels at time t than at time t–1. In this case, (17) does not update its value. The initial value of the counter in (17) is set to 0 and updated in time.

The value of the counter CGO(t) is used to measure the temporal behavior of fire at early stages. That is, at the early stages of a fire, the fire should be spatially growing, and hence the number of detected fire pixels should be increasing. Let FPS be the number of frames per second that our system can process. Using (17), we define a metric D(t) to decide whether the connected component O(t) is a fire region or not by

D(t) = ( CGO(t) – CGO(t – FPS + 1) ) / FPS. (18)

The value of D(t) is near 1 when the fire is spatially growing, which happens at the early stages of fire. On the other hand, the value of D(t) approaches 0 when the fire region does not have considerable spatial motion. Each connected component is tracked in consecutive frames, and the value of D(t) is computed. It is empirically found that D(t) ≥ 0.4 provides a good tradeoff between correct fire region identification and false alarm reduction, and a fire alarm is raised when D(t) ≥ 0.4. Note that D(t) starts to produce values only after FPS frames have passed from the first time the corresponding connected component starts being processed. The metric D(t) automatically eliminates the false alarms caused by fire-colored moving but not spatially growing objects. For such objects, the value of D(t) is much smaller than 0.4. The above procedure is applied to each connected component of CF.

III. Tests and System Performance

1. Tests on Proposed Fire Color Model

First, the performance of the proposed fire color model is tested. The color model is tested on different video sequences for a variety of environmental conditions, for example, daytime, nighttime, indoor, and outdoor. Equation (9) is applied to each frame of each video. The fire alarm is raised if the number of connected fire pixels detected is greater than five. The experimental results of the proposed fire detection method are shown in Table 1. Ft is the number of frames of a video sequence, and Ff is the number of frames containing fire in a video sequence. Fc is the number of frames (including fire and non-fire frames) in which fire pixels are correctly classified by the proposed algorithm. FP and FN refer to the numbers of frames that are classified as false positive and false negative, respectively. False positive means that the system recognizes fire in an image frame when there is no fire. Similarly, false negative means that the system does not detect fire in an image frame when there is indeed fire. The detection rate, Rd, of a video is defined as

Rd = Fc / Ft. (19)

Table 1. Experimental results of fire detection using color information.

Video | Ft     | Ff    | Fc     | FP | FN | Rd (%)
1     | 432    | 432   | 432    | 0  | 0  | 100.00
2     | 408    | 408   | 408    | 0  | 0  | 100.00
3     | 345    | 345   | 345    | 0  | 0  | 100.00
4     | 332    | 332   | 332    | 0  | 0  | 100.00
5     | 407    | 407   | 407    | 0  | 0  | 100.00
6     | 362    | 362   | 362    | 0  | 0  | 100.00
7     | 2,237  | 1,358 | 2,219  | 11 | 7  | 99.20
8     | 2,736  | 2,064 | 2,734  | 0  | 2  | 99.93
9     | 2,294  | 1,403 | 2,287  | 3  | 4  | 99.69
10    | 3,461  | 2,351 | 3,460  | 1  | 0  | 99.97
Total | 13,014 | 9,462 | 12,986 | 15 | 13 | 99.88

The average detection rate achieved is 99.88% on the test sequences shown in Table 1. The false negatives are due to very small fire regions at the initial combustion in some of the video sequences. False positives are mainly caused by the reflection of fire onto the sides of the metal container. Figure 8 shows sample fire detection results produced using only the proposed fire color model on different types of videos.
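The growth counter (17) and the decision metric (18) can be sketched as below; the class name is illustrative, and a `deque` holds the counter values over the last FPS frames so that D(t) compares CGO(t) with CGO(t – FPS + 1).

```python
from collections import deque

class RegionTracker:
    """Sketch of (17)-(18): track the pixel count of one connected
    component and compute the growth metric D(t); an alarm would be
    raised when D(t) >= 0.4."""
    def __init__(self, fps):
        self.fps = fps
        self.cg = 0                        # counter CG_O(t), initially 0
        self.prev_size = None
        self.history = deque(maxlen=fps)   # CG values over the last FPS frames

    def update(self, n_pixels):
        """Consume NO(t) for one frame and return D(t)."""
        if self.prev_size is not None and n_pixels >= self.prev_size:
            self.cg += 1                   # (17): count growth events
        self.prev_size = n_pixels
        self.history.append(self.cg)
        if len(self.history) < self.fps:
            return 0.0                     # D(t) undefined before FPS frames
        # (18): normalized growth over the last FPS frames
        return (self.history[-1] - self.history[0]) / self.fps
```

For a steadily growing region, D(t) approaches 1 and crosses the 0.4 alarm threshold; for a shrinking or static-noise region, the counter stalls and D(t) stays near 0.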
Fig. 8. Sample fire detection results produced using only the proposed fire color model on different types of videos, shown in different rows: columns 1, 3, and 5 are RGB frames from the input video sequences, and columns 2, 4, and 6 are the segmented fire regions according to the proposed fire color model.
Table 3. Performance comparisons of different fire detection methods on video sequences detailed in Table 2.

        | Method of [3]          | Proposed method
Name    | FP (%) | FN (%) | Rd (%) | FP (%) | FN (%) | Rd (%)
Movie 1 | 0.0    | 66.0   | 34.0   | 6.0    | 40.8   | 53.2
Movie 2 | 4.9    | 7.6    | 87.5   | 1.0    | 19.8   | 79.2
Movie 3 | 44.1   | 4.7    | 51.2   | 0.5    | 41.2   | 58.3
Movie 4 | 10.0   | 16.3   | 73.7   | 1.1    | 1.4    | 97.5
Movie 5 | 0.0    | 0.0    | 100    | 0.0    | 0.0    | 100
Movie 6 | 0.0    | 0.0    | 100    | 0.0    | 0.0    | 100
Movie 7 | 0.0    | 0.0    | 100    | 0.0    | 0.0    | 100
Movie 8 | 0.0    | 0.0    | 100    | 0.0    | 0.0    | 100
Movie 9 | 0.0    | 0.0    | 100    | 1.5    | 0.2    | 98.3
Average | 6.6    | 10.5   | 82.9   | 1.1    | 11.5   | 87.4

IV. Conclusion

In this paper, a new image-based real-time fire detection method was proposed which is based on computer vision techniques. The proposed method consists of three main stages: fire pixel detection using color, moving pixel detection, and analysis of fire-colored moving pixels in consecutive frames to raise an alarm. The proposed fire color model achieves a detection rate of 99.88% on the ten tested video sequences with diverse imaging conditions. Furthermore, the experiments on benchmark fire video databases show that the proposed method achieves comparable performance with respect to the state-of-the-art fire detection method.

The performance of the proposed fire detection system can be further improved by considering smoke at the early stages of fire. However, detecting smoke is a challenging task and prone to high false detection rates caused by fog, different lighting conditions caused by nature, and other external optical effects. Such false detections can be resolved by analyzing every smoke-like region; however, this yields a high computational load.

The motion information of fire is also considered to characterize fire regions. The proposed system assumes that the fire will grow gradually in the spatial domain. This might not be the case in some situations. For instance, the system might not be able to detect a fire caused by a sudden explosion. In order to alleviate such cases, the proposed system will be further improved to include different scenarios. Furthermore, texture and shape information of fire regions will also be investigated to improve the system's fire detection performance.

References

[1] T. Chen, P. Wu, and Y. Chiou, "An Early Fire-Detection Method Based on Image Processing," Proc. IEEE Int. Conf. Image Process., 2004, pp. 1707-1710.
[2] B.U. Toreyin, Y. Dedeoglu, and A.E. Cetin, "Flame Detection in Video Using Hidden Markov Models," Proc. IEEE Int. Conf. Image Process., 2005, pp. 1230-1233.
[3] B.U. Toreyin, Y. Dedeoglu, and A.E. Cetin, "Computer Vision Based Method for Real-Time Fire and Flame Detection," Pattern Recognition Lett., vol. 27, no. 1, 2006, pp. 49-58.
[4] T. Celik et al., "Fire Detection Using Statistical Color Model in Video Sequences," J. Visual Commun. Image Representation, vol. 18, no. 2, Apr. 2007, pp. 176-185.
[5] T. Celik, H. Demirel, and H. Ozkaramanli, "Automatic Fire Detection in Video Sequences," Proc. European Signal Process. Conf., Florence, Italy, Sept. 2006.
[6] W. Krüll et al., "Design and Test Methods for a Video-Based Cargo Fire Verification System for Commercial Aircraft," Fire Safety J., vol. 41, no. 4, 2006, pp. 290-300.
[7] G. Marbach, M. Loepfe, and T. Brupbacher, "An Image Processing Technique for Fire Detection in Video Images," Fire Safety J., vol. 41, no. 4, 2006, pp. 285-289.
[8] W.-B. Horng, J.-W. Peng, and C.-Y. Chen, "A New Image-Based Real-Time Flame Detection Method Using Color Analysis," Proc. IEEE Networking, Sensing Control, 2005, pp. 100-105.
[9] W. Phillips III, M. Shah, and N. da Vitoria Lobo, "Flame Recognition in Video," Proc. 5th Workshop Appl. Computer Vision, 2000, pp. 224-229.
[10] D. Malacara, Color Vision and Colorimetry, SPIE Press, 2002.

Turgay Celik received the PhD in electrical and electronic engineering from Eastern Mediterranean University, Gazimagusa, TRNC, Turkey. He is currently a research fellow with the Department of Chemistry, National University of Singapore, Singapore, and the Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore. He has produced extensive publications in various international journals and conferences and has acted as a reviewer for various international journals and conferences. His research interests are in the areas of biophysics; digital signal, image, and video processing; pattern recognition; and artificial intelligence. These include fluorescent microscopy, digital image/video coding, wavelets and filter banks, image/video processing, content-based image indexing and retrieval, and scene analysis and recognition.