
© 2022 IJRTI | Volume 7, Issue 5 | ISSN: 2456-3315

Fire Detection and Warning Application from Images and Videos using Deep Learning

1Avinash, 2Akash Prajapati, 3Ashish Kumar Verma, 4Aryan Rajvans, 5Dr. S. Yasotha
1,2,3,4Students, 5Assistant Professor
Computer Science and Engineering
Sri Eshwar College of Engineering, Coimbatore, India.

Abstract: To address this problem, a number of fire image classification approaches have been proposed; most of these rely on rule-based processing or handcrafted features. We propose a novel deep convolutional neural network (CNN) algorithm for high-precision fire image recognition. We use adaptive piecewise linear units in the hidden layers of the network instead of the traditional rectified linear units or sigmoid activation functions of older techniques. We also create a small dataset of fire photos to help us train and test our model. To address the overfitting caused by training the network on a limited dataset, we use traditional data augmentation methods and generative adversarial networks to increase the number of original photographs available. This research also examines handcrafted features in the light of fire detection rules.

These rules are derived from 500 forest images taken under different imaging conditions. Non-fire pixels are distinguished by the light intensity of the visible image, while fire pixels are distinguished by the color appearance of flame or smoke and the presence of fire. This representation allows a class-by-class examination of the performance of each rule. It is demonstrated that the ideas and methods in the current literature are class-dependent, with none of them performing equally well across all classes. Meanwhile, a recently proposed strategy, based on machine learning methods and incorporating all of the extracted features, outperforms the existing state-of-the-art methods of the literature in the various classes. This approach promises significant advances in the design of future devices for detecting fire in any setting.

Index Terms: Fire detection, deep learning, fire and non-fire.

Keywords: Deep Learning, Fire Detection, Machine Learning, Multi-Layer Perceptron (MLP)

I. INTRODUCTION
Because of the frequent occurrence of large fires with a negative impact on safety and human well-being, the use of fire detection systems has grown. Smoke and flame sensors are used in this detection approach, which is increasingly based on electronic cameras. Those techniques, however, have the drawback that they only work under specific conditions; in the worst case, failure can result in heavy losses if the cameras are damaged or are not installed or performing as intended. Surveillance cameras are being introduced to address these concerns and to overcome the limits of such devices. With the progress of computer vision, the recognition precision required for fire detection with such devices has increased, and a wide range of cameras can be used. Such systems have a few key advantages over traditional fire detection methods. In comparison to traditional strategies, this type of recognition is less expensive to deploy and simpler to implement. Furthermore, compared with other traditional detection techniques, the response time of a vision-camera-based fire detection framework is very fast, because no physical criterion is required to trigger the camera, and a large area can be monitored depending on the camera used. The most useful benefit of this type of system is that the captured fire source can be saved as an image or video, which can be used to significantly improve the fire recognition technique.

In this research, we present an algorithm that combines the fire's color appearance information with its edge information. Using the combined results of both, a boundary is drawn to separate the important details from the images in order to detect and recognize the fire.

Most current fire detection systems rely on sensors that respond to smoke, flame and temperature. Such sensors have high sensitivity, strong anti-interference capability, quick response, a long service life, low price, and a wide range of applications. However, in open-space environments, because of height, vast space, air movement and other factors, smoke, gas and temperature signals dissipate during transmission, so the signal that finally reaches the detector is extremely weak. Smoke, temperature, gas and other indicators therefore lose detection precision, making it easy to miss the ideal moment to raise the alarm, and the fire grows into a disaster hazard. Using such sensors to detect fire is nearly impossible in some open areas, such as wooded areas. For large open spaces it is therefore necessary to guard against fire in multiple ways. With the growth of computer vision, digital image processing and pattern recognition technology, video-based fire recognition has been gradually studied and developed to overcome the shortcomings of traditional fire detection. In this work, image processing is used in place of traditional detectors to examine, collect and process images of a large-scale fire scene, ultimately achieving continuous fire localization and recognition.

Fire monitoring has two main components: fire localization and fire detection. While burning, fire exhibits distinct color properties as well as crucial morphological characteristics. Fire is one of the most serious threats to human life and property in the world. Point-type thermal and flame detectors are commonly used to avoid large-scale fire damage; however, such detectors must be close to the fire and easily fail or are damaged under adverse conditions. With the advancement of computer vision and image processing, video-based fire detection is now a common practice that offers significant advantages over traditional methods, such as faster response and wide-area detection.

Because smoke is an early sign of fire, smoke detection provides a greater detection range than flame detection. Recently, a variety of algorithms for fire and smoke detection have been proposed. One approach divides each frame of a video stream into small blocks of 32×32 pixels, uses the discrete cosine transform and the wavelet transform to extract features, and finally applies a support vector machine to recognize fire in the recordings. Another extracts color, wavelet coefficients, motion direction, histograms of oriented gradients and other feature vectors for each candidate block, and then uses two trained random forests to determine whether the candidate block contains fire or not. Histograms of local binary patterns and local binary pattern variance pyramids have been fed to a trained neural network classifier to separate fire from non-fire, and shape-invariant features extracted on multiscale partitions have been used for video fire detection. Although fire recognition has come a long way and significant progress has been made, a number of concerns remain to be addressed. Traditional fire detection or classification techniques can be summarized in two stages: first, compute handcrafted features from the input images, such as color, texture, shape, irregularity, flicker, or frequency; second, train a classifier on the extracted features to test whether an image contains fire or not.
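The paper does not give code for this two-stage pipeline; the following is a minimal illustrative sketch in Python, assuming a hypothetical dataset layout of dataset/fire/ and dataset/non_fire/ images and using simple color histograms as the handcrafted features with a support vector machine as the classifier. Other handcrafted features named above (texture, shape, flicker) could be appended to the same feature vector.

# Illustrative two-stage pipeline: handcrafted color features + classifier.
# Dataset layout and feature choice are assumptions, not taken from the paper.
import glob
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def color_histogram(image_bgr, bins=8):
    # Stage one: a flattened 3-D color histogram as a simple handcrafted feature.
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def load_features(pattern, label):
    feats, labels = [], []
    for path in glob.glob(pattern):
        image = cv2.imread(path)
        if image is None:
            continue
        feats.append(color_histogram(image))
        labels.append(label)
    return feats, labels

fire_x, fire_y = load_features("dataset/fire/*.jpg", 1)        # hypothetical folders
other_x, other_y = load_features("dataset/non_fire/*.jpg", 0)
X = np.array(fire_x + other_x)
y = np.array(fire_y + other_y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

classifier = SVC(kernel="rbf")        # stage two: train a classifier on the features
classifier.fit(X_train, y_train)
print("test accuracy:", classifier.score(X_test, y_test))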

II. OBJECTIVE

The primary goal of this project is to detect fire in videos or photographs: to recognize patterns in photographs that may indicate fire, and to build a model that correctly identifies new, unseen images with a statistically higher accuracy than the provided baseline. A sub-goal is to obtain high-quality data that allows us to assess our approach. Duplicate fire photos are removed from the database, only images relevant to fire detection are retained, and all photos are accurately scanned to determine whether or not they contain fire.

A. FUNCTIONAL REQUIREMENTS
OpenCV: collecting fire videos and photos.
Python: developing the web application.

B. NON-FUNCTIONAL REQUIREMENTS
Step 1: Collect photos and upload videos.
Step 2: Pass the raw data.
Step 3: Locate the data set and the stored images.
Step 4: Segment the images using machine learning algorithms and packages.
Step 5: Trace the raw data and the old data of the project.
Step 6: Scan all images using the data set.
Step 7: Finally, predict whether each image contains fire or not (a minimal sketch of this flow follows below).
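As a rough illustration of these steps, the sketch below (our assumption, not code from the paper) reads every image in a hypothetical dataset/ folder with OpenCV, preprocesses it, and asks a previously trained Keras model, assumed to be saved as fire_model.h5 with a single sigmoid output, whether each image contains fire.

# Illustrative scan-and-predict loop over a collected image set.
# "fire_model.h5" is a hypothetical trained binary CNN with a sigmoid output.
import glob
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("fire_model.h5")

for path in glob.glob("dataset/*.jpg"):           # Steps 1-3: collected and stored images
    frame = cv2.imread(path)
    if frame is None:
        continue
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # basic preprocessing
    x = cv2.resize(rgb, (224, 224)).astype("float32") / 255.0
    prob = float(model.predict(x[np.newaxis, ...], verbose=0)[0][0])
    label = "FIRE" if prob > 0.5 else "NO FIRE"   # Step 7: predict fire or not
    print(f"{path}: {label} (p={prob:.2f})")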

III. PROBLEM STATEMENT

Developing regions of Africa, Asia and the Americas face many challenges, one of which is the occurrence of fires and the inability of fire services to control them successfully. Most of these countries are adopting new strategies to strengthen their capabilities, which has changed the scale of the fire hazard. In many countries, fire and damage statistics are not available, and data collection is difficult. As a result, the task is to find the image of the fire and then calculate the expected output.
Fire presents a significant risk to businesses. It can kill or seriously injure employees or visitors and can also damage or destroy buildings, equipment or stock. The major causes of fire are electrical faults, cooking, smoking and rises in environmental temperature.

Fire can cause problems anywhere: in public places, where housekeeping standards are poor, around hot processes such as welding and cutting, with older or poorly maintained equipment or electrical circuits, or near flammable liquids or gases. As a result, the main task is to find the image or video of the fire and then warn people about the fire.

IV. SYSTEM REQUIREMENTS

HARDWARE REQUIREMENTS:
i) Processor: Pentium Dual-Core, 2.3 GHz
ii) Hard Disk: 250 GB or higher
iii) RAM: 2 GB (minimum)

SOFTWARE REQUIREMENTS:
i) Operating System: Windows 7 or later
ii) Language used: Python (OpenCV and CNN)
iii) Tools: Jupyter Notebook, Anaconda, Spyder, packages
iv) Keras
v) TensorFlow
3. DESCRIPTION OF MODULES
3.1 Extract Images Frame From Video For Fire
3.2 Color conversion Module
3.3 Fire Detector Module
3.4 Alarm Module
3.1 EXTRACT IMAGES FRAME FROM VIDEO FOR FIRE
This module deals with the video data processing required for the system to function. Its primary function in the system is to read the video input and extract scene frames.
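A minimal sketch of this frame-extraction step with OpenCV is given below; the input file name and the one-frame-per-second sampling interval are illustrative assumptions, not values from the paper.

# Illustrative frame extraction; "fire_clip.mp4" and the sampling rate are assumptions.
import os
import cv2

os.makedirs("frames", exist_ok=True)
capture = cv2.VideoCapture("fire_clip.mp4")
frame_index, saved = 0, 0
while True:
    ok, frame = capture.read()                    # read the next scene frame
    if not ok:
        break
    if frame_index % 30 == 0:                     # roughly one frame per second at 30 fps
        cv2.imwrite(f"frames/frame_{saved:05d}.jpg", frame)
        saved += 1
    frame_index += 1
capture.release()
print(f"extracted {saved} frames")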

3.2 COLOR CONVERSION MODULE


Video sources may use a variety of formats or configurations for their raw data. For the system to work, all data must be of the same type, with the same format and configuration. This module therefore converts the video data to the RGB format, which facilitates further processing of the video data.
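Since OpenCV decodes frames in BGR order by default, the conversion can be sketched in a single call per target color space; the file name below is illustrative.

# Illustrative color conversion of a single extracted frame.
import cv2

frame_bgr = cv2.imread("frames/frame_00000.jpg")           # OpenCV decodes images as BGR
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)     # unified RGB representation
frame_ycc = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)   # YCbCr (stored as Y, Cr, Cb) for the color rules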

3.3 FIRE DETECTOR MODULE


This module is the core of the framework. It is concerned with frame checking and pixel analysis, the two basic techniques used to separate background and non-fire pixels from fire pixels. Accordingly, this module can be divided into two parts: a testing part and a classifier part.
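The paper does not list the layers of the classifier part; the following is a minimal Keras binary CNN of our own choosing, intended only to illustrate what the classifier part of this module could look like.

# Minimal illustrative Keras CNN for the classifier part (layer choices are ours).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # probability that the frame contains fire
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()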

3.4 ALARM MODULE


The alarm module is concerned with raising an alert upon detection of fire in the visible scene. It continuously checks for fire pixels in the final output produced by the classifier part; when a potential fire region is identified, it raises a warning indicating the presence of fire.
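The alarm logic can be sketched as a small routine called whenever the classifier part reports fire pixels; the email step mirrors the alerting described later in the results, but the addresses and SMTP server below are placeholders rather than the paper's Firebase implementation.

# Illustrative alarm routine; addresses and SMTP server are placeholders.
import smtplib
from email.message import EmailMessage

def raise_alarm(frame_index, probability):
    # Warn on-screen and by email when the classifier reports a potential fire region.
    print(f"[ALARM] possible fire in frame {frame_index} (p={probability:.2f})")
    message = EmailMessage()
    message["Subject"] = "Fire detected"
    message["From"] = "detector@example.com"
    message["To"] = "operator@example.com"
    message.set_content(f"Potential fire in frame {frame_index}, confidence {probability:.2f}.")
    with smtplib.SMTP("localhost") as server:      # placeholder mail server
        server.send_message(message)

# Hypothetical use inside the monitoring loop:
# if prob > 0.5:
#     raise_alarm(frame_index, prob)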

V. METHODOLOGY
This algorithm is based on the fact that visible-color images of fire have a high absolute value in the red component of the RGB coordinates. This property allows simple threshold-based criteria on the red component of color images for segmenting fire regions in natural scenes. However, fire is not the only source of high values in the red component; fire is further characterized by the red component together with the ratios between the blue and green components.
An image is loaded into the color detection system. The color recognition system produces as output an image with RGB pixel properties and the selected area of the color trace. The rule-based color model approach has been followed due to its simplicity and effectiveness. For this, the RGB and YCbCr color spaces are used. For pixel classification we have identified seven rules for fire; if a pixel satisfies all seven rules, we say that the pixel belongs to the fire class.
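The seven rules themselves are not listed in the paper, so the sketch below substitutes a few commonly used RGB/YCbCr conditions (a red threshold, an R >= G >= B ordering, and chrominance comparisons) purely as placeholders for the rule set; a pixel is labeled fire only if it satisfies every rule.

# Illustrative rule-based fire-pixel mask; every threshold here is a placeholder.
import cv2
import numpy as np

def fire_pixel_mask(frame_bgr, red_threshold=190):
    pixels = frame_bgr.astype(np.int32)
    b, g, r = pixels[..., 0], pixels[..., 1], pixels[..., 2]
    ycc = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.int32)
    y, cr, cb = ycc[..., 0], ycc[..., 1], ycc[..., 2]

    rule1 = r > red_threshold            # strong red component
    rule2 = (r >= g) & (g >= b)          # typical flame color ordering
    rule3 = y >= cb                      # luminance dominates blue chrominance
    rule4 = cr >= cb                     # red chrominance dominates blue chrominance
    mask = rule1 & rule2 & rule3 & rule4 # a pixel must satisfy every rule
    return mask.astype(np.uint8) * 255

frame = cv2.imread("frames/frame_00000.jpg")       # hypothetical extracted frame
mask = fire_pixel_mask(frame)
print("fire-colored pixels:", int(cv2.countNonZero(mask)))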

Fig. 1: Flow chart for fire detection and warning using deep learning

Fig. 2: Capture the image if fire exists


VI. DISCUSSION OF RESULTS


The aim of our work was to develop an application capable of detecting fire in videos and images that is robust and works in any environment. To this end, we experimented with various deep learning and classification models and selected the ResNet-50 + SVM combination for implementation, as it provided the best performance metrics: its accuracy, precision and recall were 97.8%, 97.46% and 97.66%, respectively. An email alert feature has also been incorporated into our application to provide a logging system as well as real-time alerts to relevant stakeholders, implemented using Firebase. The GUI provides a user-friendly experience and allows users with non-technical backgrounds to use the application. The application performed exceptionally well during testing: it was able to identify fire in all twelve test fire videos but misclassified some instances in non-fire videos. Compared to existing hardware solutions, our application is economical, robust, reliable, and delivers high performance without the need to install dedicated infrastructure. Thanks to the use of deep learning and transfer learning techniques, our models are easier to build, adapt and upgrade, require fewer computing resources, and provide better performance than existing software solutions that rely on feature engineering and extensive domain knowledge.
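A condensed sketch of the selected ResNet-50 + SVM combination is shown below, assuming Keras' pretrained ResNet-50 (ImageNet weights, average pooling) as a frozen feature extractor and scikit-learn's SVC as the classifier; the dataset paths and preprocessing details are our assumptions, not the paper's exact pipeline.

# Illustrative ResNet-50 feature extractor with an SVM head; paths are assumptions.
import glob
import cv2
import numpy as np
from sklearn.svm import SVC
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")  # frozen backbone

def extract_features(pattern):
    features = []
    for path in glob.glob(pattern):
        image = cv2.imread(path)
        if image is None:
            continue
        image = cv2.cvtColor(cv2.resize(image, (224, 224)), cv2.COLOR_BGR2RGB)
        batch = preprocess_input(image.astype("float32")[np.newaxis, ...])
        features.append(backbone.predict(batch, verbose=0)[0])   # 2048-D descriptor
    return features

fire = extract_features("dataset/fire/*.jpg")
non_fire = extract_features("dataset/non_fire/*.jpg")
X = np.array(fire + non_fire)
y = np.array([1] * len(fire) + [0] * len(non_fire))

svm = SVC(kernel="linear").fit(X, y)               # SVM head on the CNN features
print("training accuracy:", svm.score(X, y))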

VIII. CONCLUSIONS
Candidate regions are detected using a Fast R-CNN network trained to detect fire, and the detected fire zones are verified with Linear Dynamic Systems (LDS). Expanding our dataset with additional images allows us to assess the effectiveness of the proposed methodology, and the approach is extended to fire detection in video sequences using dynamic textures. To distinguish fire-colored objects from actual fire we used VLAD encoding, which improves performance and significantly reduces detection errors. The results show that the proposed approach retains high true-positive rates while significantly reducing the false positives caused by fire-colored objects.
The main objective of this study is to automatically detect fire in frames extracted from videos using computer vision methods implemented in real time with the help of the OpenCV library. The proposed solution is intended to be integrated into an existing security system, which means the use of regular industrial or personal video cameras. A necessary precondition is that the camera is stable. From the perspective of computer vision and image processing, the stated problem corresponds to the detection of dynamically changing objects based on their color and motion characteristics. Since stationary cameras are used, a background-subtraction method can provide effective segmentation of moving objects in the video sequence, and candidate fire zones among the segmented foreground objects can then be determined by rule-based color recognition.
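Under the stated stationary-camera assumption, the background-subtraction step can be sketched with OpenCV's MOG2 subtractor as below; intersecting the resulting foreground mask with the color rules of Section V would yield the candidate fire zones. The video file name and subtractor parameters are illustrative.

# Illustrative background subtraction for a stationary camera; parameters are placeholders.
import cv2

capture = cv2.VideoCapture("surveillance.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    moving = subtractor.apply(frame)               # foreground mask of moving pixels
    # Candidate fire zones: moving pixels that also satisfy the color rules, e.g.
    # candidates = cv2.bitwise_and(moving, fire_pixel_mask(frame))
    cv2.imshow("moving objects", moving)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()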

ARCHITECTURAL DIAGRAM

Fig. 3: Architectural diagram for fire detection and warning application

REFERENCES

1. A. Filonenko, L. Kurnianggoro, K. Jo (2017), "Comparative study of modern convolutional neural networks for smoke detection on image data", 10th International Conference on Human System Interactions (HSI), pp. 64-68.
2. A. J. Dunnings, T. P. Breckon (2018), "Experimentally defined convolutional neural network architecture variants for non-temporal real-time fire detection", 25th IEEE International Conference on Image Processing (ICIP), pp. 1558-1562.
3. A. Namozov, Y. Cho (2018), "An efficient deep learning algorithm for fire and smoke detection with limited data", Adv. Electr. Comput. Eng., 18, pp. 121-128.
4. Chao Hu, Peng Tang, WeiDong Jin, ZhengWei He, Wei Li (2018), "Real-Time Fire Detection Based on Deep Convolutional Long-Recurrent Networks and Optical Flow Method", 37th Chinese Control Conference (CCC).
5. C. Tao, J. Zhang, P. Wang (2016), "Smoke detection based on deep convolutional neural networks", 2016 International Conference on Industrial Informatics - Computing Technology, Intelligent Technology, Industrial Information Integration (ICICI), pp. 150-153.
6. C. Thou-Ho, W. Ping-Hsueh, C. Yung-Chuen (2004), "An Early Fire-Detection Method Based on Image Processing", 2004 International Conference on Image Processing, vol. 3, pp. 1707-1710.
7. S. Frizzi et al. (2016), "Convolutional neural network for video fire and smoke detection", IECON 2016 - 42nd Annual Conference of the IEEE Industrial Electronics Society, IEEE.
8. Gao Xu, Qixing Zhang, Dongcai Liu, Gaohua Lin, Jinjun Wang, Yongming Zhang (2019), "Adversarial Adaptation From Synthesis to Reality in Fast Detector for Smoke Detection", IEEE Access.
9. J. Huang, V. Rathod, C. Sun, et al. (2017), "Speed/accuracy trade-offs for modern convolutional object detectors", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3296-3297.
10. K. Muhammad, J. Ahmad, I. Mehmood, et al. (2018), "Convolutional neural networks based fire detection in surveillance videos", IEEE Access, 6, pp. 18174-18183.
11. K. Muhammad, J. Ahmad, S. W. Baik (2018), "Early fire detection using convolutional neural networks during surveillance for effective disaster management", Neurocomputing, 288, pp. 30-42.
12. W. Lee, S. Kim, Y.-T. Lee, H.-W. Lee, M. Choi (2017), "Deep neural networks for wildfire detection with unmanned aerial vehicles", 2017 IEEE International Conference on Consumer Electronics (ICCE), pp. 252-253.
13. M. Everingham, S. M. A. Eslami, L. Van Gool, et al. (2015), "The Pascal Visual Object Classes challenge: a retrospective", Int. J. Comput. Vis., 111, pp. 98-136.
14. N. M. Dung, D. Kim, S. Ro (2018), "A video smoke detection algorithm based on cascade classification and deep learning", KSII Trans. Internet Inf. Syst., 12, pp. 6018-6033.
