0% found this document useful (0 votes)
14 views33 pages

AIML Miniproject

The project report focuses on developing a fire and smoke detection system using Convolutional Neural Networks (CNNs) and Dense Neural Networks (DNNs) to improve upon traditional detection methods. It highlights the limitations of existing systems and aims to create a robust solution that enhances safety through real-time detection capabilities. The findings indicate that CNNs outperform DNNs in accuracy for image-based fire detection, emphasizing the importance of model architecture in achieving effective results.

Uploaded by

953622205021
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
14 views33 pages

AIML Miniproject

The project report focuses on developing a fire and smoke detection system using Convolutional Neural Networks (CNNs) and Dense Neural Networks (DNNs) to improve upon traditional detection methods. It highlights the limitations of existing systems and aims to create a robust solution that enhances safety through real-time detection capabilities. The findings indicate that CNNs outperform DNNs in accuracy for image-based fire detection, emphasizing the importance of model architecture in achieving effective results.

Uploaded by

953622205021
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 33

CS3491 ARTIFICIAL INTELLIGENCE AND MACHINE

LEARNING
FIRE AND SMOKE DETECTION USING CNN
A Mini Project Report

Submitted by
HARI KRISHNAN K (953622205021)
HARISH S (953622205022)
MOHAMED ISLAAM K A (953622205028)
RAM PANDIAN G (953622205034)

In partial fulfilment for the award of the degree


degree of
BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY

RAMCO INSTITUTE OF TECHNOLOGY


RAJAPALAYAM – 626 117

ANNA UNIVERSITY: CHENNAI 600025


JUNE 2024

i
ANNA UNIVERSITY: CHENNΑΙ 600 025

BONAFIDE CERTIFICATE

Certified that this project report "Fire and Smoke Detection using CNN" is the
Bonafide work of "HARIKRISHNAN K, HARISH.S, MOHAMED ISLAAM K
A, RAM PANDIAN G who carried out the project work under my supervision.

SIGNATURE SIGNATURE
Dr. ANUSUYA. V Mrs. MAREESWARI. G
HEAD OF THE DEPARTMENT SUPERVISOR
Department of Information Technology Assistant Professor
Ramco Institute of Technology Department of Information Technology
Rajapalayam. Ramco Institute of Technology
Rajapalayam.

The project report submitted for the viva voce held on ………….

INTERNAL EXAMINER EXTERNAL EXAMINER

ii
ABSTRACT
Fire detection is a crucial component in maintaining safety across various environments, from
industrial settings to natural landscapes. Traditional fire detection methods, such as smoke
detectors and thermal sensors, often face challenges related to response time and varying
environmental conditions. These methods can be prone to false alarms and may not respond
quickly enough to emerging fire threats. In contrast, this project investigates the use of advanced
deep learning models, specifically Convolutional Neural Networks (CNNs) and Dense Neural
Networks (DNNs), to automatically detect fire in images. By training these models on a
comprehensive dataset composed of fire and non-fire images, the study aims to evaluate their
performance and determine their accuracy and effectiveness. The findings reveal that CNNs
significantly outperform DNNs in image-based fire detection tasks. This superiority is attributed to
the architecture of CNNs, which are better suited for extracting complex features from visual data.
The project's results underscore the critical role of model architecture in achieving high detection
accuracy. Ultimately, this project seeks to develop a robust and efficient solution for real-time fire
detection, enhancing safety measures and enabling early warning systems. This approach has the
potential to revolutionize fire detection, providing faster and more reliable alerts, thereby
mitigating the impact of fires and saving lives and property.

iii
TABLE OF CONTENTS
CHAPTER TITLE PAGE
NO. NO.
ABSTRACT iii
LIST OF FIGURES vi
LIST OF ABBREVIATION vii
1 INTRODUCTION 1
1.1 Aim and Objective 1
1.2 Project Domain 1
1.3 Scope of the project 1
1.4 Overview of Project problem statement 2
2 LITERATURE SURVEY 2
2.1 Introduction 2
2.1.1 Deep Learning Approach for Fire Detection using CNNs 2
2.1.2 DNNs for Fire Detection in Video Streams 2
2.1.3 Review of Fire Detection Technologies 3
2.1.4 Transfer Learning for Fire Detection 3
2.1.5 Hybrid Approach Combining CNNs and 3
Traditional Image Processing Techniques
2.1.6 Deployment of Deep Learning Models in 3
Embedded Systems

2.1.7 Comparative Analysis of Deep Learning 4


Architectures for Fire Detection
3 EXISTING SYSTEM 4
3.1 System Model 4
3.2 Literature Conclusion 4
4 PROPOSED WORK 5
4.1 Introduction 5
4.2 System Framework 5
4.3 Proposed Methodology 6
4.3.1 CNN 6
4.3.2 DNN 6

iv
4.3.3 Comparison and Visualization 6
5 SYSTEM SPECIFICATION 7
5.1 Software Requirement 7
5.1.1 Anaconda 7
5.1.2 Tensorflow 7
5.1.3 Keras 7
5.1.4 NumPy 7
5.1.5 Pandas 7
5.1.6 OpenCV 7
5.2 Hardware Requirement 8
5.3 Installation procedure 8
5.4 Dataset Description 9
6 IMPLEMENTATION AND RESULTS 10
7 PERFORMANCE COMPARISON 11
8 CONCLUSION AND FUTURE SCOPE 12
APPENDIX – I (Coding) 13
APPENDIX – II (References) 26

v
LIST OF FIGURES

FIGURE NAME OF THE FIGURES PAGE


NO. NO.
3.1 COMPARISON OF CNN AND DNN 11
3.2 CONFUSION MATRIX FOR CNN 23
3.3 CONFUSION MATRIX FOR DNN 24
3.4 SAMPLE IMAGES OF THE PREDICTION 24
3.5 GRAPH ACCURACY AND LOSSES 25

vi
LIST OF ABBREVATIONS

S. NO. ABBREVATION EXPANSION

1 CNN Convolutional Neural Network

2 DNN Dense Neural Network

3 SGD Stochastic Gradient Descent

4 CV2 OpenCV

5 Numpy Numerical Python

6 Keras High-Level Neural Networks API


7 TF Tensorflow

8 LR Learning Rate

vii
I INTRODUCTION

1.1 Aim and Objectives

The primary aim of this project is to develop and evaluate deep learning models,
specifically Convolutional Neural Networks (CNNs) and Dense Neural Networks (DNNs),
for automated fire detection in images. The objectives are as follows:

• Develop a robust dataset comprising images depicting fire and non-fire scenarios.
• Design and implement CNN and DNN models tailored for fire detection tasks.
• Train and fine-tune the models on the collected dataset to optimize performance.
• Evaluate the models using standard metrics such as accuracy, loss, and confusion
matrix.
• Compare the performance of CNN and DNN models to determine the most
effective architecture.
• Provide visualizations and qualitative analysis of the model predictions to offer
insights into their behavior and effectiveness.

1.2 Explain about the project domain


The project falls within the domain of computer vision and artificial intelligence, specifically
focusing on image classification using deep learning techniques. Computer vision involves
enabling machines to interpret and make decisions based on visual data, which is crucial in
applications ranging from autonomous driving to surveillance and safety systems. In this project,
deep learning models, particularly CNNs and DNNs, are employed to analyze and classify images
for the purpose of fire detection. This domain is critical for developing advanced safety
measures and early warning systems in various environments, including industrial settings,
residential areas, and natural landscapes.

1.3 Scope of the Project

The scope of this project includes:

• Data Collection and Preprocessing: Acquiring a diverse set of fire and non-fire
images, resizing, normalizing, and encoding them for model training.
• Model Development: Designing CNN and DNN architectures tailored for image
classification.
• Training and Evaluation: Implementing training routines, monitoring performance
metrics, and evaluating models on test data.
• Performance Comparison: Conducting a detailed comparison of CNN and DNN
models to determine the best-performing architecture.
• Visualization: Creating visual aids such as graphs and confusion matrices to illustrate
model performance.
• Deployment Potential: Discussing the feasibility of integrating the best-performing
model into real-time fire detection systems for practical applications.

1
1.4 Overview of Project problem statement

Fire detection is a critical safety concern in many environments, yet traditional


detection methods such as smoke detectors and thermal sensors face significant
limitations. These methods often suffer from delayed response times, high false alarm
rates, and environmental constraints. The need for a more reliable, efficient, and rapid
fire detection system is paramount.

This project addresses the problem by leveraging deep learning models to automatically
detect fire in images. CNNs and DNNs are explored as potential solutions due to their
proven capabilities in image classification tasks. By training these models on a carefully
curated dataset and comparing their performance, this project aims to identify a robust
solution that can be deployed in real-time fire detection systems, ultimately enhancing
safety measures and providing early warnings in critical situations.

II LITERATURE SURVEY

2.1 Introduction

The literature survey explores various approaches and methodologies previously employed
for fire detection. The focus is on understanding the evolution of detection techniques, the
role of deep learning in advancing these methods, and identifying gaps that this project aims
to address. This survey is essential to frame the context of the current research, highlight the
progress made in the field, and justify the need for the proposed solutions.

2.1.1 Deep Learning Approach for Fire Detection using CNNs

Silva et al. (2018) present a comprehensive study on using Convolutional Neural Networks
(CNNs) for fire detection. CNNs have demonstrated remarkable capabilities in handling and
processing image data due to their ability to learn and extract hierarchical features. In their
research, Silva et al. design a CNN architecture specifically tailored for fire detection, which
involves multiple convolutional layers, pooling layers, and fully connected layers. The model
is trained on a dataset containing various fire and non-fire images, achieving high accuracy in
distinguishing between the two categories. The paper provides detailed insights into the
model's architecture, the training process, including data augmentation techniques to enhance
generalization, and the evaluation metrics used to assess the model's performance. This study
forms a foundational basis for this project, particularly in terms of CNN design and
implementation.

2.1.2 DNNs for Fire Detection in Video Streams

Johnson et al. (2019) explore the use of Dense Neural Networks (DNNs) for fire detection in
video streams, comparing their performance against traditional machine learning methods.
Their research highlights the significant improvements in detection accuracy that DNNs offer
over conventional methods such as Support Vector Machines (SVM) and Random Forests.
The study involves training DNNs on frame sequences extracted from video streams,
focusing on capturing temporal patterns associated with fire events. The paper discusses
various model optimization techniques, including dropout regularization to prevent

2
overfitting, and learning rate schedules to enhance training efficiency. Johnson et al.
emphasize the potential of DNNs for real-time fire detection applications, providing valuable
insights into how these models can be optimized and deployed in practical scenarios.

2.1.3 Review of Fire Detection Technologies

Zhang and Wang (2020) conduct a comprehensive review of fire detection technologies,
covering traditional methods such as infrared sensors and smoke detectors, as well as modern
image-based approaches. The review critically analyzes the strengths and limitations of each
technology. Infrared sensors and smoke detectors, while effective in many settings, often face
challenges related to environmental constraints and false alarm rates. The paper argues that
deep learning models, particularly CNNs, offer the most promising results for fire detection
due to their ability to learn complex features from image data. The review highlights several
case studies where CNN-based models have outperformed traditional methods, suggesting
that deep learning represents the future direction for fire detection technology.

2.1.4 Transfer Learning for Fire Detection

Kim et al. (2017) investigate the application of transfer learning in fire detection,
demonstrating how pre-trained CNN models can be fine-tuned on fire detection datasets.
Transfer learning involves leveraging models pre-trained on large datasets, such as ImageNet,
and adapting them to specific tasks with relatively smaller datasets. This approach
significantly reduces training time and computational resources while achieving high
performance. Kim et al. detail the process of selecting appropriate pre-trained models, fine-
tuning the layers, and adjusting hyperparameters to optimize performance on the fire
detection task. The study shows that transfer learning can achieve comparable or superior
results to models trained from scratch, making it a valuable technique for this project's
implementation.

2.1.5 Hybrid Approach Combining CNNs and Traditional Image


Processing Techniques
Gupta et al. (2021) propose a hybrid approach that integrates CNNs with traditional image
processing techniques for fire detection. This method aims to combine the strengths of both
approaches to enhance detection reliability and robustness. The traditional techniques include
edge detection, color space transformations, and texture analysis, which are used to
preprocess the images before feeding them into the CNN. The CNN then learns to identify
fire patterns with greater accuracy, benefiting from the enhanced features extracted during
preprocessing. The paper demonstrates that this hybrid approach can reduce false positives
and improve detection rates, providing a potential pathway for enhancing the models
developed in this project.

2.1.6 Deployment of Deep Learning Models in Embedded Systems

Liu and Ma (2020) focus on the deployment of deep learning models for fire detection in
embedded systems, addressing the challenges associated with implementing resource-
intensive models on limited hardware. Their research discusses techniques for optimizing
model performance, such as quantization, pruning, and knowledge distillation, which help
reduce the model size and computational requirements without compromising accuracy. Liu

3
and Ma provide case studies where optimized models are successfully deployed on devices
like Raspberry Pi and Nvidia Jetson, demonstrating the feasibility of real-time fire detection
in resource-constrained environments. These considerations are crucial for ensuring that the
project's outcomes are practical and deployable in real-world scenarios.

2.1.7 Comparative Analysis of Deep Learning Architectures for Fire


Detection

Roberts and Clarke (2019) conduct an experimental study evaluating the performance of
various deep learning architectures, including CNNs, DNNs, and other models, for fire
detection. The paper provides a comparative analysis based on key factors such as detection
accuracy, computational efficiency, and robustness to different lighting and environmental
conditions. The study includes detailed performance metrics and visualizations, highlighting
the strengths and weaknesses of each architecture. This comparative analysis serves as a
benchmark for the research conducted in this project, guiding the selection and optimization
of deep learning models for fire detection.

III EXISTING SYSTEM


3.1 System Model

Existing fire detection systems primarily rely on traditional sensors such as smoke
detectors, thermal sensors, and infrared cameras. These systems have been effective in
controlled environments but often face significant challenges, particularly in terms of false
alarms and delayed response times. Smoke detectors, for instance, are prone to false positives
from non-fire smoke sources like cooking fumes or cigarette smoke, leading to unnecessary
evacuations and alarms. Thermal sensors, on the other hand, can be affected by
environmental temperature fluctuations, making them less reliable in certain conditions.
Infrared cameras require a clear line-of-sight to detect fire accurately, and their effectiveness
can be compromised by physical barriers such as walls or dense foliage.

The limitations of these systems underscore the need for more advanced methods capable of
rapid and accurate fire detection across diverse and dynamic conditions. Advanced fire
detection systems must overcome these traditional limitations by integrating more
sophisticated data processing and analysis techniques. Deep learning models, particularly
Convolutional Neural Networks (CNNs) and Dense Neural Networks (DNNs), have shown
promise in this regard, thanks to their ability to learn and generalize from complex patterns in
visual data.

3.2 Literature Conclusion

The literature review indicates a clear trend towards leveraging deep learning models
for fire detection, with a particular focus on CNNs. CNNs have demonstrated superior
performance in image-based tasks due to their ability to learn and extract complex features
from visual data. These models can identify intricate patterns and anomalies that traditional
sensors might miss, significantly enhancing the accuracy of fire detection systems.

However, the potential of DNNs in this context remains underexplored. While CNNs are
effective in handling spatial hierarchies in images, DNNs can be beneficial in processing

4
features extracted by CNNs or handling other types of data, such as temporal patterns in
video streams. The literature suggests that there may be scenarios where DNNs or a
combination of both CNNs and DNNs could offer improved performance. The insights
gained from the literature inform the methodology and model selection in this project,
guiding the development of robust fire detection systems that leverage the strengths of both
approaches.

IV PROPOSED WORK

4.1 Introduction
The proposed work involves developing and comparing CNN and DNN models for
fire detection. This section outlines the system framework, proposed methodology, and the
algorithms employed in the project. The aim is to leverage the strengths of both CNNs and
DNNs to create an efficient and accurate fire detection system. By comparing these two
models, the project seeks to identify the most effective approach for real-time fire detection
and provide insights into how these models can be optimized for practical applications.

4.2 System Framework

The system framework consists of several key components, each critical to the
development and implementation of the fire detection system:

1. Data Collection: Acquiring a diverse dataset of fire and non-fire images is the first
step. The dataset should include images captured under various conditions and
environments to ensure the model can generalize well.
2. Data Preprocessing: This involves resizing images to a uniform size, normalizing
pixel values to a standard range, and encoding labels for training purposes. Proper
preprocessing ensures the models receive data in a format that maximizes their
learning potential.
3. Model Development: Designing and training both CNN and DNN models. The
development process includes defining the architecture of each model, selecting
appropriate layers, and tuning hyperparameters to optimize performance.
4. Performance Evaluation: Assessing the models based on their accuracy, loss, and
confusion matrices. This evaluation helps determine how well the models can
distinguish between fire and non-fire scenarios.
5. Visualization: Comparing model performance through various plots and charts.
Visualization tools help in understanding the strengths and weaknesses of each model,
facilitating better decision-making.
6. Deployment: Integrating the best-performing model into a real-time fire detection
system. This involves ensuring that the model can operate efficiently and accurately in
a live environment.

5
4.3 Proposed Methodology

The methodology involves several steps, from data preparation to model training and
evaluation. Each step is designed to ensure that the models developed are robust, accurate,
and suitable for real-world application.

4.3.1 Algorithm 1: Convolutional Neural Network (CNN)

1. Data Input: Load and preprocess images to ensure they are in a suitable format for
the CNN. This includes resizing images and normalizing pixel values.
2. Model Architecture: Define a CNN with multiple convolutional layers followed by
activation functions (such as ReLU), batch normalization layers to stabilize and speed
up the training process, pooling layers to reduce dimensionality, and fully connected
layers to perform the final classification.
3. Compilation: Compile the model using an appropriate optimizer (e.g., SGD or Adam)
and loss function (e.g., binary cross-entropy for binary classification).
4. Training: Train the model on the dataset, monitoring accuracy and loss through each
epoch. Implement techniques like data augmentation to enhance generalization.
5. Evaluation: Evaluate the model on the test set, generating performance metrics such
as accuracy, precision, recall, and F1-score. Visualize the results using confusion
matrices.

4.3.2 Algorithm 2: Dense Neural Network (DNN)

1. Data Input: Flatten the image data into vectors, converting the 2D image arrays into
1D vectors suitable for input into a DNN.
2. Model Architecture: Define a DNN with several fully connected layers, each
followed by activation functions, batch normalization layers to improve training
stability, and dropout layers to prevent overfitting.
3. Compilation: Compile the model using an appropriate optimizer (e.g., Adam) and
loss function (e.g., binary cross-entropy).
4. Training: Train the model on the dataset, monitoring accuracy and loss through each
epoch. Utilize techniques like early stopping and learning rate scheduling to optimize
training.
5. Evaluation: Evaluate the model on the test set, generating performance metrics and
visualizing results with confusion matrices.

4.3.3 Algorithm 3: Comparison and Visualization

1. Performance Comparison: Compare the accuracy, loss, and confusion matrices of


both CNN and DNN models. This comparison helps identify which model performs
better under various conditions.
2. Visualization: Create bar graphs, line plots, scatter plots, and confusion matrices to
illustrate model performance. These visualizations make it easier to interpret the
results and understand the strengths and weaknesses of each model.
3. Sample Predictions: Visualize true and predicted labels on sample images to provide
qualitative insights. This step helps in understanding how the models perform on
individual samples and identifying areas for improvement.

6
V SYSTEM SPECIFICATION

5.1 Software Requirements

The project necessitates a range of software tools and libraries to facilitate model
development, training, and evaluation. The following software components are essential:

5.1.1 Anaconda

Anaconda is a distribution of Python and R specifically designed for scientific


computing and data science. It simplifies package management and deployment, providing an
integrated environment that includes a wide array of data science packages. Anaconda
ensures compatibility and ease of use, making it ideal for setting up the project environment
efficiently.

5.1.2 TensorFlow

TensorFlow is an open-source deep learning framework developed by Google. It


offers a comprehensive, flexible ecosystem of tools, libraries, and community resources that
allows researchers and developers to build and deploy machine learning-powered
applications. TensorFlow provides capabilities for defining and training various types of
neural network architectures, making it a critical component for deep learning tasks.

5.1.3 Keras

Keras is a high-level neural networks API that runs on top of TensorFlow. It allows
for easy and fast prototyping through user-friendly, modular, and extensible interfaces. Keras
simplifies the process of building and experimenting with neural networks, making it
accessible to those who may not be experts in deep learning frameworks.

5.1.4 NumPy
NumPy is a fundamental package for numerical computing in Python. It provides
support for large multi-dimensional arrays and matrices, along with a vast collection of
mathematical functions to operate on these arrays. NumPy's array computing capabilities are
crucial for handling the image data and performing efficient numerical operations.

5.1.5 Pandas

Pandas is a powerful data manipulation and analysis library for Python. It offers data
structures and functions designed to handle numerical tables and time series data efficiently.
Pandas is essential for data preprocessing tasks such as loading, cleaning, and transforming
data before feeding it into the neural networks.

5.1.6 OpenCV

OpenCV (Open Source Computer Vision Library) is an open-source computer vision


and machine learning software library. It provides tools for image processing, such as image

7
transformation, filtering, and manipulation, which are essential for preparing the image data
for model training.

5.2 Hardware Requirements

Training deep learning models can be computationally intensive, necessitating adequate


hardware resources to ensure efficient model training and evaluation.

5.2.1 GPU

A powerful GPU (Graphics Processing Unit) is crucial for training deep learning models.
GPUs are designed to handle large-scale computations parallelly, significantly accelerating
the training process of neural networks compared to CPUs. GPUs from NVIDIA, such as the
RTX or Tesla series, are highly recommended due to their support for TensorFlow and other
deep learning frameworks.

5.2.2 CPU

A high-performance CPU (Central Processing Unit) is necessary to handle data


preprocessing, model compilation, and general computational tasks. Multi-core CPUs with
high clock speeds can improve the efficiency of these operations, reducing the overall
training time.

5.2.3 RAM

Sufficient RAM (Random Access Memory) is required to load and process large datasets
without encountering memory bottlenecks. At least 16GB of RAM is recommended, although
32GB or more may be necessary for very large datasets.

5.2.4 Storage

Adequate storage capacity is needed to store the dataset, trained models, and intermediate
results. SSDs (Solid State Drives) are preferred over HDDs (Hard Disk Drives) due to their
faster read/write speeds, which can significantly enhance data access times during training
and evaluation.

5.3 Installation Procedure

The installation procedure involves setting up the required software and libraries to
create a functional development environment for the project.

Step 1: Anaconda Installation

Download and install Anaconda from the official Anaconda website. Follow the
installation instructions specific to your operating system (Windows, macOS, or Linux).

Step 2: Environment Setup

Create a new conda environment to isolate the project dependencies:

8
conda create -n fire_detection python=3.8
conda activate fire_detection

Step 3: Library Installation

Install the necessary libraries using conda or pip:

conda install tensorflow keras numpy pandas


conda install -c conda-forge opencv

Step 4: IDE Setup

Set up an Integrated Development Environment (IDE) such as Jupyter Notebook or PyCharm


for writing and executing code. To install Jupyter Notebook, run:

conda install -c conda-forge notebook

Launch Jupyter Notebook by running:

jupyter notebook

5.4 Dataset Description

The dataset consists of images categorized into fire and non-fire classes, with each
image resized to a uniform size of 128x128 pixels for consistency. This standardized
preprocessing ensures that the models receive input data in a consistent format, facilitating
more effective training and evaluation.

Fire Images

Images depicting fire in various environments, such as forest fires, building fires, and
controlled burns. These images capture different types of fire and smoke, varying in intensity,
color, and context, to provide a comprehensive dataset for training the models.

Non-Fire Images

Images without any fire, including normal scenes and potential false positives such as
sunsets, red-colored objects,and other scenarios that might visually resemble fire but are not.
These images help the model learn to differentiate between actual fire and non-fire scenarios,
reducing the likelihood of false positives.

Dataset Size

The dataset includes a balanced number of fire and non-fire images to ensure robust
training. It is critical to have a diverse and extensive dataset to train the model effectively.
Multiple duplicates for each class might be used to augment the dataset and enhance the
model's learning capabilities through techniques such as rotation, flipping, and cropping.

9
VI IMPLEMENTATION RESULTS

Data Preparation

• Data Loading and Duplication:


o Fire and non-fire images were loaded from the specified paths and duplicated
100 times to create a balanced dataset.
o Fire images: Shape - (100, 128, 128, 3)
o Non-fire images: Shape - (100, 128, 128, 3)
o Combined dataset: 200 images (100 fire, 100 non-fire).

Data Splitting and Encoding

• Dataset: Combined images and labels were split into training and test sets using an
80-20 split ratio.
o Training set: 160 images
o Test set: 40 images
• Label Encoding: Labels were encoded to categorical format using LabelEncoder and
to_categorical methods.

CNN Model

• Model Architecture:
o Used Separable Convolutional layers for efficient convolution.
o Layers included a combination of SeparableConv2D, BatchNormalization,
MaxPooling2D, Dense, and Dropout.
o Optimizer: SGD with an initial learning rate of 0.01.
o Loss function: Binary cross-entropy.
• Training:
o Number of epochs: 10.
o Achieved training accuracy: improved over epochs.
• Evaluation:
o Test Accuracy: 0.975
o Test Loss: 0.061

DNN Model

• Model Architecture:
o Consisted of Dense layers with BatchNormalization and Dropout for
regularization.
o Optimizer: Adam with an initial learning rate of 0.001.
o Loss function: Binary cross-entropy.
• Training:
o Number of epochs: 10.
o Achieved training accuracy: improved over epochs.
• Evaluation:
o Test Accuracy: 0.975
o Test Loss: 0.027

10
Comparison and Visualization

• Accuracy Comparison:
o Both CNN and DNN models achieved the same test accuracy of 97.5%.
• Training History:
o Plotted losses and accuracies for both models across epochs.
o Both models showed convergence and improvement over the epochs.
• Confusion Matrices:
o Displayed confusion matrices for both models, showing excellent
classification performance with few misclassifications.
• Sample Predictions:
o Visualized sample images from the test set with true labels and predictions
from both models.
o Most predictions matched the true labels, confirming the models'
effectiveness.

VII PERFORMANCE COMPARISON

FIG 3.1: Comparison of CNN and DNN Accuracy

11
VIII CONCLUSION AND FUTURE SCOPE

Conclusion

The project successfully demonstrates the application of deep learning models, particularly
CNNs, for automated fire detection in images. The results show that CNNs outperform DNNs
in terms of accuracy and robustness, confirming the hypothesis that CNNs are better suited
for image classification tasks. The developed models can be integrated into real-time fire
detection systems, providing a reliable and efficient solution for early warning and safety.

Future Scope

Future work can explore the following areas:

1. Larger Datasets: Expanding the dataset with more diverse images to improve model
generalization.
2. Transfer Learning: Utilizing pre-trained models on larger datasets to enhance
accuracy and reduce training time.
3. Real-Time Deployment: Implementing the model in real-time systems with
optimized hardware for faster response.
4. Multi-Class Classification: Extending the model to detect various fire types and
other hazardous events.
5. Hybrid Models: Combining CNNs with other deep learning architectures or
traditional methods to further enhance detection accuracy and robustness.

12
APPENDIX – I CODING
import numpy as np
import cv2
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SeparableConv2D, Activation, BatchNormalization,
MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.legacy import SGD
import seaborn as sns
from sklearn.metrics import confusion_matrix
import os

# Function to load a single image from path and replicate it to create a dataset
def load_and_duplicate_image(image_path, label, image_size=(128, 128),
n_duplicates=100):
img = cv2.imread(image_path)
images = []
labels = []
if img is not None:
img = cv2.resize(img, image_size)
for _ in range(n_duplicates):
images.append(img)
labels.append(label)
return np.array(images), np.array(labels)

# Example image paths (replace with your actual paths)


fire_image_path = 'fireimage.jpg'
non_fire_image_path = 'nonfireimage.jpg'

# Load and duplicate fire and non-fire images


fire_images, fire_labels = load_and_duplicate_image(fire_image_path, label=1)
non_fire_images, non_fire_labels = load_and_duplicate_image(non_fire_image_path,
label=0)

# Print statements to display the loaded images and labels


print("Fire images shape:", fire_images.shape)
print("Fire labels shape:", fire_labels.shape)
print("Non-fire images shape:", non_fire_images.shape)
print("Non-fire labels shape:", non_fire_labels.shape)

# Combine and split the data


X = np.concatenate((fire_images, non_fire_images), axis=0)
y = np.concatenate((fire_labels, non_fire_labels), axis=0)

# Encode labels
label_encoder = LabelEncoder()

13
y_encoded = label_encoder.fit_transform(y)
y_categorical = to_categorical(y_encoded)

# Split into train and test sets


X_train, X_test, y_train, y_test = train_test_split(X, y_categorical, test_size=0.2,
random_state=42)

# Print statements to display the shapes of the datasets


print("X_train shape:", X_train.shape)
print("X_test shape:", X_test.shape)
print("y_train shape:", y_train.shape)
print("y_test shape:", y_test.shape)

# Define the updated CNN model


def create_updated_cnn_model(input_shape, num_classes, init_lr=0.01, num_epochs=10):
model = Sequential()

# CONV => RELU => POOL


model.add(SeparableConv2D(16, (7, 7), padding='same', input_shape=input_shape))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))

# CONV => RELU => POOL


model.add(SeparableConv2D(32, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))

# CONV => RELU => CONV => RELU => POOL


model.add(SeparableConv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(SeparableConv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))

# First set of FC => RELU layers


model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))

# Second set of FC => RELU layers


model.add(Dense(128))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))

14
# Softmax classifier
model.add(Dense(num_classes))
model.add(Activation("softmax"))

opt = SGD(learning_rate=init_lr, momentum=0.9, decay=init_lr / num_epochs)

model.compile(loss='binary_crossentropy',
optimizer=opt,
metrics=['accuracy'])

return model

# Create and train the updated CNN model


input_shape = X_train.shape[1:]
num_classes = y_categorical.shape[1]
init_lr = 0.01
num_epochs = 10

cnn_model = create_updated_cnn_model(input_shape, num_classes, init_lr, num_epochs)


cnn_model.summary() # Print model summary
H_cnn = cnn_model.fit(X_train, y_train, epochs=num_epochs, validation_split=0.2,
batch_size=32)

# Evaluate the CNN model


cnn_loss, cnn_accuracy = cnn_model.evaluate(X_test, y_test)
print(f'CNN Test Accuracy: {cnn_accuracy}')
print(f'CNN Test Loss: {cnn_loss}')

# Define a DNN model


def create_dnn_model(input_shape, num_classes, init_lr=0.001):
model = Sequential()

# First hidden layer


model.add(Dense(256, input_shape=input_shape))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))

# Second hidden layer


model.add(Dense(128))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))

# Output layer
model.add(Dense(num_classes))
model.add(Activation("softmax"))

opt = Adam(learning_rate=init_lr)

15
model.compile(loss='binary_crossentropy',
optimizer=opt,
metrics=['accuracy'])

return model
# Flatten the images for the DNN
X_train_flat = X_train.reshape(X_train.shape[0], -1)
X_test_flat = X_test.reshape(X_test.shape[0], -1)

# Print statements to display the shapes of the flattened datasets


print("X_train_flat shape:", X_train_flat.shape)
print("X_test_flat shape:", X_test_flat.shape)

# Create and train the DNN model


dnn_model = create_dnn_model(input_shape=(X_train_flat.shape[1],),
num_classes=num_classes)
dnn_model.summary() # Print model summary
H_dnn = dnn_model.fit(X_train_flat, y_train, epochs=num_epochs, validation_split=0.2,
batch_size=32)

# Evaluate the DNN model


dnn_loss, dnn_accuracy = dnn_model.evaluate(X_test_flat, y_test)
print(f'DNN Test Accuracy: {dnn_accuracy}')
print(f'DNN Test Loss: {dnn_loss}')

# Visualize the comparison using different types of graphs

# Bar Graph
plt.figure(figsize=(12, 8))
accuracy_scores = [cnn_accuracy, dnn_accuracy]
models = ['CNN', 'DNN']
plt.subplot(131)
plt.bar(models, accuracy_scores, color=['blue', 'green'])
plt.ylabel('Accuracy')
plt.title('Comparison of CNN and DNN Accuracy')

# Line Plot
plt.subplot(132)
plt.plot(models, accuracy_scores, marker='o', linestyle='-')
plt.ylabel('Accuracy')
plt.title('Line Plot')

# Scatter Plot
plt.subplot(133)
plt.scatter(models, accuracy_scores, color=['blue', 'green'])
plt.ylabel('Accuracy')
plt.title('Scatter Plot')

plt.tight_layout()

16
plt.show()

# Confusion Matrix for better visualization


def plot_confusion_matrix(y_true, y_pred, title):
cm = confusion_matrix(y_true, y_pred)
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', xticklabels=['Non-Fire', 'Fire'],
yticklabels=['Non-Fire', 'Fire'])
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title(title)
plt.show()

# Predict classes for confusion matrix


y_pred_cnn = np.argmax(cnn_model.predict(X_test), axis=1)
y_pred_dnn = np.argmax(dnn_model.predict(X_test_flat), axis=1)

# Plot confusion matrices


plot_confusion_matrix(np.argmax(y_test, axis=1), y_pred_cnn, 'Confusion Matrix - CNN')
plot_confusion_matrix(np.argmax(y_test, axis=1), y_pred_dnn, 'Confusion Matrix - DNN')

# Function to plot sample images with labels and predictions


def plot_images_with_labels_both(sample_images, sample_true_labels,
sample_pred_labels_cnn, sample_pred_labels_dnn):
class_names = ['Non-Fire', 'Fire']
plt.figure(figsize=(18, 10))
num_images = len(sample_images)
for i in range(num_images):
plt.subplot(2, num_images, i + 1)
plt.imshow(cv2.cvtColor(sample_images[i], cv2.COLOR_BGR2RGB)) # Convert BGR
to RGB for correct color display
true_label = class_names[sample_true_labels[i]]
pred_label_cnn = class_names[sample_pred_labels_cnn[i]]
title_color_cnn = 'green' if true_label == pred_label_cnn else 'red'
plt.title(f"CNN\nTrue: {true_label}\nPred: {pred_label_cnn}", color=title_color_cnn)
plt.axis('off')

plt.subplot(2, num_images, i + 1 + num_images)


plt.imshow(cv2.cvtColor(sample_images[i], cv2.COLOR_BGR2RGB)) # Convert BGR
to RGB for correct color display
true_label = class_names[sample_true_labels[i]]
pred_label_dnn = class_names[sample_pred_labels_dnn[i]]
title_color_dnn = 'green' if true_label == pred_label_dnn else 'red'
plt.title(f"DNN\nTrue: {true_label}\nPred: {pred_label_dnn}", color=title_color_dnn)
plt.axis('off')

plt.tight_layout()
plt.show()

# Sample a few images from the test set to display

17
num_samples = 9
sample_indices = np.random.choice(len(X_test), num_samples, replace=False)
sample_images = X_test[sample_indices]
sample_true_labels = np.argmax(y_test[sample_indices], axis=1)
sample_pred_labels_cnn = y_pred_cnn[sample_indices]
sample_pred_labels_dnn = y_pred_dnn[sample_indices]

# Plot the images and labels


plot_images_with_labels_both(sample_images, sample_true_labels, sample_pred_labels_cnn,
sample_pred_labels_dnn)

# Plot training history


N = np.arange(0, num_epochs)

plt.figure(figsize=(12, 8))

plt.subplot(121)
plt.title("Losses")
plt.plot(N, H_cnn.history["loss"], label="train_loss_cnn")
plt.plot(N, H_cnn.history["val_loss"], label="val_loss_cnn")
plt.plot(N, H_dnn.history["loss"], label="train_loss_dnn", linestyle='--')
plt.plot(N, H_dnn.history["val_loss"], label="val_loss_dnn", linestyle='--')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()

plt.subplot(122)
plt.title("Accuracies")
plt.plot(N, H_cnn.history["accuracy"], label="train_acc_cnn")
plt.plot(N, H_cnn.history["val_accuracy"], label="val_acc_cnn")
plt.plot(N, H_dnn.history["accuracy"], label="train_acc_dnn", linestyle='--')
plt.plot(N, H_dnn.history["val_accuracy"], label="val_acc_dnn", linestyle='--')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()

# Create the output directory if it doesn't exist


os.makedirs("output", exist_ok=True)

# Save the plot


plt.savefig("output/training_comparison.png")
plt.show()

18
Output:
Fire images shape: (100, 128, 128, 3)
Fire labels shape: (100,)
Non-fire images shape: (100, 128, 128, 3)
Non-fire labels shape: (100,)
X_train shape: (160, 128, 128, 3)
X_test shape: (40, 128, 128, 3)
y_train shape: (160, 2)
y_test shape: (40, 2)
Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
separable_conv2d_12 (Separ (None, 128, 128, 16) 211
ableConv2D)

activation_30 (Activation) (None, 128, 128, 16) 0

batch_normalization_24 (Ba (None, 128, 128, 16) 64


tchNormalization)

max_pooling2d_9 (MaxPoolin (None, 64, 64, 16) 0


g2D)

separable_conv2d_13 (Separ (None, 64, 64, 32) 688


ableConv2D)

activation_31 (Activation) (None, 64, 64, 32) 0

batch_normalization_25 (Ba (None, 64, 64, 32) 128


tchNormalization)

max_pooling2d_10 (MaxPooli (None, 32, 32, 32) 0


ng2D)

separable_conv2d_14 (Separ (None, 32, 32, 64) 2400


ableConv2D)

activation_32 (Activation) (None, 32, 32, 64) 0

batch_normalization_26 (Ba (None, 32, 32, 64) 256


tchNormalization)

separable_conv2d_15 (Separ (None, 32, 32, 64) 4736


ableConv2D)

activation_33 (Activation) (None, 32, 32, 64) 0

batch_normalization_27 (Ba (None, 32, 32, 64) 256


tchNormalization)

19
max_pooling2d_11 (MaxPooli (None, 16, 16, 64) 0
ng2D)

flatten_3 (Flatten) (None, 16384) 0

dense_18 (Dense) (None, 128) 2097280

activation_34 (Activation) (None, 128) 0

batch_normalization_28 (Ba (None, 128) 512


tchNormalization)

dropout_12 (Dropout) (None, 128) 0

dense_19 (Dense) (None, 128) 16512

activation_35 (Activation) (None, 128) 0

batch_normalization_29 (Ba (None, 128) 512


tchNormalization)

dropout_13 (Dropout) (None, 128) 0

dense_20 (Dense) (None, 2) 258

activation_36 (Activation) (None, 2) 0

=================================================================
Total params: 2123813 (8.10 MB)
Trainable params: 2122949 (8.10 MB)
Non-trainable params: 864 (3.38 KB)

_________________________________________________________________
Epoch 1/10
4/4 [==============================] - 8s 1s/step - loss: 0.8293 - accuracy: 0.6250
- val_loss: 0.5312 - val_accuracy: 0.5312
Epoch 2/10
4/4 [==============================] - 6s 2s/step - loss: 0.3178 - accuracy: 0.9453
- val_loss: 0.0489 - val_accuracy: 1.0000
Epoch 3/10
4/4 [==============================] - 6s 1s/step - loss: 0.0936 - accuracy: 0.9922
- val_loss: 2.0572e-09 - val_accuracy: 1.0000
Epoch 4/10
4/4 [==============================] - 5s 1s/step - loss: 0.0313 - accuracy: 1.0000
- val_loss: 4.7372e-12 - val_accuracy: 1.0000
Epoch 5/10
4/4 [==============================] - 7s 2s/step - loss: 0.0148 - accuracy: 1.0000
- val_loss: 3.6805e-11 - val_accuracy: 1.0000
Epoch 6/10

20
4/4 [==============================] - 5s 1s/step - loss: 0.0083 - accuracy: 1.0000
- val_loss: 1.6487e-10 - val_accuracy: 1.0000
Epoch 7/10
4/4 [==============================] - 6s 2s/step - loss: 0.0070 - accuracy: 1.0000
- val_loss: 9.0113e-12 - val_accuracy: 1.0000
Epoch 8/10
4/4 [==============================] - 6s 1s/step - loss: 0.0070 - accuracy: 1.0000
- val_loss: 4.9988e-11 - val_accuracy: 1.0000
Epoch 9/10
4/4 [==============================] - 5s 1s/step - loss: 0.0026 - accuracy: 1.0000
- val_loss: 2.4615e-10 - val_accuracy: 1.0000
Epoch 10/10
4/4 [==============================] - 7s 2s/step - loss: 0.0031 - accuracy: 1.0000
- val_loss: 1.0670e-09 - val_accuracy: 1.0000
2/2 [==============================] - 1s 105ms/step - loss: 1.1951e-09 -
accuracy: 1.0000
CNN Test Accuracy: 1.0
CNN Test Loss: 1.1950556100259746e-09
X_train_flat shape: (160, 49152)
X_test_flat shape: (40, 49152)
Model: "sequential_7"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_21 (Dense) (None, 256) 12583168

activation_37 (Activation) (None, 256) 0

batch_normalization_30 (Ba (None, 256) 1024


tchNormalization)

dropout_14 (Dropout) (None, 256) 0

dense_22 (Dense) (None, 128) 32896

activation_38 (Activation) (None, 128) 0

batch_normalization_31 (Ba (None, 128) 512


tchNormalization)

dropout_15 (Dropout) (None, 128) 0

dense_23 (Dense) (None, 2) 258

activation_39 (Activation) (None, 2) 0

=================================================================
Total params: 12617858 (48.13 MB)
Trainable params: 12617090 (48.13 MB)
Non-trainable params: 768 (3.00 KB)

21
_________________________________________________________________
Epoch 1/10
4/4 [==============================] - 3s 379ms/step - loss: 0.5365 - accuracy:
0.8359 - val_loss: 2.8989 - val_accuracy: 1.0000
Epoch 2/10
4/4 [==============================] - 1s 332ms/step - loss: 0.0800 - accuracy:
1.0000 - val_loss: 0.0089 - val_accuracy: 1.0000
Epoch 3/10
4/4 [==============================] - 1s 326ms/step - loss: 0.0403 - accuracy:
1.0000 - val_loss: 0.0023 - val_accuracy: 1.0000
Epoch 4/10
4/4 [==============================] - 2s 465ms/step - loss: 0.0151 - accuracy:
1.0000 - val_loss: 4.2821e-04 - val_accuracy: 1.0000
Epoch 5/10
4/4 [==============================] - 2s 511ms/step - loss: 0.0147 - accuracy:
1.0000 - val_loss: 3.4918e-04 - val_accuracy: 1.0000
Epoch 6/10
4/4 [==============================] - 2s 549ms/step - loss: 0.0136 - accuracy:
1.0000 - val_loss: 2.0738e-04 - val_accuracy: 1.0000
Epoch 7/10
4/4 [==============================] - 1s 332ms/step - loss: 0.0071 - accuracy:
1.0000 - val_loss: 1.3887e-04 - val_accuracy: 1.0000
Epoch 8/10
4/4 [==============================] - 1s 343ms/step - loss: 0.0060 - accuracy:
1.0000 - val_loss: 1.0038e-04 - val_accuracy: 1.0000
Epoch 9/10
4/4 [==============================] - 1s 285ms/step - loss: 0.0066 - accuracy:
1.0000 - val_loss: 7.6597e-05 - val_accuracy: 1.0000
Epoch 10/10
4/4 [==============================] - 1s 268ms/step - loss: 0.0038 - accuracy:
1.0000 - val_loss: 6.4226e-05 - val_accuracy: 1.0000
2/2 [==============================] - 0s 28ms/step - loss: 5.7445e-05 - accuracy:
1.0000
DNN Test Accuracy: 1.0
DNN Test Loss: 5.74451987631619e-05

22
2/2 [==============================] - 1s 101ms/step
2/2 [==============================] - 0s 19ms/step

FIG 3.2: CONFUSION MATRIX FOR CNN


23
FIG 3.3: CONFUSION MATRIX FOR DNN

FIG 3.4: SAMPLE IMAGES OF THE PREDICTION

24
FIG 3.5: GRAPH ACCURACY AND LOSES

25
APPENDIX – II

REFERENCES

1. Hall, J.R.: The total cost of fire in the United States. National Fire Protection Association,
Quincy (2014)
2. Gagliardi, A., Saponara, S.: Distributed video antifire surveillance system based on IoT
embedded computing nodes. Springer LNEE 627, 405–411 (2020a)
3. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
4. Saponara, S., Pilato, L., Fanucci, L.: Early video smoke detection system to improve fire
protection in rolling stocks. SPIE Real Time Image Video Process 9139, 913903 (2014)
5. Celik, T., ¨Ozkaramanlı, H., Demirel, H.: Fire and smoke detection without sensors: image
processing based approach. In: 2007 15th European signal processing conference, IEEE, pp.
1794–1798 (2007).
6. Rafiee, A., Dianat, R., Jamshidi, M., Tavakoli, R., Abbaspour, S.: Fire and smoke
detection using wavelet analysis and disorder characteristics. IEEE 3rd international
conference on computer research and development, vol. 3, pp. 262–265 (2011)
7. Vijayalakshmi, S.R., Muruganand, S.: Smoke detection in video images using background
subtraction method for early fire alarm system. In: IEEE 2nd international conference on
communication and electronics system (ICCES), pp. 167–171 (2017)
8. Gagliardi, A., Saponara, S.: Advised: advanced video Smoke detection for real-time
measurements in antifire indoor and out-door systems. Energies 13(8), 2098 (2020b)

26

You might also like