Article
A Smart Fire Detector IoT System with Extinguisher Class
Recommendation Using Deep Learning
Tareq Khan

School of Engineering, Eastern Michigan University, Ypsilanti, MI 48197, USA; [email protected]

Abstract: Fires kill and injure people, destroy residences, pollute the air, and cause economic loss. Fire damage can be reduced if the fire is detected early and the firefighters are notified as soon as possible. In this project, a novel Internet of Things (IoT)-based fire detector device is developed that automatically detects a fire, recognizes the object that is burning, finds out the class of fire extinguisher needed, and then sends notifications with location information to the user's and the emergency responders' smartphones within a second. This will help firefighters to arrive quickly with the correct fire extinguisher—thus, the spread of fire can be reduced. The device detects fire using a thermal camera and common objects using a red-green-blue (RGB) camera with a deep-learning-based algorithm. When a fire is detected, the device sends data using the Internet to a central server, and the server then sends notifications to the smartphone apps. No smoke detector or fire alarm is available in the literature that can automatically suggest the class of fire extinguisher needed, and this research fills this gap. Prototypes of the fire detector device, the central server for the emergency responder's station, and smartphone apps have been developed and tested successfully.

Keywords: fire detection; deep learning; thermal camera; fire extinguisher class; Jetson Nano;
smartphone app; SQL server; Bluetooth; Wi-Fi; push notification

1. Introduction
According to the National Fire Protection Association (NFPA), an estimated 358,500 home fires occur every year in the United States alone. House fires cause an average of 2620 civilian deaths and nearly 12 billion dollars in property damage each year in the USA. Among residential fires, 50% occur due to cooking, 12.5% from heating equipment, and 6.3% from electrical malfunction. Over 22% of non-residential fires are electrical fires, caused by short circuits or wiring problems [1]. One way to reduce the spread of fire is to detect the fire early and notify emergency responders with information about the fire as soon as possible. In this project, a novel fire detection device based on the Internet of Things (IoT) framework is developed. This device autonomously detects fire occurrences, identifies the burning objects, recommends the required class of fire extinguisher, and then transmits notifications to both end users and emergency responders via smartphones, all accomplished within a span of a second. This advancement facilitates the prompt arrival of firefighters armed with the appropriate fire extinguisher, thereby effectively mitigating the propagation of fires. The novelty of this research lies in filling a gap in the literature: no existing smoke detector or fire alarm system can autonomously recommend the required class of fire extinguisher. The device employs a thermal camera for fire detection, and a red-green-blue (RGB) camera with a deep-learning-based algorithm to detect common objects. Upon fire detection, the device uses the Internet to send data to a central server, and then the server disseminates push notifications to the designated smartphones. The proposed device could be used in homes, schools, factories, grocery stores, warehouses, etc. The overall operation of the proposed system is shown in Figure 1.



Figure 1. The proposed fire detector (a) captures the image of the room and detects common objects such as plants, couches, tables, lamps, TV, A/C, etc., and detects the fire using a thermal camera as shown in (b). When fire is detected, it sends data using the Internet (c) to the central server (d). The fire event location is marked on the map (d), saved in a database, and, depending upon the object on fire, it determines the type of fire extinguisher needed—for instance, class C for electrical fire on TV. The server then sends push notifications to the user's smartphone app (e) and the emergency responder's smartphone app (f). A firetruck (g) is dispatched.

The need for and significance of the proposed system are summarized below:
• Traditional smoke detectors determine the presence of fire from smoke. If someone is cooking where smoke is generated, these smoke detectors produce false alarms [2]. In the proposed fire detector device, fire is detected using thermal camera images with a higher confidence level, and thus false alarms can be reduced.
• Smoke detectors have a high response time, as smoke needs to travel to the detector. The proposed thermal-camera-based fire detector has a lower response time, as light travels faster than smoke.
• When smoke is detected, traditional smoke detectors produce alarm sounds. If there is no one in the house and the fire starts from a leaking gas pipe or an electrical short circuit, then no one will hear the sound, and the fire will spread. In the proposed device, notifications are sent to the users and the emergency responders using the Internet, so people will be notified even if they are away from home—thus, it will give peace of mind.
• Fire extinguishers are classified as types A, B, C, D, or K [3]. It is crucial to use the right type of extinguisher for the specific class of fire to avoid personal injury or damage to property. The wrong type of extinguisher could cause electrical shocks or explosions, or spread the fire [4]. The proposed device recognizes the object that is burning, suggests the class of fire extinguisher needed, and then sends a notification with this information to emergency responders. Thus, the emergency responders know the type of fire extinguisher needed and can arrive at the site with the right fire extinguisher, so harm to life and property can be reduced.

The rest of the paper is organized as follows. Section 2 discusses the related works. In Section 3, materials and methods are discussed for detecting objects and fires and recommending the extinguisher class, as is the prototype system architecture consisting of the device, central server, and smartphone apps. The simulation and prototype system's results are elaborated upon in Section 4, while Section 5 delves into the discussion and future works. Lastly, Section 6 provides a conclusion.

2. Literature Review
The recent commercial smoke detector produced by Google [5] can send a notification
to smartphones; however, its detection technique is smoke-based and cannot suggest the
required fire extinguisher. The proposed work uses a thermal camera for fire detection,
which is quicker, as light travels faster than smoke. In [6], fire is detected in camera images
using Convolutional Neural Network (CNN)-based object detection models such as Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional
Network (R–FCN), Single-Shot Detector (SSD), and You Only Look Once (YOLO) v3.
Pretrained versions of these four models have been retrained with a custom dataset
using transfer learning. Among them, YOLO v3 detected fire most quickly with 83.7%
accuracy for that custom dataset. The work in [7] uses two deep learning models—Faster-
RCNN Inception V2 and SSD MobileNet V2—to detect indoor fires in camera images. The
models were trained using a custom dataset and the accuracy of the fire detection of these
models was 95% and 88%, respectively. In [8], the authors use a reduced-complexity CNN
architecture, InceptionV4-OnFire, to detect fires from camera images. A custom dataset
was used to train the model and an accuracy of 96% was achieved for that dataset. The
work in [9] uses camera images to detect fires using image processing techniques. The
proposed method has three stages: fire pixel detection using a color model developed in the
CIE L*a*b* color space, moving pixel detection, and analyzing fire-colored moving pixels
in consecutive frames. The method was tested on several video sequences and it had a
detection accuracy of 99.88%. In [10], image processing is used to detect fires. The authors
use the YCbCr color space and fuzzy logic for fire detection. The proposed method was
trained using custom fire images and it achieved up to 99% accuracy. Here, works [6–10]
use RGB camera images with deep learning and image processing algorithms to detect fire.
Although a good detection accuracy was reported for these techniques, it should be noted
that these results are for a custom dataset, and the accuracy may differ when implemented
in real life with unknown environments. Generalization is challenging for deep learning
models and these models perform unexpectedly if conditions such as lighting, viewing
angle, orientation, etc. are different from the training set. Hence, the reliability of these
models in fire detection may not be sufficiently high, which is crucial for ensuring the safety
of lives and property. In the proposed work, fires are detected using a thermal camera,
and it can detect object temperature and fire with an accuracy of 100% as long as there is a
line of sight. Its detection is not affected by lighting conditions or the orientation of the
object. Moreover, the works in [6–10] neither recognize the burning object on fire nor notify
the emergency responders suggesting the class of fire extinguisher needed. No hardware
implementation of the IoT system is presented in those works.
The work in [11] proposed a smart fire alarm system to be used in a kitchen. Here,
fire is detected using a thermal camera and the user manually draws a rectangle on the
stove area as the region of interest. The proposed system also contains an RGB camera
for detecting a person in the kitchen using the YOLOv3-tiny CNN model. If a person
is detected and there is a fire in the stove area, then the alarm will not be triggered, as
the person can take care of the fire. However, according to this method, if the person’s
clothes or somewhere other than the stove area catches fire, such as on the shelves, curtains,
or carpet, then an alarm will not be triggered. Thus, emergency responders will not be
alerted. The proposed method has been implemented in an embedded Industry Personal
Computer (IPC) interfaced with an expensive bi-spectrum camera that can capture both
thermal and RGB images. This work neither recognizes the burning object on fire nor
notifies the emergency responders suggesting the class of fire extinguisher needed.
A comparison of the proposed system with other works is shown in Table 1. Compared
with these works, the proposed smart fire detector uses a low-cost thermal camera to detect
fires reliably in any room of a building, uses an RGB camera to detect the object on fire using
a deep learning method, and notifies the emergency responders suggesting the class of fire
extinguisher needed—which is critical information for mitigating the fire. To the author’s
knowledge at the time of writing this paper, no work is available in the literature that
can detect the object that is on fire and automatically suggest the class of fire extinguisher
needed, and this research fills this gap. This work also presents an IoT system prototype
comprising the fire detector device, central monitoring server software, and smartphone
apps for the users and emergency responders.

Table 1. Comparison with other works.

| | P. Li et al. [6] | J. Pincott et al. [7] | G. Samarth et al. [8] | T. Celik et al. [9] | H. Demirel et al. [10] | Y. Ma et al. [11] | Proposed |
|---|---|---|---|---|---|---|---|
| Fire detection method | From images using CNN | From images using CNN | From images using CNN | Image processing | Image processing with fuzzy logic | Thermal camera | Thermal camera |
| Fire detection accuracy | 83.7% | 95% | 96% | 99.88% | 99% | 100% | 100% |
| The object on fire detection | No | No | No | No | No | No | Yes, using ssd-inception-v2 |
| Embedded system implementation | No | No | No | No | No | Yes | Yes |
| Record fire scene video with timestamp | No | No | No | No | No | Yes | Yes |
| Plot on map | No | No | No | No | No | No | Yes |
| User and device configuration | No | No | No | No | No | No | Yes |
| Database implementation | No | No | No | No | No | No | Yes |
| Smartphone notification | No | No | No | No | No | Yes | Yes |
| Extinguisher recommendation | No | No | No | No | No | No | Yes |

3. Materials and Methods


3.1. Detection Algorithms
The experimental setup as shown in Figure 2 is used to develop the object and fire
detection, and extinguisher class recommendation algorithm. An RGB camera and a
thermal camera are interfaced with an NVIDIA Jetson Nano developer kit [12]. Realistic
small-sized furniture for dolls [13] was placed under the cameras during experiments.
Small fires were generated using a barbeque lighter in front of the furniture. The detection of the objects in the room, the fire, and the burning object, together with the extinguisher class recommendation, is briefly described below.

3.1.1. Object Detection


From RGB camera images, object detection within a room is performed to identify
the burning object’s name and subsequently recommend the appropriate class of fire
extinguisher. The deep learning model, SSD-Inception-V2, which combines the Single Shot
MultiBox Detector (SSD) architecture [14] with the Inception neural network [15], is used for
object detection. This object detector operates by employing a multiscale feature extraction
process, extracting feature maps from various convolutional layers of the Inception network.
These feature maps are then used to predict bounding boxes and class scores for objects at
different spatial resolutions, allowing for the detection of objects of various sizes within
a single pass through the network. Additionally, SSD-Inception introduces auxiliary
convolutional layers to further enhance the feature representation. The model utilizes
anchor boxes to propose potential object locations and refines these predictions using a
combination of localization and classification tasks. By integrating Inception’s advanced
feature extraction capabilities with SSD’s real-time, single-pass detection approach, SSD-
Inception achieves a balance between accuracy and speed, making it well suited for real-
time object detection tasks. The model is trained using the Common Objects in Context
(COCO) dataset [16,17], with a total of 91 classes of objects such as chair, couch, bed, dining
table, window, desk, door, TV, oven, refrigerator, blender, book, clock, etc. [18].
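For concreteness, the sketch below shows how such a COCO-trained SSD-Inception-V2 detector can be driven from Python with NVIDIA's jetson-inference bindings, which are available for the Jetson Nano; the model name string, camera URI, and confidence threshold here are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch: COCO object detection on a Jetson Nano using the
# jetson-inference Python bindings. Model name, camera URI, and threshold
# are illustrative assumptions, not values taken from the paper.
import jetson.inference
import jetson.utils

# Load a pretrained SSD-Inception-v2 detector (91-class COCO label set).
net = jetson.inference.detectNet("ssd-inception-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")  # RGB camera on the CSI port

img = camera.Capture()
detections = net.Detect(img)
for det in detections:
    # Each detection carries a COCO class ID and a bounding box.
    print(net.GetClassDesc(det.ClassID),
          det.Left, det.Top, det.Right, det.Bottom, det.Confidence)
```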

Figure 2. Experimental setup: an RGB camera (a) and a thermal camera (b) are interfaced with an NVIDIA Jetson Nano developer kit (c). A monitor, wireless keyboard, and wireless mouse (d) were connected to the Jetson Nano. Realistic small-sized furniture (e) was placed under the camera during experiments.
3.1.2. Fire Detection
In the proposed system, fires are detected using a thermal camera. Every object emits infrared energy, referred to as its heat signature. A thermal camera is designed to capture and quantify this emitted infrared energy, translating the infrared data into an electronic image that reveals the apparent surface temperature of the objects. The thermal camera is not affected by the lighting conditions of the environment; thus, it can measure temperature in both day and night conditions. Fires are detected from the thermal image using thresholding. If a pixel temperature is higher than the threshold, then that pixel is detected as fire.
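A minimal sketch of this thresholding rule, assuming Lepton-style 80 × 60 Y16 frames (kelvin × 100, as described in Section 3.2.1) and the 65.5 °C threshold used by the prototype firmware:

```python
# Minimal sketch of the fire-thresholding rule on a Y16 thermal frame.
# The 80x60 shape, Y16 scaling (kelvin * 100), and the 65.5 C threshold
# follow the prototype described in Section 3.2.1.
import numpy as np

def fire_mask(y16_frame: np.ndarray, threshold_c: float = 65.5) -> np.ndarray:
    """Return a binary mask where pixels hotter than the threshold are 1."""
    celsius = y16_frame.astype(np.float32) / 100.0 - 273.15
    return (celsius > threshold_c).astype(np.uint8)

frame = np.full((60, 80), 30015, dtype=np.uint16)  # ~27 C background
frame[10:14, 20:24] = 36015                        # ~87 C hot spot
print(fire_mask(frame).sum())                      # -> 16 fire pixels
```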
3.1.3. Burning Object Detection
The name of the object on fire is detected by calculating the overlap between the area inside the boundary box, referred to as β, of each detected object from the RGB image, and the area of each fire contour, referred to as φ, from the thermal image, as shown in (1). This means that if the intersection of sets β and φ is not an empty set, then applying ƒ to β yields the name of the object on fire, η, where ƒ maps the boundary box to its assigned object name.

β ∩ φ ≠ ∅ ⟹ η = ƒ(β) (1)

The RGB camera captures the optical image and then the boundary box of each common object is detected according to the discussion in Section 3.1.1. In Figure 3a, two detected objects, a TV and a couch, with boundary boxes are shown. Then, mask images are generated for each object where the pixels inside the boundary box are set to 1 and other pixels are set to 0. An OR-ed (i.e., union) image of all the object masks is shown in Figure 3b, where the black pixels indicate 0 and the white pixels indicate 1. The inverse of the mask in Figure 3b is calculated, where the white area represents the mask of the unknown objects, as shown in Figure 3c.

Figure 3. (a) Detected objects—tv and couch—with boundary boxes from RGB camera image; (b) OR-ed (i.e., union) image of all the object masks; (c) mask of the unknown objects.
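A minimal sketch of this mask construction, assuming NumPy arrays and illustrative bounding boxes (the 320 × 240 canvas size matches the firmware described in Section 3.2.1):

```python
# Minimal sketch of the mask construction in Figure 3: per-object masks from
# bounding boxes, their OR-ed union, and the inverse "unknown objects" mask.
# The box coordinates are illustrative placeholders.
import numpy as np

H, W = 240, 320
boxes = [(40, 60, 120, 180), (150, 30, 300, 200)]  # (x1, y1, x2, y2) per object

union = np.zeros((H, W), dtype=np.uint8)
for x1, y1, x2, y2 in boxes:
    mask = np.zeros((H, W), dtype=np.uint8)
    mask[y1:y2, x1:x2] = 1          # pixels inside the boundary box set to 1
    union |= mask                   # OR-ed image of all object masks (Figure 3b)

unknown = 1 - union                 # inverse: mask of unknown objects (Figure 3c)
```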

The thermal camera, which is placed beside the RGB camera, captures the surface temperature of the objects. A grayscale thermal image and a pseudo-colored thermal image using a jet colormap are shown in Figure 4a,b, respectively. The image captured by the RGB camera of the same scene is shown in Figure 4c. If we compare the images in Figure 4a,c, we see that the corresponding objects have different sizes and aspect ratios. This mismatch is due to the difference in the physical location of the two cameras, their different focal lengths, and the different aspect ratios of the captured images (RGB camera 1.777 and thermal camera 1.333). Thus, overlapping these two images will not give accurate results.
Figure 4. (a) Grayscale image captured by the thermal camera; (b) pseudo-colored thermal image using a jet colormap with maximum temperature point labeled; (c) image captured by the RGB camera of the same scene.

To solve this problem, a transformation known as homography [19] is used. Homography takes at least four points in the source image and their corresponding points in the target image as inputs and then calculates a 3 × 3 transformation matrix H. Once the H matrix between the two images is known, a warped image can be generated as an output by multiplying the source image by the H matrix. The generated warped image will have similar object dimensions to the target image. In this project, the thermal image is considered the source image, and the RGB image is considered the target image, as shown in Figure 5a,b. Nine corresponding point pairs are manually selected. Then, using OpenCV [20,21], the H matrix is calculated as shown in (2), and a warped image is generated as shown in Figure 5c. The objects in Figure 5c, the image taken by the thermal camera, have similar sizes and aspect ratios when compared with the objects in Figure 5b, the image taken by the RGB camera. Thus, these images can be overlapped without too much error in corresponding points.
much error in corresponding points.

    ⎡  1.13722850e+00   9.47917113e−02   −5.05224869e+01 ⎤
H = ⎢ −2.39533853e−02   1.41938468e+00   −4.80018099e+01 ⎥   (2)
    ⎣  1.88423621e−04   3.61817207e−04    1.00000000e+00 ⎦
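A minimal sketch of this step with OpenCV; the point pairs below are placeholders, not the nine pairs selected in the paper:

```python
# Minimal sketch of the homography step with OpenCV: estimate H from manually
# selected point pairs, then warp the thermal image into RGB-image coordinates.
# The point coordinates here are placeholders.
import cv2
import numpy as np

src_pts = np.float32([[10, 10], [70, 12], [12, 50], [68, 52]])      # thermal image
dst_pts = np.float32([[20, 25], [300, 30], [25, 210], [295, 215]])  # RGB image

H, _ = cv2.findHomography(src_pts, dst_pts)           # 3x3 matrix as in (2)
thermal = np.zeros((60, 80), dtype=np.uint8)          # stand-in thermal frame
warped = cv2.warpPerspective(thermal, H, (320, 240))  # aligned with RGB image
```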

Figure 5. Thermal image (a) and RGB image (b) with corresponding points; (c) generated warped thermal image using homography having similar dimensions to the objects in (b).

The thermal image is converted into a binary mask image by thresholding to detect fire, as discussed in Section 3.1.2. In this image, the pixels of fire will be 1, and the pixels without fire will be 0. Then, this image is multiplied by the H matrix to get the warped image that will correspond to the image taken by the RGB camera. Then, using OpenCV, the contours for each fire segment are calculated. Figure 6a shows the image from the RGB camera, where fire is placed in front of the couch. The thermal camera image with pseudo-coloring is shown in Figure 6b and the mask of the fire after applying homography is shown in Figure 6c.

Figure 6. (a) RGB camera image showing detected objects' boundary boxes and fire; (b) pseudo-colored thermal image; (c) mask of the fire after applying homography.

To find the objects on fire, the intersection between the fire masks and each detected object's mask is calculated. If there is an overlap between the fire mask, as shown in Figure 6c, and the object's mask, as shown in Figure 3b, then that object is considered on fire. If the fire mask intersects with the unknown object's mask, as shown in Figure 3c, then the unknown object is considered on fire.

3.1.4. Extinguisher Recommendation


Once the name of the burning object is found, the proper extinguisher class is rec-
ommended based on the material and nature of the object. For instance, fires on TVs and
refrigerators are electrical fires and will need class C. Fires on beds and couches are on
wood and cloth and will need class A [3]. A lookup table is used to obtain the extinguisher
class from the object name. Each of the 91 classes of objects [18] in the SSD-Inception-V2
model is assigned to an extinguisher class and they are stored in a database table.
Some objects are as hot as fire, such as an oven, and some objects are on fire for
legitimate reasons, such as candles, furnaces, and fireplaces. These objects are referred to as
exception objects, ε. Neither alerts nor extinguisher class recommendations are generated
when the exception objects are on fire. The recommended extinguisher, referred to as C, is
calculated using (3). This indicates that set ε is subtracted from η, and then the function
ψ is applied to the result, yielding a mapping to C. Here, the function ψ maps the object
name to its assigned fire extinguisher class.

C = ψ(η − ε) (3)
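A minimal sketch of this lookup, with a few illustrative table entries consistent with the text (the full 91-class mapping lives in the database table):

```python
# Minimal sketch of (3): map the burning object's name to an extinguisher
# class through a lookup table, skipping exception objects. The entries shown
# are examples consistent with the text, not the full 91-class mapping.
EXTINGUISHER_CLASS = {"tv": "C", "refrigerator": "C", "bed": "A", "couch": "A"}
EXCEPTION_OBJECTS = {"oven", "candle", "furnace", "fireplace"}

def recommend(burning_object: str):
    """Return the extinguisher class C = psi(eta - epsilon), or None."""
    if burning_object in EXCEPTION_OBJECTS:
        return None                      # fire is expected here; no alert
    return EXTINGUISHER_CLASS.get(burning_object)

print(recommend("tv"))      # -> C (electrical fire)
print(recommend("candle"))  # -> None (exception object)
```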

3.2. Architecture of the Prototype System


The proposed smart fire detection system, illustrated in Figure 1, comprises
a fire detector device, a central server, and smartphone apps for users
and emergency responders. Users position the device within a room for optimal camera
visibility and then employ the smartphone app to configure the device’s Wi-Fi settings
and update relevant information on the central server. Emergency responders also utilize
a dedicated smartphone app to update their details. Once configured, both users and
emergency responders can receive real-time smartphone notifications from anywhere in
the world, as long as they have an Internet connection, triggered by the fire detector device.
Below, we provide a concise overview of the system’s various modules.

3.2.1. Smart Fire Detector Device


The smart fire detector device captures both thermal and RGB images of its surround-
ings and effectively detects fires, including the object on fire. Upon detecting a fire, it
promptly transmits the relevant data to the central server via the Internet and saves the
video footage of the fire within its local SD card. Configuration of the device is facilitated
using a smartphone app. Below, we provide a brief overview of the device’s hardware
and firmware.

Hardware
Figure 7 illustrates the hardware unit block diagram of the fire detector device, fea-
turing the NVIDIA® Jetson Nano™ Developer Kit (NVIDIA, Santa Clara, CA, USA) [12]
as its central processing unit, renowned for its compact design and energy efficiency. This
single-board computer excels in executing neural network models, including tasks like
image classification, object detection, and segmentation, among others. The Jetson Nano™
Developer Kit boasts a robust Quad-core ARM A57 microprocessor running at 1.43 GHz,
4 GB of RAM, a 128-core Maxwell graphics processing unit (GPU), a micro SD card slot,
USB ports, GPIO, and various integrated hardware peripherals. A thermal camera [22]
is connected via a smart I/O module [23] and interfaced with the Jetson Nano using a
USB. According to the datasheet of the FLIR Lepton® thermal camera (Teledyne FLIR LLC,
Wilsonville, OR, USA), it captures accurate, calibrated, and noncontact temperature data in
every pixel of each image. An eight-megapixel RGB camera [24] is connected to the Jetson
Nano via the Camera Serial Interface (CSI). For wireless connectivity, a Network Interface
Card (NIC) [25], supporting both Bluetooth and Wi-Fi, is connected to the Jetson Nano’s
M.2 socket. An LED, serving as a program status indicator known as the heartbeat LED,
is interfaced with a GPIO pin on the Jetson Nano. To power the device, a 110 V AC to
5 V 4 A DC adapter is employed, and to maintain the optimal operating temperature, a
cooling fan with pulse width modulation (PWM)-based speed control is positioned above the microprocessor.

Figure 7. Hardware block diagram for the smart fire detection device.
Firmware
The Jetson Nano board contains a 64 GB SD card, running a customized version of the Ubuntu 18.04 operating system known as Bionic Beaver. The application software is developed using Python, and the system is equipped with all necessary packages, including JetPack 4.6.3. After system startup, three Python programs operate concurrently in separate threads: one for configuration, another for fire detection, and a third for accessing the recorded videos. A brief overview of these programs is provided below.
Wi-Fi Configuration: The objective of this program is to facilitate the configuration of the device's Wi-Fi connection using the user's smartphone. After booting, the program initiates Bluetooth advertisement [26] on the Jetson Nano, making the device detectable to the user's smartphone during Bluetooth scans. In this setup, the Jetson Nano serves as a Bluetooth server, while the smartphone acts as a Bluetooth client. The program subsequently waits for a Bluetooth connection request from the client through a socket [27]. If no connection request is received within 30 min of booting, then it closes the socket, disables Bluetooth advertising, and terminates the program. This timeout mechanism is implemented to reduce unauthorized access. Once the smartphone establishes a connection with the device, Bluetooth advertising is disabled, and the device awaits commands from the smartphone.
The smartphone requires knowledge of nearby Wi-Fi service set identifiers (SSIDs) to
proceed. When the smartphone sends a command to the device to request the list of nearby
SSIDs, the device generates this list using the Linux nmcli tool [28] and transmits it to the
smartphone. On the smartphone, the user selects the desired Wi-Fi SSID for the device
to connect to and inputs the password. Subsequently, the smartphone sends a command,
including the SSID and password, to the device, instructing it to connect. Upon receiving
the Wi-Fi connection command, the device attempts to connect to the requested SSID and
responds with the connected SSID and its local Internet Protocol (IP) address. Once the
Wi-Fi configuration is completed, the smartphone sends a “done” command, prompting
the device to close the socket connection, re-enable advertising, and await a new Bluetooth
connection within the timeout period.
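A minimal sketch of the two nmcli operations involved, assuming they are invoked from Python with subprocess; the surrounding Bluetooth command protocol is omitted:

```python
# Minimal sketch of the nmcli-based Wi-Fi steps the configuration thread can
# issue. The nmcli flags are standard usage; the Bluetooth command exchange
# that triggers these calls is simplified away.
import subprocess

def list_ssids():
    out = subprocess.run(["nmcli", "-t", "-f", "SSID", "device", "wifi", "list"],
                         capture_output=True, text=True, check=True).stdout
    return sorted({s for s in out.splitlines() if s})

def connect(ssid: str, password: str) -> bool:
    result = subprocess.run(["nmcli", "device", "wifi", "connect", ssid,
                             "password", password], capture_output=True)
    return result.returncode == 0  # device then reports SSID and local IP back
```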
Detecting Objects and Fires: A flowchart of smart fire detection firmware is shown in
Figure 8. First, it initializes the hardware and global variables. Here, the RGB camera is
configured to capture 320 × 240-pixel color images [29], the thermal camera is configured
to send 80 × 60 images in Y16 data format [30], the heartbeat LED pin is initialized as an
output pin, and the SSD-Inception-v2 object detection model is loaded in the memory [31].
A list, listObjectStatus, is used to keep track of notifications for each object. The length
of the list is the total number of objects the deep learning model can detect, which is
91. The list indexes correspond to the class IDs of the detected objects. The list stores
the instances of a class ObjectStatus, having the properties isOnFire and isNotificationSent.
During initial-
ization, these properties are set to false. The list, listObjectExceptionClassID, is initialized
During
with the initialization, these
class IDs of the properties
exception are set to false.
objects—such The list,
as oven, listObjectExceptionClassID,
bowl, toaster, hair drier, etc.—
is initialized with the class IDs of the exception
where a high temperature is expected and accepted. objects—such as oven, bowl, toaster, hair
drier, etc.— where a high temperature is expected and accepted.

Figure 8. Firmware flowchart for the smart fire detection implemented on the microcontroller.
The common objects, fire, and burning objects are detected in the firmware according to the discussion in Section 3.1. The RGB image is captured and it is stored in the GPU memory as a CUDA image [32] instead of RAM for faster instantiation of the deep learning model. The CUDA image is then passed to the object detector [33] as input, and it provides the class IDs and the bounding box coordinates as the output. Then, a list of contours [34], listObjectContour, is generated from the bounding box coordinates of each detected object. If no object is detected, the list will be empty. Then, a mask is generated, maskUnknownObj, that represents the areas of unknown objects, similar to Figure 3c. The mask is generated by first generating a canvas filled with 1 and then drawing each contour on it from the listObjectContour filled with 0.
The thermal image is then captured, which is an array of size 80 × 60. Each element
contains 16-bit temperature data in kelvin multiplied by 100. The array is then resized to
320 × 240 using interpolation. The temperature data, Y, are then converted into degrees
Celsius, C, using (4).
C = (Y ÷ 100) − 273.15 (4)
After that, a binary image, based on thresholding, is generated from the temperature
image, where pixel values higher than the threshold are set to 255 and lower than the
threshold are set to 0. The threshold temperature, above which a pixel is considered fire,
is set at 65.5 degrees Celsius for this prototype. Then, homography [20,21] is
applied to this image, so that the objects in the thermal image have similar coordinates to
the RGB image, as discussed in Section 3.1.3. From this image, the contours of the fires
are detected [34], and a list of contours, listFireContour, is generated from the detected fire
contours. If there is more than one fire segment in the scene, then the list will contain the
contours of each fire segment. If there is no fire, the list will be empty.
Once listObjectContour and listFireContour are generated, then each detected object’s
status, whether it is on fire or not, is determined by calculating the intersections of these
contour areas. Each detected object’s contour is checked for intersections for each fire
contour, and the result of the intersection, isOnFire, and the object’s class ID, ClassID, is
appended as an instance of the class, FireStatus, in the list listFireStatus. To find whether the
areas between two contours intersect or not, first, a canvas of size 320 × 240 is generated
filled with 0. Then, mask1 is generated by drawing the first contour filled with 1 on a copy
of the canvas, and mask2 is generated by drawing the second contour filled with 1 on
another copy of the canvas. Logical AND is then calculated between mask1 and mask2. If any resulting pixel is 1, then the contours intersect.
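A minimal sketch of this intersection test, assuming OpenCV contours and the 320 × 240 canvas described above:

```python
# Minimal sketch of the canvas-based intersection test described above:
# draw each contour filled on its own blank canvas, then AND the two masks.
import cv2
import numpy as np

def contours_intersect(c1, c2, size=(240, 320)) -> bool:
    mask1 = np.zeros(size, dtype=np.uint8)
    mask2 = np.zeros(size, dtype=np.uint8)
    cv2.drawContours(mask1, [c1], -1, 1, thickness=cv2.FILLED)
    cv2.drawContours(mask2, [c2], -1, 1, thickness=cv2.FILLED)
    return bool(np.logical_and(mask1, mask2).any())  # any overlapping pixel
```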
Note that, the fire could be on objects that are not recognized by the object detector. In
this case, the fire is considered to be on an unknown object. The class ID for the unknown
object in the object detector is 0 [18]. The intersection between maskUnknownObj and each
fire contour is calculated in a similar way by creating masks and running logical AND
operations. The ClassID and isOnFire properties of the unknown object are appended to
listFireStatus.
The object detector may detect multiple instances of objects of the same class, such as
three couches in a room. If fire is detected on all three or any of the couches, then sending
one notification—mentioning the couch is on fire—is sufficient instead of sending the
specific fire status of each couch. However, there will be more than one entry for couch on
the list listFireStatus. Moreover, if there is more than one fire segment detected, then there
will be several entries in listFireStatus for the same class ID. To determine which class IDs
are on fire in listFireStatus, the same class IDs are grouped and then a logical OR operation
is used on the isOnFire property. Thus, the listFireStatus contains unique class IDs along
with their fire status stored in isOnFire.
The class IDs in the listFireStatus that are on the exception objects list, listObjectExcep-
tionClassID, are removed from listFireStatus so that a notification is not generated where
fire is expected and safe.
The isOnFire property of the listObjectStatus, which is used to keep track of notifications
for each 91 objects, is updated from the listFireStatus. If an object is no longer on fire and its
notification was sent (i.e., isNotificationSent is true), it resets the notification status for that
object. If an object is on fire and has not had a notification sent yet (i.e., isNotificationSent
is false), it sends a notification to the server and updates the notification status (i.e., sets
isNotificationSent to true) to indicate that a notification has been sent. The code ensures that
notifications are only sent once for each object on fire to avoid continuous notifications. To
send the notification, the program tries to connect with the central server using a socket [35]
with a timeout of 5 s and sends a data string containing the serial number of the device,
the class ID of the burning object, and the current date and time. The program reads the
Bluetooth’s media access control (MAC) address [26] of the Jetson Nano and it is used as
the serial number of the device. As the device is connected to Wi-Fi, it can get the correct
date and time information [36].
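A minimal sketch of the send step, assuming a comma-separated payload (the paper specifies the fields and the 5 s timeout, but not the exact wire format):

```python
# Minimal sketch of the one-shot notification send: a TCP connection to the
# central server with a 5 s timeout. The delimiter and server address are
# assumptions; the paper only specifies the payload fields and the timeout.
import socket
from datetime import datetime

def send_fire_notification(server_addr, device_sn: str, class_id: int) -> bool:
    payload = ",".join([device_sn, str(class_id),
                        datetime.now().strftime("%Y-%m-%d %H:%M:%S")])
    try:
        with socket.create_connection(server_addr, timeout=5) as sock:
            sock.sendall(payload.encode())
        return True
    except OSError:
        return False  # caller may retry; isNotificationSent stays False
```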
If any object in listFireStatus is on fire, and recording is not already in progress, it
generates a filename based on the current date and time and initializes a video writer with
the MJPG codec. It then sets the recording status to true and turns on the heartbeat LED
to signal that the recording has started. While recording, the code writes each RGB image
frame to the video file in the rec_fire_video folder. If no fire is detected while recording is
in progress, it releases the video writer, sets the recording status to false, and turns off the
LED. The LED continuously blinks when no fire is detected.
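A minimal sketch of the recording setup with OpenCV's MJPG writer; the frame rate is an assumed value, as the paper does not state one:

```python
# Minimal sketch of fire-video recording with OpenCV's MJPG codec. The
# timestamped filename and folder follow the text; the 15 fps rate is assumed.
import cv2
from datetime import datetime

filename = "rec_fire_video/" + datetime.now().strftime("%Y%m%d_%H%M%S") + ".avi"
writer = cv2.VideoWriter(filename, cv2.VideoWriter_fourcc(*"MJPG"),
                         15.0, (320, 240))

# while fire is detected: writer.write(rgb_frame)  # one 320x240 BGR frame
writer.release()  # called once no fire is detected anymore
```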
Server for Accessing Recorded Videos: For the purpose of accessing and playing the
recorded fire videos for post-incident analysis, an HTTP server [37] is implemented within
the device, running on port 8000, with the working directory set to the “rec_fire_video”
folder where the fire videos are stored. These files can be accessed and played by the user’s
smartphone utilizing the local IP address of the device and the designated port number.
This accessibility is applicable as long as both the smartphone and the device are connected
to the same Wi-Fi network. The smartphone obtains the local IP address of the device
during the Wi-Fi configuration process.
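A minimal sketch of such a server using only the Python standard library; the folder name and port follow the text:

```python
# Minimal sketch of the recorded-video HTTP server on port 8000, rooted at
# the rec_fire_video folder, using only the Python standard library.
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="rec_fire_video")
ThreadingHTTPServer(("", 8000), handler).serve_forever()
```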

3.2.2. Software for the Central Server


The central server, created using Visual C# and Microsoft SQL Server [38], incorporates
various capabilities, including mapping fire events, generating alerts, sending push notifica-
tions to smartphones, and enabling database queries using a graphical user interface (GUI).
This server can be hosted on a computer, providing a platform for emergency responders
to effectively monitor and respond to critical events.

Structured Query Language (SQL) Database


The software incorporates an SQL database with its table structures, field definitions,
and relational connections depicted in Figure 9. In this visual representation, the primary
key for each table is denoted by a key symbol positioned to the left of the field name,
while lines establish the relationships, linking primary key fields on the left side to the
corresponding foreign key fields on the right.

Figure 9. The database tables, their fields, and the associated relationships.
The user_tbl table houses user-related information, including name, address, email, and phone, with the user's smartphone's Android ID [39] serving as the unique UserID. Each smartphone app is treated as a distinct user, and Firebase Cloud Messaging (FCM) registration tokens [40] for sending fire notifications are stored in the "FCMID" field. A unique Android ID and FCM registration token are generated for each user upon app installation. In the device_tbl table, the device details are stored, with the Bluetooth MAC address functioning as the unique device serial number in the DeviceSN field. Location data such as latitude, longitude, address, floor, and room are also stored for a swift emergency response. Users can assign a nickname to the device, recorded in the "Name" field, while the local IP of the device is stored in the "IP" field. The user_device_tbl establishes connections between users and devices, accommodating multiple devices per user and vice versa, creating a many-to-many relationship. The er_tbl table compiles information on emergency responders, including ERID, FCMID, name, address, email, and phone, with ERID storing the Android ID and FCMID holding the FCM registration token, akin to the user table. The event_data_tbl table maintains data on each fire event, encompassing the device's
serial number, date, time, location details, and the burning object’s class ID, enabling
comprehensive event tracking and data queries. Lastly, the object_tbl contains 91 class IDs
from the deep learning model, along with their labels and assigned fire extinguisher classes.

Processing Data within a TCP Server


The central server is equipped with a Transmission Control Protocol (TCP) server [41]
that actively listens on port 8050. Establishing a connection between the fire detector
devices or smartphones and this server necessitates a fixed public IP and an open port.
The router’s public IP, assigned by the Internet service provider (ISP), is typically static
and remains relatively unchanged. It serves as the fixed public IP for this purpose. To
facilitate the transmission of incoming data packets from the Internet to our custom TCP
server port, the local IP of the server computer is set to a static configuration, and port
forwarding [42] is configured within the router. Additionally, the specified port is opened
within the firewall settings [43]. The TCP server serves as the receiver for user and device
configuration data, emergency responder configuration data from smartphones, and fire
notification data from fire detector devices. The first byte of the data indicates whether
it pertains to user and device configuration, emergency responder configuration, or fire
notification, with each of these data types briefly outlined below.
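Before the individual data types are outlined, note that the authors' server is written in Visual C#; the Python sketch below only illustrates the first-byte dispatch idea, and the type codes '0', '1', and '2' as well as the handler names are invented for the example.

```python
# Illustrative sketch of the first-byte dispatch on port 8050. The actual
# server is C#; this Python version only shows the routing idea, and the
# type codes and handler functions are invented placeholders.
import socketserver

def handle_user_device_config(body: str): ...        # hypothetical stubs; the
def handle_emergency_responder_config(body: str): ...  # real logic updates SQL
def handle_fire_notification(body: str): ...         # tables and pushes alerts

class FireTCPHandler(socketserver.StreamRequestHandler):
    def handle(self):
        data = self.rfile.read().decode()
        kind, body = data[0], data[1:]   # first byte selects the data type
        if kind == "0":
            handle_user_device_config(body)
        elif kind == "1":
            handle_emergency_responder_config(body)
        elif kind == "2":
            handle_fire_notification(body)

socketserver.ThreadingTCPServer(("", 8050), FireTCPHandler).serve_forever()
```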
Configuration data for users and devices: The user and device configuration data strings
contain all field values from the user_tbl, the total number of devices, and each field
value from the device_tbl for each individual device. Upon arrival at the server, the data
undergo parsing, with the extracted information stored in variables and subsequently
saved in the respective database tables. If the UserID already exists in the user_tbl, the
user’s information is edited by updating the corresponding row with the incoming data;
otherwise, a new user is added by inserting a new row into the table. SQL queries [44]
are employed to execute these operations, connecting to the database. For each device
listed in the data, the DeviceID is verified within the device_tbl. If the DeviceID already
exists, the device information is updated; otherwise, new device data are appended to the
table. Subsequently, the user_device_tbl is modified to allocate the devices to the user. This
involves deleting all rows containing the user’s UserID and then inserting new rows for
each device listed in the data, thereby ensuring the ongoing association of devices with the
user, regardless of additions, edits, or removals.
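A minimal sketch of this upsert-and-reassign logic is shown below, assuming a pyodbc connection to the SQL Server database [38]; the column names beyond UserID, FCMID, and DeviceID are placeholders, as the actual schema in Figure 9 may differ.

import pyodbc

def upsert_user(cur: pyodbc.Cursor, user_id: str, fcm_id: str, name: str):
    # Update the row if the UserID already exists; otherwise insert a new user.
    cur.execute("SELECT 1 FROM user_tbl WHERE UserID = ?", user_id)
    if cur.fetchone():
        cur.execute("UPDATE user_tbl SET FCMID = ?, Name = ? WHERE UserID = ?",
                    fcm_id, name, user_id)
    else:
        cur.execute("INSERT INTO user_tbl (UserID, FCMID, Name) VALUES (?, ?, ?)",
                    user_id, fcm_id, name)

def reassign_devices(cur: pyodbc.Cursor, user_id: str, device_ids: list):
    # Delete the user's old associations, then insert one row per listed device,
    # keeping the many-to-many user_device_tbl consistent with the incoming data.
    cur.execute("DELETE FROM user_device_tbl WHERE UserID = ?", user_id)
    for dev_id in device_ids:
        cur.execute("INSERT INTO user_device_tbl (UserID, DeviceID) VALUES (?, ?)",
                    user_id, dev_id)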
Configuration data for emergency responder: The data string incorporates all field values
from the er_tbl. Upon arrival at the server, the data are parsed, stored in variables, and
subsequently stored in the corresponding database table. If the ERID already exists within
the er_tbl, the information for the emergency responder is updated, while if the ERID is
not found in the table, a new entry for the emergency responder is added.
Notification data for fire: Upon the detection of a fire event, these data are transmitted
to the server from the fire detector device, comprising the DeviceSN, the class ID of the
burning object, and the event date and time. Once received by the server, several actions
are initiated: the location information of the device is retrieved from the device_tbl using
the DeviceSN; a new entry is inserted into the event_data_tbl to preserve the event data
in the database; the event is plotted on a map using a marker [45]; information about
the burning object’s label and the fire extinguisher class is retrieved from the object_tbl;
a fire detection message is displayed; a warning sound is triggered; and Firebase Cloud
Messaging (FCM) push notifications [46] are dispatched to both the smartphones of the
device’s assigned users and all emergency responders. To send these push notifications to
each user associated with the device, FCM registration tokens for each user are gathered
from the user_device_tbl and user_tbl using a multiple-table query. Each push notification
includes essential details such as the DeviceSN, burning object label, fire extinguisher class,
device location information, and event date and time.
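The multiple-table query that gathers these FCM tokens might look like the following sketch; the join columns follow the table descriptions given earlier, while anything else is an assumption.

TOKEN_QUERY = """
SELECT u.FCMID
FROM user_tbl AS u
JOIN user_device_tbl AS ud ON ud.UserID = u.UserID
JOIN device_tbl AS d ON d.DeviceID = ud.DeviceID
WHERE d.DeviceSN = ?
"""

def fcm_tokens_for_device(cur, device_sn):
    # Returns one FCM registration token per user assigned to the device.
    cur.execute(TOKEN_QUERY, device_sn)
    return [row.FCMID for row in cur.fetchall()]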
Fire Event Searching


The software features a graphical user interface (GUI) allowing users to select a date
and time range, define a rectangular area on the map, or apply both criteria simultaneously
to search for fire events. An SQL query is generated based on the selected criteria, and the
resulting data are fetched from the database. Subsequently, the identified fire events from
these data are plotted on the map, and relevant location and user information are presented
for viewing.
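As an example, a combined date-and-area search could be generated as in the sketch below; the EventTime, Latitude, and Longitude column names are assumptions, since event_data_tbl is only described as holding the date, time, and location details of each event.

SEARCH_QUERY = """
SELECT *
FROM event_data_tbl
WHERE EventTime BETWEEN ? AND ?
  AND Latitude BETWEEN ? AND ?
  AND Longitude BETWEEN ? AND ?
"""

def search_events(cur, t_start, t_end, lat_min, lat_max, lon_min, lon_max):
    # Fetch the fire events inside the time range and the rectangular map area.
    cur.execute(SEARCH_QUERY, t_start, t_end, lat_min, lat_max, lon_min, lon_max)
    return cur.fetchall()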

3.2.3. App for Smartphone


Two smartphone applications have been developed specifically for the Android plat-
form: one for regular users and another for emergency responders. These applications
feature a settings interface where users and emergency responders can input their informa-
tion, as depicted in the user_tbl and er_tbl tables shown in Figure 9. It is worth noting that
the UserID and ERID, which serve as unique Android identifiers [39] for the smartphones,
along with the FCMID, a Firebase Cloud Messaging registration token [40], are all assigned
automatically and require no manual input.
The primary distinction between these two applications lies in their functionality.
The user application provides options for configuring their devices, while the emergency
responder application lacks device configuration options since they are not end users of
any devices themselves. The settings window includes a custom list view that displays
the user’s associated devices. Users can add new devices, edit existing ones, or remove
them directly from this interface. The device properties, as illustrated in the device_tbl
from Figure 9, can be updated by selecting the respective device.
To simplify the process of inputting device locations, the smartphone can be placed
near the device, and the GeoLocation [47] library is used to automatically retrieve GPS co-
ordinates and address information. Additionally, the app incorporates Wi-Fi configuration,
as discussed in Section 3.2.1 via a graphical user interface (GUI). Within this interface, users
can search for nearby Bluetooth devices and establish connections. Before establishing a
connection, pairing between the device and smartphone is required. Once connected, the
Bluetooth MAC address is assigned as the DeviceSN, the list of available Wi-Fi SSIDs is
retrieved from the device and displayed in the app, and users can select their desired SSID
and provide the necessary password, as described in Section 3.2.1.
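On the device side, this exchange can be sketched with the PyBluez library [27] roughly as follows; the newline-delimited message format and the two helper functions are illustrative placeholders, not the actual protocol.

import bluetooth

def scan_wifi_ssids():
    # Placeholder for the device-side SSID scan (e.g., via the wifi-wrapper library [28]).
    return ["HomeNet", "Lab-2.4GHz"]

def connect_to_wifi(ssid, password):
    # Placeholder for the actual Wi-Fi join step.
    pass

server = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
server.bind(("", bluetooth.PORT_ANY))   # the Bluetooth MAC doubles as the DeviceSN
server.listen(1)
client, _addr = server.accept()         # the paired smartphone connects

client.send("\n".join(scan_wifi_ssids()).encode())           # send the SSID list
ssid, password = client.recv(1024).decode().split("\n", 1)   # receive the choice
connect_to_wifi(ssid, password)
client.close()
server.close()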
Upon exiting the settings window, the smartphone establishes a connection with the
central server via the Internet as a client. It utilizes a socket to transmit configuration data,
which subsequently updates the server’s database.
Once the device’s Wi-Fi is configured, the smartphone app acquires the local IP address
of the device. Utilizing this local IP and the HTTP server port of the device, users can access
and play fire videos recorded on the device directly from their smartphones.
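Since the firmware already embeds Python's http.server [37], the video listing can be served with something as small as the sketch below; the port number and directory path are assumptions. The smartphone app then simply opens the device's local IP and port in a video player.

import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the recorded fire videos over HTTP on the device's local network.
handler = functools.partial(SimpleHTTPRequestHandler,
                            directory="/home/nano/fire_videos")  # assumed path
HTTPServer(("0.0.0.0", 8080), handler).serve_forever()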
The initial screen of both applications features a customized list view displaying fire
events. This list includes details such as the device name, serial number, location, burning
object name, recommended fire extinguisher class, and date and time of each event. These
applications are registered in the Firebase Cloud Messaging (FCM) [48] dashboard to
receive push notifications. In the background, a service named FirebaseMessaging operates
within the app [49]. When a push notification message is received from FCM, a callback
function is triggered. The app appends the message to a list, saves the list to a file, generates
a smartphone notification, and updates the list view on the screen. If a user clicks on any
item within the list view, it opens Google Maps with the destination set to the location of
the device. This feature allows users, as well as emergency responders, to navigate to the
incident site quickly.

4. Results
4.1. Simulation Results
The most common evaluation metric for object detection accuracy is mean average
precision (mAP). Table 2 shows the inference latency and the mAP for different SSD
models [50] trained using the COCO dataset. In Table 2, latency for one 600 × 600 image is
reported when running on a desktop computer with an NVIDIA GeForce GTX TITAN X
GPU. The detector performance mAP is calculated using a subset of the COCO validation
set consisting of 8000 samples. A higher mAP indicates better detection accuracy. These
mAPs are bounding-box accuracies rounded to the nearest integer. Here, we see that
SSD-Inception-v2 has the highest mAP except for SSD resnet 50 fpn, which comes at a
much higher latency. Considering both mAP and latency, the SSD-Inception-v2 model is
chosen for this project.

Table 2. Latency and mean average precision (mAP) of SSD object detection models.

Model Latency (ms) mAP


SSD mobilenet_v1 30 21
SSD mobilenet v2 31 22
SSD mobilenet v2 quantized 29 22
SSD inception v2 42 24
SSD resnet 50 fpn 76 35

In Figure 10, some of the test images are shown. Here, we see that the object detector
successfully detected the objects in most cases and drew the bounding boxes with the
percentage confidence level.

Figure 10. Several examples of object detection on test images. The object label, bounding box, and
detection confidence levels are overlaid on the images.

4.2. Prototype Testing Results


A prototype of the proposed smart fire detector device, central server, and smartphone
app has been developed and tested successfully. A photograph of the smart fire detector
device, labeling different parts, is shown in Figure 11a. A birds-eye-view of the experimental
setup with miniature toy furniture is shown in Figure 11b. The device is programmed
according to the Firmware section of Section 3.2.1 and is configured to run the programs
automatically on boot. On the Jetson Nano device, the average time for executing the steps
shown in the flowchart in Figure 8 is 65 ms. Of this 65 ms, 52 ms is spent by the
SSD-Inception-v2 model on detecting common objects. The power consumption of different
parts and the entire device is measured using the jetson-stats library [51] and shown in
Table 3.

Figure 11. (a) Photograph of the smart fire detector device: the RGB camera (1) and the thermal
camera (2) are mounted on a plastic frame side by side. They are interfaced with the Jetson Nano
developer kit (3) using the CSI and USB ports, respectively. The wireless module with antenna (4),
heartbeat LED (5), and DC power adapter (6) are interfaced with the Jetson Nano; (b) Experimental
setup: the plastic frame containing the two cameras is placed above using a camera holder (7), so
that it can capture the images of the miniature toy furniture such as the couch (8) and TV (9).
Table 3. Power consumption of the smart fire detector device.

Hardware Part Power
Jetson Nano’s CPU 1.2 W
Jetson Nano’s GPU 3 W
Entire Device 6.6 W
After the device is powered up, the heartbeat LED starts to blink, indicating the
program is running and capturing both RGB and thermal images. The central server, as
discussed in Section 3.2.2, was running on an Internet-connected computer. The system is
then configured using the smartphone app, as discussed in Section 3.2.3. Some screenshots
of the smartphone app for configuring the emergency responder, the user, and their device
are shown in Figure 12. Using the app, a user and a device are added, and the Wi-Fi
of the device is configured successfully. The user and device information were updated
as expected in the central server. Using the smartphone app designed for emergency
responders, an emergency responder was also added to the system.
Figure 12. Screenshots from the smartphone apps: (a) emergency responder app displaying
configuration options; (b) user app presenting user configuration and a list of connected devices;
(c) device property configuration and buttons for Wi-Fi Internet setup and playing recorded fire
videos; (d) searching for and connecting with devices using Bluetooth; and (e) configuring the Wi-Fi
SSID for the device.
The smart fire detector system was tested for different cases inside a lab environment
by setting small fires in front of the miniature furniture using a barbeque lighter. The
objects were not burnt for safety reasons. The different testing cases are briefly described below:
•	Testing fire on a known object: During testing, fires were set in front of the objects
that are known to the object detector as listed in [18]. Testing was carried out for the
couch and TV as shown in Figure 13a,b, respectively. The device successfully detected
the fire and the class of the burning object and notified the central server within
a second. Upon receiving the notification data from the device, the central server
successfully marked the location of the fire event on the map, displayed the assigned
user and device information in the event log, saved the event data in the database,
generated warning sounds, and sent smartphone notifications to the assigned user
and all the emergency responders. Some screenshots of the central server software
and smartphone app after a fire event are shown in Figures 14 and 15. Here, we see
that the proposed system correctly identified the objects on fire, such as the couch and
TV, and also successfully suggested the fire extinguisher class, such as A for the couch
and C for the TV.
•	Testing fire on an unknown object: When the fire was on an unknown object, as shown
in Figure 13c, such as the wooden TV stand, the device detected the fire but did not
recommend a fire extinguisher class. The notification about the fire was successfully
sent to the smartphones, without recommending a fire extinguisher.
•	Testing fire on an exception object: When the fire was on an exception object, as shown
in Figure 13d, such as on an oven where a high temperature is expected, the system
neither considered it fire nor sent any notification.
•	Testing with multiple users and devices: The system was also tested with multiple
emergency responders, multiple users, and devices having many-to-many relationships,
and notifications were sent successfully as expected to several devices.

Figure 13. Images captured by the RGB camera when testing with fire: (a) fire on couch; (b) fire on
TV; (c) fire on unknown object; (d) fire on exception object such as oven.
Figure 14. Screenshot of the central server software demonstrating the mapping of a fire event (on
the right) and displaying pertinent information, such as the object on fire (in this case, a couch) and
the recommended fire extinguisher, within the event log on the left.

Figure 15. Screenshots of the smartphone apps: (a) emergency responder app’s list view showing a
list of fire events with the name of the object on fire, the recommended extinguisher class, location,
date, and time; (b) clicking on the list item shows the direction to the fire event location on the map;
(c) user app’s list view showing a list of fire events with the name of the object on fire, the
recommended extinguisher class, location, date, and time; (d) smartphone notification when fire is
detected; (e) accessing recorded fire event videos on the device from the user’s smartphone app;
(f) thumbnails of the recorded videos on the smartphone app using VLC media player.
smartphone
In the central server, fire events can be successfully searched for using a range of
dates and times, a rectangular area on the map, or both. A screenshot of the searching fire
event is shown in Figure 16.

Figure 16. Central server software screenshot for fire event searching. Fire events can be searched for
using a range of dates and times, selecting a rectangular location on the map, or both. The left side
shows the detailed log and the right side shows the plot on the map based on the search result.
5. Discussion and Future Work


It was found during experiments that fire is reliably detected as long as there is a line of
sight between the fire and the thermal camera. Fire detection is not affected by the ambient
lighting conditions. However, object detections by the RGB camera are sometimes missed
in a few frames. The performance of object detection was affected by the lighting
conditions and the orientation of the object. Such fluctuations in detection are common
across most object detection models, and more basic research on machine learning is
needed to address this.
In this work, the object detection model is trained using 91 common objects [18]. We
plan to add more objects to the model by retraining it using transfer learning in the future.
Instead of finding the bounding box coordinates of the objects, semantic segmentation
could be used, where each pixel is classified. This will improve the accuracy of detecting
the object on fire; however, the training and inferencing models for semantic segmentation
will require more data and computing power. We plan to investigate this approach in
the future.
One of the biggest challenges is to position the cameras for fire detection in houses
because this technique requires a direct view of the fire. To cover all objects and to avoid
occlusion in the room, multiple cameras may be interfaced with the Jetson Nano device
and placed in different corners of the room. Along with the RGB cameras, night vision
cameras can be interfaced so that the device can detect objects on fire even in low light
conditions. Note that the thermal camera can detect heat and fire irrespective of the lighting
condition. In this work, homography was applied to the thermal image, as discussed in
Section 3.1.3, by manually selecting corresponding point pairs between the thermal image
and RGB image. This manual process can be automated using the Scale-Invariant Feature
Transform (SIFT) algorithm [52]. This will calculate the H matrix dynamically, and we plan
to implement it in the future.
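A sketch of that planned automation with OpenCV is shown below: SIFT keypoints [52] are matched between the two frames, and cv2.findHomography estimates H with RANSAC. The grayscale inputs and the 0.75 ratio-test threshold are assumptions for the sketch.

import cv2
import numpy as np

def estimate_homography(thermal_gray, rgb_gray):
    # Detect SIFT keypoints and descriptors in both images.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(thermal_gray, None)
    kp2, des2 = sift.detectAndCompute(rgb_gray, None)

    # Match descriptors and keep only confident matches (Lowe's ratio test).
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Estimate the homography H from the matched point pairs using RANSAC.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H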
The highest temperature ever recorded in North America was 56.7 degrees Celsius, a
record set at Greenland Ranch in Death Valley, California, on 10 July 1913. No other
location in the United States has recorded a higher temperature. The threshold
temperature, above which a reading is considered fire, is set at 65.5 degrees Celsius for
this prototype. If any object experiences long sun exposure and becomes hot, it will not
be considered on fire as long as its temperature remains below the 65.5 degrees Celsius
threshold.
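As a sketch, the per-frame threshold test can be as simple as the following, under the assumption that the thermal camera's radiometric Y16 output (captured as in [30]) encodes temperature in centikelvin:

import numpy as np

FIRE_THRESHOLD_C = 65.5  # threshold temperature used by this prototype

def contains_fire(thermal_y16: np.ndarray) -> bool:
    # Convert raw values (assumed centikelvin) to degrees Celsius, then test the peak.
    celsius = thermal_y16.astype(np.float32) / 100.0 - 273.15
    return bool((celsius > FIRE_THRESHOLD_C).any())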
One scenario could be that fire destroys the electric supply network before the pro-
posed device detects fire. To avoid such situations, multiple cameras can be interfaced with
the device to increase coverage. Moreover, we plan to reduce the power consumption of
the proposed device, as shown in Table 3, so that it can run with backup batteries in case of
main power failures.
The object and fire detection process continues at all times, even during a fire, as shown
in the firmware flowchart in Figure 8. In the firmware, the object detection runs inside an
infinite loop and repeats continuously. If an object falls, it will still be detected as long as
it is included in the object detection model. Fire generally grows gradually from a small
to a large size. While the fire on the object is less extensive, the object detector is able
to detect both the object and the fire and notify the server. However, when the fire grows
and engulfs the object, the object may become distorted or fully covered by the fire, and it
will not be recognized. In such a scenario, a notification has already been sent while the
fire on the object was less extensive.
Instead of using Bluetooth for the Wi-Fi configuration, an alternate method could be
that the IoT device creates its own Wi-Fi network (i.e., temporary access point) when it is
powered on for the first time. The smartphone of the user can then connect to this access
point and transfer Wi-Fi credentials to the device. This approach will eliminate the need for
Bluetooth in the IoT device and the smartphone. We plan to implement this in the future.
The fire scenes detected and stored as video files within the device are not transmitted
to the central server. These files can exclusively be accessed using the user’s smartphone
when it is connected to the same Wi-Fi network as the device, ensuring there are no privacy
concerns. We plan to automatically blur the human faces in the fire videos to further
address privacy concerns in the future. To ensure better security when connecting with the
central server and the server in the device for accessing videos, we plan to implement a
REST API that can authenticate by providing a username and password within an HTTP
header. Though this approach will have additional header data rather than sending fewer
raw TCP/IP data, it will help to make the system more secure.
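As an illustration, a client call to such an authenticated endpoint could look like the sketch below; the address, port, path, and credentials are made-up values, and HTTP Basic Auth simply places the username and password in the Authorization header.

import requests

# Hypothetical authenticated request to the planned REST API; the URL and
# credentials below are illustrative placeholders only.
resp = requests.get("http://192.168.1.20:8080/videos",
                    auth=("user", "secret"), timeout=5)
resp.raise_for_status()
print(resp.status_code)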

6. Conclusions
This project has produced an innovative IoT-based fire detection device that not
only identifies fires but also determines the burning object and the appropriate class of
fire extinguisher required, sending location-specific notifications to users and emergency
responders on their smartphones within a second. The device utilizes a thermal camera for
fire detection and an RGB camera with a deep learning algorithm for object recognition.
Notably, it fills a crucial gap in the existing literature by offering an automated system that
suggests the class of fire extinguisher needed. The project encompasses a fully functional
prototype of the fire detection device, a central server for emergency responders, and
successfully tested smartphone apps.

Funding: This research was funded by the Summer Research/Creative Activity (SRA) award of
Eastern Michigan University.
Data Availability Statement: The data presented in this study are available only for non-commercial
research purposes on request from the corresponding author.
Conflicts of Interest: The author declares no conflict of interest.

References
1. House Fire Statistics. Available online: https://www.thezebra.com/resources/research/house-fire-statistics/ (accessed on 30 August 2023).
2. The Reasons for Smoke Detector False Alarms. Available online: https://www.x-sense.com/blogs/tips/the-common-reasons-for-smoke-detectors-false-alarms (accessed on 30 August 2023).
3. Choosing and Using Fire Extinguishers. Available online: https://www.usfa.fema.gov/prevention/home-fires/prepare-for-fire/fire-extinguishers/ (accessed on 30 August 2023).
4. Different Types of Fire Extinguishers for Each Kind of Fire. Available online: https://weeklysafety.com/blog/fire-extinguisher-types/ (accessed on 30 August 2023).
5. Nest Protect Smoke and CO Alarm. Available online: https://store.google.com/product/nest_protect_2nd_gen?hl=en-US (accessed on 1 September 2023).
6. Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625. [CrossRef]
7. Pincott, J.; Tien, P.W.; Wei, S.; Calautit, J.K. Indoor fire detection utilizing computer vision-based strategies. J. Build. Eng. 2022, 61, 105154. [CrossRef]
8. Samarth, G.; Bhowmik, C.A.N.; Breckon, T.P. Experimental Exploration of Compact Convolutional Neural Network Architectures for Non-Temporal Real-Time Fire Detection. In Proceedings of the 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), Boca Raton, FL, USA, 16–19 December 2019; pp. 653–658.
9. Celik, T. Fast and Efficient Method for Fire Detection Using Image Processing. ETRI J. 2010, 32, 881–890. [CrossRef]
10. Çelik, T.; Özkaramanlı, H.; Demirel, H. Fire and smoke detection without sensors: Image processing based approach. In Proceedings of the 2007 15th European Signal Processing Conference, Poznan, Poland, 3–7 September 2007; pp. 1794–1798.
11. Ma, Y.; Feng, X.; Jiao, J.; Peng, Z.; Qian, S.; Xue, H.; Li, H. Smart Fire Alarm System with Person Detection and Thermal Camera. In ICCS 2020, Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12143, pp. 353–366.
12. Jetson Nano Developer Kit. Available online: https://developer.nvidia.com/embedded/jetson-nano-developer-kit (accessed on 6 September 2023).
13. iLAND Dollhouse Furniture and Accessories. Available online: https://www.amazon.com/Dollhouse-Furniture-Accessories-Bookshelves-Decorations/dp/B09QM4WMDP?th=1 (accessed on 6 September 2023).
14. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9905, pp. 21–37.
15. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
16. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2014; Volume 8693, pp. 740–755.
17. COCO Dataset. Available online: https://cocodataset.org/ (accessed on 5 September 2023).
18. SSD COCO Class Labels. Available online: https://github.com/dusty-nv/jetson-inference/blob/master/data/networks/ssd_coco_labels.txt (accessed on 5 September 2023).
19. Babbar, G.; Bajaj, R. Homography Theories Used for Image Mapping: A Review. In Proceedings of the 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 13–14 October 2022; pp. 1–5.
20. Feature Matching + Homography to Find Objects. Available online: https://docs.opencv.org/3.4/d1/de0/tutorial_py_feature_homography.html (accessed on 7 September 2023).
21. Homography Examples Using OpenCV (Python/C++). Available online: https://learnopencv.com/homography-examples-using-opencv-python-c/ (accessed on 7 September 2023).
22. FLIR Lepton 2.5—Thermal Imaging Module. Available online: https://www.sparkfun.com/products/16465 (accessed on 13 September 2023).
23. PureThermal 2 FLIR Lepton Smart I/O Module. Available online: https://www.digikey.com/en/products/detail/groupgets-llc/PURETHERMAL-2/9866290 (accessed on 13 September 2023).
24. Waveshare 8MP IMX219-77 Camera Compatible with NVIDIA Jetson Nano Developer Kit. Available online: https://www.amazon.com/IMX219-77-Camera-Developer-Resolution-Megapixels/dp/B07S2QDT4V (accessed on 13 September 2023).
25. Wireless NIC Module for Jetson Nano. Available online: https://www.amazon.com/Wireless-AC8265-Wireless-Developer-Support-Bluetooth/dp/B07V9B5C6M/ (accessed on 13 September 2023).
26. Bluetooth Device Configure. Available online: https://manpages.ubuntu.com/manpages/trusty/man8/hciconfig.8.html (accessed on 13 September 2023).
27. PyBluez. Available online: https://pybluez.readthedocs.io/en/latest/ (accessed on 13 September 2023).
28. Wi-Fi Wrapper Library. Available online: https://pypi.org/project/wifi-wrapper/ (accessed on 13 September 2023).
29. C++/CUDA/Python Multimedia Utilities for NVIDIA Jetson. Available online: https://github.com/dusty-nv/jetson-utils (accessed on 14 September 2023).
30. Boson Video and Image Capture Using OpenCV 16-Bit Y16. Available online: https://flir.custhelp.com/app/answers/detail/a_id/3387/~/boson-video-and-image-capture-using-opencv-16-bit-y16 (accessed on 14 September 2023).
31. Locating Objects with DetectNet. Available online: https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-console-2.md#pre-trained-detection-models-available (accessed on 14 September 2023).
32. Image Manipulation with CUDA. Available online: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-image.md (accessed on 14 September 2023).
33. Jetson Inference Library Documentation. Available online: https://rawgit.com/dusty-nv/jetson-inference/master/docs/html/python/jetson.inference.html#detectNet (accessed on 14 September 2023).
34. OpenCV Contours. Available online: https://docs.opencv.org/3.4/d4/d73/tutorial_py_contours_begin.html (accessed on 14 September 2023).
35. Socket—Low-Level Networking Interface. Available online: https://docs.python.org/3/library/socket.html (accessed on 14 September 2023).
36. Date and Time Library. Available online: https://docs.python.org/3/library/datetime.html (accessed on 14 September 2023).
37. HTTP Server. Available online: https://docs.python.org/3/library/http.server.html (accessed on 14 September 2023).
38. SQL Server 2022 Express. Available online: https://www.microsoft.com/en-us/sql-server/sql-server-downloads (accessed on 18 September 2023).
39. Android Identifiers. Available online: https://developer.android.com/training/articles/user-data-ids (accessed on 18 September 2023).
40. FCM Registration Token. Available online: https://firebase.google.com/docs/cloud-messaging/manage-tokens#ensuring-registration-token-freshness (accessed on 18 September 2023).
41. TCP Server. Available online: https://www.codeproject.com/articles/488668/csharp-tcp-server (accessed on 18 September 2023).
42. How to Port Forward. Available online: https://www.noip.com/support/knowledgebase/general-port-forwarding-guide/ (accessed on 18 September 2023).
43. How Do I Open a Port on Windows Firewall? Available online: https://www.howtogeek.com/394735/how-do-i-open-a-port-on-windows-firewall/ (accessed on 18 September 2023).
44. Thompson, B. C# Database Connection: How to Connect SQL Server. Available online: https://www.guru99.com/c-sharp-access-database.html (accessed on 18 September 2023).
45. GMap.NET—Maps for Windows. Available online: https://github.com/judero01col/GMap.NET (accessed on 18 September 2023).
46. FcmSharp. Available online: https://github.com/bytefish/FcmSharp (accessed on 18 September 2023).
47. GeoLocation. Available online: https://www.b4x.com/android/forum/threads/geolocation.99710/#content (accessed on 18 September 2023).
48. Firebase Cloud Messaging. Available online: https://firebase.google.com/docs/cloud-messaging (accessed on 18 September 2023).
49. FirebaseNotifications—Push Messages/Firebase Cloud Messaging (FCM). Available online: https://www.b4x.com/android/forum/threads/b4x-firebase-push-notifications-2023.148715/ (accessed on 18 September 2023).
50. TensorFlow 1 Detection Model Zoo. Available online: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md#coco-trained-models (accessed on 15 November 2023).
51. Jetson-Stats. Available online: https://rnext.it/jetson_stats/ (accessed on 5 October 2023).
52. The SIFT Keypoint Detector. Available online: https://www.cs.ubc.ca/~lowe/keypoints/ (accessed on 11 October 2023).

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
