Phase 2 Report
VISVESVARAYA TECHNOLOGICAL UNIVERSITY, BELAGAVI
Submitted by:
S VINAY KUMAR (1KN17IS044) RAJIV SAH (1KN17IS027)
Under the guidance of:
Prof. Akhila
Asst. Prof., Dept. of CSE
KNSIT, Bengaluru
2020-2021
K.N.S INSTITUTE OF TECHNOLOGY
DEPARTMENT OF COMPUTER SCIENCE/INFORMATION SCIENCE AND
ENGINEERING
Hegde-Nagar-Kogilu Road, Thirumenahalli, Yelahanka, Bengaluru - 560064.
CERTIFICATE
This is to certify that the project work entitled “IOT SENSOR AND DEEP NEURAL NETWORK BASED WILDFIRE AND ANIMAL PREDICTION SYSTEM” is a bonafide work carried out by S VINAY KUMAR (1KN17IS044), SUHALUDDIN (1KN17IS039), RAJIV SAH (1KN17IS027) & MOHAMME AHAD (1KN17IS020), bonafide students of KNS Institute of Technology, Bengaluru, in partial fulfillment for the award of the degree of Bachelor of Engineering in Computer Science/Information Science and Engineering of the Visvesvaraya Technological University, Belagavi, during the year 2020-21. It is certified that all corrections/suggestions indicated for the internal assessment have been incorporated in the report deposited in the department library. The project report has been approved as it satisfies the academic requirements in respect of the project work prescribed for the Bachelor of Engineering degree.
External Viva
Name of the Examiners Signature with Date
1……………………………… 1 ………………………….
2……………………………… 2………………………….
ACKNOWLEDGEMENT
It is with great satisfaction and euphoria that we submit the project report on “IOT SENSOR AND DEEP NEURAL NETWORK BASED WILDFIRE AND ANIMAL PREDICTION SYSTEM”, which we completed as part of the curriculum of our university.
We express our sincere thanks to Dr. S.M Prakash, Principal, KNS Institute of Technology, Bengaluru, for providing the necessary facilities and motivation to carry out the project work successfully.
We would like to express our heartfelt gratitude and humble thanks to Mr. Mohamed Shakir, Head of the Department, Department of Computer Science and Engineering, for the constant encouragement, inspiration, and help to carry out the project work successfully.
We would like to thank our guide, Prof. Akhila, Asst. Prof., Dept. of ISE, for her assistance and support in helping us understand and complete the project.
We are thankful to all the teaching and non-teaching staff members of the Department of Computer Science/Information Science and Engineering for the help and support rendered throughout our academics.
SUHAIL (1KN17IS039)
ABSTRACT
Intrusion of wildlife has proved to be destructive for both human beings and animals. The incompatibility between humans and wildlife is the major cause of crop damage and of injuries to both humans and animals. In this system we have put forth wildlife-intrusion monitoring using IoT. The wildlife is captured using a camera. A message notification, along with an alarm, is sent to the forest officials indicating that an animal has been detected at the forest borders and is fast approaching human habitats. The proposed system also focuses on atmospheric monitoring and therefore overcomes the drawbacks of the existing system. Thus, we have refined a prototype model that allows persistent detection and monitoring.
As humans advance in technology, man-made and natural disasters are increasing exponentially. One of the most dangerous is the forest fire. Forest fires destroy trees, which give us oxygen, and it is very difficult to stop a forest fire from spreading if it is not detected early. Our method is to detect a forest fire as early as possible and also to predict a forest fire in advance, so that prompt action can be taken before the fire destroys and spreads over a large area. We also find the rate of spread of fire in all directions so that the necessary action can be initiated. Another problem is deforestation, where humans cut trees in restricted areas, so that wild animals from the forest enter human habitation and cause problems. In this project we discuss solutions to overcome these problems.
CHAPTER 1
INTRODUCTION
1.1 Overview
Monitoring animals in the wild without disturbing them is possible using the camera-trapping framework, a technique for studying wildlife using automatically triggered cameras that produces great volumes of data. However, camera trapping often collects images of low quality and includes a lot of false positives (images without animals), which must be detected before the post-processing step. A two-channeled perceiving residual pyramid network (TPRPN) has been proposed for detection in camera-trap images. The TPRPN model aims at generating high-resolution, high-quality results. In order to provide enough local information, a depth cue is extracted from the original images, and the two-channeled perceiving model is used as input to train the networks. Finally, the proposed three-layer residual blocks learn to merge all the information and generate full-size detection results. Besides, a new high-quality dataset was constructed with the help of Wildlife Thailand's Community and the eMammal Organization, and experimental results on that dataset demonstrate that the method is superior to existing object-detection methods.
The rapid increase in human population has led to the conversion of forest land into human settlements. Due to this, wild animals face a lack of food and water. Wildlife is greatly distressed by deforestation, which forces animals to move into human habitats, creating tremendous loss of property and lives. The Times of India has reported that over 1300 people died due to tiger and elephant attacks in India over the past three years. Thus, humans face serious danger, and recovering from such huge losses takes a long time. Human-animal interaction can prove to be a crisis for both species, and therefore there is a need for an intelligent supervision and perception system. Human-animal conflict has increased to a great extent; contributing factors include elephant habitat structure, weather, animal life, etc. Forest fire is an important hazard that occurs periodically due to natural changes, human activities, and other factors. In contemporary years there has been a persistent increase in forest fires, which cause damage to crops and wildlife as well as to humans. Therefore, a wireless sensor network is used for forest-fire detection, to achieve high detection accuracy in early detection. The approach targets detecting animals and sending cautionary messages using GSM and an alarm. The humidity of the forest is measured and maintained. The main aim of our work is to alert the people in and around the forest borders and to protect their lives.
Our wildlife population is increasingly threatened because human behavior is changing the natural
system through aggressive resource acquisition and landscape changes. In addition, the urbanization
of our society has reduced the interaction between humans and wildlife, and many outdoor
recreation activities have decreased in popularity. As a result of this problem, our society has caused
more problems for wildlife, while also reducing the focus on wildlife species and natural ecosystems.
This has created a major barrier to effective management of natural resources and wildlife
conservation. Studying and monitoring wildlife can be achieved by means of non-invasive sampling
techniques such as the camera trapping approach [14, 4, 8]. This method captures digital images of
wild animals, using small devices composed of a digital camera and a passive infrared sensor.
Camera trapping helps the biologist to sample animal populations and to observe species for
conservation purposes, e.g. delineating species distributions, monitoring animal behavior, and
detecting rare species.
1.3 Objectives:
Detect fire using a fire sensor and intimate the concerned person through a message.
CHAPTER 2
LITERATURE SURVEY
Content-based Retrieval and Real Time Detection from Video Sequences Acquired by Surveillance Systems
In this work, a surveillance system devoted to detecting abandoned objects in unattended environments is presented, to which image-processing content-based retrieval capabilities have been added to make the operators' inspection task easier. Video-based surveillance systems generally employ one or more cameras connected to a set of monitors. This kind of system needs the presence of a human operator, who interprets the acquired information and controls the evolution of events in the surveyed environment. In recent years, efforts have been made to develop systems supporting human operators in their surveillance task, in order to focus the operators' attention when unusual situations are detected. Image-sequence databases are also managed by the proposed surveillance system, in order to provide operators with the possibility of later retrieving the interesting sequences that may contain useful information for discovering the causes of an alarm. Experimental results are shown in terms of the probability of correct detection of abandoned objects, together with examples of the retrieved sequences.
The goal of the event-detection module is to alert the operator when a dangerous situation is detected and to provide the possibility of recovering the image in which a person leaves an object in the surveyed area. This possibility is given by means of the image-processing content-based retrieval capabilities added to the surveillance system. Image-processing capabilities allow complex events, implying the detection of both long-term changes (corresponding to abandoned objects) and patterns moving in the scene near detected changes (corresponding to possible people leaving objects), to be jointly detected. These complex events can be used both for on-line and for off-line retrieval of potentially dangerous situations together with their causes. The retrieval is performed on the basis of object classification. Recognition is executed by means of geometric features usually used in the literature for recognition purposes, for classifying objects remaining in the same position for a long time, and by means of a feature which describes the movement around the detected changes. The concept is that there is a moving person in proximity to the position of an abandoned object at the instant at which the object is left. More precisely, taking into account that meaningful changes are detected by the change-detection module after 16 frames starting from the moment X at which the object is probably abandoned, it is interesting to study the movement in a temporal slot. The movement is studied within a set of pixels belonging to a region surrounding the abandoned object. A pixel belongs to a moving object if it differs from both the background and the previously acquired image. The movement feature is computed by summing the number of pixels belonging to moving objects and by normalizing the sum with respect to the size of the analyzed region. Performance of the proposed system is measured in terms of false-alarm and misdetection errors in classifying detected objects. Classification results are shown for a sequence acquired in the laboratory, simulating a waiting-room environment; Tables 3 and 4 show the results obtained from sequences acquired in the waiting rooms of two Italian railway stations (Genova-Rivarolo and Genova-Borzoli, respectively). A false alarm occurs whenever a change not related to an abandoned object is classified as an abandoned object; a misdetection occurs whenever an abandoned object is not classified as one. On the basis of these definitions, it is possible to obtain the performance of the system in the different considered environments.
Robust Real-Time Periodic Motion Detection
This work describes techniques to detect and analyze periodic motion as seen from both a static and a moving camera. The motion is segmented and objects in the foreground are tracked; each object is then aligned along the temporal axis (using the object's tracking results) and the object's self-similarity is computed as it evolves in time. For periodic motion, the self-similarity metric is also periodic, and Time-Frequency analysis is applied to detect and characterize the periodic motion. The periodicity is also analyzed robustly using the 2D lattice structures inherent in similarity matrices. Future work includes using alternative independent-motion algorithms for moving-camera video, which could make the analysis more robust to non-homogeneous backgrounds in the case of a moving camera. Further use of the symmetries of motion for classifying additional types of periodic motion is also being investigated.
This approach may give false alerts, but it does not skip a genuine indication of motion, and compared to other techniques it is a less time-consuming process. Many smoothing filters can be used to remove noise from an image; commonly used ones include the convolution filter, the Gaussian filter, and the first- and second-derivative Gaussian filters. A correlation-coefficient method is used to check the similarity between two images; it works by pixel-based comparison, and the value of the coefficient lies between 0 and 1. If the coefficient is 0, the two images differ in pixel value at every location; if it is 1, the two images have the same pixel value at every location. Motion-detection algorithms fall mainly into two classes: pixel-based and region-based. Pixel-based motion detection is based on binary differences, employing a local model of intensity, and is used in real-time applications. Region-based motion detection is based on the spatial dependencies of neighbouring pixel colors, which makes the result more robust to false alarms; region-based algorithms include special-point detection, block-matching algorithms, etc.
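As an illustration of the pixel-based similarity check described above, the following minimal Python/OpenCV sketch (not from the surveyed paper; file names are placeholders) computes a normalized correlation coefficient between two frames:

import cv2
import numpy as np

def correlation_coefficient(img_a, img_b):
    # compare grayscale versions pixel by pixel
    a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY).astype(np.float64)
    b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY).astype(np.float64)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 1.0

frame1 = cv2.imread("frame1.jpg")  # placeholder file names
frame2 = cv2.imread("frame2.jpg")
print("similarity:", correlation_coefficient(frame1, frame2))

A coefficient near 1 means the frames match pixel for pixel, while a value near 0 indicates they differ.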
An efficient and convenient motion-detection surveillance system is proposed in this work. The system captures images only when the motion exceeds a certain threshold that is preset in the system. It thus reduces the volume of data that needs to be reviewed and is therefore a more convenient way of monitoring the environment, especially with the increasing demand for multi-camera setups. It also helps save storage space by not capturing static images, which usually do not contain the object of interest. It is applicable to both office and home use. After successful implementation, the project can be applied to motion detection for smart home-security systems, which would be very helpful for automatic theft detection; it can also be useful in banks, museums, and streets at midnight. As future work, the alert system can be improved: instead of an alarm, an SMS or email alert containing the moving object can be used. There is only a small (3%) chance of skipping a detection. There may be some false detections due to illumination effects, which can be overcome for better performance.
Motion Detection for Security Surveillance
This paper deals with the design and implementation of a smart surveillance monitoring system using a Raspberry Pi and a CCTV camera. The design is a small, portable monitoring system for home and college security. When motion is detected, the Raspberry Pi controls the Raspberry Pi camera to take a picture and send the image to the user, according to a program written in the Python environment. The proposed home-security system captures information and transmits it via the Raspberry Pi to a PC. The Raspberry Pi operates and controls the motion detectors and the CCTV camera for remote sensing and surveillance, streams live video, and records it for future playback. Python software plays an important role in this project. Motion-detection systems are a necessity in modern times. Although some people object to the idea of being watched, surveillance systems actually improve the level of public security, allowing the system operators to detect threats and the security forces to react in time. Surveillance systems have evolved in recent years from simple CCTV systems into complex structures containing numerous cameras and advanced monitoring centres equipped with sophisticated hardware and software. However, the future of surveillance systems belongs to automatic tools that assist the system operator and notify him of detected security threats. This is important because, in complex systems consisting of tens or hundreds of cameras, the operator is not able to notice all the events.
The system uses a Raspberry Pi Model B, memory, an LCD display screen, a DC servo motor, a CCTV camera, and a power supply. The power supply is connected to the Raspberry Pi Model B, the Raspberry Pi is connected to the DC servo motor, and the DC servo motor is connected to the CCTV camera. The CCTV camera captures images of obstacles, such as students in a college, and is connected to the Raspberry Pi Model B; the memory and the LCD display are also connected to it. Python software is used to program the Raspberry Pi. After the power supply is switched on, the Raspberry Pi Model B is initialized, and power is supplied through it to the DC servo motor, which rotates right or left depending on the object. After the CCTV camera captures images, they are passed to the Raspberry Pi Model B. Memory plays an important role in this project for storing the images, and the required memory depends on the duration of camera use. The stored data or images are displayed on the LCD display screen.
Surveillance systems are widely used to monitor threats using CCTV cameras, to prevent criminal activity in airports, subway stations, and large shopping malls, for example. Surveillance systems are continually being developed to address such security issues, but a more accurate automated system is needed.
This project has set out a vision of obstacle detection for cases such as aircraft on the ground and in the air. The main purpose of the system is to detect obstacles and store the captured images on a computer; the system is useful in colleges and in traffic areas. The main aim is to develop a system for motion detection and storage of the detected images, with the goals of good accuracy, low cost, better performance, and low power consumption. Using this motion detection for security surveillance, some incidents can be avoided. The Raspberry Pi Camera Module is a custom-designed add-on for the Raspberry Pi. It attaches to the Raspberry Pi by way of one of the two small sockets on the board's upper surface. This interface uses the dedicated CSI interface, which was designed especially for interfacing with cameras. The CSI bus is capable of extremely high data rates, and it exclusively carries pixel data. The board itself is tiny, at around 25 mm x 20 mm x 9 mm, and it weighs just over 3 g, making it perfect for mobile or other applications where size and weight are important. It connects to the Raspberry Pi by way of a short ribbon cable. The camera is connected to the BCM2835 processor on the Pi via the CSI bus, a higher-bandwidth link which carries pixel data from the camera back to the processor. This bus travels along the ribbon cable that attaches the camera board to the Pi.
CHAPTER 3
Animal Detection
Arduino
Embedded C
FUNCTIONAL REQUIREMENTS:
The camera is used to take video and capture the face of the animal, and the animal's information is stored in the database.
The required data is stored in the Python database, and the information is retrieved according to the chosen animal name.
NON-FUNCTIONAL REQUIREMENTS:
The system should be reliable.
The system should be easy to implement.
The cost of implementation should be low.
Arduino UNO
The Arduino Uno is a microcontroller board based on the ATmega328. It has 14 digital
input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz
crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. It
contains everything needed to support the microcontroller; simply connect it to a
computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. The Uno differs from all preceding boards in that it does not use the FTDI USB-to-serial driver chip. Instead, it features the ATmega8U2 programmed as a USB-to-serial converter. "Uno" means one in Italian and was named to mark the release of Arduino 1.0. The Uno and version 1.0 will be the reference versions of Arduino, moving forward. The Uno is the latest in a series of USB Arduino boards, and the reference model
for the Arduino platform.
Technical Specifications
Microcontroller ATmega328
Operating Voltage 5V
Input Voltage (recommended) 7-12V
Input Voltage (limits) 6-20V
Digital I/O Pins 14 (of which 6 provide PWM output)
Analog Input Pins 6
DC Current per I/O Pin 40 mA
DC Current for 3.3V Pin 50 mA
Flash Memory 32 KB of which 0.5 KB used by boot loader
SRAM 2 KB
EEPROM 1 KB
POWER
The Arduino Uno can be powered via the USB connection or with an external power
supply. The power source is selected automatically. External (non-USB) power can
come either from an AC-to-DC adapter (wall-wart) or a battery. The adapter can be connected by plugging a 2.1 mm center-positive plug into the board's power jack. Leads from a battery can be inserted in the GND and VIN pin headers of the POWER connector.
The board can operate on an external supply of 6 to 20 volts. If supplied with less than
7V, however, the 5V pin may supply less than five volts and the board may be unstable.
If using more than 12V, the voltage regulator may overheat and damage the board. The
recommended range is 7 to 12 volts. The power pins are as follows:
VIN. The input voltage to the Arduino board when it's using an external power
source (as opposed to 5 volts from the USB connection or other regulated power
source). You can supply voltage through this pin, or, if supplying voltage via the
power jack, access it through this pin.
5V. The regulated power supply used to power the microcontroller and other
components on the board. This can come either from VIN via an on-board
regulator, or be supplied by USB or another regulated 5V supply.
3V3. A 3.3 volt supply generated by the on-board regulator. Maximum current
draw is 50 mA.
GND. Ground pins.
The ATmega328 has 32 KB of flash memory for storing code (of which 0.5 KB is used for the boot loader); it also has 2 KB of SRAM and 1 KB of EEPROM (which can be read and written with the EEPROM library). Each of the 14 digital pins on the Uno can be used as an input or output, using the pinMode(), digitalWrite(), and digitalRead() functions. They operate at 5 volts. Each pin can provide or receive a maximum of 40 mA and has an internal pull-up resistor (disconnected by default) of 20-50 kOhms. In addition, some pins have specialized functions:
Serial: 0 (RX) and 1 (TX). Used to receive (RX) and transmit (TX) TTL serial data. These pins are connected to the corresponding pins of the ATmega8U2 USB-to-TTL serial chip.
Fire Sensor:
A fire sensor is used to detect the presence of fire within its range. A fire emits heat waves,
which are also called infrared rays. The sensor consists of an IR receiver, an LM393 comparator,
resistors, capacitors and a potentiometer to adjust its sensitivity. The infrared waves of
700nm to 1000nm wavelength can be readily detected by this sensor. The IR receiver
converts the wave intensity to corresponding current value. It can give analog as well as
digital outputs. The sensor has a detection angle of 60 degrees in the forward direction. A
voltage of 3.3V to 5.2V can be used to power the sensor.
Water Pump:
This is a low-cost, small submersible pump motor that can be operated from a 3-6 V power supply, suitable for fountains, gardens, and DIY water-circulation systems. It can pump up to 120 liters per hour with a very low current consumption of 220 mA. Just connect a tube to the motor outlet, submerge the pump in water, and power it. Make sure that the water level is always higher than the motor; dry running may damage the motor due to heating, and it will also produce noise.
3.4.2 OpenCV
OpenCV (Open Source Computer Vision) is a library of functions mainly aimed at real-time computer vision. With OpenCV one can perform face detection using a pre-trained deep-learning face-detection model which is shipped with the library. OpenCV is written in C++ and its primary interface is in C++, but it still retains a less comprehensive though extensive older C interface.
OpenCV-Python
Python is a general-purpose programming language started by Guido van Rossum, which became very popular in a short time mainly because of its simplicity and code readability. It enables the programmer to express ideas in fewer lines of code without reducing readability.
Compared to other languages like C/C++, Python is slower. But another important feature of Python is that it can be easily extended with C/C++. This feature helps us write computationally intensive code in C/C++ and create a Python wrapper for it, so that we can use the wrapper as a Python module. This gives us two advantages: first, our code is as fast as the original C/C++ code (since it is the actual C++ code working in the background), and second, it is very easy to code in Python. This is how OpenCV-Python works: it is a Python wrapper around the original C++ implementation.
The support of NumPy makes the task even easier. NumPy is a highly optimized library for numerical operations, and it gives a MATLAB-style syntax. All the OpenCV array structures are converted to and from NumPy arrays. So whatever operations you can do in NumPy, you can combine with OpenCV, which increases the number of weapons in your arsenal. Besides that, several other libraries that support NumPy, such as SciPy and Matplotlib, can be used with it.
So OpenCV-Python is an appropriate tool for fast prototyping of computer vision problems.
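As a small illustration of this point (a sketch, assuming an image file named sample.jpg), an OpenCV image in Python is a plain NumPy array, so NumPy and OpenCV operations can be mixed freely:

import cv2
import numpy as np

img = cv2.imread("sample.jpg")          # placeholder file name
print(type(img), img.shape, img.dtype)  # <class 'numpy.ndarray'> (h, w, 3) uint8
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                             # OpenCV operation
brighter = np.clip(gray.astype(np.int16) + 40, 0, 255).astype(np.uint8)  # NumPy operation
edges = cv2.Canny(brighter, 100, 200)                                    # back to OpenCV
cv2.imwrite("edges.jpg", edges)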
Fire Sensor:
The fire sensor is used to detect fire flames. The module makes use of a fire sensor and a comparator to detect fire up to a range of 1 meter.
Feature:
BUZZER:
to learn new things. Anyone - children, hobbyists, artists, programmers - can start tinkering just by following the step-by-step instructions of a kit, or by sharing ideas online with other members of the Arduino community.
There are many other microcontrollers and microcontroller platforms available for physical
computing. Parallax Basic Stamp, Netmedia's BX-24, Phidgets, MIT's Handyboard, and
many others offer similar functionality. All of these tools take the messy details of
microcontroller programming and wrap it up in an easy-to-use package. Arduino also
simplifies the process of working with microcontrollers, but it offers some advantage for
teachers, students, and interested amateurs over other systems:
• Inexpensive - Arduino boards are relatively inexpensive compared to other microcontroller platforms. The least expensive version of the Arduino module can be assembled by hand, and even the pre-assembled Arduino modules cost less than $50.
• Cross-platform - The Arduino Software (IDE) runs on Windows, Macintosh OS X, and Linux operating systems. Most microcontroller systems are limited to Windows.
• Simple, clear programming environment - The Arduino Software (IDE) is easy to use for beginners, yet flexible enough for advanced users to take advantage of as well.
• Open source and extensible software - The Arduino software is published as open source
tools, available for extension by experienced programmers. The language can be expanded
through C++ libraries, and people wanting to understand the technical details can make the
leap from Arduino to the AVR C programming language on which it's based. Similarly, you
can add AVR-C code directly into your Arduino programs if you want to.
• Open source and extensible hardware - The plans of the Arduino boards are published under
a Creative Commons license, so experienced circuit designers can make their own version of
the module, extending it and improving it. Even relatively inexperienced users can build the
breadboard version of the module in order to understand how it works and save money.
ADVANTAGES:
1. Early detection of forest fire through the different readings taken by the sensors.
2. Wildfire causes the death of many animals; this system will help save the lives of animals by controlling fire that has been spreading in the forest. The developed forest-fire surveillance system consists of WSNs.
3. Continuous checking of whether a forest fire has broken out: the system reduces manual work by directly reporting the outbreak of fire in any region of the forest.
Applications:
CHAPTER 4
Design
The proposed system includes five modules. The initial stage is the image-acquisition stage, through which the real-world sample is recorded in its digital form using a digital camera.
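A minimal sketch of this image-acquisition stage in Python/OpenCV (the camera index and output file name are assumptions):

import cv2

cam = cv2.VideoCapture(0)   # open the digital camera (device index assumed)
ok, frame = cam.read()      # record one real-world sample in digital form
if ok:
    cv2.imwrite("sample.jpg", frame)
cam.release()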
DFD LEVEL:
Fire Monitoring:
A use case diagram at its simplest is a representation of a user's interaction with the system
that shows the relationship between the user and the different use cases in which the user is
involved. A use case diagram can identify the different types of users of a system and the
different use cases and will often be accompanied by other types of diagrams as well. While
a use case itself might drill into a lot of detail about every possibility, a use case diagram
can help provide a higher-level view of the system. It has been said before that "Use case
diagrams are the blueprints for your system". They provide the simplified and graphical
representation of what the system must actually do.
Fire Detection:
In this stage, animal images are given as input through a digital camera and stored on the hard disk. The color image is converted to a grayscale image to make the image device independent. The image is then resized to 256x256, and median filtering is performed on the image to remove noise.
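A minimal sketch of these preprocessing steps (the input file name and the 5x5 median kernel are assumptions; the report does not specify the kernel size):

import cv2

img = cv2.imread("animal.jpg")                 # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # device-independent grayscale
resized = cv2.resize(gray, (256, 256))         # fixed 256x256 size
denoised = cv2.medianBlur(resized, 5)          # median filter for noise removal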
Segmentation
Image segmentation is the third step in our proposed method. The segmented images are clustered into different sectors using an Otsu classifier and the k-means clustering algorithm. Image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels).
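A minimal sketch of the two clustering approaches named above, applied to the preprocessed grayscale image (k = 3 is an assumption; the report does not state the number of clusters):

import cv2
import numpy as np

denoised = cv2.medianBlur(cv2.imread("animal.jpg", cv2.IMREAD_GRAYSCALE), 5)

# 1. Otsu thresholding: picks a foreground/background threshold automatically.
_, otsu_mask = cv2.threshold(denoised, 0, 255,
    cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2. k-means clustering of pixel intensities into k sectors.
k = 3
pixels = denoised.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
    cv2.KMEANS_RANDOM_CENTERS)
segmented = centers[labels.flatten()].reshape(denoised.shape).astype(np.uint8)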
In the proposed approach, the method adopted for extracting the feature set is called the Color Co-occurrence Matrix (CCM) method. It is a method in which both the color and the texture of an image are taken into account to arrive at unique features that represent the image.
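The report does not show its CCM implementation; as an approximation, the sketch below computes the closely related grey-level co-occurrence features with scikit-image (graycomatrix is spelled greycomatrix in older scikit-image versions):

import numpy as np
from skimage.feature import graycomatrix, graycoprops

gray = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image

glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
    levels=256, symmetric=True, normed=True)
features = {prop: float(graycoprops(glcm, prop).mean())
    for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)  # texture features; computed per color channel, these would form a CCM-style descriptor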
CNN architectures vary with the type of problem at hand. The proposed model consists of three convolutional layers, each followed by a max-pooling layer. The final layer is a fully connected MLP (multi-layer perceptron). The ReLU activation function is applied to the output of every convolutional layer and fully connected layer.
The first convolutional layer filters the input image with 32 kernels of size 3x3. After max-
pooling is applied, the output is given as an input for the second convolutional layer with 64
kernels of size 4x4.
The last convolutional layer has 128 kernels of size 1x1 followed by a fully connected layer
of 512 neurons. The output of this layer is given to Softmax function which produces a
probability distribution of the four output classes.
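A sketch of the described architecture in Keras (the 256x256 grayscale input size is carried over from the preprocessing section; the framework choice is an assumption, as the report does not name one):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(256, 256, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (4, 4), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (1, 1), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dense(4, activation="softmax"),  # probability distribution over the 4 classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
    metrics=["accuracy"])
model.summary()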
CHAPTER 5
To obtain (correct) predictions from deep neural networks, you first need to preprocess your data. In the context of deep learning and image classification, these preprocessing tasks normally involve:
1. Mean subtraction
2. Scaling by some factor
OpenCV’s new deep neural network ( dnn ) module contains two functions that can be used
for preprocessing images and preparing them for classification via pre-trained deep learning
models.
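For example, cv2.dnn.blobFromImage() performs the mean subtraction and scaling in a single call (the mean values and input size below are illustrative, not the report's exact settings):

import cv2

image = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(image,
    scalefactor=1.0,             # no extra scaling
    size=(300, 300),             # spatial size the network expects
    mean=(104.0, 117.0, 123.0))  # per-channel mean subtraction
print(blob.shape)  # (1, 3, 300, 300): a blob ready for net.setInput()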
5.2 Algorithm:
Comparing the current captured frames with previous frames to detect motion: to check whether any motion is present in the live images, we compare the live images provided by the webcam with each other, so that we can detect changes between frames and hence infer the occurrence of motion.
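A minimal sketch of this frame-comparison idea (the pixel and area thresholds are assumptions):

import cv2

cap = cv2.VideoCapture(0)
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                     # change between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:                  # preset motion threshold
        print("motion detected")
    prev = gray
    cv2.imshow("motion", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()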
PRE-PROCESSING:
Pre-processing is heavily dependent on the feature-extraction method and the input image type. Some common methods are:
Image Segmentation:
In image research and applications, we are often interested only in certain parts of an image. These parts are often referred to as targets or foreground (the other parts being the background). In order to identify and analyze the targets in an image, we need to isolate them from the image. Image segmentation refers to dividing the image into regions, each with its own characteristics, and extracting the targets of interest in the process.
int fire = 6;  // fire sensor digital output pin
int buzz = 5;  // buzzer pin (driven LOW to sound)
int pump = 9;  // water pump relay pin

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  pinMode(fire, INPUT);
  pinMode(pump, OUTPUT);
  pinMode(buzz, OUTPUT);
  digitalWrite(buzz, HIGH);  // buzzer off initially
}

void loop() {
  // put your main code here, to run repeatedly:
  if (digitalRead(fire) == LOW)  // the sensor pulls its output LOW on flame
  {
    Serial.println("fire detected");
    delay(1000);
    digitalWrite(buzz, LOW);   // sound the alarm
    delay(2000);
    digitalWrite(buzz, HIGH);
    delay(2000);
    digitalWrite(pump, HIGH);  // run the water pump briefly
    delay(2000);
    digitalWrite(pump, LOW);
    delay(2000);
  }
  else
  {
    // the else branch was truncated in the original listing; this assumed
    // completion mirrors the second version of the sketch below
    Serial.println("fire not detected");
    delay(1000);
    digitalWrite(buzz, HIGH);
  }
}
// Second version of the sketch: in addition to the local alarm, a framed
// "$...#" status message is forwarded over a second serial link
// (SERVERSerial) to the ESP8266 Telegram gateway shown below.
#include <SoftwareSerial.h>

int fire = 6;
int buzz = 5;
int pump = 9;
// The SERVERSerial declaration is missing from the original listing; a
// software serial port on pins 2 (RX) and 3 (TX) is assumed here.
SoftwareSerial SERVERSerial(2, 3);

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  pinMode(fire, INPUT);
  pinMode(pump, OUTPUT);
  pinMode(buzz, OUTPUT);
  digitalWrite(buzz, HIGH);
  delay(1000);
  SERVERSerial.begin(9600);
}

void loop() {
  // put your main code here, to run repeatedly:
  if (digitalRead(fire) == LOW)
  {
    Serial.println("fire detected");
    SERVERSerial.println("$fire detected#");  // framed message for the gateway
    delay(1000);
    digitalWrite(buzz, LOW);
    delay(2000);
    digitalWrite(buzz, HIGH);
    delay(2000);
    digitalWrite(pump, HIGH);
    delay(2000);
    digitalWrite(pump, LOW);
    delay(2000);
  }
  else
  {
    Serial.println("fire not detected");
    delay(1000);
    digitalWrite(buzz, HIGH);
  }
}
#include "CTBot.h"
#define msg_sender_id 1930230419
CTBot myBot;
String ssid = "SVA";
String pass = "12345678"; // REPLACE myPassword YOUR WIFI PASSWORD, IF ANY
String token = "1944799653:AAHF1PtYY3CuTXj_px2AXQJy54Pm7eZOLuI";
uint8_t led = D0; // the onboard ESP8266 LED.
// If you have a NodeMCU you can use the BUILTIN_LED pin // con
// (replace 2 with BUILTIN_LED)
char Start_buff[70];
int i,z;
char ch;
int str_len;
char textmessage[20];
TBMessage msg;
void MESSAGE_SEND();
void WAITING();
void setup()
{
// initialize the Serial
Serial.begin(9600);
Serial.println("Starting TelegramBot...");
// connect the ESP8266 to the desired access point
myBot.wifiConnect(ssid, pass);
// set the telegram bot token
myBot.setTelegramToken(token);
// myBot.setTelegramToken(token1);
// check if all things are ok
if (myBot.testConnection())
Serial.println("\ntestConnection OK");
else
Serial.println("\ntestConnection NOK");
TEST();
//MESSAGE_SEND();
}
void loop()
{
WAITING();
//MESSAGE_RECEIVE();
}
void MESSAGE_SEND()
{
myBot.sendMessage(msg_sender_id, "SEND START TO CONTINUE");
// myBot.sendMessage(msg_sender_id1, "WELCOME TO ERATION");
}
void MESSAGE_RECEIVE()
{
  msg.text = "";  // clear any previous message text
  buffer1_clear();
while(1)
{
if (myBot.getNewMessage(msg))
{
WAITING();
}
else
{ // otherwise...
// generate the message for the sender
String reply;
reply = (String)"YOU HAVE ENTER WRONG FORMAT " + msg.sender.username
+ (String)". Try PROPER FORMAT";
myBot.sendMessage(msg.sender.id, reply); // and send
}
}
delay(500);
}
char Serial_read(void)
{
char ch;
while(Serial.available() == 0);
ch = Serial.read();
return ch;
}
void WAITING()
{
  Serial.println("WAIT");
  buffer_clear();
  while (1)
  {
    if (Serial.available() > 0)
    {
      while (Serial_read() != '$');        // wait for the start-of-frame marker
      i = 0;
      while ((ch = Serial_read()) != '#')  // read until the end-of-frame marker
      {
        Start_buff[i] = ch;
        i++;
      }
      Start_buff[i] = '\0';
      Serial.println(Start_buff);
      // forward the framed alert to the Telegram chat
      myBot.sendMessage(msg_sender_id, Start_buff);
    }
    delay(1000);
  }
}
void buffer_clear()
{
for(z=0;z<60;z++)
{
Start_buff[z]='\0';
// textmessage[z]='\0';
}
}
void buffer1_clear()
{
for(z=0;z<5;z++)
{
textmessage[z]='\0';
}
}
void TEST()
{
  // The body of TEST() is truncated in the original listing; the closing
  // brace is restored here so the sketch compiles.
  String ssid = "SVA";  // REPLACE mySSID WITH YOUR WIFI SSID
  String pass = "12345678";
}
Animal Detection Code (Python):
# NOTE: the imports and setup below were missing from the original listing
# and are assumed; the Twilio credentials and the video source are
# placeholders that must be filled in.
import argparse
import os
import time

import cv2
import imutils
import numpy as np
from twilio.rest import Client

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input", required=True, help="path to input video")  # assumed argument
args = vars(ap.parse_args())
vs = cv2.VideoCapture(args["input"])  # video stream (assumed source)
client = Client("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")  # placeholder credentials

# load the COCO class labels our YOLO model was trained on
labelsPath = os.path.sep.join(["yolo-coco", "coco.names"])
LABELS = open(labelsPath).read().strip().split("\n")
np.random.seed(42)
COLORS = np.random.randint(0, 255, size=(len(LABELS), 3), dtype="uint8")
print("Loading ...................")
net = cv2.dnn.readNetFromDarknet("yolo-coco/yolov3.cfg", "yolo-coco/yolov3.weights")
ln = net.getLayerNames()
ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]  # 1-based nested indices (older OpenCV)
writer = None
(W, H) = (None, None)
try:
    prop = cv2.cv.CV_CAP_PROP_FRAME_COUNT if imutils.is_cv2() \
        else cv2.CAP_PROP_FRAME_COUNT
    total = int(vs.get(prop))
    print("[INFO] {} total frames in video".format(total))
# an error occurred while trying to determine the total
# number of frames in the video file
except:
    print("[INFO] could not determine # of frames in video")
    print("[INFO] no approx. completion time can be provided")
    total = -1
while True:
    (grabbed, frame) = vs.read()
    ## height, width, _ = frame.shape
    ## print(height, width)
    # if the frame was not grabbed, then we have reached the end
    # of the stream
    if not grabbed:
        break
    # if the frame dimensions are empty, grab them
    if W is None or H is None:
        (H, W) = frame.shape[:2]
    # construct a blob from the input frame and then perform a forward
    # pass of the YOLO object detector, giving us our bounding boxes
    # and associated probabilities
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
        swapRB=True, crop=False)
    net.setInput(blob)
    start = time.time()
    layerOutputs = net.forward(ln)
    end = time.time()
    # initialize our lists of detected bounding boxes, confidences,
    # and class IDs, respectively
    boxes = []
    confidences = []
    classIDs = []
    # loop over each of the layer outputs
    for output in layerOutputs:
        # loop over each of the detections
        for detection in output:
            # standard YOLO score extraction and box bookkeeping
            # (reconstructed; the 0.5 confidence threshold is an assumption)
            scores = detection[5:]
            classID = int(np.argmax(scores))
            confidence = scores[classID]
            if confidence > 0.5:
                box = detection[0:4] * np.array([W, H, W, H])
                (centerX, centerY, width, height) = box.astype("int")
                x = int(centerX - (width / 2))
                y = int(centerY - (height / 2))
                boxes.append([x, y, int(width), int(height)])
                confidences.append(float(confidence))
                classIDs.append(classID)
    # apply non-maxima suppression to remove overlapping boxes
    # (the 0.5 / 0.3 thresholds are assumptions)
    idxs = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.3)
    if len(idxs) > 0:
        for i in idxs.flatten():
            (x, y) = (boxes[i][0], boxes[i][1])
            color = [int(c) for c in COLORS[classIDs[i]]]
            text1 = "{}".format(LABELS[classIDs[i]])
            # send an SMS alert for each class of interest
            if text1 in ("elephant", "person", "bear", "zebra", "giraffe"):
                print("{} detected...".format(text1))
                client.api.account.messages.create(
                    to="+91-9743797922",
                    from_="+15615718358",
                    body="{} detected".format(text1))
            cv2.putText(frame, text1, (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
    cv2.imshow('name', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# release the file pointers
print("[INFO] cleaning up...")
vs.release()
cv2.destroyAllWindows()
The imutils package makes basic image-processing tasks, such as displaying Matplotlib images, sorting contours, and detecting edges, much easier with OpenCV.
The argparse module makes it easy to write user-friendly command-line interfaces. The program defines what arguments it requires, and argparse figures out how to parse them out of sys.argv. The argparse module also automatically generates help and usage messages, and issues errors when the user gives the program invalid arguments.
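A minimal argparse example of this behavior (the argument names are illustrative, not the report's exact interface):

import argparse

ap = argparse.ArgumentParser(description="Run detection on a video file")
ap.add_argument("-i", "--input", required=True, help="path to input video")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
    help="minimum detection confidence")
args = vars(ap.parse_args())  # parsed automatically out of sys.argv
print(args["input"], args["confidence"])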
Working with OpenCV:
With OpenCV one can perform face detection using a pre-trained deep-learning face-detection model which is shipped with the library. It requires:
A prototxt file, which defines the model architecture.
A Caffe model file, which contains the weights for the actual layers.
The model is loaded into the net variable using the cv2.dnn.readNetFromCaffe() function, which reads a network model stored in the Caffe framework, with arguments for the prototxt and model file paths.
With the cv2.dnn.blobFromImage() function we resize the image.
BLOB:
Blob stands for binary large object and is used to store information in a database. A blob is a data type that can store binary data: whereas data types like arrays and strings store ordinary data such as integers and characters, a blob can store multimedia files like images, and it therefore requires more memory space than other data types.
In OpenCV, the cv2.dnn.blobFromImage() function resizes the image to the required resolution. The scale factor (1.0 by default, meaning no scaling) is applied first; next comes the spatial size that the convolutional neural network expects; the last values are the mean-subtraction values, given as an RGB tuple. At the end, the function returns a "blob", which is our input image after resizing, mean subtraction, and normalization.
DETECTION:
With setInput() we set the new input image for the network. Using the forward() function we run a forward pass to compute the output of the layers. The detections are then looped over, and secondary frames of varying height and width are drawn over the resized image for each detection. We extract the confidence of each detection and compare it against a confidence threshold; if the confidence exceeds the minimum threshold, we proceed to draw a secondary frame with the probability of detection. The images inside the frame are then converted to grayscale for comparison. After a detection is confirmed, the result is displayed along with the count.
5.5.1 Fire Monitoring and Intimation:
Fire sensor (to recognize forest fires):
Data produced from these sensors is continuously monitored with the aid of the Blynk app, and the sensors' output devices are activated through a relay switch. The Arduino Uno is a microcontroller board based on the ATmega328; it has 14 digital input/output pins, 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. It contains everything needed to support the microcontroller: simply connect it to a PC with a USB cable, or power it with an AC-to-DC adapter or battery, to start. Fire sensors are devices that produce an electrical signal that changes in the presence of flame, and here they are used to sense fire. Whenever fire is sensed, the concerned officer is intimated through Blynk.
CHAPTER 6
TESTING
Testing is the process of evaluating a system or its component(s) with the intent to find
whether it satisfies the specified requirements or not. Testing is executing a system in
order to identify any gaps, errors, or missing requirements contrary to the actual requirements.
Before applying methods to design effective test cases, a software engineer must
understand the basic principle that guides software testing. All the tests should be traceable
to customer requirements.
There are different methods that can be used for software testing. They are:
1. Black-Box Testing
The technique of testing without having any knowledge of the interior workings
of the application is called black-box testing. The tester is oblivious to the system
architecture and does not have access to the source code. Typically, while
performing a black-box test, a tester will interact with the system's user interface
by providing inputs and examining outputs without knowing how and where the
inputs are worked upon.
2. White-Box Testing
White-box testing is the detailed investigation of internal logic and structure of the
code. White-box testing is also called glass testing or open-box testing. In order to
perform white-box testing on an application, a tester needs to know the internal
workings of the code. The tester needs to have a look inside the source code and
find out which unit/chunk of the code is behaving inappropriately.
Levels of Testing
There are different levels during the process of testing. Levels of testing include different
methodologies that can be used while conducting software testing. The main levels of
software testing are:
Functional Testing
This is a type of black-box testing that is based on the specifications of the software that is to be tested. The application is tested by providing input, and then the results are examined; they need to conform to the functionality the software was intended for. Functional testing of software is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. There are five steps involved in testing an application for functionality.
Non-Functional Testing
This section is based upon testing an application on its non-functional attributes. Non-functional testing involves testing software against requirements which are non-functional in nature but important, such as performance, security, and user interface. Testing can be done at different levels of the SDLC.
Unit Testing
Unit testing is a software development process in which the smallest testable parts of an
application, called units, are individually and independently scrutinized for proper
operation. Unit testing is often automated but it can also be done manually. The goal of
unit testing is to isolate each part of the program and show that individual parts are
correct in terms of requirements and functionality. Test cases and results are shown in the
Tables.
Unit testing: all unit test cases passed (Remarks: Pass).
Integration Testing:
Integration testing is a level of software testing where individual units are combined and
tested as a group. The purpose of this level of testing is to expose faults in the interaction
between integrated units. Test drivers and test stubs are used to assist in Integration
Testing. Integration testing is defined as the testing of combined parts of an application to
determine if they function correctly. It occurs after unit testing and before validation
testing. Integration testing can be done in two ways: Bottom-up integration testing and
Top-down integration testing.
1. Bottom-up Integration
This testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.
2. Top-down Integration
In this testing, the highest-level modules are tested first, and progressively lower-level modules are tested thereafter. In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing. The process concludes with multiple tests of the complete application, preferably in scenarios designed to mimic actual situations. Table 8.3.2 shows the test cases for integration testing and their results.
All integration test cases passed (Remarks: Pass).
System testing:
System testing of software or hardware is testing conducted on a complete, integrated
system to evaluate the system's compliance with its specified requirements. System testing
falls within the scope of black-box testing, and as such, should require no knowledge of the
inner design of the code or logic. System testing is important because of the following
reasons:
System testing is the first step in the Software Development Life Cycle, where the
application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional and technical
specifications.
The application is tested in an environment that is very close to the production environment
where the application will be deployed.
System testing enables us to test, verify, and validate both the business requirements as well
as the application architecture.
All system test cases passed (Remarks: Pass).
Acceptance Testing
Acceptance testing is sometimes performed with realistic data of the client to demonstrate
that the software is working satisfactorily. This testing in FDAC focuses on the external
behavior of the system.
Validation Testing
Chapter 7
Conclusion/Future scope
The system design shown in the block diagram performs the detection and counting of wild animals. The Raspberry Pi is used to make the system portable and affordable for both small-scale and large-scale livestock producers. The flowchart shows the flow of operations performed to detect and count particular livestock, as shown in the results. First, the image is captured using a camera and then converted to a grayscale image, to make it feasible for comparison with the existing dataset values. Existing systems, such as barcode scanners and manual counting of livestock, are not beneficial, as they consume a lot of time and the error margin becomes high; to overcome such hurdles, we have designed a real-time system that performs this task efficiently and cost-effectively.
Future scope:
A sub server unit can be used in between the transmitter unit and main receiver unit to
make the whole process take comparatively less time to alert the forest officer to take
preventive action.
The system can also be upgraded with low-power elements, higher versions of Zigbee in
order to make the system run for longer periods with increased efficiency.
With advancements in portable computers like the Raspberry Pi with regard to memory, processing speed, and networking capabilities, the accuracy and performance of this system can be improved significantly. With the current specifications of the Raspberry Pi 3 B+, the resource utilization of the deep-learning algorithm is high, which hinders performance and reduces accuracy to a certain level, and the processor reaches temperatures that are harmful to the processor itself. A better cooling system, such as a heatsink attached to the processor, can therefore increase performance significantly, because when the processor reaches a threshold temperature it undergoes thermal throttling, which slows and ultimately shuts down the system to protect it from damaging itself.
Interconnecting such systems with a central computer will help in accumulating data, and the performance of the whole network will benefit from this exchange of data, since it will help to train the algorithm better.
References:
Elena Stringa and Carlo S. Regazzoni, "Content-based Retrieval and Real Time Detection from Video Sequences Acquired by Surveillance Systems," 2001.
Ross Cutler and Larry S. Davis, "Robust Real-Time Periodic Motion Detection," 2012.
Prof. Joshi Vilas, Mergal Bhauso, Borate Rohan, "Motion Detection for Security Surveillance," 2016.
Burghardt, T., Calic, J., "Real-time face detection and tracking of animals," in Neural Network Applications in Electrical Engineering, pp. 27-32, IEEE, 2006.
Felzenszwalb, P.F., Girshick, R.B., McAllester, D., Ramanan, D., "Object detection with discriminatively trained part-based models," IEEE TPAMI 32(9), 1627-1645, 2010.
He, K., Zhang, X., Ren, S., Sun, J., "Deep residual learning for image recognition," in CVPR, pp. 770-778, 2016.