Home Surveillance & Automation Using Device Positioning: Rohith M Sunny G Tenson Tomy Vishal TP

This document describes a home automation and surveillance system that uses device positioning. The system uses a wearable device and central control unit. The wearable device detects which devices the user is looking at using sensors, image recognition, and a database of device locations. It then allows the user to control the selected device by button presses on the wearable device. The central control unit receives signals from the wearable device and operates the home appliances. The system also includes home surveillance with face recognition to detect intruders and notify the homeowner.


Home Surveillance & Automation using Device Positioning

Rohith M
Sunny G
Tenson Tomy
Vishal TP
Objective
 Implement a reliable, smart and efficient home automation system that works on a device-positioning mechanism.
 Develop a wearable smart device which automatically detects devices (lights, fans, etc.).
 Since the project is implemented with the device-positioning technique, it can overcome problems faced by existing home automation systems.
Overview
 The objective of our project is to incorporate such a system into a wearable device, which makes it more user-friendly.
 The system determines device positions by making use of a compass and a gyroscope.
 A microcontroller controls all the devices in the house.
Overview
 Hence the whole working of the automation system is based on wireless communication between the wearable device at one end and the microcontroller at the other.
 An additional home surveillance feature is incorporated which alerts the owner when another person enters the house.
Tool Requirement
 Hardware Tools
1. Raspberry Pi Zero
2. Raspberry Pi camera module
3. Arduino Mega
4. Relay module
5. Arduino Pro Mini
6. MPU6050 (6-DOF sensor)
7. HMC5883L 3-axis compass
8. RFID card reader
9. RFID tags
10. nRF modules
11. Control buttons
12. IR LED
 Software Tools
1. Python 3.6.5
2. Arduino IDE
3. Processing IDE
Methodology
 The system consists of two major devices:
i. Wearable device
ii. Central controlling unit
i. Wearable Unit:
• A device worn by the user.
• The wearable unit is a smart device which automatically detects the device (home appliance - light, fan, TV, etc.) the user wants to control and puts full control of that device at his or her fingertips.
ii. Central Controlling Unit:
• The central controlling unit consists of a microcontroller and relays, which are used to control the different devices in the house based on the signal from the wearable device.
Determining Controlling Room
• In order to identify the device the user wants to control, we first need to find the room where the user is located.
• The user's location is found using RFID tags.
• The wearable unit includes an RFID card reader, and RFID tags with unique IDs are placed at the entrance to each room.
• Once the user enters a room, the RFID tag is detected automatically, and from its unique ID we can determine which room the user has entered.
• The tag is also detected when the user leaves, so we can identify that the user has left the room.
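The entrance-tag logic above can be sketched as a small state machine. The tag IDs and room names here are hypothetical placeholders; a real build would receive UIDs from the RFID card reader on the wearable unit.

```python
# Hypothetical mapping from entrance-tag UID to room name.
TAG_TO_ROOM = {
    "04A1B2C3": "living_room",   # tag placed at the living-room entrance
    "04D4E5F6": "bedroom",       # tag placed at the bedroom entrance
}

class RoomTracker:
    """Track which room the user is in from entrance-tag reads."""

    def __init__(self):
        self.current_room = None

    def on_tag_read(self, tag_id):
        room = TAG_TO_ROOM.get(tag_id)
        if room is None:
            return self.current_room      # unknown tag: ignore it
        if self.current_room == room:
            self.current_room = None      # same entrance again: user left
        else:
            self.current_room = room      # new entrance: user entered
        return self.current_room
```

Reading the same entrance tag twice toggles between "entered" and "left", matching the behaviour described on the slide.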
Detecting device the user wants to control
• The major function of the wearable unit is to determine the device the user is looking at.
• For this we initially create a database for each room, which contains the location (mainly based on orientation) of each device in that room.
• To make identification more accurate, we also incorporate image processing (object detection) for device recognition.
• For object recognition we create a database of images of different objects.
Detecting device the user wants to control
• The combination of orientation and image processing gives an accurate result.
• The data from the database is later used to identify the device the user is looking at.
• The wearable unit carries a camera (for object detection) and sensors like a gyroscope and compass, which are used to find the user's orientation and position and thus determine the device (home appliance) he or she is looking at, based on the database created initially.
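A minimal sketch of the per-room orientation database and lookup described above. The device entries, headings and tolerance are illustrative values, not figures from the slides.

```python
# Hypothetical per-room database: compass heading (degrees) of each device.
DEVICE_DB = {
    "living_room": {
        "tv":    {"heading": 90.0},
        "fan":   {"heading": 180.0},
        "light": {"heading": 270.0},
    },
}

def angular_distance(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def device_in_view(room, heading, tolerance=20.0):
    """Return the device whose stored heading is closest to the user's
    current heading, or None if nothing lies within the tolerance."""
    best, best_dist = None, tolerance
    for name, info in DEVICE_DB.get(room, {}).items():
        d = angular_distance(heading, info["heading"])
        if d <= best_dist:
            best, best_dist = name, d
    return best
```

In the full system, the result of this lookup would be cross-checked against the camera's object-detection output before a command is accepted.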
Controlling the device
• The wearable unit includes buttons which are used to control the devices.
• So in order to turn a device ON/OFF, we just have to look towards that particular device (it is detected automatically in the previous step) and press the control button on the wearable unit.
• Apart from switching ON/OFF, the control button can be used to adjust speed (for a fan), adjust volume, change channels, etc.
• The functionality of the control button varies with the device, i.e. for a TV we can adjust volume and change channels; for a light we can adjust brightness and colour; for an AC we can adjust the temperature; and so on.
Transmission of data from wearable unit to Central Controlling Unit
• Once the user presses the operation button, a signal is sent from the wearable device to the central control device via the nRF module (RF communication).
• The signal contains two pieces of information:
I. Device ID
II. Operation ID
• Device ID: every device (home appliance) has a unique ID so that the control unit can identify it.
• Operation ID: specifies the type of operation performed by the user.
• Once the device ID and operation ID are known, the control unit performs the desired operation on that particular device.
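The two-field signal above can be packed into a tiny fixed payload for the nRF24L01 link. The ID tables below are hypothetical; the real system would assign IDs when the device database is created.

```python
import struct

# Hypothetical ID assignments.
DEVICE_IDS = {"fan": 1, "tv": 2, "light": 3}
OPERATION_IDS = {"toggle": 0, "volume_up": 1, "volume_down": 2}

def pack_command(device, operation):
    """Pack (device ID, operation ID) into a 2-byte payload, one unsigned
    byte per field, for transmission over the RF link."""
    return struct.pack("BB", DEVICE_IDS[device], OPERATION_IDS[operation])

def unpack_command(payload):
    """Inverse operation, as run on the central controlling unit."""
    dev_id, op_id = struct.unpack("BB", payload)
    return dev_id, op_id
```

A 2-byte payload fits comfortably inside the nRF24L01's 32-byte maximum packet size, leaving room for a checksum or sequence number if needed.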
Block Diagram
Home Surveillance
 We have installed a camera which is used to alert the owner about a person entering the house.
 The face database of the owner is stored.
 The camera detects the presence of an unknown person using face recognition and sends a notification to the owner's mobile.
Project-30%

 The first 30% of the project covered face recognition (for home surveillance) and object detection.
 Whenever an intruder is detected, the owner is notified via an email along with a picture of the intruder.
 Face recognition was programmed in Python 3.6 using the OpenCV library.
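The email alert described above might be assembled as follows, using Python's standard library. The sender address is a placeholder, and the actual delivery step (e.g. via `smtplib.SMTP_SSL`) is omitted; the OpenCV recognition step that produces the captured frame is also assumed, not shown.

```python
from email.message import EmailMessage

def build_intruder_alert(owner_addr, jpeg_bytes):
    """Build the notification email sent when face recognition flags an
    unknown person, attaching the captured camera frame."""
    msg = EmailMessage()
    msg["Subject"] = "Intruder detected"
    msg["From"] = "surveillance@home.local"   # hypothetical sender address
    msg["To"] = owner_addr
    msg.set_content("An unknown person was detected by the home camera.")
    # Attach the frame so the owner can see who entered the house.
    msg.add_attachment(jpeg_bytes, maintype="image", subtype="jpeg",
                       filename="intruder.jpg")
    return msg
```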
Project-30%
 Object detection was done for common day-to-day objects like bottles, notebooks, etc.
 The database used for this purpose is the COCO dataset.
 The COCO dataset is an excellent object detection dataset with 90 classes, 80,000 training images and 40,000 validation images.
20%-Simulation
 20% of the project covers position determination using the MPU6050 and wireless interfacing.
 The MPU6050 combines a 3-axis gyroscope and a 3-axis accelerometer.
 The nRF24L01 module is used for transmitting data wirelessly from the remote to the main control hub.
 nRF24L01 modules are transceivers, which means each module can both transmit and receive data.
20%- Methodology
 Orientation determination using the MPU6050.
 Collecting control operations (ON/OFF, volume adjust, etc.).
 Transmission of information from the remote controller to the main control hub.
 The main control hub receives the information and forwards it to the serial port.
 Processing software collects the data and converts it into a graphical interface.
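The orientation-determination step is commonly implemented by fusing the MPU6050's two sensors with a complementary filter; the sketch below shows one standard form for the pitch axis. It assumes readings already converted to g's and deg/s, and the 0.98/0.02 blend is a typical default, not a value from the slides.

```python
import math

def accel_pitch(ax, ay, az):
    """Pitch estimate (degrees) from the accelerometer alone, using the
    direction of gravity."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def complementary_filter(prev_pitch, gyro_rate, ax, ay, az, dt, alpha=0.98):
    """Blend the gyro's integrated rate (smooth but drifting) with the
    accelerometer's absolute estimate (noisy but drift-free)."""
    gyro_est = prev_pitch + gyro_rate * dt   # integrate angular rate
    return alpha * gyro_est + (1 - alpha) * accel_pitch(ax, ay, az)
```

Run per sample in the sensor loop, this keeps the heading/tilt estimate stable enough to index into the device-orientation database.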
TensorFlow
 TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks.
 It is a symbolic math library, and is also used for machine learning applications such as neural networks.
 TensorFlow provides a direct path to production: it lets you train and deploy your model easily, no matter what language or platform you use.
Object Detection
• Object detection is the process of finding real-world object instances such as humans, TVs, bulbs and cars in still images and videos.
• It allows for the recognition, localization and detection of multiple objects within an image.
• It is commonly used in applications such as image retrieval, security and surveillance.
Object Detection using Image Processing
• Creating database
• Gathering and labelling pictures
• Training phase
• Testing phase
Creating Database

• In order to train a good classifier, we need a large number of images at different positions, orientations and lighting conditions.
• The database consists of appliances like TV, table fan, LED bulb and tube light.
• Images where the desired object is partially obscured, overlapped with something else, or only halfway in the picture were also captured.
Gathering and labelling pictures
• The captured images are then labelled using the LabelImg software.
• LabelImg saves a .xml file containing the label data for each image.
• These .xml files will be used to generate TFRecords, which are one of the inputs to the TensorFlow trainer.
• Once each image has been labelled and saved, there will be one .xml file for each image in the \test and \train directories.
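LabelImg writes Pascal-VOC-style XML. A sketch of extracting the fields that later go into the train/test .csv files, using only the standard library; the sample element names match LabelImg's output format.

```python
import xml.etree.ElementTree as ET

def parse_labelimg_xml(xml_text):
    """Parse one LabelImg annotation into a list of rows, one per labelled
    object: filename, class name, and bounding-box corners."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    rows = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        rows.append({
            "filename": filename,
            "class": obj.findtext("name"),
            "xmin": int(box.findtext("xmin")),
            "ymin": int(box.findtext("ymin")),
            "xmax": int(box.findtext("xmax")),
            "ymax": int(box.findtext("ymax")),
        })
    return rows
```

Each returned row corresponds to one line of the .csv fed to the TFRecord generation step.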
LabelImg
• LabelImg is a graphical image annotation tool.
Training Phase
• Training has been done using Google Colab.
• First, the image .xml data is used to create .csv files containing all the data for the train and test images.
• The training procedure takes approximately 8 hours to complete.
Google Colab
• Google provides a free cloud service based on Jupyter notebooks that supports a free GPU.
• It allows absolutely anyone to develop deep learning applications using popular libraries such as PyTorch, TensorFlow, Keras and OpenCV.
• Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.
SSD (Single Shot MultiBox Detector)
• Some of the current leading results on databases like PASCAL VOC, MS COCO, and ILSVRC are all based on Faster R-CNN (an object detection algorithm).
• Although accurate, these approaches have been too computationally intensive for embedded systems and, even with high-end hardware, too slow for real-time or near real-time applications.
• SSD's improvements include using a small convolutional filter to predict object categories and offsets in bounding box locations, using separate predictors (filters) for different aspect-ratio detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales.
SSD (Single Shot MultiBox Detector)
Fig. 1: SSD framework:
(a) SSD only needs an input image and ground truth boxes for each object during training.
(b) In a convolutional fashion, we evaluate a small set (e.g. 4) of default boxes of different aspect ratios at each location in several feature maps with different scales (e.g. 8×8 and 4×4 in (b) and (c)).
For each default box, we predict both the shape offsets and the confidences for all object categories (c1, c2, …, cp). At training time, we first match these default boxes to the ground truth boxes. For example, we have matched two default boxes with the cat and one with the dog, which are treated as positives and the rest as negatives. The model loss is a weighted sum of the localization loss and the confidence loss.
SSD - Implementation
 Matching strategy
• At training time we define a correspondence between the ground truth boxes and the default boxes.
• The procedure is to match each ground truth box to the default box with the best Jaccard overlap.
• We then match default boxes to any ground truth with Jaccard overlap higher than a threshold (0.5).
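The matching strategy above can be sketched directly. Boxes are simple `(xmin, ymin, xmax, ymax)` tuples; a real SSD implementation would do this vectorised over thousands of default boxes.

```python
def jaccard(box_a, box_b):
    """Jaccard overlap (intersection over union) of two boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to 0 when disjoint.
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def match_default_boxes(gt_boxes, default_boxes, threshold=0.5):
    """Map default-box index -> ground-truth index: each ground truth gets
    its best-overlap default box, plus any default box whose overlap with
    some ground truth exceeds the threshold."""
    matches = {}
    for g, gt in enumerate(gt_boxes):
        best = max(range(len(default_boxes)),
                   key=lambda d: jaccard(gt, default_boxes[d]))
        matches[best] = g
    for d, db in enumerate(default_boxes):
        for g, gt in enumerate(gt_boxes):
            if jaccard(gt, db) >= threshold:
                matches[d] = g
    return matches
```

Default boxes left unmatched are the negatives handled by hard negative mining in the next step.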
SSD - Implementation
 Hard negative mining:
• In Fig. 1, the dog is matched to a default box in the 4×4 feature map, but not to any default boxes in the 8×8 feature map. This is because those boxes have different scales and do not match the dog box, and are therefore considered negatives during training.
• After the matching step, most of the default boxes are negatives, especially when the number of possible default boxes is large. This introduces a significant imbalance between the positive and negative training examples.
• Instead of using all the negative examples, we sort them using the highest confidence loss for each default box and pick the top ones, so that the ratio between negatives and positives is at most 3:1.
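The selection rule in the last bullet reduces to a sort and a cutoff, sketched here outside any training framework; `neg_confidences` stands in for the per-box confidence loss a real trainer would compute.

```python
def hard_negative_mining(neg_confidences, num_positives, ratio=3):
    """Given the confidence loss of each negative default box, return the
    indices of the negatives to keep (hardest first), capped so that
    negatives outnumber positives by at most `ratio`:1."""
    keep = min(len(neg_confidences), ratio * num_positives)
    order = sorted(range(len(neg_confidences)),
                   key=lambda i: neg_confidences[i], reverse=True)
    return order[:keep]
```

Keeping only the hardest negatives both rebalances the classes and speeds up convergence, per the SSD paper's design.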
Novelty
 A highly efficient, real-time system that makes smart decisions to control the devices.
 Provides control of all the devices at the user's fingertips. The remote controller provides control of all the devices: switching a particular device ON/OFF, adjusting fan speed, adjusting the volume of a television, etc.
 The system also provides additional security by incorporating a surveillance camera, thereby notifying the owner about any person who enters the room.
 With such a smart security system, any attempt by an intruder to gain access to the home is denied.
Literature Survey

 Home automation systems have achieved great popularity in the last decades, increasing comfort and quality of life.
 The technologies used in this project are image processing, position determination (using a gyroscope), and RF communication.
 Image processing is used for detecting the home appliances.
Literature Survey
 The position of the home appliances is obtained using a gyroscope and accelerometer.
 Radio-frequency signals are used for communication between the central unit and the devices.
 Some of the drawbacks of existing home automation systems are:
 For automation by speech recognition, systems may fail in noisy environments.
 For automation using apps, the app may not respond.
Work Split-Up
 Face Recognition - 20.00%
 Object Detection using image processing - 30.00%
 Object Detection using position - 10.00%
 Device Recognition - 15.00%
 Wireless Transmission - 10.00%
 Controlling the devices - 15.00%
Time Chart
Percentage of Work | Date of completion | Description
20% | 15 Nov 2018 | Face Recognition
10% | 20 Nov 2018 | Object detection using image processing (interfacing)
10% | 11 Feb 2019 | Object detection using device positioning
10% | 11 Feb 2019 | Wireless Transmission
20% | 25 Feb 2019 | Object detection using image processing (training)
15% | 3 April | Device recognition
15% | 25 April 2019 | Controlling the Devices
Remaining 30%
 Device Recognition: object detection using image processing and position are combined in order to recognize the device.
 Controlling the devices: the remote and central unit are connected. The devices (LED bulb, fan, TV, AC) can be controlled using the remote.
 The various control operations are:
 ON/OFF,
 Volume adjust,
 Channel change,
 Temperature adjust.
Future Scope

• Using Wi-Fi Indoor Positioning instead of RFID to detect


the control room

• Adding voice recognition along with our implementation

• Incorporating Gesture control


REFERENCES
[1] XueMei Zhao and ChengBing Wei, "A real-time face recognition system based on the improved LBPH algorithm", IEEE 2nd International Conference on Signal and Image Processing (ICSIP), 2017.
[2] https://github.com/tensorflow/models/tree/master/research/object_detection
[3] Leonard Eddison, Python Programming: A Step By Step Guide For Beginners, 2017.
[4] https://cloud.google.com/solutions/creating-object-detection-application-tensorflow
[5] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg, "SSD: Single Shot MultiBox Detector".
