
DESIGN AND IMPLEMENTATION OF A

“CCTV USING ANDROID”

BY

ADEGOKE JEFFREY 17/2047


IBHAGBOSORIA BLESSING 17/2155
LASISI ADENIYI KASSIM 17/2228

A PROJECT WORK SUBMITTED IN PARTIAL FULFILMENT OF THE


REQUIREMENTS FOR THE AWARD OF A BACHELOR OF SCIENCE (B.SC HONS)
DEGREE IN COMPUTER SCIENCE (TECHNOLOGY)

 TO THE DEPARTMENT OF COMPUTER SCIENCE


 SCHOOL OF COMPUTING AND ENGINEERING SCIENCES BABCOCK
UNIVERSITY, ILISHAN – REMO, OGUN STATE, NIGERIA

APRIL, 2021

CERTIFICATION 

We certify that this project DESIGN AND IMPLEMENTATION OF CCTV


USING ANDROID was carried out by ADEGOKE JEFFREY, IBHAGBOSORIA
BLESSING and LASISI ADENIYI KASSIM under my supervision in the
Department of Computer Science, Babcock University, Ilishan – Remo, Ogun
State, Nigeria.

……………………………. ……………………. 
PROF. OMOTOSHO O.J. DATE
PROJECT SUPERVISOR 

………………………………… …………………….
EXTERNAL SUPERVISOR DATE

……………………………….. ……………………..
DR. FOLASHADE KUYORO DATE
HOD, COMPUTER SCIENCE

DECLARATION

We declare that this project work “Design and Implementation of CCTV Using
Android” has been carried out by the following students:

 ---------------------------  ---------------------------
ADEGOKE JEFFREY DATE
17/2047

 ---------------------------  ---------------------------
IBHAGBOSORIA BLESSING DATE
17/2155

 ---------------------------  ---------------------------
LASISI ADENIYI DATE
17/2228

DEDICATION
We dedicate this project to GOD Almighty for His protection and provision
throughout the path to completing this project; to our parents, who have nurtured
and brought us up to this point in our lives; and to the lecturers of this great
institution, who have guided our paths in the quest for knowledge.

ACKNOWLEDGMENT
I appreciate God who has seen me through the struggles. I acknowledge my
parents for their support throughout my journey to this point. A thank you to my
course mates and friends Ahuchogu Divine, Ugochukwu Prudence, Ajileye
Jemimah and Olusesi Kayode for being a part of my journey through this
institution. GOD bless you all.
-Adegoke Jeffrey

I am thankful to almighty GOD for making my academic milestone a reality, for


his favor, mercy and blessings shown to me. A thank you to my parents for helping
in every way possible to reach this milestone. My appreciation to my project
supervisor Prof. Omotosho O.J for his guidance towards completing this project.
May GOD’s blessing extend to you all.
-Ibhagbosoria Blessing

I am forever thankful and grateful to GOD the most for guiding and protecting me
up till this point of my life. A big thank you to my parents for their assistance
morally, emotionally, financially and spiritually, I am forever grateful. And to my
friends Adewale Adebiyi, Akinlua Damilola, Bassy Raymond, Obielum
Godspower, Sotayo Ibukun and Nzenwa Dumebi for your consistent support
throughout this journey. May GOD’s favor find you wherever you may find
yourselves in the future.
-Lasisi Adeniyi Kassim

TABLE OF CONTENTS

Title Page i
Certification ii
Declaration iii
Dedication iv
Acknowledgement v
Table of Content vi
Table of Content vii
List of Figures viii
List of Tables ix
Abstract x
CCTV using Android 1-37
Reference 38-40
Appendix 41-47

TABLE OF CONTENTS

CHAPTER ONE: INTRODUCTION


1.1 Background 1
1.2 Research Motivation 4
1.3 Statement of Problem 4
1.4 Aims and Objectives 5
1.5 Justification 5

CHAPTER TWO: LITERATURE REVIEW


2.1 Introduction 6
2.2 Motion Recognition Strategies 8
2.3 Motion Detectors Based on Context 9
2.4 Motion Detection Motion Sensors 10
2.5 Review of Related Work 15
CHAPTER THREE: METHODOLOGY
3.1 Absolute Differential Estimate Algorithm 19
3.2 Detection of Cauchy Distribution Motion 21

CHAPTER FOUR: RESULT AND DISCUSSION


4.1 Overview 26
4.2 Tracking Result 27
4.3 Motion Identification Assessment 30
4.4 Quantitative and Qualitative Strategy 32
CHAPTER FIVE: CONCLUSION AND RECOMMENDATION
5.1 Conclusion 36
5.2 Recommendation 36
5.3 Future Work 37

LIST OF FIGURES

Figure 2.1 Frame Detection method 10

Figure 2.2 Background Subtraction method 10

Figure 2.3 Passive Infrared Sensor 11

Figure 2.4 Operation of PIR Sensors 13

Figure 2.5 Red Shadow Represents motion 14

Figure 3.1 Block Diagram 19

Figure 3.2 Circuit Diagram 20

Figure 3.3 Life Cycle of GCM 23

Figure 4.1 Insignificant Motion 28

Figure 4.2 Attacker Object cover in the area 28

Figure 4.3 Android Mobile User Authorization Screen 29

Figure 4.4 Motion Detector Evaluation 31

Figure 4.5 Initial Frame 32


Figure 4.6 Video Segment for Motion Detected 32
Figure 4.7 Hardware 35

LIST OF TABLES

Table 1: The History of Surveillance System over the years 6


Table 2: Experimental evaluation of Motion Detection 30
Table 3: Quantitative Comparison of Various Methods 33

ABSTRACT

In today’s world, with the disruptive and widespread adoption of the Internet of Things (IoT), a surveillance system has become an essential need rather than a luxury for homes, buildings and other important premises. This project presents a CCTV and mobile-app surveillance system built around an Arduino Uno and its supporting components: a camera module, sensors, a Wi-Fi module and a mobile application. Arduino was chosen because it is compatible with many sensors, which keeps the system cost-effective. The function of the Arduino is to transmit data from the sensor to the serial monitor when the sensor detects movement; it is also compatible with a GSM module for sending and receiving data. The proposed solution can be used over the internet through CCTV and mobile devices from anywhere and at any time. The cameras automatically stream live video and send the feed to the Android application on the user’s mobile device. Compared with commercial surveillance products such as IP cameras, the system is cost-effective, customizable and easy for home residents to operate while away from home. This report presents the video surveillance system and CCTV architecture used to improve surveillance applications.

CHAPTER ONE
INTRODUCTION

1.1 BACKGROUND

In day-to-day life, people do not want to waste their valuable time monitoring video for security purposes. Koper, Lum and Willis (2014) observed the impact that information technology (IT) has had on key areas such as smartphones, video surveillance systems and CCTV. In the ever-evolving era of technological development, people want everything to happen at their doorstep, without stress and at their convenience. With this system, users can view video directly from their phones without being physically present, and files can be uploaded or downloaded from anywhere in the world; in other words, such multitasking facilities should be available from any remote area [CITATION DSh12].

In a video surveillance system, video can be monitored from anywhere on a mobile phone, using either the phone camera or a peripheral web camera, through an Android application. Different surveillance methodologies, such as alarm systems, CCTV and PC-based video systems, are used to ensure security [CITATION DSh12]. With these systems, however, it is not possible for users to monitor the security of their location while they are away, because someone must watch the video continuously; this is their main drawback.

Given the importance of mobile phones in our daily lives, it makes sense to build a surveillance system that can be integrated with a phone for easy, remote access. With a mobile phone, people can monitor video of their environment even when they are away. The video is recorded by a camera and can be viewed on the phone. The security problem is addressed by the video surveillance system and made easier with smartphones through the application; information from a remote CCTV can be viewed on the mobile phone from anywhere in the world [CITATION Mas03]. To reduce the load on the mobile phone, video data is stored in the cloud; if the user wants to view a particular log, the file directory in the application grants access to the corresponding version on the web, which drastically reduces memory and CPU usage. The main requirement is that the system be switched on with an internet connection enabled and that the mobile phone have a data (GPRS) connection [CITATION DSh12].

In view of the rise in criminal cases in Nigeria, a surveillance system is now considered an important facility in daily life. The word surveillance comes from a French term meaning “watching over” [CITATION DSh12]. With technological advancement, surveillance systems have rapidly improved and evolved from the “dumb camera” through CCTV to IoT (Internet of Things) systems, which are available at low to high cost depending on the components, functions and features. There are three essential elements in any visual surveillance system: the front-end video capturing tools, the central control system and the end user. The front-end video capturing tool consists of a camera and/or a digital video recorder (DVR). After the camera captures the footage, it is sent to the central control system, which compresses the video and delivers the footage to the end user. Several types of surveillance systems are on the market today, for example Closed-Circuit Television (CCTV) cameras and Internet Protocol (IP) cameras. A CCTV system requires continual monitoring of every action, which is inconvenient and makes CCTV cameras expensive to operate, but the inclusion of an Android application makes it easier to record and store surveillance footage for later use. The operation of an IP camera is also quite costly and can cause serious problems when it is exposed to hackers over the internet [CITATION Jae11].

The aim of this project is to design and develop a cost-effective, affordable CCTV-like surveillance system using an Arduino microcontroller that is manageable over an internet connection through a mobile application. The system is easy for users to operate from any mobile device, because it addresses the surveillance problem without saving the full recorded video in storage while still allowing live incidents to be analyzed as they happen [CITATION Cho04].

A security system designed in [CITATION Jar14] uses the Arduino Uno as its microcontroller. The Arduino is compatible with many sensors, so the system can be considered cost-effective. The function of the Arduino is to transmit data from the sensor to the serial monitor when the sensor detects human movement; it is also compatible with a GSM module for sending and receiving data. A smart surveillance system using a PIR sensor network and GSM has also been developed, using a PIR sensor and a video camera on a peripheral interface controller (PIC). The PIC is inexpensive, easy to program and can control all the components with low computational requirements.

Technically, the surveillance system comprises an Arduino microcontroller, a camera with a mounted servo motor, a PIR motion sensor, an ultrasonic sensor, a buzzer and an LED. The passive infrared (PIR) sensor is useful in a low-cost surveillance system because it detects human presence from the infrared radiation emitted by body heat. The system is also paired with a mobile app for its management and is configured for internet connection. It enables remote monitoring of homes or other priority areas at any time and from anywhere by controlling the movement of the camera, which can detect an intruder and simultaneously record their image into the system storage. Unlike a commercial CCTV system, this system allows the user to view the live streaming video through an Android application, which reduces the cost of the system.

1.2 RESEARCH MOTIVATION

The motivation for this research is to address gaps in existing systems and to provide an alternative to the expensive security systems in use today. The Arduino Uno has proved ideal as the core of such a system. The project integrates existing technologies to build the CCTV system and contribute to current security practice.

1.3 STATEMENT OF THE PROBLEM

Insecurity is a challenge that impairs the general growth of nations, undermining mental and material wellbeing; it compromises human dignity while promoting a climate of fear and violence. Since the 18th century, various methods have been adopted to address security and crime. Many developed countries have reached their current status by continuously managing security tension through all applicable strategies, including modern information and communication technology. ICT, and mobile computing in particular, has experienced tremendous growth in its application across all sectors. Modernization of the security services continues to receive keen government focus in an effort to address insecurity in the country. This modernization entails adopting information technology in crime-prevention approaches, which so far has not made society free from danger, thereby creating a gap that ought to be filled. This study examines how the use of mobile phone technology and CCTV can help to improve security.

1.4 AIMS AND OBJECTIVES OF THE STUDY

The aim is to develop an Android-based CCTV system. The objectives are:

i. to use the Absolute Differential Estimate to track any environmental change;

ii. to build a Cauchy Distribution model that isolates the moving foreground object from the background;

iii. to send a warning signal to the user's Android phone when motion is observed.

1.5 JUSTIFICATION

Intelligent video monitoring is the application of automated video systems to closely track diverse situations. Visual surveillance is vital to the protection of various sites, including banks, stores, shops and buildings. The proposed smart video monitoring device uses motion detection to identify the appearance of any intruder in the area. Motion is detected with the Cauchy distribution model, which contrasts the current picture in the continuous video sequence with an earlier reference frame. The idea is to monitor and detect motion when the gap between the current frame and the reference frame exceeds a threshold function; an alert is then sent to the user's Android smartphone.
CHAPTER TWO

LITERATURE REVIEW

2.1 INTRODUCTION

In recent years, video surveillance has become an important part of daily life. While some oppose the concept of video recording, tracking greatly raises protection standards in industrialized democracies. A video monitoring system is a means of tracking moving items through video frames, and modern society needs intelligent control systems to increase quality of life (Guo, Zhan & Wu, 2017). The history of surveillance systems over the years is listed in Table 1.

Year | System of Monitoring
1965 | The beginning of public observation cameras
1969 | Invention of home surveillance cameras
1976 | CCD cameras invented for use in low-light conditions
2018 | The first smaller, higher-quality nanny camera
1996 | The first IP camera release
Today | Modern systems recognise the value of automatic video control, are intelligent, and make use of the Internet and wireless communications
Source: Hemamalini, S. & Simon, P. (2018)

Motion detection is an essential component of any automated monitoring system: it detects movement and uses the extracted information for further analysis such as action classification and object tracking (Helsin & Hobbs, 2019). Motion tracking methods have been used for several years in control devices, by civil and government agencies, in bank and parking surveillance systems, and in military and law-enforcement applications. Such examples have also raised the need for automatic control systems that provide effective 24-hour surveillance (Huang & Huang, 2014).

There is a great deal of research into the use of motion detection methods for monitoring devices. Jiang, Yan and Hu (2018) suggested three modules, a context module, a warning module and a motion tracking module, to monitor moving objects completely, and proposed an improved history module to boost object motion detection. Lin and Chen (2017) explore the use of mathematical background modeling and edge detection to track the motion of moving objects. Lam and Li (2010) used a Gaussian-based approach in real-time motion detection systems; they compared five motion detection methods and found that the Gaussian approach gives the strongest results for real-time protection systems.

Li and Pan (2017) proposed motion identification based on frame color information and complementary depth images; their system can track moving events without background noise. Lu, Zhou, Qin, Wang and Zhang (2018) used a Gaussian filter and frame difference for head-motion detection, with a Laplacian filter for frame edge detection; their device was tested both indoors and outdoors. Liao (2017) suggested a new approach for identifying people in busy areas, using an Arduino microcontroller to implement a surveillance device; the approach collects data with an Arduino-connected camera and matches it against predefined patterns.

All of the above approaches are proposed as complex computing systems that fit all conditions (indoor and outdoor) but require fast, accurate, real-time implementation. This project proposes a basic tracking framework to deter illegal acts in a small area. The device incorporates two motion detection methods to enhance overall performance: a motion detection sensor that identifies the initial presence of movement, and a frame-difference scheme that stores only the video frames containing activity. This mix of strategies minimizes electricity usage and reduces storage requirements. The device is also protected by biometric authentication, so the content saved by the tracking app can only be accessed by authorized persons.

2.2. MOTION RECOGNITION STRATEGIES

Automatic detection and tracking of activity in the video field is a key challenge for security systems (Liao, 2018; Mandal & Roy, 2020). Examples include a motion sensor that triggers the opening of a door, turns on lights, or raises a warning in a security system. There are several types of motion sensors, including infrared (passive and active sensors), vibration (seismic and inertia-switch sensors), radio-frequency energy (radar, microwave and tomographic motion detectors), echo (microphones and acoustic sensors), and magnetism (magnetic sensors and magnetometers).

Various methods of motion detection include (Muslu, 2015):

1) Frame Difference: compares changes in the pixels along a video path.

2) Optical Flow: senses movement patterns of objects and surfaces in video and tracks them.

3) Background Subtraction: senses movement by eliminating all static pixels.

4) Motion Sensors: electronic instruments converting activity into an electrical signal.

To keep the monitoring method simple, the planned system incorporates frame difference (1) and motion sensors (4), as shown in the sketch below. A frame-difference motion detector is implemented by subtracting the previous frame from each frame in a video stream; any pixel whose value does not change over two or three frames is discarded. A threshold value determines what counts as a significant change in a pixel's value, chosen according to the expected speed of the moving object (Mohammadi-ivatloo, Rabiee, Soroudi & Ehsan, 2017). Figure 2.1 demonstrates the frame-difference method. The optical-flow approach, by contrast, uses a tracking variable on the moving objects and involves complicated computing techniques. It is well suited to moving-camera applications and is usually used to segment objects based on their movement in tracking applications; a dense field of displacement vectors is generated, which determines the translation of each pixel region (Niknam, Golestaneh & Sadeghi, 2017).
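As a minimal illustration of the frame-difference idea described above (not taken from the project's source code), the following Java sketch compares two grayscale frames pixel by pixel and reports motion when the number of changed pixels exceeds a threshold; the class and parameter names are hypothetical.

```java
import java.awt.image.BufferedImage;

/** Minimal frame-difference motion detector: counts pixels whose
 *  grayscale value changes by more than pixelThreshold between frames. */
public class FrameDifferenceDetector {

    private final int pixelThreshold;   // per-pixel change needed to count as "changed"
    private final int motionThreshold;  // number of changed pixels needed to report motion

    public FrameDifferenceDetector(int pixelThreshold, int motionThreshold) {
        this.pixelThreshold = pixelThreshold;
        this.motionThreshold = motionThreshold;
    }

    /** Returns true if the difference between the two equally sized frames indicates motion. */
    public boolean isMotion(BufferedImage previous, BufferedImage current) {
        int changed = 0;
        for (int y = 0; y < current.getHeight(); y++) {
            for (int x = 0; x < current.getWidth(); x++) {
                int diff = Math.abs(gray(previous.getRGB(x, y)) - gray(current.getRGB(x, y)));
                if (diff > pixelThreshold) {
                    changed++;
                }
            }
        }
        return changed >= motionThreshold;
    }

    /** Average of the R, G and B channels as a simple grayscale value. */
    private static int gray(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        return (r + g + b) / 3;
    }
}
```

The two thresholds correspond to the per-pixel change value and the expected moving-object speed mentioned in the text.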

2.3. MOTION DETECTORS BASED ON CONTEXT

This procedure is an image-subtraction technique: the current frame is subtracted from a selected background image, which can be updated during the analysis period. The approach is simple and computationally cheap, but it is heavily influenced by changes in the scene such as lighting and other extraneous occurrences, and is therefore rather dependent on the choice of background model (Niknam, Azizipanah-Abarghooee & Roosta, 2017). Figure 2.1 and Figure 2.2 illustrate frame difference and background subtraction respectively.

Figure 2.1: Frame difference method

Source: Palanichamy, C. & Babu, S. (2018)

Figure 2.2: Background subtraction method

Source: Peng, C., Sun, H., Guo, J. & Liu, G. (2017)

2.4 MOTION DETECTION MOTION SENSORS

Motion sensors are electronic instruments capable of sensing physical motion within their field of view, at a given distance, by converting the sensed motion into an electrical signal. They are widely used in remote alarm monitoring systems: when the sensor senses activity, it produces an action-induced signal (Pandi & Panigrahi, 2018). There are numerous kinds of motion sensors, and several work better for indoor areas, such as visible/infrared (LED/laser) illumination, touch switches, piezoelectric sensors and piezoresistive sensors. Other types suited to outdoor areas include the active/passive infrared motion sensor, the ultrasonic motion sensor, the detection phase sensor, the Doppler microwave sensor, the camera, the passive infrared detector (PIR), ultrasound, microwave and tomography. Each form has strengths and weaknesses depending on the application. Some implementations combine several types of motion sensors to improve overall reliability, for example transmitting a warning signal only when all sensors are active. It is also necessary to note that all motion sensors have blind spots that need to be considered during device development (Pandit, Tripathi & Tapaswi, 2017). Figure 2.3 shows an example of an indoor sensor.

Figure 2.3 Passive infrared sensor

Source: Pandita, N. Tripathia, A. Tapaswia, S. Pandit, M.( 2017)

In conclusion, both of the previous techniques have limitations: background subtraction is disturbed by changes in lighting, and optical flow requires advanced analytical methods. Many of the techniques used for target identification (indoors and outdoors) are part of a dynamic framework; this consumes computing resources and electricity, and leads to fragile surveillance setups. Motion may be observed by sensors but cannot be recorded without the assistance of a camera. In this project, a simple low-cost monitoring device is proposed to deter unauthorized activity in a limited area. Because the project requires a simple measurement method, the proposed solution combines motion sensors with an amended frame-difference procedure, achieving effective motion detection while reducing storage space and power usage.

The proposed system therefore combines software-based frame-difference motion detection with a motion detection sensor. This hybrid approach increases the reliability of the system, minimizes errors, and saves power. The first component of the system is an IP camera connected directly to the Wi-Fi module. As this type of camera can be accessed through the network IP, the surveillance image can be sent directly to the user's Android device.

The second component of the device is a motion sensor connected to the Arduino microcontroller. The passive infrared (PIR) sensor used consists of a Fresnel lens, an infrared detector and a supporting detection circuit, with the lens focused on the infrared detector. Because the human body emits infrared heat, the sensor senses the motion and converts it into a 5 V signal. Figure 2.4 demonstrates the operation of the PIR sensor.

Figure 2.4: Operation of PIR Sensors

Source: Vaisakh, K., Praveena, P., Rao, M. & Meah, K. (2017)

When the motion sensor senses movement, the monitoring software sends a signal to start the IP camera and save the motion recording. Because of its flexibility and speed, the frame-difference approach was used to implement the monitoring program. The basic frame-difference algorithm has a weakness: when two consecutive frames are subtracted, objects that move too slowly cannot be identified. To improve this, frames are subtracted from the first frame, which is used as a simple background and refreshed every five minutes. When motion appears in front of the camera, the view window depicts it with a red outline and that frame is saved, as seen in Figure 2.5. Saving only the frames containing observed activity minimizes the use of on-board storage; a sketch of this refreshed-reference scheme follows the figure below.

Figure 2.5: Red shadow represents motion

Source: Xuebin L.( 2019)
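The following Java sketch illustrates, under the same assumptions as the earlier example, how a reference frame refreshed every five minutes could be combined with the frame-difference detector so that only frames containing motion are saved. The FrameDifferenceDetector class is the hypothetical one sketched in section 2.2, and saveFrame() stands in for whatever storage routine the real system uses.

```java
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
import java.io.IOException;

/** Keeps a reference (background) frame that is refreshed every five minutes
 *  and saves only the frames in which motion is detected. */
public class RefreshedReferenceRecorder {

    private static final long REFRESH_INTERVAL_MS = 5 * 60 * 1000; // five minutes

    private final FrameDifferenceDetector detector;
    private BufferedImage referenceFrame;
    private long referenceTimestamp;

    public RefreshedReferenceRecorder(FrameDifferenceDetector detector) {
        this.detector = detector;
    }

    /** Process one incoming frame; returns true if it was saved as a motion frame. */
    public boolean onFrame(BufferedImage frame, long nowMillis) throws IOException {
        // Adopt the first frame, or refresh the reference every five minutes.
        if (referenceFrame == null || nowMillis - referenceTimestamp >= REFRESH_INTERVAL_MS) {
            referenceFrame = frame;
            referenceTimestamp = nowMillis;
            return false;
        }
        if (detector.isMotion(referenceFrame, frame)) {
            saveFrame(frame, nowMillis);
            return true;
        }
        return false;
    }

    /** Hypothetical storage routine: writes the frame as a PNG named by timestamp. */
    private void saveFrame(BufferedImage frame, long nowMillis) throws IOException {
        ImageIO.write(frame, "png", new File("motion-" + nowMillis + ".png"));
    }
}
```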

2.5 REVIEW OF RELATED WORK

Google Cloud Messaging (GCM) is the solution our system uses to deliver a message to the Android device of the customer when movement is identified (Arul et al., 2020). The system helps to solve issues with background motion detection, incorrect motion detection under lighting changes, the camouflage effect, and incorrect motion detection caused by the irregular pace of moving objects. We are exploring the addition of a virtual panic button as a future extension of this framework. There is substantial literature in the area of monitoring; while existing systems are designed to address similar issues, the methods and algorithms employed differ considerably, so the usefulness of each approach is important. Various methods exist for building a background model and detecting motion in a continuous stream of incoming video. Several notable strategies are described and explored below, with the goal of building an electronic surveillance framework that overcomes environmental problems.

One of the more impressive works models each pixel as a Gaussian mixture (Aghaei et al., 2020), but this model handles only gradual changes in lighting and is therefore not suitable for sudden lighting changes. Illumination-Invariant Motion Detection (Azizipanah, 2020) addresses this issue, since the device is resilient to fast shifts in lighting; it uses a Gaussian mixture model, RGB, HSV, and local linear filter responses. It also addresses multimodal distributions induced by shadows and specularity. When the background returns, it recovers easily and has an automatic pixel threshold. A color-based, illumination-invariant motion recognition model copes better with lighting adjustments owing to a practical reflection model that accounts for shading.

Illumination-Invariant Motion Detection reports false positives when background movements under changing illumination are detected as foreground objects (Alsumait et al., 2010). The statistical modeling of complex backgrounds (Basu, 2016) uses a Bayesian method that integrates spatial, spectral and temporal characteristics in background modelling to handle stationary and non-stationary background objects; in a dynamic context, foreground objects are efficiently detected. The improved adaptive background mixture model (Basu, 2018) builds on the Adaptive Background Mixing Models tracker developed by Bhattacharjee (2014) but adds shadow detection, which yields better segmentation. While adaptive background mixture models (Bhattacharjee et al., 2014) use a multicolored background pixel model, learning is slow at the beginning and moving shadows cannot be separated from moving objects; the enhanced adaptive background model (Balamurugan, 2019) overcomes these inconveniences. A powerful adaptive algorithm using a Gaussian mixture probability density is used in the enhanced Gaussian Mixture method (Elaiw, 2017), which shows both lower loading time and improved segmentation. Thanks to recursive equations that continuously update the parameters and simultaneously choose the required number of components, it automatically adjusts to the scene.

The Adaptive Background Subtraction method (El-Keib, 2016) is used to track trends in environmental behavior. Motion segmentation is accomplished by modeling each pixel as a Gaussian mixture; each pixel has a threshold value that allows shadows, swinging leaves, the reappearance of the background and gradual changes in lighting to be handled effectively. Motion segmentation is also performed by the Background Neural Network (BNN) procedure (Farag, Al-Baiyat & Cheng, 1995). The BNN process is independent of pixel characteristics other than intensity; using additional pixel characteristics, disparities due to the shadow effect can be resolved. Learning movement trends with real-time monitoring handles slow changes in illumination by gradually adapting the Gaussians' parameters. This deals with multimodal distributions induced by shadows, specularities, swinging branches, screen displays and other disturbing real-world features not commonly covered in computer vision. The background recovers rapidly and has an automatic pixel threshold. To improve segmentation performance, the BNN approach is independent of the features used and can use features other than intensity values, particularly for shadow suppression.

Enhancing the methodology to use input from higher-level object detection modules is currently being explored to improve segmentation; such top-down management may handle the problem of foreground objects being absorbed into the background. Cucchiara et al. (Granelli, Montagna, Pasini & Maranino, 2018) established a system that uses spatio-temporal analysis in the daytime and morphological analysis at night to identify activity. Gaing (2015) improves on this by adjusting the procedures to different weather conditions (foggy, wet, humid, sunny, snowy) as well as lighting (daytime, night-time, dawn, dusk). The traffic surveillance device presented in (Guo, Zhan & Wu, 2017) is exceptional because it can watch for 24 hours; it formally separates low-level image-processing modules from high-level modules. A complex visual model for tracking essential movements of surrounding objects is suggested by the critical motion detection technique (Hemamalini & Simon, 2018). It is guided by the human visual system, which involves auditory, cognitive and mental analyzers, and is used together with an episodic memory through which hierarchy, configurability, adaptive reaction and selective attention are achieved.

A method that better manages complex scenes (Helsin et al., 2019) containing shifting backgrounds, subtle changes in lighting, camouflage and shadow incongruities has also been explored. The system uses a self-organizing process to track environmental variations; because pixel values can be processed in parallel, the background model estimation time for high-resolution images can be reduced. This overcomes the problems in (Huang et al., 2014), where camouflage effects and lighting adjustments are unreliable. A self-adaptive background matching framework (Jiang et al., 2018) selects a background pixel in each frame and also calculates an appropriate binary motion mask.

Contiguous outliers may be identified by low-rank representation techniques (Lin et al., 2017), even in complicated contexts such as non-rigid motion and diverse backgrounds. Self-organizing background subtraction is essentially parallel, with no coordination, which decreases execution time for high-resolution sequences and makes it ideal for layered implementation. The independent component analysis approach is highly tolerant of indoor lighting shifts and can be improved by adding color templates. While much work has been done on visual surveillance of humans and vehicles, several problems remain open to further study, particularly occlusion handling, 2D and 3D tracking, three-dimensional modeling of human beings and vehicles, visual monitoring and personal recognition, competency awareness, anomaly detection, and behavior analysis.

CHAPTER THREE

METHODOLOGY

3.1 ABSOLUTE DIFFERENTIAL ESTIMATE ALGORITHM

The algorithm below defines the operation of the proposed device, followed by a sketch of the corresponding control loop.

Inputs: IP camera video feed, signal from the motion sensor.

Output: processed video stream.

Step 1. If the motion sensor senses motion, the system starts the IP camera and sends a notification to the user's Android device registered in the Blynk application.

Step 2. Start the motion detection software.

Step 3. If motion is observed with the frame-difference algorithm, save the frame.

Step 4. End.
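A minimal sketch of this control loop is given below. It is illustrative only: the MotionSensor, IpCamera, Notifier and FrameStore interfaces are hypothetical stand-ins for the PIR sensor input, the IP camera stream, the Blynk/GCM notification path and the storage routine, and FrameDifferenceDetector is the hypothetical class sketched in Chapter Two.

```java
import java.awt.image.BufferedImage;

/** Illustrative control loop: wait for the PIR sensor, start the camera,
 *  notify the user, then save only frames in which frame-difference motion is seen. */
public class SurveillanceLoop {

    // Hypothetical collaborators standing in for the real hardware and services.
    interface MotionSensor { boolean motionDetected(); }
    interface IpCamera { void start(); BufferedImage nextFrame(); }
    interface Notifier { void notifyUser(String message); }
    interface FrameStore { void save(BufferedImage frame); }

    private final MotionSensor sensor;
    private final IpCamera camera;
    private final Notifier notifier;
    private final FrameStore store;
    private final FrameDifferenceDetector detector;

    public SurveillanceLoop(MotionSensor sensor, IpCamera camera, Notifier notifier,
                            FrameStore store, FrameDifferenceDetector detector) {
        this.sensor = sensor;
        this.camera = camera;
        this.notifier = notifier;
        this.store = store;
        this.detector = detector;
    }

    /** Runs one activation cycle of the algorithm in section 3.1. */
    public void runOnce() {
        if (!sensor.motionDetected()) {
            return;                                   // Step 1: wait for the PIR sensor
        }
        camera.start();                               // Step 1: start the IP camera
        notifier.notifyUser("Motion detected");       // Step 1: alert the registered device

        BufferedImage previous = camera.nextFrame();  // Step 2: begin software motion detection
        BufferedImage current = camera.nextFrame();
        if (detector.isMotion(previous, current)) {   // Step 3: frame-difference check
            store.save(current);                      //         save the frame with motion
        }
    }                                                 // Step 4: end
}
```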

To enhance protection, the software generates a specific pattern based on the algorithm described above, so that only the intended individual has access to the stored monitoring data. Figure 3.1 reflects the whole architecture of the device. The proposed method reduces the required storage space by capturing only the frames that contain movement, and the device is implemented with biometric access to prevent unwanted deletion of stored records.

Figure 3.1: Block Diagram

Source: Xu, J. Lam, AYS, L. (2018)

Figure 3.2: Circuit Diagram

We track an intruder using a mobile application; a phone is used to detect whether an intruder is in the field. The system includes both software and hardware. The ATmega328 microcontroller is used, and the code driving the whole circuit is programmed into it. A serial camera is interfaced with the controller, and a Wi-Fi module is attached. The Wi-Fi module links the microcontroller to the internet, which is where the IoT idea comes in. The PIR sensor senses movement in the vicinity; when motion is detected, signals are sent to the microcontroller, the camera takes a snapshot, and the image is transmitted over the wireless internet to the smartphone app. The Blynk app installed on the Android phone gives the user access to the snapshot.

3.2 DETECTION OF CAUCHY DISTRIBUTION MOTION

The main purpose of this module is to detect activity in a specific location. Cameras are positioned at the location to be monitored, and video is recorded once the monitoring system is set up and activated. The Cauchy distribution and the Absolute Differential Estimate are used for motion detection. The Absolute Differential Estimate compares the background frame with incoming video frames to determine whether the incoming frame has changed. For an input frame in which motion is detected, the Cauchy distribution model is used to identify the pixels of the moving object. The motion mask distinguishes whether the movement belongs to the foreground or the background, and a threshold function decides whether the detected motion is significant. The background model is used to compare incoming video frames for motion.

Step 1: Initiate when Activation_mode = 1.

Step 2: Read the continuously streamed video input frame \( \Phi_t(t, f) \).

Step 3: Compute the background model as the running average of the previous \( p \) frames,
\( \beta_t(t, f) = \frac{1}{p} \sum_{i=1}^{p} \Phi_{t-i}(t, f) \).

Step 4: Detect movement from the absolute difference between the current frame and the background model,
\( \Delta(t, f) = \lvert \Phi_t(t, f) - \beta_t(t, f) \rvert \), flagging motion when \( \Delta(t, f) \) exceeds the threshold.

Step 5: When movement is detected, identify the moving object by fitting the pixel values with the Cauchy density
\( f(x; x_0, \gamma) = \dfrac{1}{\pi\gamma\left[1 + \left(\frac{x - x_0}{\gamma}\right)^2\right]} \),
where \( x_0 \) is the location parameter and \( \gamma \) the scale parameter.

Step 6: When there is no significantly distinct movement, loop back to Step 2.

Step 7: Background movement is confirmed when the motion mask value is 0.

Step 8: Foreground movement is confirmed when the motion mask value is 1.

Step 9: If a confirmation is registered, store the acquired image on the server.

Step 10: Otherwise, loop back to Step 2.

Step 11: Terminate when Deactivation_mode = 1.
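As a rough illustration of Step 5 (not the authors' code), the sketch below evaluates the standard Cauchy density for a pixel value against a per-pixel background estimate and classifies the pixel as foreground when its likelihood falls below a chosen cutoff. The scale parameter and the cutoff are assumptions.

```java
/** Illustrative Cauchy-based pixel classifier for Step 5 of the algorithm. */
public class CauchyPixelClassifier {

    private final double gamma;             // scale parameter of the Cauchy model (assumed)
    private final double likelihoodCutoff;  // below this likelihood a pixel is foreground (assumed)

    public CauchyPixelClassifier(double gamma, double likelihoodCutoff) {
        this.gamma = gamma;
        this.likelihoodCutoff = likelihoodCutoff;
    }

    /** Standard Cauchy probability density with location x0 and scale gamma. */
    static double cauchyDensity(double x, double x0, double gamma) {
        double z = (x - x0) / gamma;
        return 1.0 / (Math.PI * gamma * (1.0 + z * z));
    }

    /** Returns true if the pixel value is unlikely under the background model,
     *  i.e. it belongs to the moving foreground (motion mask value 1). */
    public boolean isForeground(double pixelValue, double backgroundValue) {
        return cauchyDensity(pixelValue, backgroundValue, gamma) < likelihoodCutoff;
    }
}
```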

3.2.1 Sending GCM alert

Google Cloud Messaging (GCM) is a service for Android that lets a server transfer data to the user's smartphone running the Android operating system. The message can be a lightweight notification telling the app that fresh data is available on the server, or a message carrying up to 4 KB of payload data. The Google server handles collecting messages from third-party application servers and delivering them to the Android user. The aim of this module is to send a GCM warning to the user's Android smartphone when motion is detected. In the previous module, a picture is recorded and saved on the server when motion is observed; the GCM warning is then triggered to alert the user that a new image has been uploaded to the server.

Figure 3.3: Life Cycle of GCM

The key processes in the GCM cloud-to-device flow enable GCM to accept a request and deliver a message. In the "Enabling GCM" stage, an application that will receive GCM messages registers on the Android smartphone. In the next stage, a third-party application server delivers the message to the Android smartphone using the registration ID. In the final phase of the GCM life cycle, the Android app receives the notification on the device.

The Google API console produces the sender ID and application ID. To send a GCM alert, the GCM service must be registered for the customer's smartphone, and a unique registration ID for the user's smartphone is created. A GCM Intent must be enabled on the Android smartphone to create this ID; this is achieved through the Android Manifest file that configures the Android application. If motion is observed and a new picture is loaded onto the server, a warning is sent to the device using the registration ID. Permissions have to be added to the Android manifest file in order to access the GCM functions. The permissions used are as follows:

INTERNET – allows the application to send cloud-to-device (C2D_MESSAGE) traffic over the network.

GET_ACCOUNTS – access to the Google accounts registry, since GCM relies on a Google account.

RECEIVE – receives the GCM registration ID and messages from the system.

WAKE_LOCK – wakes the device while it is in sleep mode, so the application can be controlled even when the device is idle.

All of these permissions are declared in the Android manifest file. The GCM service is available only while the Android application runs on the device. A GCM Intent Service class is created in the Android application code to enable the GCM service on the mobile device. For registration, the sender ID and application ID are submitted to the GCM server, and a broadcast receiver class is set up to obtain the device registration ID from the GCM server. The registration ID is used in the message sender class to deliver a "motion detected" message to the registered device. Code for managing notification settings, including the icon, text, sound and vibration behavior, is written in the GCM Intent Service.

Algorithm 2: Sending and receiving the GCM warning

Phase 1: Create the sender ID and application ID in the Google API console (http://developer.android.com/google/gcm/gs.html#createproj).

Phase 2: Insert the sender ID and the registration ID obtained into the Android application code.

Phase 3: Install the Intent Service in the Android Manifest to register the GCM device and receive the GCM warning on the device, using the package "com.google.android.c2dm.intent.REGISTER".

Phase 4: Install the Android mobile user application.

Phase 5: Enter the registration ID created in the Motion Detection System during the installation process.

Phase 6: Send a GCM warning to the device when motion is observed.

Phase 7: When the GCM warning is issued, the mobile notification bar indicates that a fresh picture is available on the server.

A sketch of the server-side send request is given below.
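The following Java sketch shows, in hedged form, how a server could post the "motion detected" message to the legacy GCM HTTP endpoint as documented at the time; the API key and registration ID are placeholders, and error handling is omitted for brevity.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Minimal server-side sketch: POST a "motion detected" message to the
 *  legacy GCM HTTP endpoint for one registered device. */
public class GcmSender {

    // Placeholders: use the server API key from the Google API console and the
    // registration ID obtained by the Android app.
    private static final String API_KEY = "YOUR_SERVER_API_KEY";
    private static final String GCM_URL = "https://android.googleapis.com/gcm/send";

    public static void sendMotionAlert(String registrationId) throws Exception {
        String payload = "{\"registration_ids\":[\"" + registrationId + "\"],"
                       + "\"data\":{\"message\":\"motion detected\"}}";

        HttpURLConnection conn = (HttpURLConnection) new URL(GCM_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "key=" + API_KEY);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        // A 200 response means GCM accepted the message for delivery.
        System.out.println("GCM response code: " + conn.getResponseCode());
    }
}
```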

After all modules are executed, the user authentication screen and then the picture are displayed.

CHAPTER FOUR

RESULT AND DISCUSSION

4.1 OVERVIEW

The proposed device achieves high precision in motion detection and image acquisition. Webcams are affordable and simple to set up in the field. Because the images in which the intruder is detected are available from the user's Android device, the user may view these photographs from a remote position, eliminating time and distance restrictions. The number of photos is effectively unlimited, since they are saved in the cloud rather than on a local hard drive. The user receives a warning when motion is observed, so the real-time situation in the monitored region is known. When the notification arrives, the notification sound repeats on a loop and does not stop until the user opens the message, which is the only way to mute the tone; the likelihood that the recipient misses the alert is therefore significantly reduced. The sound may be tailored to the user's preference. Since the user understands the real-time situation, urgent protective steps may be taken, helping the user to safeguard property and life. Constant human supervision is not needed because the user is alerted to an intrusion, so there is no need to hire staff to supervise, saving deployment costs. Google Cloud Messaging for Android (GCM) is a service that enables a server to transfer data to Android devices, either as lightweight messages informing the application to retrieve fresh data from the server or as messages carrying up to 4 KB of payload data. The GCM service manages all aspects of message queuing and delivery to the target Android app on the target device. GCM alerts are fast and free; the customer does not have to pay to receive updates from the device. Trivial motion, such as moving butterflies, fluttering curtains or calendars, is not reported, so the user is not troubled by false alerts. The Google server notifies the user's device and this is reflected in the notification bar. When the alarm is issued, the user's Android device plays a custom message sound in a loop until the user silences it by opening the alert; this is intended to guarantee that the user does not miss the intruder warning. Opening the notification leads to the device login screen, where users must authenticate themselves with their username and password to access the recorded picture. This prevents unauthorized persons from obtaining and misusing records. The picture can also be saved on the user's Android device, which helps the user display the shot later or use it as evidence of intruder detection.

4.2 TRACKING RESULT

The Eclipse Juno IDE was used to build the Java and Android projects, and the Java programming language was used for writing the code. Java code is normally compiled to bytecode and will run on any Java virtual machine (JVM) regardless of the underlying architecture; it follows the 'write once, run anywhere' principle. The Android operating system is designed for handheld touchscreen devices such as smartphones and tablets and is a certified open-source operating system. Apache Tomcat, created by the Apache Software Foundation, is used as an open-source web server and servlet container. The Java Media Framework is a Java library that adds audio, video and other time-based media to Java applications and applets, enabling cross-platform multimedia applications. MySQL Server is an open-source database server, SQLYog is a graphical user interface tool for MySQL, and the SQL Connector module links Eclipse to MySQL. All the packages and applications used in this framework are open source, making it economical. Any Android smartphone may use the system independently of the vendor, so the system has a large potential user base, as Android smartphones are among the most widely adopted. The intelligent video monitoring device was mounted in an enclosed environment, and experiments were carried out for motion tracking and GCM alerts.

Figure. 4.1 Insignificant Motion

Figure 4.2: Attacker object covering the area

Figure 4.1 shows background motion of an object; as this is an irrelevant motion, the user does not receive a warning. Figure 4.2 shows an intruder approaching the monitored area, with red lines showing the intruder's object mask. The threshold value is used to differentiate important from insignificant movements: if the movement crosses the threshold value it is deemed relevant and a warning is issued; otherwise the motion is deemed negligible and no warning is sent. When the GCM warning was sent to the user, a custom loud sound was played and a notification appeared on the user's Android mobile. The sound ended only when the user opened the message. The sound may be picked from the sound list defined in the application, which makes it easier for the user to recognise the sound as the alarm.

Figure 4.3: Android Mobile User Authorization Screen

The recorded picture was shown once the user signed in, as seen in Figure 4.3; it shows the picture as identified on the customer's Android smartphone. The system's performance is determined by the time needed to sense motion and notify the user. The experimental findings indicate that the average delay between sensing the motion and receiving the warning is 5 seconds. Performance is also measured by the precision of the intrusion detection device: a series of test cases was conducted to test whether or not the device senses the gestures. Compared to current systems, the construction cost of this device is lower.

4.3 MOTION IDENTIFICATION ASSESSMENT

Table 2 displays the experimental results of threshold-based motion detection. The different test cases in the table were given to the framework to determine its efficiency. The threshold value was 50: any motion changing fewer than 50 squares was treated as an insignificant gesture, while any motion changing 50 squares or more was treated as a significant movement. A warning must only be sent to the customer's Android smartphone if the motion is significant. When there was no visible movement in the field, the current frame appeared identical to the reference frame; as a consequence, no squares were modified and no warning was sent to the customer. When an individual entered the field, 114 squares changed, which clearly exceeded the threshold, so the motion was detected and the user received an alert. A calendar fluttering in the background, a mosquito flying through the room and a housefly flying through the room produced 27, 7 and 9 square changes respectively; these values did not reach the threshold and the events were found negligible.

TABLE 2: EXPERIMENTAL EVALUATION OF MOTION DETECTION

Input Video Segment | Number of Squares Changed | Result
No apparent movement in area | 0 | No motion detected
Human entering monitored area | 114 | Motion detected
Calendar fluttering | 27 | No motion detected
Mosquito flying across the room | 7 | No motion detected
Housefly flying across the room | 9 | No motion detected
Moderate earthquake | 39 | No motion detected
Strong earthquake | 433 | Motion detected
Door closing | 67 | Motion detected
Curtain fluttering | 35 | No motion detected
Insect crawling on the wall | 3 | No motion detected

When a mild earthquake was simulated, the displacement was negligible and only 39 squares changed. When a significant earthquake was simulated, 433 squares changed, which exceeded the threshold, so motion was detected and an alarm issued. Door closing was intended as a background event, but because 67 squares changed, motion was mistakenly detected and the user was alerted. The background motions of a fluttering curtain and an insect crawling on the wall were correctly not detected.

Figure 4.4: Motion Detector Evaluation

Figure 4.4 shows a graphical measurement of motion detection efficiency using the Cauchy Distribution Model, with the input on the X-axis and the number of squares changed on the Y-axis. Only when a significant earthquake was simulated and when a door was closed did the device respond incorrectly. The graph demonstrates the module's response to the given inputs and indicates that the system behaves correctly 80 percent of the time.

4.4 QUALITATIVE AND QUANTITATIVE STRATEGY

The consistency of various motion detection systems is assessed on video inputs from different environments; qualitative samples are taken both indoors and outdoors.

Figure 4.5: Initial frame, with observed movements using (b) MTD, (c) GMM, (d) MSDE, (e) BMMC

Figure 4.6: Video segment for motion detected through the CCTV

The outdoor areas measured are a highway, a street and a lane. In this section, the motion detection methods MTD, GMM, MSDE and BMMC are compared. The utility of these methods is measured by the exactness of the motion mask produced by each of them. The quantitative metrics for MTD, GMM, MSDE and BMMC are presented in Table 3.

TABLE 3: QUANTITATIVE COMPARISON OF VARIOUS METHODS

Video Segment | MTD | GMM | MSDE | BMMC
Room | 0.8184 | 0.6155 | 0.3920 | 0.8851
Hallway | 0.5773 | 0.5665 | 0.5438 | 0.7651
Mall | 0.6900 | 0.9016 | 0.4294 | 0.9181
Highway | 0.6929 | 0.7021 | 0.5672 | 0.8558
Street | 0.5798 | 0.7318 | 0.3779 | 0.7848
Road | 0.5941 | 0.4551 | 0.6173 | 0.7784

Recall, precision, F1 and similarity are the parameters used to compute the quantitative values; the ground truth establishes the common basis for evaluating these approaches. Recall, also regarded as the detection rate, gives the proportion of true positives found relative to the total number of ground-truth positives. Recall alone is not adequate for comparing different approaches and is typically used together with precision. Precision, also considered the positive predictive value, gives the proportion of true positives among all items detected by the system. The F1 metric, also known as the F-measure, is the harmonic mean of precision and recall; it provides a single standardized metric that can be used to compare multiple methods. Finally, the similarity criterion is used to evaluate the outcomes of the various approaches. A comparative calculation using these metrics to compare the different methods is shown in Table 3, from which we can conclude that the Cauchy distribution based on the background mixture produces the most satisfactory result in the quantitative evaluation. To compensate for the shortcomings of the other techniques, a background model without ghost objects needs to be built for motion detection; achieving this requires a background matching system that picks the most likely background pixel for each frame using the global trend of the video stream. BMMC addresses this: based on the BMM, each potential background pixel is chosen so that the adaptive background model is updated correctly in each picture. BMMC also uses the conditional Cauchy model, instead of a single threshold function, to identify moving items, which produces an exact motion mask. We therefore conclude from the data that the Cauchy Distribution Background Mixture Model provides the most successful outcome based on the quantitative calculation.
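For reference, the standard definitions of the recall, precision and F1 metrics discussed above, in terms of true positives (TP), false positives (FP) and false negatives (FN), are:

```latex
\[
\text{Recall} = \frac{TP}{TP + FN}, \qquad
\text{Precision} = \frac{TP}{TP + FP}, \qquad
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
\]
```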

Figure 4.7: Hardware

CHAPTER FIVE

CONCLUSION AND RECOMMENDATION

5.1 CONCLUSION

Monitoring devices are a central component of daily life, and many sophisticated automatic surveillance systems are available. The device proposed in this research is low cost and simple to deploy for small-space indoor surveillance. To track movement, the study merged two approaches, a PIR sensor and frame difference, and modified the frame-difference algorithm to address the problem of tracking slow-moving objects. The hybrid architecture saves resources and storage and increases the system's stability.

5.2 RECOMMENDATION

The proposed video monitoring method offers high-precision moving-object tracking and quick user alerts in case of intrusion. It uses the Absolute Differential Estimate to identify changes in the environment and dynamically adjust the reference frame, while the Cauchy distribution model isolates the moving foreground object from the background. When critical motion is observed, GCM warnings are sent to the user's Android smartphone and a loud, personalized sound is played on it. After authentication, the user can view the detected picture stored on the server and may even store the picture on the Android device for possible future comparison. The system's efficiency was tested and it was observed that the video monitoring system can be deployed quickly, in real time, in different locations such as banks, malls and libraries. The system is constrained in certain ways: the user is expected to carry the smartphone most of the time, or at least to remain within hearing distance of the alarm signal; if not, the alarm may go unnoticed. The user's circumstances must also allow the user to view the picture and respond to it by taking immediate protective action.

5.3 FUTURE WORK

Future research should make it possible to view a video clip of everything that transpired when motion was observed. In addition, a panic-button video monitoring feature may be planned, created and installed; the panic button could be used to trigger a system for catching the intruder when motion is identified. On pressing the panic button, the police department nearest to the monitored region could also be notified so that urgent intervention is possible. This would help users to prevent the crime even when they are not available or circumstances hinder them from acting personally.

REFERENCES

Arul, R., Ravi, G., & Velusami, S. (2020). Chaotic self-adaptive differential harmony search algorithm based dynamic economic dispatch. International Journal of Electrical Power and Energy Systems, 50, 85–96.

Aghaei, J., Niknam, T., Azizipanah-Abarghooee, R., & Arroyo, M. (2020). Scenario-based dynamic economic emission dispatch considering load and wind power uncertainties. International Journal of Electrical Power and Energy Systems, 47, 351–367.

Azizipanah-Abarghooee, R. (2020). A new hybrid bacterial foraging and simplified swarm optimization algorithm for practical optimal dynamic load dispatch. International Journal of Electrical Power and Energy Systems, 49, 414–429.

Alsumait, S., Qasem, M., Sykulski, K., & Al-Othman, K. (2010). An improved pattern search based algorithm to solve the dynamic economic dispatch problem with valve-point effect. Energy Conversion and Management, 51(1), 2062–2067.

Basu, M. (2016). Particle swarm optimization based goal-attainment method for dynamic economic emission dispatch. Electric Power Company Systems, 34(9), 1015–1025.

Basu, M. (2018). Dynamic economic emission dispatch using nondominated sorting genetic algorithm-II. International Journal of Electrical Power and Energy Systems, 30(2), 140–149.

Bhattacharjee, K., Bhattacharya, A., & Halder nee Dey, S. (2014). Oppositional real coded chemical reaction optimization for different economic dispatch problems. International Journal of Electrical Power and Energy Systems, 55, 378–391.

Bhattacharjee, K., Bhattacharya, A., & Halder nee Dey, S. (2014). Chemical reaction optimisation for different economic dispatch problems. IET Generation, Transmission and Distribution, 8(3), 530–541.

Balamurugan, R., & Subramanian, S. (2019). Emission-constrained dynamic economic dispatch using opposition-based self-adaptive differential evolution algorithm. Energy, 10(4), 267–276.

Choon Hoong Ding, S. N. (2004). Peer-to-Peer Networks for Content Sharing. Grid Computing and Distributed Systems Laboratory.

Elaiw, A., Xia, X., & Shehata, A. (2017). Application of model predictive control to dynamic dispatch of generation with emission limitations. Electric Power Systems Research, 84(1), 31–44.

El-Keib, A., Ma, H., & Hart, L. (2016). Economic dispatch because of the Clean Air Act. IEEE Transactions on Power Systems, 9(2), 972–978.

Farag, A., Al-Baiyat, S., & Cheng, C. (1995). Economic load dispatch multiobjective optimization procedures using linear programming techniques. IEEE Transactions on Power Systems, 10(2), 731–738.

Granelli, G., Montagna, M., Pasini, G. L., & Maranino, P. (2018). Emission constrained dynamic dispatch. Electric Power Systems Research, 24(1), 55–64.

Gaing, L. (2015). Constrained dynamic economic dispatch solution using particle swarm optimization. IEEE Power Engineering Society General Meeting, 1, 153–158.

Guo, X., Zhan, P., & Wu, H. (2017). Dynamic economic emission dispatch based on group search optimizer with multiple producers. Electrical Power Systems Research, 86, 8–16.

Jae Kyu Lee, J. Y. (2011). Android Programming Techniques for Improving Performance. IEEE 3rd International Conference on Awareness Science and Technology (iCAST), 386–389.

Jari Porras, P. H. (2014). Peer-to-peer Communication Approach for a Mobile Environment. Lappeenranta University of Technology.

Koper, C. S., Lum, C., & Willis, J. J. (2014). Optimizing the use of technology in policing: Results and implications from a multi-site study of the social, organizational, and behavioral aspects of implementing police technologies. Policing: A Journal of Policy and Practice, 8(2), 212–221.

Masahiro Hamasaki, H. T. (2003). Proposal of Decentralized Information Sharing System using

Local Matchmaking. Graduate Universities of Advanced Studies National Institute of

Informatics.

Scholar, D. S. (2012). Video Surveillance System And Content Sharing Between Mobile And PC

Using Android. Dept of Computer Science and Engineering RMK Engineering College,

Anna University of Technology, Kavaraipettai, Chennai.

Yadav, G. C. (2017). Arduino based Security System. An Application of IOT, 209–212.

39
SOURCE CODE

#include "esp_camera.h"

#include <WiFi.h>

#include <WiFiClient.h>

#include <BlynkSimpleEsp32.h>

const char* ssid = "Lion_Heart";

const char* password = "powerflo";

char auth[] = "dW7ecYc6kKNm1z-WxBSEamWyIsP3bky7"; // Auth_Key Sent by Blynk

// Select camera model

#define CAMERA_MODEL_AI_THINKER // Has PSRAM

#include "camera_pins.h"

#define PIR 13

#define PHOTO 14

#define LED 4

40
String local_IP;

void startCameraServer();

void takePhoto()

digitalWrite(LED, HIGH);

delay(200);

uint32_t randomNum = random(50000);

Serial.println("http://"+local_IP+"/capture?_cb="+ (String)randomNum);

Blynk.setProperty(V1, "urls", "http://"+local_IP+"/capture?_cb="+(String)randomNum);

digitalWrite(LED, LOW);

delay(1000);

void setup() {

Serial.begin(115200);

pinMode(LED,OUTPUT);

Serial.setDebugOutput(true);

41
Serial.println()

camera_config_t config;

config.ledc_channel = LEDC_CHANNEL_0;

config.ledc_timer = LEDC_TIMER_0;

config.pin_d0 = Y2_GPIO_NUM;

config.pin_d1 = Y3_GPIO_NUM;

config.pin_d2 = Y4_GPIO_NUM;

config.pin_d3 = Y5_GPIO_NUM;

config.pin_d4 = Y6_GPIO_NUM;

config.pin_d5 = Y7_GPIO_NUM;

config.pin_d6 = Y8_GPIO_NUM;

config.pin_d7 = Y9_GPIO_NUM;

config.pin_xclk = XCLK_GPIO_NUM;

config.pin_pclk = PCLK_GPIO_NUM;

config.pin_vsync = VSYNC_GPIO_NUM;

config.pin_href = HREF_GPIO_NUM;

config.pin_sscb_sda = SIOD_GPIO_NUM;

config.pin_sscb_scl = SIOC_GPIO_NUM;

42
config.pin_pwdn = PWDN_GPIO_NUM;

config.pin_reset = RESET_GPIO_NUM;

config.xclk_freq_hz = 20000000;

config.pixel_format = PIXFORMAT_JPEG;

// if PSRAM IC present, init with UXGA resolution and higher JPEG quality

// for larger pre-allocated frame buffer.

if(psramFound()){

config.frame_size = FRAMESIZE_UXGA;

config.jpeg_quality = 10;

config.fb_count = 2;

} else {

config.frame_size = FRAMESIZE_SVGA;

config.jpeg_quality = 12;

config.fb_count = 1;

// camera init

43
esp_err_t err = esp_camera_init(&config);

if (err != ESP_OK) {

Serial.printf("Camera init failed with error 0x%x", err);

return;

sensor_t * s = esp_camera_sensor_get();

// initial sensors are flipped vertically and colors are a bit saturated

if (s->id.PID == OV3660_PID) {

s->set_vflip(s, 1); // flip it back

s->set_brightness(s, 1); // up the brightness just a bit

s->set_saturation(s, -2); // lower the saturation

// drop down frame size for higher initial frame rate

s->set_framesize(s, FRAMESIZE_QVGA);

WiFi.begin(ssid, password);

44
while (WiFi.status() != WL_CONNECTED) {

delay(500);

Serial.print(".");

Serial.println("");

Serial.println("WiFi connected");

startCameraServer();

Serial.print("Camera Ready! Use 'http://");

Serial.print(WiFi.localIP());

local_IP = WiFi.localIP().toString();

Serial.println("' to connect");

Blynk.begin(auth, ssid, password);

void loop() {

// put your main code here, to run repeatedly:

45
Blynk.run();

if(digitalRead(PIR) == LOW){

Serial.println("Send Notification");

Blynk.notify("Alert:Some one has been here.");

Serial.println("Capture Photo");

takePhoto();

delay(3000);

if(digitalRead(PHOTO) == HIGH){

Serial.println("Capture Photo");

takePhoto();

******************************************************************
