
SYNOPSIS OF REAL TIME SECURITY SYSTEM

Submitted in partial fulfillment of the Requirements for the award of

Degree of Bachelor of Technology in Computer Science & Engineering

Submitted By:-

Name: Shanu Naval Singh

University ID: 17BCS2127

SUBMITTED TO: Gagandeep Kaur

Department of Computer Science & Engineering


CHANDIGARH UNIVERSITY
Gharuan, Mohali
CHAPTER 1:- INTRODUCTION
A few years ago, the creation of software and hardware image processing systems was mainly limited to the development of the user interface, which is what most of the programmers in each firm were engaged in. The situation changed significantly with the advent of the Windows operating system, when most developers switched to solving the problems of image processing itself. However, this has not yet led to fundamental progress in solving typical tasks such as recognizing faces, car number plates and road signs, or analyzing remote-sensing and medical images. Each of these "eternal" problems is still solved by trial and error through the efforts of numerous groups of engineers and scientists. As modern technical solutions turn out to be excessively expensive, the task of automating the creation of software tools for solving such intellectual problems has been formulated and is being intensively pursued abroad. In the field of image processing, the required toolkit should support the analysis and recognition of images of previously unknown content and ensure the effective development of applications by ordinary programmers, just as the Windows toolkit supports the creation of interfaces for solving various applied problems.

FIELD OF THE PROJECT: -


Image classification involves assigning a class label to an image, whereas object localization involves drawing a bounding box around one or more objects in an image. Object detection is more challenging: it combines these two tasks, drawing a bounding box around each object of interest in the image and assigning it a class label. Together, these problems are referred to as object recognition.
Object recognition refers to a collection of related tasks for identifying objects in digital photographs. Region-based Convolutional Neural Networks, or R-CNNs, are a family of techniques for addressing object localization and recognition tasks, designed primarily for model accuracy. You Only Look Once, or YOLO, is a second family of techniques for object recognition, designed for speed and real-time use.

SCOPE OF THE PROJECT: -


The aim of object detection is to detect all instances of objects from a known class, such as people, cars or faces, in an image. Generally, only a small number of instances of the object are present in the image, but there is a very large number of possible locations and scales at which they can occur, and these need to be explored somehow. Each detection is reported with some form of pose information. This can be as simple as the location of the object, a location and scale, or the extent of the object defined by a bounding box. In other situations, the pose information is more detailed and contains the parameters of a linear or non-linear transformation. For example, in face detection, a face detector may compute the locations of the eyes, nose and mouth in addition to the bounding box of the face.
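As an illustration of this kind of output, the sketch below defines a minimal detection record in Python; the field names and example values are assumptions made for illustration only and are not taken from any particular library.

from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Detection:
    """One detected object: class label, score, bounding box, and
    optional landmark points (e.g. eyes, nose, mouth for a face)."""
    label: str                        # class name, e.g. "face" or "car"
    confidence: float                 # detector score in [0, 1]
    box: Tuple[int, int, int, int]    # (x, y, width, height) in pixels
    landmarks: Dict[str, Tuple[int, int]] = field(default_factory=dict)

# Example: a face detection reported with landmark locations as extra pose info.
face = Detection(
    label="face",
    confidence=0.93,
    box=(120, 80, 64, 64),
    landmarks={"left_eye": (138, 102), "right_eye": (166, 102),
               "nose": (152, 118), "mouth": (152, 132)},
)
print(face.label, face.box, face.landmarks["nose"])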

CHAPTER 2:- FEASIBILITY STUDY


Convolutional Implementation of Sliding Windows: -
Before we discuss the implementation of the sliding window using convolutional networks, let us analyze how the fully connected layers of a network can be converted into convolutional layers.

A fully connected layer can be converted to a convolutional layer whose kernel width and height are equal to one, with the number of filters equal to the number of units in the fully connected layer.

We can apply this conversion to the model by replacing the fully connected layer with such a convolutional layer, so that the number of filters of the convolutional layer equals the number of units of the fully connected layer it replaces.

The stride of the sliding window is determined by the size of the Max Pool layer. In the example above, the Max Pool layer has a pool size of two, so the sliding window effectively moves with a stride of two, producing four possible outputs for the given input. The main advantage of this technique is that all window positions are computed simultaneously in a single forward pass, which makes it fast. Its weakness is that the positions of the resulting bounding boxes are not very accurate.
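A minimal Keras sketch of this idea is given below. The input sizes, filter counts and class count are illustrative assumptions, not the project's architecture; the snippet only shows how a dense head becomes a convolution so that a larger image yields a grid of predictions in one pass (the trained dense weights would still have to be copied into the convolutional kernel).

# Sketch: converting a dense (fully connected) head into a convolution so the
# classifier slides over a larger image in a single forward pass.
# All shapes below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def backbone(x):
    x = layers.Conv2D(16, 5, activation="relu")(x)  # 14x14 -> 10x10
    x = layers.MaxPooling2D(2)(x)                   # 10x10 -> 5x5 (stride 2)
    return x

# Classifier trained on 14x14 crops: a dense head on the flattened 5x5 feature map.
crop_in = layers.Input(shape=(14, 14, 3))
crop_out = layers.Dense(4, activation="softmax")(layers.Flatten()(backbone(crop_in)))
crop_model = models.Model(crop_in, crop_out)

# Convolutional version: the dense layer becomes a 5x5 convolution whose kernel
# covers the whole 5x5 feature map. On a 16x16 image the feature map is 6x6, so
# the head produces a 2x2 grid of class predictions -- one per window position,
# all computed simultaneously, with an effective stride of two in the input.
full_in = layers.Input(shape=(16, 16, 3))
full_out = layers.Conv2D(4, 5, activation="softmax")(backbone(full_in))
sliding_model = models.Model(full_in, full_out)     # output shape (2, 2, 4)
sliding_model.summary()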

CHAPTER 3:- METHODOLOGY / PLANNING OF WORK


Object detection is an important yet challenging vision task. It is a critical part of many applications such as image search, image auto-annotation, scene understanding and object tracking. Tracking of moving objects in video image sequences is one of the most important subjects in computer vision. It has already been applied in many fields, such as smart video surveillance (Arun Hampapur 2005), artificial intelligence, military guidance, safety detection, robot navigation, and medical and biological applications. In recent years, several successful single-object tracking systems have appeared, but in the presence of several objects detection becomes difficult, and when objects are fully or partially occluded they are hidden from view, which further increases the difficulty of detection, as do decreasing illumination and changing acquisition angles. The proposed MLP-based object tracking system is made robust by an optimal selection of unique features and by implementing the AdaBoost strong classification method.
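For illustration only, the sketch below shows a generic AdaBoost strong classifier built from weak decision stumps on synthetic data; it is not the project's feature set or its MLP-based tracker.

# Illustrative sketch: AdaBoost combines many weak learners (depth-1 decision
# stumps, scikit-learn's default base estimator) into a strong classifier.
# The data here is synthetic, standing in for per-window object/background features.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, learning_rate=0.5, random_state=0)
clf.fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))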

Background Subtraction: -
The background subtraction method by Horprasert et al (1999) could cope with local illumination changes, such as shadows and highlights, and even with global illumination changes. In this method, the background was statistically modelled at each pixel. A computational color model, comprising the brightness distortion and the chromaticity distortion, was used to distinguish a shaded background from the ordinary background or from moving foreground objects. The background and foreground subtraction method used the following approach: a pixel was modelled by a 4-tuple [Ei, si, ai, bi], where Ei is a vector of expected color values, si is a vector of the standard deviations of the color values, ai is the variation of the brightness distortion, and bi is the variation of the chromaticity distortion of the i-th pixel. In the next step, the difference between the background image and the current image was evaluated.
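The Horprasert model itself is not reproduced here, but the sketch below shows the same general idea, a per-pixel statistical background model with shadow handling, using OpenCV's standard MOG2 subtractor; the camera index is a placeholder.

# Minimal background-subtraction sketch using OpenCV's per-pixel Gaussian
# mixture model (MOG2), which also flags shadows. A standard stand-in for the
# statistical background model described above, not the Horprasert et al. method.
import cv2

cap = cv2.VideoCapture(0)  # 0 = default camera; a video file path also works
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)       # 255 = foreground, 127 = shadow, 0 = background
    foreground = cv2.bitwise_and(frame, frame,
                                 mask=(mask == 255).astype("uint8") * 255)
    cv2.imshow("foreground mask", mask)
    cv2.imshow("foreground", foreground)
    if cv2.waitKey(30) & 0xFF == 27:     # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()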

Template Matching: -
Template matching is the technique of finding small parts of an image that match a template image. The template is slid from the top left to the bottom right of the image, and each location is compared against the template to find the best match. The template dimensions must be equal to or smaller than those of the reference image. The segment with the highest correlation is recognized as the target. Given an image S and a template T, where both dimensions of S are larger than those of T, the task is to decide whether S contains a sub-image I that is suitably similar in pattern to T and, if such an I exists, to output the location of I in S, as in Hager and Belhumeur (1998). Schweitzer et al (2011) derived an algorithm that uses both upper and lower bounds to detect the k best matches; Euclidean distance and Walsh transform kernels are used to calculate the match measure.
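A minimal OpenCV template-matching sketch is shown below; the image file names are placeholders, and normalized cross-correlation is used so that the best match is the location with the maximum score.

# Minimal template-matching sketch with OpenCV. The best match is the location
# where the normalized correlation score is highest.
import cv2

scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)        # reference image S
template = cv2.imread("template.jpg", cv2.IMREAD_GRAYSCALE)  # smaller template T

scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)               # best correlation

h, w = template.shape
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
print(f"best match score {max_val:.3f} at {top_left}")

# Draw the matched region on a color copy of the scene and save it.
result = cv2.cvtColor(scene, cv2.COLOR_GRAY2BGR)
cv2.rectangle(result, top_left, bottom_right, (0, 255, 0), 2)
cv2.imwrite("match.png", result)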

CHAPTER 4:- MODULE AND TEAM MEMBER WISE DISTRIBUTION
Contribution of roll number 17BCS2127: -
Roll number 17BCS2127, i.e. Shanu Naval Singh, will work on developing the environment for running the project: setting up and installing all the required packages and making the environment ready for the subsequent tasks and the smooth functioning of the project. He will also work on dataset collection, which will support training the YOLO model and help increase its accuracy so that the output is as accurate as possible. Finally, he will work on developing the YOLO model itself: he will take the collected data, train the YOLO model on that dataset, and ensure that the model works properly.

CHAPTER 5:- SOFTWARE AND HARDWARE REQUIREMENTS

SOFTWARE: -
TensorFlow: -
TensorFlow is an open-source artificial intelligence library that uses data flow graphs to build models. It allows developers to create large-scale neural networks with many layers. TensorFlow is mainly used for classification, perception, understanding, discovery, prediction and creation.
TensorFlow is a Python library for fast numerical computing, created and released by Google. It is a foundation library that can be used to create deep learning models directly, or through wrapper libraries built on top of TensorFlow that simplify the process.
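As a minimal sketch of working with TensorFlow through its Keras API, the snippet below defines and compiles a tiny image classifier; the input size and number of classes are placeholder assumptions, not the project's final model.

# Minimal TensorFlow/Keras sketch: define and compile a small image classifier.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),            # placeholder input size
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),     # 10 placeholder classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()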
Anaconda: -
Anaconda is a scientific Python distribution. It has no IDE of its own; the default IDE bundled with Anaconda is Spyder, which is just another Python package that can also be installed without Anaconda.

HARDWARE: -

Camera: -
The camera provides the video stream for detecting objects in videos and live camera feeds using Keras, OpenCV and ImageAI. Object detection is a branch of computer vision in which visually observable objects in images or videos can be detected, localized and recognized by computers.

ImageAI is a Python library built to empower developers, researchers and students to build applications and systems with self-contained deep learning and computer vision capabilities in just a few simple lines of code.
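A minimal sketch of camera-feed detection with ImageAI is shown below, based on its documented video-detection interface; the pre-trained YOLOv3 model file name is a placeholder, and exact parameter names may vary between ImageAI versions.

# Sketch: detecting objects in a live camera feed with ImageAI and OpenCV.
# "yolo.h5" is a placeholder for a downloaded pre-trained YOLOv3 model file.
import cv2
from imageai.Detection import VideoObjectDetection

camera = cv2.VideoCapture(0)                 # default camera

detector = VideoObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath("yolo.h5")             # placeholder model path
detector.loadModel()

# Processes the camera stream and writes an annotated video next to the script.
detector.detectObjectsFromVideo(camera_input=camera,
                                output_file_path="camera_detected",
                                frames_per_second=20,
                                minimum_percentage_probability=40,
                                log_progress=True)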

CHAPTER 6:- BIBLIOGRAPHY

Vision Based Assistive System for Label and Object Detection, by Manikanda Sucheta.

Object Detection and Tracking using Dynamic Image Processing, by Amr M. Nagy, Hala H. Zayed and Ali Ahmed.

https://www.caggle.com

https://www.github.com
