
Real World Object Detection

Mini project submitted in partial fulfilment of the requirement for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING

Under the esteemed guidance of


Mr. K Vijay Kumar
Assistant Professor
By
Ratnala Ashwini (19R15A0512)
Ramavath Lavanya (18R11A05D4)
Rachakonda Gopi Krishna (18R11A05D3)

Department of Computer Science and Engineering


Accredited by NBA

Geethanjali College of Engineering and Technology


(UGC Autonomous)
(Affiliated to J.N.T.U.H, Approved by AICTE, New Delhi)
Cheeryal (V), Keesara (M), Medchal.Dist.-501 301.

November-2021

Geethanjali College of Engineering and Technology
(UGC Autonomous)
(Affiliated to J.N.T.U.H, Approved by AICTE, New Delhi)

Cheeryal (V), Keesara (M), Medchal Dist.-501 301


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Accredited by NBA

CERTIFICATE

This is to certify that the B.Tech Mini Project report entitled “REAL WORLD OBJECT
DETECTION” is a bonafide work done by Ratnala Ashwini (19R15A0512), Ramavath
Lavanya (18R11A05D4) and Rachakonda Gopi Krishna (18R11A05D3) in partial fulfillment
of the requirements for the award of the degree of Bachelor of Technology in “Computer
Science and Engineering” from Jawaharlal Nehru Technological University, Hyderabad
during the year 2020-2021.

Internal Guide HOD - CSE


Mr. K Vijay Kumar Dr. A Sree Lakshmi
Assistant Professor Professor

External Examiner

Geethanjali College of Engineering and Technology
(UGC Autonomous)
(Affiliated to JNTUH, Approved by AICTE, New Delhi)
Cheeryal (V), Keesara (M), Medchal Dist.-501 301.
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Accredited by NBA

DECLARATION BY THE CANDIDATE

We, Ratnala Ashwini (19R15A0512), Ramavath Lavanya (18R11A05D4), Rachakonda Gopi


Krishna (18R11A05D3), hereby declare that the project report entitled “REAL WORLD
OBJECT DETECTION”, done under the guidance of Mr. K Vijay Kumar, Assistant
Professor, Department of Computer Science and Engineering, Geethanjali College of
Engineering and Technology, is submitted in partial fulfillment of the requirements for the
award of the degree of Bachelor of Technology in Computer Science and Engineering.

This is a record of bonafide work carried out by us in college, and the results embodied in
this project have not been reproduced or copied from any source. The results embodied in this
project report have not been submitted to any other University or Institute for the award of
any other degree or diploma.

Ratnala Ashwini (19R15A0512)


Ramavath Lavanya (18R11A05D4)
Rachakonda Gopi Krishna (18R11A05D3)
Department of CSE,
Geethanjali College of Engineering and Technology

ACKNOWLEDGEMENT

We are greatly indebted to the Management of Geethanjali College of Engineering and
Technology, Cheeryal, Hyderabad, for providing us the necessary facilities to successfully
carry out this mini project work titled “REAL WORLD OBJECT DETECTION”.

Firstly, we thank and express our sincere gratitude to Dr. A Sree Lakshmi, HOD, CSE
Department, Geethanjali College of Engineering and Technology, for her invaluable help and
support, which helped us a lot in successfully completing our mini project.

We also express our gratitude to our guide, Mr. K Vijay Kumar, Assistant Professor, for his
continued support throughout our endeavour to complete this project successfully.

We would like to express our sincere gratitude to our Principal, Dr. S. Uday Kumar, for
providing the necessary infrastructure to complete our project.

We convey our gratitude to our Chairman, Mr. G. Ravinder Reddy, for his invaluable
support and encouragement in propelling our innovation forward.

Finally, we would like to express our heartfelt gratitude to our parents and all our peers, who
were very supportive and encouraged us to achieve our goals.

Ratnala Ashwini (19R15A0512)

Ramavath Lavanya (18R11A05D4)

Rachakonda Gopi Krishna (18R11A05D3)

ABSTRACT
Object Detection is a technology that falls under the broader domain of computer vision. It
deals with identifying and tracking objects present in images and videos. Object detection
has multiple applications such as face detection, vehicle detection, pedestrian counting, self-
driving cars, etc.

There are two major objectives of object detection: to identify all the objects present in an
image, and to filter out the target object.

In this project, only authorized users can open the application. When the user clicks the
Start button, the object is captured, and after detection the image is saved as “saved.jpg”.
When the user clicks the Capture button, the image is saved in the images folder and an alert
message is popped up showing the status of the saved image.

List of Figures
Names of Figures Page no.
Fig: 2.2.1.1 Features of python 03
Fig: 2.2.1.2 Interpretation of python 03
Fig: 2.2.1.3 Application of python 04
Fig: 2.2.2.1 MVC Architecture 06
Fig: 4.2.1 Class diagram 14
Fig: 4.2.2 Use case diagram 14
Fig: 4.2.3.1 Sequence diagram-1 15
Fig: 4.2.3.2 Sequence diagram-2 16
Fig: 4.2.4 Activity diagram 16
Fig: 7.1 Output screenshots 35

INDEX
TITLE PAGE NO.
1. Introduction
1.1. Existing System 01
1.2. Proposed System 01
2. Literature Survey
2.1. Project Literature 02
2.2. Introduction To Python 02
2.2.1. Python Technology 02
2.2.2. MVC Architecture 05
2.2.3. Tkinter 06
2.2.4. Libraries Specific To Project 07
2.2.4.1. Imutils 07
2.2.4.2. Numpy 07
2.2.4.3. Argparse 08
2.2.4.4. Opencv 08
3. System Analysis And Requirements
3.1. Feasibility Study 10
3.1.1. Economical Feasibility 10
3.1.2. Technical Feasibility 10
3.1.3. Social Feasibility 10
3.2. Software And Hardware Requirements 11
3.2.1. Hardware Requirements 11
3.2.2. Software Requirements 11
3.3. Performance Requirements 11
4. Software Design
4.1. Introduction 12
4.2. Uml Diagrams 12
4.2.1. Class Diagram 13
4.2.2. Use Case Diagram 14
4.2.3. Sequence Diagram 15
4.2.4. Activity Diagram 16
5. Coding Templates / Code

5.1. App Code 17
5.2. Controller Code 18
5.3. Model Code 24
5.4. View Code 25
6. System Testing
6.1. Introduction 31
6.2. Types Of Tests 31
6.2.1. Unit Testing 31
6.2.2. Integration Testing 31
6.2.3. Functional Test 32
6.2.4. System Test 32
6.2.5. White Box Testing 32
6.2.6. Black Box Testing 32
6.2.7. Acceptance Testing 33
6.3. Test Approach 33
7. Results And Validation
7.1. Output Screens 35
8. Conclusion 39
9. Bibliography 40
10. References 40

1. INTRODUCTION
Object Detection is a technology that falls under the broader domain of computer
vision. It deals with identifying and tracking objects present in images and videos.
Object detection has multiple applications such as face detection, vehicle detection,
pedestrian counting, self-driving cars, etc.

There are two major objectives of object detection:


 To identify all the objects present in an image.
 To filter out the target object.
1.1. Existing System:
Object Detection can be done using TensorFlow with Python or by using any other
deep learning framework, but most people end up with errors during installation
and find it difficult to get rid of those errors.

The following are some of the commonly used deep learning approaches for
object detection:
 ImageAI
 Single Shot Detectors
 YOLO
 Region-based Convolutional Neural Networks
Disadvantages:

 Installation of TensorFlow and dlib is comparatively difficult.

 For beginners who have only a basic idea of machine learning, it is
difficult to understand other deep learning approaches for object
detection when compared with YOLO.
1.2. Proposed System:
The proposed system is object detection using OpenCV with Python, and the deep
learning approach used is YOLO.

Advantages:

 Rules out the problems faced during installation, as OpenCV is very easy to
install in an IDE.
 Simple code (can be understood by anyone who has basic knowledge
of Python).

2. LITERATURE SURVEY
2.1. Project literature
Bhumika Gupta (2018) proposed that object detection is a well-known computer technology
connected with computer vision and image processing that focuses on detecting objects or their
instances of a certain class in digital images and videos. In this study, various basic concepts
used in object detection, making use of the OpenCV library of Python 2.7 to improve the
efficiency and accuracy of object detection, are presented.

Kartik Umesh Sharma (2019) proposed that an object detection system finds objects of the real
world present either in a digital image or a video, where the object can belong to any class of
objects, namely humans, cars, etc. This paper presents a review of the various techniques that
are used to detect an object, localise an object, categorise an object, extract features,
appearance information, and many more, in images and videos.

2.2. Introduction to Python


2.2.1. Python Technology
Python is an interpreted, object-oriented, high-level programming language with dynamic
semantics. Its high-level built-in data structures, combined with dynamic typing and dynamic
binding, make it very attractive for Rapid Application Development, as well as for use as a
scripting or glue language to connect existing components together.

Python's simple, easy to learn syntax emphasizes readability and therefore reduces the cost
of program maintenance.

Python supports modules and packages, which encourages program modularity and code
reuse.

Features of Python:

1. Easy
2. Free and Open Source
3. Expressive
4. High Level
5. Object Oriented
6. Portable
7. Embeddable
8. Extensible
9. Interpreted
10. GUI Programming
11. Dynamically Typed
12. Large Standard Library

Fig: 2.2.1.1 Features of python


In various books on Python programming, it is mentioned that Python is an interpreted
language. But that is only half correct: a Python program is first compiled and then
interpreted. The compilation step is hidden from the programmer, and thus many
programmers believe that it is an interpreted language. The compilation is done first when we
execute our code; this generates bytecode, and internally this bytecode gets converted by the
Python Virtual Machine (PVM) according to the underlying platform (machine + operating system).

Fig: 2.2.1.2 Interpretation of python


A platform is the hardware or software environment in which a program runs. Any Python
code that you write is converted into Python bytecode. Bytecode is a general computer
science concept, used in some other languages too (like Java), and refers to a set of
lower-level instructions that are meant to be understood by some “virtual” machine. The set
of instructions are lower-level in the sense that they are not meant to be understood by the
user (in this context, the programmer). The virtual machine is then responsible for converting
(“interpreting”) the bytecode to an even lower-level set of instructions which are meant to be
understood by the machine.

The Python bytecode generated for code that you've written is interpreted by the Python
Virtual Machine (PVM). As long as two platforms have the same version of Python installed
on them, the bytecode generated for a particular program will be the same on those machines.
This bytecode will run on any number of platforms which have the same version of the PVM.
In essence, the Python bytecode and the PVM act as a gateway between the user (the
programmer) and the machine on which the code is written and run. This makes Python code
platform-independent.
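As a quick illustration, the standard `dis` module can show the bytecode that the hidden compilation step produces for a function (the function below is just an example):

```python
import dis

def add(a, b):
    return a + b

# dis prints the lower-level bytecode instructions that the compilation
# step produces; the PVM interprets these at run time.
dis.dis(add)
```

Running this prints an instruction listing that includes loads of `a` and `b` followed by a binary addition instruction, which is exactly the "set of lower-level instructions" the PVM interprets.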

Python is a cross-platform language: a Python program written on a Macintosh computer


will run on a Linux system and vice versa. Python programs can run on a Windows computer,
as long as the Windows machine has the Python interpreter installed (most other operating
systems come with Python pre-installed).

Fig: 2.2.1.3 Application of python


Python is a general-purpose coding language—which means that, unlike HTML, CSS, and
JavaScript, it can be used for other types of programming and software development besides
web development. That includes back end development, software development, data science
and writing system scripts among other things.

Python is used for developing desktop GUI applications, websites and web applications.
Also, Python, as a high-level programming language, allows you to focus on the core
functionality of the application by taking care of common programming tasks.

Python can be used in the applications:

1) Web Applications
2) Desktop GUI Applications
3) Software Development
4) Scientific and Numeric
5) Business Applications
6) Console Based Application
7) Audio or Video based Applications
8) 3D CAD Applications
2.2.2. MVC Architecture
MVC stands for "Model-View-Controller." MVC is an application design model comprised
of three interconnected parts. The MVC model or "pattern" is commonly used for developing
modern user interfaces. It provides the fundamental pieces for designing programs for
desktop or mobile, as well as web applications.

MVC is a widely used software architectural pattern in GUI-based applications. It has three
components, namely a model that deals with the business logic, a view for the user interface,
and a controller to handle the user input, manipulate data, and update the view. The following
is a simplified schematic that shows the basic interactions between the various components:

Model:
The model component of the MVC architecture represents the data of the application. It also
represents the core business logic that acts on such data. The model has no knowledge of the
view or the controller. When the data in the model changes, it just notifies its listeners about
this change. In this context, the controller object is its listener.
View:
The view component is the user interface. It is responsible for displaying the current state of
the model to the user, and also provides a means for the user to interact with the application.
View represents the HTML files, which interact with the end user.
Controller:

It acts as an intermediary between the view and the model. It listens to the events triggered
by the view and queries the model for the same. The controller interacts with the model,
which fetches all the records displayed to the end user.
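The interactions described above can be sketched in plain Python. The class and method names below are illustrative only, not the project's actual code:

```python
class CounterModel:
    """Holds the data and business logic; knows nothing of view/controller."""
    def __init__(self):
        self.count = 0
        self.listeners = []

    def increment(self):
        self.count += 1
        for listener in self.listeners:   # notify listeners of the change
            listener(self.count)

class CounterView:
    """Displays the current state of the model to the user."""
    def __init__(self):
        self.displayed = None

    def render(self, count):
        self.displayed = f"Count: {count}"

class CounterController:
    """Handles user input, manipulates data, and updates the view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view
        model.listeners.append(self.on_change)  # controller is the model's listener

    def on_change(self, count):
        self.view.render(count)   # update the view when the model changes

    def handle_click(self):       # user input arrives at the controller
        self.model.increment()

model = CounterModel()
view = CounterView()
controller = CounterController(model, view)
controller.handle_click()
print(view.displayed)   # Count: 1
```

Note how the model never touches the view directly: it only notifies its listener (the controller), which then updates the view, exactly as described above.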

Fig: 2.2.2.1 MVC Architecture


2.2.3 Tkinter
Tkinter − Tkinter is the Python interface to the Tk GUI toolkit shipped with Python.

Tkinter is the standard GUI library for Python. Python when combined with Tkinter provides
a fast and easy way to create GUI applications. Tkinter provides a powerful object-oriented
interface to the Tk GUI toolkit.

Creating a GUI application using Tkinter is an easy task. All you need to do is perform the
following steps −

 Import the Tkinter module.

 Create the GUI application main window.

 Add one or more of the above-mentioned widgets to the GUI application.

 Enter the main event loop to take action against each event triggered by the user.

Tkinter has several strengths. It’s cross-platform, so the same code works on
Windows, macOS, and Linux. Visual elements are rendered using native operating
system elements, so applications built with Tkinter look like they belong on the
platform where they’re run.

Although Tkinter is considered the de-facto Python GUI framework, it’s not without
criticism. One notable criticism is that GUIs built with Tkinter look outdated. If you
want a shiny, modern interface, then Tkinter may not be what you’re looking for.

However, Tkinter is lightweight and relatively painless to use compared to other
frameworks. This makes it a compelling choice for building GUI applications in
Python, especially for applications where a modern sheen is unnecessary and the top
priority is to build something that’s functional and cross-platform quickly.
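The four steps listed earlier can be sketched as a minimal script. The window contents are illustrative; `mainloop()` is left commented out because it blocks until the window is closed and requires a display:

```python
import tkinter as tk   # step 1: import the Tkinter module

def build_app():
    root = tk.Tk()                                 # step 2: create the main window
    tk.Label(root, text="Hello, Tkinter").pack()   # step 3: add a widget
    return root

# Step 4: enter the main event loop (run manually; requires a display):
# build_app().mainloop()
```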

Tkinter Widgets

Tkinter provides various controls, such as buttons, labels and text boxes used in a GUI
application. These controls are commonly called widgets.
The foundational element of a Tkinter GUI is the window. Windows are the containers in
which all other GUI elements live. These other GUI elements, such as text boxes, labels, and
buttons, are known as widgets. Widgets are contained inside of windows.

2.2.4. Libraries Specific to Project


2.2.4.1 Imutils:

imutils is a series of convenience functions to make basic image processing operations such
as translation, rotation, resizing, skeletonization, and displaying Matplotlib images easier
with OpenCV, in both Python 2.7 and Python 3.

Installation

Provided you already have NumPy, SciPy, Matplotlib, and OpenCV already installed,
the imutils package is completely pip-installable:

$ pip install imutils

2.2.4.2 Numpy
NumPy, which stands for Numerical Python, is a library consisting of multidimensional array
objects and a collection of routines for processing those arrays. Using NumPy, mathematical
and logical operations on arrays can be performed. This section explains the basics
of NumPy, such as its architecture and environment.

NumPy is a general-purpose array-processing package. It provides a high-performance
multidimensional array object, and tools for working with these arrays. It is the fundamental
package for scientific computing with Python. Besides its obvious scientific uses, NumPy can
also be used as an efficient multi-dimensional container of generic data.
How do I install NumPy?
To install NumPy, go to your command prompt and type “pip install numpy”. Once
the installation is completed, go to your IDE (for example, PyCharm) and simply import it
by typing “import numpy as np”.
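A short sketch of the array object and the element-wise mathematical and logical operations described above (the values are illustrative):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])   # a 2-D multidimensional array object
b = np.array([10, 20])

# Mathematical operations act element-wise, with broadcasting:
c = a + b        # [[11 22], [13 24]]

# Logical operations work the same way, yielding boolean arrays:
mask = a > 2     # [[False False], [ True  True]]

print(c)
print(mask)
```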
2.2.4.3. Argparse
argparse — Parser for command-line options, arguments and sub-commands.

The argparse module makes it easy to write user-friendly command-line interfaces. The
program defines what arguments it requires, and argparse will figure out how to parse those
out of sys.argv. The argparse module also automatically generates help and usage messages
and issues errors when users give the program invalid arguments.
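A minimal sketch of how the detector's command-line options (visible in the code in Chapter 5) could be declared; the argument list is passed explicitly here for illustration, where `parse_args()` would normally read `sys.argv`:

```python
import argparse

ap = argparse.ArgumentParser(description="YOLO object detection")
ap.add_argument("-i", "--image", required=True,
                help="path to input image")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
                help="minimum probability to filter weak detections")

# parse_args normally reads sys.argv; here we supply the list directly
args = vars(ap.parse_args(["--image", "images/test.jpg"]))
print(args["image"], args["confidence"])   # images/test.jpg 0.5
```

argparse converts `--confidence` to a float automatically and fills in the default when the flag is omitted, which is exactly the help/usage and validation behaviour described above.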


2.2.4.4. OpenCV-Python

OpenCV (Open Source Computer Vision Library) is a library of programming


functions mainly aimed at real-time computer vision. Originally developed by Intel, it was
later supported by Willow Garage then Itseez (which was later acquired by Intel). The library
is cross-platform and free for use under the open-source BSD license.

Python is a general-purpose programming language started by Guido van Rossum, which
became very popular in a short time, mainly because of its simplicity and code readability. It
enables the programmer to express ideas in fewer lines of code without reducing
readability.

And the support of NumPy makes the task even easier. NumPy is a highly optimized library
for numerical operations with a MATLAB-style syntax. All the OpenCV array structures
are converted to and from NumPy arrays. So whatever operations you can do in NumPy, you
can combine with OpenCV, which increases the number of weapons in your arsenal. Besides
that, several other libraries, like SciPy and Matplotlib, which support NumPy, can be used
with it.

So OpenCV-Python is an appropriate tool for fast prototyping of computer vision problems.


OpenCV provides a set of tutorials which will guide you through the various functions
available in OpenCV-Python. This guide is mainly focused on the OpenCV 3.x version
(although most of the tutorials will also work with OpenCV 2.x).

Prior knowledge of Python and NumPy is required before starting, because they won’t be
covered in this guide. In particular, a good knowledge of NumPy is a must to write optimized
code in OpenCV-Python.

3. SYSTEM ANALYSIS AND REQUIREMENTS
3.1. FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase and business proposal is put forth
with a very general plan for the project and some cost estimates. During system
analysis, the feasibility study of the proposed system is to be carried out. This is to
ensure that the proposed system is not a burden to the company. For feasibility
analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are:

♦ ECONOMICAL FEASIBILITY
♦ TECHNICAL FEASIBILITY
♦ SOCIAL FEASIBILITY
3.1.1. ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on
the organization. The amount of funds that the company can pour into the research and
development of the system is limited. The expenditures must be justified. Thus the
developed system was well within the budget, and this was achieved because most of the
technologies used are freely available. Only the customized products had to be
purchased.
3.1.2. TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on
the available technical resources, as this would lead to high demands being placed on
the client. The developed system must have modest requirements, as only minimal or
no changes are required for implementing this system.
3.1.3. SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not
feel threatened by the system, but must instead accept it as a necessity. The level of
acceptance by the users solely depends on the methods that are employed to educate
the users about the system and to make them familiar with it. Their level of confidence
must be raised so that they are also able to make some constructive criticism, which is
welcomed, as they are the final users of the system.
3.2. SOFTWARE AND HARDWARE REQUIREMENTS
3.2.1. Hardware Requirements
 System : Pentium IV 2.4 GHz or Above
 Hard Disk :80 GB.
 WebCam : Any standard webcam
 Monitor : 15 VGA Colour
 Ram : 2 GB
3.2.2. Software Requirements
 OS : Windows XP Professional/Vista/7/8/8.1 or Linux
 Front End : python, tkinter
 Tool : NetBeans
3.3. PERFORMANCE REQUIREMENTS
Performance is measured in terms of the output provided by the application.
Requirement specification plays an important part in the analysis of a system. Only
when the requirement specifications are properly given is it possible to design a
system which will fit into the required environment. It rests largely with the users of the
existing system to give the requirement specifications, because they are the people
who finally use the system. The requirements have to be known during the initial
stages so that the system can be designed according to them. It is very difficult to
change a system once it has been designed, and on the other hand, designing a
system which does not cater to the requirements of the user is of no use.
The requirement specification for any system can be broadly stated as given below:
 The system should be able to interface with the existing system.
 The system should be accurate.
 The system should be better than the existing system.
 The existing system is completely dependent on the user to perform all the
duties.

4. SOFTWARE DESIGN

4.1. INTRODUCTION

Software design is the process or art of defining the architecture, components,
modules, interfaces, and data for a system to satisfy specified requirements. One could
see it as the application of systems theory to product development. There is some
overlap and synergy with the disciplines of systems analysis, systems architecture and
systems engineering.

4.2 UML Diagrams

Unified Modeling Language

The Unified Modeling Language allows the software engineer to express an analysis
model using a modeling notation that is governed by a set of syntactic, semantic and
pragmatic rules.

A UML system is represented using five different views that describe the system from
distinctly different perspectives. Each view is defined by a set of diagrams, as
follows.
 User Model View
This view represents the system from the user’s perspective.
The analysis representation describes a usage scenario from the end-user’s
perspective.
 Structural Model View
In this model, the data and functionality are viewed from inside the
system. This model view models the static structures.
 Behavioral Model View
It represents the dynamic (behavioral) aspects of the system, depicting the
interactions or collaborations between the various structural elements described in the
user model and structural model views.
 Implementation Model View
In this view, the structural and behavioral aspects of the system are represented as
they are to be built.
 Environmental Model View

In this view, the structural and behavioral aspects of the environment in which the
system is to be implemented are represented.

UML is specifically constructed through two different domains:

● UML analysis modeling, which focuses on the user model and structural model
views of the system.
● UML design modeling, which focuses on the behavioral modeling,
implementation modeling and environmental model views.

Use case diagrams represent the functionality of the system from a user’s point of
view. Use cases are used during requirements elicitation and analysis to represent the
functionality of the system. Use cases focus on the behavior of the system from an
external point of view.

Actors are external entities that interact with the system. Examples of actors include
users like an administrator, a bank customer, etc., or another system like a central
database.

4.2.1. CLASS DIAGRAM

The class diagram is the main building block of object-oriented modeling. It is used
both for general conceptual modeling of the application and for detailed modeling,
translating the models into programming code. Class diagrams can also be used for
data modeling. The classes in a class diagram represent both the main objects and
interactions in the application and the classes to be programmed.

In the diagram, classes are represented as boxes which contain three parts:

● The upper part holds the name of the class.
● The middle part contains the attributes of the class.
● The bottom part gives the methods or operations the class can undertake.

Fig: 4.2.1 Class Diagram
4.2.2. USE CASE DIAGRAM
A use case diagram at its simplest is a representation of a user's interaction with the
system, depicting the specifications of a use case. A use case diagram can portray
the different types of users of a system and the various ways that they interact with the
system. This type of diagram is typically used in conjunction with the textual use case
and will often be accompanied by other types of diagrams as well.

Fig: 4.2.2 Use Case Diagram

4.2.3. SEQUENCE DIAGRAM
A sequence diagram is a kind of interaction diagram that shows how processes
operate with one another and in what order. It is a construct of a Message Sequence
Chart. A sequence diagram shows object interactions arranged in time sequence. It
depicts the objects and classes involved in the scenario and the sequence of messages
exchanged between the objects needed to carry out the functionality of the scenario.
Sequence diagrams are typically associated with use case realizations in the logical view of the system.

Fig: 4.2.3.1 Sequence diagram-1

Fig: 4.2.3.2 Sequence Diagram-2

4.2.4. ACTIVITY DIAGRAM

Activity diagram is another important diagram in UML to describe the dynamic aspects of
the system. It is basically a flow chart representing the flow from one activity to
another. An activity can be described as an operation of the system, so the
control flow is drawn from one operation to another.

Fig: 4.2.4 Activity Diagram

5. CODING TEMPLATES/CODE:

5.1. CODE TEMPLATE:

APP CODE:

from views.AuthView import AuthView
from views.DetectionView import DetectionView

class MyApp:

    def run(self):
        av = AuthView()
        av.transfer_control = self.detector
        av.load()

    def detector(self):
        dv = DetectionView()
        dv.load()

app = MyApp()
app.run()

CONTROLLERS

AUTHCONTROLLER

from models.AuthModel import AuthModel

class AuthController:

    def login(self, username, password):
        if len(username) == 0:
            message = "Username cannot be empty"
            return message
        if len(password) == 0:
            message = "Password cannot be empty"
            return message
        am = AuthModel()
        result = am.getUser(username, password)
        if result:
            message = 1
        else:
            message = 'User not found'
        return message

    def register(self, name, phone, email, username, password, role):
        am = AuthModel()
        result = am.createUser(name, phone, email, username, password, role)
        if result:
            print("Successfully inserted")
            message = 'Successfully created the user. You can login now'
            return message
        else:
            print("Some problem")
            message = 'There is some issue in storing the data, kindly retry'
            return message

DETECT

# USAGE
# python yolo.py --image images/baggage_claim.jpg --yolo yolo-coco

# import the necessary packages
# from cap1 import cap1
import numpy as np
import argparse
import time
from cv2 import cv2
import os

class detect:

    def load(self, img):
        # print("calling fun from prev")
        # self.l = c.capture(self)
        # img = cv2.imread("capic.jpg")

        # construct the argument dictionary (the original argparse
        # version is kept below for reference)
        # ap = argparse.ArgumentParser()
        # ap.add_argument("-i", "--image", required=False,
        #     help="path to input image")
        # ap.add_argument("-y", "--yolo", required=True,
        #     help="base path to YOLO directory")
        # ap.add_argument("-c", "--confidence", type=float, default=0.5,
        #     help="minimum probability to filter weak detections")
        # ap.add_argument("-t", "--threshold", type=float, default=0.3,
        #     help="threshold when applying non-maxima suppression")
        # args = vars(ap.parse_args())
        args = {}
        args['yolo'] = 'yolo-coco'
        args['image'] = img
        args['confidence'] = 0.5
        args['threshold'] = 0.3

        # load the COCO class labels our YOLO model was trained on
        # labelsPath = os.path.sep.join([args["yolo"], "coco.names"])
        labelsPath = "C:\\Users\\SharatKumar\\OneDrive\\Desktop\\pro\\controllers\\yolo-coco\\coco.names"
        LABELS = open(labelsPath).read().strip().split("\n")

        # initialize a list of colors to represent each possible class label
        np.random.seed(42)
        COLORS = np.random.randint(0, 255, size=(len(LABELS), 3),
            dtype="uint8")

        # derive the paths to the YOLO weights and model configuration
        # weightsPath = os.path.sep.join([args["yolo"], "yolov3.weights"])
        # configPath = os.path.sep.join([args["yolo"], "yolov3.cfg"])
        weightsPath = "C:\\Users\\vrrre\\OneDrive\\Desktop\\pro\\controllers\\yolo-coco\\yolov3.weights"
        configPath = "C:\\Users\\vrrre\\OneDrive\\Desktop\\pro\\controllers\\yolo-coco\\yolov3.cfg"

        # load our YOLO object detector trained on the COCO dataset (80 classes)
        # print("[INFO] loading YOLO from disk...")
        net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)

        # load our input image and grab its spatial dimensions
        image = cv2.imread(args["image"])
        (H, W) = image.shape[:2]

        # determine only the *output* layer names that we need from YOLO
        ln = net.getLayerNames()
        ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]

        # construct a blob from the input image and then perform a forward
        # pass of the YOLO object detector, giving us our bounding boxes and
        # associated probabilities
        blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
            swapRB=True, crop=False)
        net.setInput(blob)
        start = time.time()
        layerOutputs = net.forward(ln)
        end = time.time()

        # show timing information on YOLO
        # print("[INFO] YOLO took {:.6f} seconds".format(end - start))

        # initialize our lists of detected bounding boxes, confidences,
        # class IDs, and detected object names, respectively
        boxes = []
        confidences = []
        classIDs = []
        objects = []

        # loop over each of the layer outputs
        for output in layerOutputs:
            # loop over each of the detections
            for detection in output:
                # extract the class ID and confidence (i.e., probability)
                # of the current object detection
                scores = detection[5:]
                classID = np.argmax(scores)
                confidence = scores[classID]

                # filter out weak predictions by ensuring the detected
                # probability is greater than the minimum probability
                if confidence > args["confidence"]:
                    # scale the bounding box coordinates back relative to the
                    # size of the image, keeping in mind that YOLO actually
                    # returns the center (x, y)-coordinates of the bounding
                    # box followed by the box's width and height
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")

                    # use the center (x, y)-coordinates to derive the top
                    # and left corner of the bounding box
                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))

                    # update our list of bounding box coordinates,
                    # confidences, and class IDs
                    boxes.append([x, y, int(width), int(height)])
                    confidences.append(float(confidence))
                    classIDs.append(classID)

        # apply non-maxima suppression to suppress weak, overlapping
        # bounding boxes
        idxs = cv2.dnn.NMSBoxes(boxes, confidences, args["confidence"],
            args["threshold"])

        # ensure at least one detection exists
        if len(idxs) > 0:
            # loop over the indexes we are keeping
            for i in idxs.flatten():
                # extract the bounding box coordinates
                (x, y) = (boxes[i][0], boxes[i][1])
                (w, h) = (boxes[i][2], boxes[i][3])

                # draw a bounding box rectangle and label on the image
                color = [int(c) for c in COLORS[classIDs[i]]]
                cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
                text = "{}: {:.4f}".format(LABELS[classIDs[i]],
                    confidences[i])
                # print("Object Detected is -", LABELS[classIDs[i]])
                objects.append(LABELS[classIDs[i]])
                cv2.putText(image, text, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

        # save the output image and return the detected object names
        cv2.imwrite("saved.jpg", image)
        return objects
        # cv2.waitKey(0)

# d = detect()
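The center-to-corner conversion inside the detection loop above can be exercised on its own. A minimal sketch in pure NumPy, with a made-up four-element detection vector standing in for real YOLO output (the function name `center_to_corner` is introduced here only for illustration):

```python
import numpy as np

def center_to_corner(detection, W, H):
    """Convert YOLO's normalized (centerX, centerY, width, height)
    into a pixel-space [x, y, w, h] top-left box, as done in detect.load."""
    box = detection[0:4] * np.array([W, H, W, H])
    (centerX, centerY, width, height) = box.astype("int")
    x = int(centerX - (width / 2))
    y = int(centerY - (height / 2))
    return [x, y, int(width), int(height)]

# Hypothetical detection: center at the middle of a 416x416 image,
# box covering half the width and height.
print(center_to_corner(np.array([0.5, 0.5, 0.5, 0.5]), 416, 416))
# -> [104, 104, 208, 208]
```

Multiplying by [W, H, W, H] undoes YOLO's normalization, and subtracting half the width and height moves from the box center to its top-left corner, which is the convention cv2.rectangle expects.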

MODEL

AUTHMODEL:

from lib.db import *

class AuthModel:

    def __init__(self):
        self.conn = connect('app.db')

    def getUser(self, username, password):
        query = f"SELECT * FROM users WHERE username='{username}' and password='{password}'"
        result = fetchone(self.conn, query)
        print(result)
        return result

    def createUser(self, name, phone, email, username, password, role):
        query = f"INSERT INTO users (name,phone,email,username,password,role) VALUES ('{name}',{phone},'{email}','{username}','{password}','{role}')"
        try:
            insert(self.conn, query)
            return 1
        except:
            print("Some database error")
            return 0

if __name__ == '__main__':
    am = AuthModel()
    am.createUser('Rajesh', 7777777777, '[email protected]', 'rajesh', 'rajesh123', 'student')
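The f-string queries above interpolate user input directly into SQL, which leaves the login open to SQL injection. A safer variant uses parameterized queries; sketched here with the standard library's sqlite3 module and an in-memory database, since the project's lib.db wrapper is not shown (the helper names and sample data below are assumptions for this sketch):

```python
import sqlite3

# In-memory database standing in for app.db (assumption for this sketch).
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (name, phone, email, username, password, role)")

def create_user(conn, name, phone, email, username, password, role):
    # '?' placeholders let sqlite3 escape the values; no string interpolation.
    conn.execute(
        "INSERT INTO users (name, phone, email, username, password, role) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (name, phone, email, username, password, role))

def get_user(conn, username, password):
    cur = conn.execute(
        "SELECT * FROM users WHERE username = ? AND password = ?",
        (username, password))
    return cur.fetchone()

create_user(conn, 'Rajesh', 7777777777, 'rajesh@example.com',
            'rajesh', 'rajesh123', 'student')
print(get_user(conn, 'rajesh', 'rajesh123'))  # the inserted row
print(get_user(conn, "' OR '1'='1", 'x'))     # None: injection attempt fails
```

With the f-string version, a username of `' OR '1'='1` would match every row; with placeholders it is treated as a literal string and matches nothing.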

VIEW

AUTHVIEW:

from tkinter import *
from tkinter import ttk
from tkinter import messagebox
from controllers.AuthController import AuthController

class AuthView:

    def load(self):
        self.window = Tk()
        self.window.title("object detection Application")
        self.window.geometry('320x300')
        tab_control = ttk.Notebook(self.window)
        login_tab = Frame(tab_control, bg="lightpink", width=350, height=300,
            padx=10, pady=10)
        register_tab = Frame(tab_control, bg="lightpink", padx=10, pady=10)
        tab_control.add(login_tab, text="Login")
        tab_control.add(register_tab, text="Register")
        self.login(login_tab)
        self.register(register_tab)
        tab_control.grid()
        self.window.mainloop()

    def login(self, login_tab):
        window = login_tab
        ul = Label(window, text="Username", bg="lightblue", fg="darkblue", padx=10)
        ul.grid(row=0, column=0, padx=10, pady=10)
        ue = Entry(window, width=20)
        ue.grid(row=0, column=1)
        ue.focus()
        pl = Label(window, text="Password", bg="lightblue", fg="darkblue", padx=10)
        pl.grid(row=1, column=0)
        pe = Entry(window, show='*', width=20)
        pe.grid(row=1, column=1)
        b = Button(window, text="Login", bg="lightblue", fg="red",
            command=lambda: self.loginControl(ue.get(), pe.get()),
            padx=10, pady=10)
        b.grid(row=2, column=1, padx=10, pady=20)

    def loginControl(self, username, password):
        ac = AuthController()
        # print('Username', username)
        # print('Password', password)
        message = ac.login(username, password)
        if message == 1:
            self.window.destroy()
            self.transfer_control()
        else:
            messagebox.showinfo('Alert', message)

    def register(self, register_tab):
        window = register_tab
        # Create name label and entry
        nl = Label(window, text="Name", bg="lightblue", fg="darkblue", padx=10)
        nl.grid(row=0, column=0, padx=10, pady=10)
        ne = Entry(window, width=20)
        ne.grid(row=0, column=1)
        ne.focus()
        # Create email label and entry
        el = Label(window, text="Email", bg="lightblue", fg="darkblue", padx=10)
        el.grid(row=1, column=0, padx=10, pady=10)
        ee = Entry(window, width=20)
        ee.grid(row=1, column=1)
        # Create phone label and entry
        phl = Label(window, text="Phone", bg="lightblue", fg="darkblue", padx=10)
        phl.grid(row=2, column=0, padx=10, pady=10)
        phe = Entry(window, width=20)
        phe.grid(row=2, column=1)
        # Create username label and entry
        ul = Label(window, text="Username", bg="lightblue", fg="darkblue", padx=10)
        ul.grid(row=3, column=0, padx=10, pady=10)
        ue = Entry(window, width=20)
        ue.grid(row=3, column=1)
        # Create password label and entry
        pl = Label(window, text="Password", bg="lightblue", fg="darkblue", padx=10)
        pl.grid(row=4, column=0, padx=10, pady=10)
        pe = Entry(window, show='*', width=20)
        pe.grid(row=4, column=1)
        # Create the register button
        b = Button(window, text="Register", bg="lightblue", fg="red",
            command=lambda: self.registerControl(ne.get(), phe.get(), ee.get(),
                ue.get(), pe.get()),
            padx=10, pady=10)
        b.grid(row=5, column=1, padx=10, pady=20)

    def registerControl(self, name, phone, email, username, password):
        ac = AuthController()
        message = ac.register(name, phone, email, username, password, 'student')
        if message:
            messagebox.showinfo('Alert', message)

av = AuthView()

DETECTIONVIEW

6. SYSTEM TESTING
6.1. INTRODUCTION

The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, sub-assemblies, assemblies and/or a finished product. It
is the process of exercising software with the intent of ensuring that the software
system meets its requirements and user expectations and does not fail in an
unacceptable manner. There are various types of test, and each test type addresses a
specific testing requirement.

6.2 TYPES OF TESTING

6.2.1 Unit testing


Unit testing involves the design of test cases that validate that the internal program
logic is functioning properly and that program inputs produce valid outputs. All
decision branches and internal code flow should be validated. It is the testing of
individual software units of the application; it is done after the completion of an
individual unit and before integration. This is structural testing that relies on knowledge
of the unit's construction and is invasive. Unit tests perform basic tests at the component
level and test a specific business process, application, and/or system configuration. Unit
tests ensure that each unique path of a business process performs accurately to the
documented specifications and contains clearly defined inputs and expected results.
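As a concrete illustration for this project, the input validation at the top of AuthController.login can be unit tested in isolation. A minimal sketch (the validation logic is restated inline as a standalone function, an assumption made here so the test does not need the database-backed AuthModel):

```python
def validate_credentials(username, password):
    """Mirrors the empty-field checks at the top of AuthController.login."""
    if len(username) == 0:
        return "Username cannot be empty"
    if len(password) == 0:
        return "Password cannot be empty"
    return None  # validation passed; the controller would query AuthModel next

# Unit tests: each decision branch of the validation is exercised independently.
assert validate_credentials("", "secret") == "Username cannot be empty"
assert validate_credentials("ashwini", "") == "Password cannot be empty"
assert validate_credentials("ashwini", "secret") is None
print("all validation branches pass")
```

Each assertion pins down one branch of the unit's logic against a documented expected result, which is exactly the per-path coverage described above.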
6.2.2 Integration testing
Integration tests are designed to test integrated software components to determine if
they actually run as one program. Testing is event driven and is more concerned with
the basic outcome of screens or fields. Integration tests demonstrate that although the
components were individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is specifically
aimed at exposing the problems that arise from the combination of components.
6.2.3 Functional test

Functional tests provide systematic demonstrations that the functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.

Functional testing is centered on the following items:

 Valid Input : identified classes of valid input must be accepted.
 Invalid Input : identified classes of invalid input must be rejected.
 Functions : identified functions must be exercised.
 Output : identified classes of application outputs must be exercised.
 Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage of identified
business process flows, data fields, predefined processes, and successive processes
must be considered for testing. Before functional testing is complete, additional tests
are identified and the effective value of current tests is determined.

6.2.4 System Test

System testing ensures that the entire integrated software system meets requirements.
It tests a configuration to ensure known and predictable results. An example of system
testing is the configuration-oriented system integration test. System testing is based on
process descriptions and flows, emphasizing pre-driven process links and integration
points.

6.2.5 White Box Testing

White Box Testing is testing in which the software tester has knowledge of
the inner workings, structure and language of the software, or at least its purpose. It is
used to test areas that cannot be reached from a black box level.
6.2.6 Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests, like most
other kinds of tests, must be written from a definitive source document, such as a
specification or requirements document. It is testing in which the software under test
is treated as a black box: you cannot "see" into it. The test provides inputs and
responds to outputs without considering how the software works.

6.2.7 Acceptance Testing


User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

6.3 TEST APPROACH


Testing can be done in two ways

 Bottom up approach
 Top down approach
Bottom up Approach

Testing can be performed starting from the smallest and lowest-level modules,
proceeding one at a time. For each module in bottom-up testing, a short program
executes the module and provides the needed data, so that the module is asked to
perform the way it will when embedded within the larger system. When bottom-level
modules are tested, attention turns to those on the next level that use the lower-level
ones; they are tested individually and then linked with the previously examined lower-
level modules.

Top down approach

This type of testing starts from upper-level modules. Since the detailed activities
usually performed in the lower-level routines are not provided, stubs are written. A
stub is a module shell called by an upper-level module that, when reached properly,
returns a message to the calling module indicating that proper interaction occurred.
No attempt is made to verify the correctness of the lower-level module.
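In this project, a top-down test of the login flow could replace the database-backed AuthModel with a stub. A hedged sketch under that assumption (the stub class and the standalone `login` function below are hypothetical restatements, not the project's actual test code):

```python
class StubAuthModel:
    """Stub standing in for the real AuthModel: no database, canned answers."""
    def getUser(self, username, password):
        # Report success only for one known pair, without touching app.db.
        if (username, password) == ('test', 'test123'):
            return (1, 'Test User')
        return None

def login(model, username, password):
    """Upper-level logic under test, mirroring AuthController.login."""
    if len(username) == 0:
        return "Username cannot be empty"
    if len(password) == 0:
        return "Password cannot be empty"
    return 1 if model.getUser(username, password) else "User not found"

stub = StubAuthModel()
print(login(stub, 'test', 'test123'))  # 1: the stub reports a match
print(login(stub, 'ghost', 'wrong'))   # "User not found"
```

The upper-level logic is exercised fully while the stub merely confirms that the interaction occurred; no attempt is made to verify the real lower-level module, exactly as described above.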


Test Results: All the test cases mentioned above passed successfully. No defects
were encountered.

7. RESULTS AND VALIDATION
7.1 OUTPUT SCREENS

Fig: 7.1.1 Login details

Fig: 7.1.2 Registration

Fig: 7.1.3 Login Credentials

Fig: 7.1.4 Results

8. Conclusion

This paper shows that object detection is a technology that falls under the broader
domain of computer vision. It deals with identifying and tracking objects present in
images and videos. Object detection has multiple applications such as face detection,
vehicle detection, pedestrian counting, self-driving cars, etc.

The proposed system performs object detection using OpenCV with Python, and the
deep learning approach used is YOLO.

It rules out the problems faced during installation, as OpenCV is very easy to install in
an IDE.

The code is simple and can be understood by anyone who has a basic knowledge
of Python.

9. Bibliography
 https://www.python.org/ : installation setup for the required packages
 https://www.pyimagesearch.com/2018/11/12/yolo-object-detection-with-opencv/ : code information

10. References
https://www.pyimagesearch.com/2018/11/12/yolo-object-detection-with-opencv/
https://circuitdigest.com/tutorial/real-life-object-detection-using-opencv-python-detecting-objects-in-live-video
https://www.diva-portal.org/smash/get/diva2:1414033/FULLTEXT02
