Final Document
Mini project submitted in partial fulfilment of the requirement for the award of the degree of
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
November-2021
Geethanjali College of Engineering and Technology
(UGC Autonomous)
(Affiliated to J.N.T.U.H, Approved by AICTE, New Delhi)
Accredited by NBA
CERTIFICATE
This is to certify that the B.Tech Mini Project report entitled “REAL WORLD OBJECT
DETECTION” is a bonafide work done by Ratnala Ashwini (19R15A0512), Ramavath
Lavanya (18R11A05D4) and Rachakonda Gopi Krishna (18R11A05D3) in partial fulfilment
of the requirements for the award of the degree of Bachelor of Technology in “Computer
Science and Engineering” from Jawaharlal Nehru Technological University, Hyderabad
during the year 2020-2021.
External Examiner
Geethanjali College of Engineering and Technology
(UGC Autonomous)
(Affiliated to JNTUH, Approved by AICTE, New Delhi)
Cheeryal (V), Keesara (M), Medchal Dist. - 501 301
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
Accredited by NBA
This is a record of bonafide work carried out by me in college and the results embodied in
this project have not been reproduced or copied from any source. The results embodied in this
project report have not been submitted to any other University or Institute for the award of
any other degree or diploma.
ACKNOWLEDGEMENT
Firstly, we thank and express our sincere gratitude to DR. A SREE LAKSHMI, HOD, CSE
Department, Geethanjali College of Engineering and Technology, for her invaluable help and
support in successfully completing our mini project.
Moreover, we express our gratitude to our guide, Assistant Prof. Mr. K VIJAY KUMAR,
for his continued support throughout our endeavour to complete this project successfully.
We would like to express our sincere gratitude to our Principal Dr. S. UDAY KUMAR for
providing the necessary infrastructure to complete our project.
We convey our gratitude to our Chairman, Mr. G. RAVINDER REDDY, for his invaluable
support and encouragement for propelling our innovation forward.
Finally, we would like to express our heartfelt gratitude to our parents and all our peers
for their support and encouragement in helping us achieve our goals.
ABSTRACT
Object Detection is a technology that falls under the broader domain of computer vision. It
deals with identifying and tracking objects present in images and videos. Object detection
has multiple applications such as face detection, vehicle detection, pedestrian counting,
self-driving cars, etc.
There are two major objectives of object detection: to identify all the objects present in an
image, and to filter out the target object.
In this project, only authorized users can open the application. When the user clicks the
Start button, the object is captured and, after detection, the image is saved as “saved.jpg”.
When the user clicks the Capture button, the image is saved in the images folder and an alert
message is popped up showing the status of the saved image.
List of Figures
Names of Figures Page no.
Fig: 2.2.1.1 Features of python 03
Fig: 2.2.1.2 Interpretation of python 03
Fig: 2.2.1.3 Application of python 04
Fig: 2.2.2.1 MVC Architecture 06
Fig: 4.2.1 Class diagram 14
Fig: 4.2.2 Use case diagram 14
Fig: 4.2.3.1 Sequence diagram-1 15
Fig: 4.2.3.2 Sequence diagram-2 16
Fig: 4.2.4 Activity diagram 16
Fig: 7.1 Output screenshots 35
INDEX
TITLE PAGE NO.
1. Introduction
1.1. Existing System 01
1.2. Proposed System 01
2. Literature Survey
2.1. Project Literature 02
2.2. Introduction To Python 02
2.2.1. Python Technology 02
2.2.2. MVC Architecture 05
2.2.3. Tkinter 06
2.2.4. Libraries Specific To Project 07
2.2.4.1. Imgutils 07
2.2.4.2. Numpy 07
2.2.4.3. Argparse 08
2.2.4.4. Opencv 08
3. System Analysis And Requirements
3.1. Feasibility Study 10
3.1.1. Economical Feasibility 10
3.1.2. Technical Feasibility 10
3.1.3. Social Feasibility 10
3.2. Software And Hardware Requirements 11
3.2.1. Hardware Requirements 11
3.2.2. Software Requirements 11
3.3. Performance Requirements 11
4. Software Design
4.1. Introduction 12
4.2. Uml Diagrams 12
4.2.1. Class Diagram 13
4.2.2. Use Case Diagram 14
4.2.3. Sequence Diagram 15
4.2.4. Activity Diagram 16
5. Coding Templates / Code
5.1. App Code 17
5.2. Controller Code 18
5.3. Model Code 24
5.4. View Code 25
6. System Testing
6.1. Introduction 31
6.2. Types Of Tests 31
6.2.1. Unit Testing 31
6.2.2. Integration Testing 31
6.2.3. Functional Test 32
6.2.4. System Test 32
6.2.5. White Box Testing 32
6.2.6. Black Box Testing 32
6.2.7. Acceptance Testing 33
6.3. Test Approach 33
7. Results And Validation
7.1. Output Screens 35
8. Conclusion 39
9. Bibliography 40
10. References 40
1. INTRODUCTION
Object Detection is a technology that falls under the broader domain of computer
vision. It deals with identifying and tracking objects present in images and videos.
Object detection has multiple applications such as face detection, vehicle detection,
pedestrian counting, self-driving cars, etc.
The following are some of the commonly used deep learning approaches for
object detection:
ImageAI
Single Shot Detectors
YOLO
Region-based Convolutional Neural Networks
Disadvantages:
Advantages:
2. LITERATURE SURVEY
2.1. Project literature
Bhumika Gupta (2018) proposed that object detection is a well-known computer technology
connected with computer vision and image processing that focuses on detecting objects or
their instances of a certain class in digital images and videos. The study presents various
basic concepts used in object detection, making use of the OpenCV library with Python 2.7
to improve the efficiency and accuracy of object detection.
Kartik Umesh Sharma (2019) proposed that an object detection system finds objects of the real
world present either in a digital image or a video, where the object can belong to any class of
objects, namely humans, cars, etc. That paper presents a review of the various techniques that
are used to detect an object, localise an object, categorise an object, extract features, and
obtain appearance information in images and videos.
2.2. Introduction To Python
2.2.1. Python Technology
Python's simple, easy-to-learn syntax emphasizes readability and therefore reduces the cost
of program maintenance. Python supports modules and packages, which encourages program
modularity and code reuse.
Features of Python:
1. Easy
2. Expressive
3. High Level
4. Object Oriented
5. Portable
6. Embeddable
7. Extensible
8. Interpreted
9. GUI Programming
10. Dynamically Typed
11. Large Standard Library
Python is an interpreted language: the source code you write is first compiled into bytecode,
a term which is used in some other languages too, like Java, and which refers to a set of
lower-level instructions that are meant to be understood by some “virtual” machine. The set of
instructions is lower-level in the sense that it is not meant to be understood by the user
(in this context, the programmer). The virtual machine is then responsible for converting
(“interpreting”) the bytecode to an even lower-level set of instructions which are meant to be
understood by the machine.
The Python bytecode generated for code that you've written is interpreted by the Python
Virtual Machine (PVM). As long as two platforms have the same version of Python installed
on them, the bytecode generated for a particular program will be the same on those machines.
This bytecode will run on any number of platforms which have the same version of the PVM.
In essence, the Python bytecode and the PVM act as a gateway between the user (the
programmer) and the machine on which the code is written and run. This makes Python code
platform-independent.
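As a small illustration of this, the standard library's dis module can display the bytecode
instructions that the PVM interprets for a given function (the function below is only an
example):

import dis

def add(a, b):
    return a + b

# Print the bytecode instructions the PVM interprets for this function
dis.dis(add)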
Python is used for developing desktop GUI applications, websites and web applications. Also,
Python, as a high-level programming language, allows you to focus on the core functionality
of the application by taking care of common programming tasks. Common application areas
include:
1) Web Applications
2) Desktop GUI Applications
3) Software Development
4) Scientific and Numeric
5) Business Applications
6) Console Based Application
7) Audio or Video based Applications
8) 3D CAD Applications
2.2.2. MVC Architecture
MVC stands for "Model-View-Controller." MVC is an application design model comprised
of three interconnected parts. The MVC model or "pattern" is commonly used for developing
modern user interfaces. It provides the fundamental pieces for designing programs for
desktop or mobile, as well as web applications.
MVC is a widely used software architectural pattern in GUI-based applications. It has three
components, namely a model that deals with the business logic, a view for the user interface,
and a controller to handle the user input, manipulate data, and update the view. The following
is a simplified schematic that shows the basic interactions between the various components:
Model:
The model component of the MVC architecture represents the data of the application. It also
represents the core business logic that acts on such data. The model has no knowledge of the
view or the controller. When the data in the model changes, it just notifies its listeners about
this change. In this context, the controller object is its listener.
View:
The view component is the user interface. It is responsible for displaying the current state of
the model to the user, and also provides a means for the user to interact with the application.
View represents the HTML files, which interact with the end user.
Controller:
It acts as an intermediary between the view and the model. It listens to the events triggered
by the view and queries the model accordingly. The controller interacts with the model, which
fetches the records that are displayed to the end user.
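A minimal Python sketch of this interaction is given below; the class names and the
console-based view are purely illustrative and are not the project's actual code:

# Minimal MVC sketch (illustrative only)
class Model:
    def __init__(self):
        self.data = []
        self.listener = None                   # the controller registers itself here

    def add_item(self, item):
        self.data.append(item)
        if self.listener:
            self.listener.model_changed(self)  # notify the listener of the change

class View:
    def show(self, data):
        print("Current items:", data)          # render the current state of the model

class Controller:
    def __init__(self, model, view):
        self.model = model
        self.view = view
        model.listener = self                  # the controller listens to the model

    def model_changed(self, model):
        self.view.show(model.data)             # update the view with the new state

    def handle_input(self, item):
        self.model.add_item(item)              # user input manipulates the data

model, view = Model(), View()
controller = Controller(model, view)
controller.handle_input("apple")               # prints: Current items: ['apple']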
2.2.3. Tkinter
Tkinter is the standard GUI library for Python. Python, when combined with Tkinter, provides
a fast and easy way to create GUI applications. Tkinter provides a powerful object-oriented
interface to the Tk GUI toolkit.
Creating a GUI application using Tkinter is an easy task. All you need to do is perform the
following steps (a minimal example follows at the end of this subsection):
Import the Tkinter module.
Create the GUI application's main window.
Add one or more widgets to the main window.
Enter the main event loop to take action against each event triggered by the user.
Tkinter has several strengths. It’s cross-platform, so the same code works on
Windows, macOS, and Linux. Visual elements are rendered using native operating
system elements, so applications built with Tkinter look like they belong on the
platform where they’re run.
Although Tkinter is considered the de-facto Python GUI framework, it’s not without
criticism. One notable criticism is that GUIs built with Tkinter look outdated. If you
want a shiny, modern interface, then Tkinter may not be what you’re looking for.
However, Tkinter is lightweight and relatively painless to use compared to other
frameworks. This makes it a compelling choice for building GUI applications in
Python, especially for applications where a modern sheen is unnecessary, and the top
priority is to build something that’s functional and cross-platform quickly.
Tkinter Widgets
Tkinter provides various controls, such as buttons, labels and text boxes used in a GUI
application. These controls are commonly called widgets.
The foundational element of a Tkinter GUI is the window. Windows are the containers in
which all other GUI elements live. These other GUI elements, such as text boxes, labels, and
buttons, are known as widgets. Widgets are contained inside of windows.
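A minimal sketch that follows the steps listed above, creating a window with a label and a
button widget (the text and layout are illustrative):

from tkinter import Tk, Label, Button

# Create the main window, the container in which all widgets live
window = Tk()
window.title("Demo")

# Add widgets to the window
Label(window, text="Hello, Tkinter").grid(row=0, column=0, padx=10, pady=10)
Button(window, text="Quit", command=window.destroy).grid(row=1, column=0, pady=10)

# Enter the main event loop
window.mainloop()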
2.2.4. Libraries Specific To Project
2.2.4.1. Imutils
Imutils is a series of convenience functions to make basic image processing functions such
as translation, rotation, resizing, skeletonization, and displaying Matplotlib images easier
with OpenCV on both Python 2.7 and Python 3.
Installation
Provided you already have NumPy, SciPy, Matplotlib, and OpenCV installed,
the imutils package is completely pip-installable:
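pip install imutils

For example, imutils.resize can resize an image while preserving its aspect ratio (the file
names below are illustrative):

import cv2
import imutils

# Load an image and resize it to a width of 400 pixels, keeping the aspect ratio
image = cv2.imread("capic.jpg")
resized = imutils.resize(image, width=400)
cv2.imwrite("resized.jpg", resized)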
2.2.4.2 Numpy
NumPy, which stands for Numerical Python, is a library consisting of multidimensional array
objects and a collection of routines for processing those arrays. Using NumPy, mathematical
and logical operations on arrays can be performed efficiently. The basics of NumPy, such as
its installation and usage, are outlined below.
NumPy is the fundamental package for scientific computing with Python. Besides its obvious
scientific uses, NumPy can also be used as an efficient multi-dimensional container of
generic data.
How do I install NumPy?
To install Python NumPy, go to your command prompt and type “pip install numpy”. Once
the installation is completed, go to your IDE (For example: PyCharm) and simply import it
by typing: “import numpy as np”
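A small example of the kind of array operations NumPy provides (values chosen only for
illustration):

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])   # a 2-D array with 2 rows and 3 columns
print(a.shape)        # (2, 3)
print(a * 2)          # element-wise multiplication
print(a.sum(axis=0))  # column-wise sums: [5 7 9]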
2.2.4.3. Argparse
argparse — parser for command-line options, arguments and sub-commands.
The argparse module makes it easy to write user-friendly command-line interfaces. The
program defines what arguments it requires, and argparse will figure out how to parse those
out of sys.argv. The argparse module also automatically generates help and usage messages
and issues errors when users give the program invalid arguments.
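A short sketch of how argparse can be used from the command line, mirroring the arguments the
detection script works with (the argument names and defaults here are illustrative):

import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="path to the input image")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
    help="minimum probability to filter weak detections")
ap.add_argument("-t", "--threshold", type=float, default=0.3,
    help="threshold for non-maxima suppression")
args = vars(ap.parse_args())

# args is a plain dictionary, e.g. {'image': 'capic.jpg', 'confidence': 0.5, 'threshold': 0.3}
print(args["image"], args["confidence"], args["threshold"])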
2.2.4.4. OpenCV-Python
Python is a general purpose programming language started by Guido van Rossum, which
became very popular in short time mainly because of its simplicity and code readability. It
enables the programmer to express his ideas in fewer lines of code without reducing any
readability.
And the support of NumPy makes the task easier. NumPy is a highly optimized library for
numerical operations. It gives a MATLAB-style syntax. All the OpenCV array structures are
converted to and from NumPy arrays. So whatever operations you can do in NumPy, you can
combine with OpenCV, which increases the number of weapons in your arsenal. Besides that,
several other libraries like SciPy and Matplotlib, which support NumPy, can be used with it.
Prior knowledge of Python and NumPy is required before starting, because they are not covered
in this guide. In particular, a good knowledge of NumPy is a must to write optimized code in
OpenCV-Python.
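A brief illustration of this interplay: the image returned by cv2.imread is an ordinary NumPy
array, so NumPy operations can be applied to it directly (the file names are illustrative):

import cv2
import numpy as np

image = cv2.imread("saved.jpg")          # returns a NumPy array of shape (H, W, 3)
print(type(image), image.shape)

# OpenCV operation: convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# NumPy operation on the same array: brighten the image by 40 intensity levels
brighter = np.clip(image.astype(np.int16) + 40, 0, 255).astype(np.uint8)

cv2.imwrite("gray.jpg", gray)
cv2.imwrite("brighter.jpg", brighter)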
3. SYSTEM ANALYSIS AND REQUIREMENTS
3.1. FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase and a business proposal is put forth
with a very general plan for the project and some cost estimates. During system
analysis, the feasibility study of the proposed system is to be carried out. This is to
ensure that the proposed system is not a burden to the company. For feasibility
analysis, some understanding of the major requirements for the system is essential.
♦ ECONOMICAL FEASIBILITY
♦ TECHNICAL FEASIBILITY
♦ SOCIAL FEASIBILITY
3.1.1. ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on
the organization. The amount of funds that the company can pour into the research and
development of the system is limited. The expenditures must be justified. Thus the
developed system is well within the budget, and this was achieved because most of the
technologies used are freely available. Only the customized products had to be
purchased.
3.1.2. TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not have a high demand on
the available technical resources, as this would lead to high demands being placed on the
client. The developed system must have modest requirements, as only minimal or no changes
are required for implementing this system.
3.1.3. SOCIAL FEASIBILITY
This aspect of the study is to check the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not
feel threatened by the system, but must instead accept it as a necessity. The level of
acceptance by the users solely depends on the methods that are employed to educate
the user about the system and to make him familiar with it. His level of confidence
must be raised so that he is also able to make some constructive criticism, which is
welcomed, as he is the final user of the system.
3.2. SOFTWARE AND HARDWARE REQUIREMENTS
3.2.1. Hardware Requirements
System : Pentium IV 2.4 GHz or above
Hard Disk : 80 GB
WebCam : Standard webcam
Monitor : 15" VGA colour
RAM : 2 GB
3.2.2. Software Requirements
OS : Windows XP Professional/Vista/7/8/8.1 or Linux
Front End : Python, Tkinter
Tool : NetBeans
3.3. PERFORMANCE REQUIREMENTS
Performance is measured in terms of the output provided by the application.
Requirement specification plays an important part in the analysis of a system. Only
when the requirement specifications are properly given is it possible to design a
system which will fit into the required environment. It rests largely with the users of the
existing system to give the requirement specifications, because they are the people
who finally use the system. This is because the requirements have to be known during
the initial stages so that the system can be designed according to those requirements.
It is very difficult to change the system once it has been designed; on the other hand,
designing a system which does not cater to the requirements of the user is of no use.
The requirement specification for any system can be broadly stated as given below.
The system should be able to interface with the existing system.
The system should be accurate.
The system should be better than the existing system.
The existing system is completely dependent on the user to perform all the
duties.
4. SOFTWARE DESIGN
4.1. INTRODUCTION
The Unified Modeling Language allows the software engineer to express an analysis
model using a modeling notation that is governed by a set of syntactic, semantic and
pragmatic rules.
A UML system is represented using five different views that describe the system from
distinctly different perspectives. Each view is defined by a set of diagrams, which are as
follows.
User Model View
This view represents the system from the user’s perspective.
The analysis representation describes a usage scenario from the end-user’s
perspective.
Structural model view
In this model, the data and functionality are arrived at from inside the
system. This model view models the static structures.
Behavioral Model View
It represents the dynamic (behavioral) aspects of the system, depicting the
interactions among the various structural elements described in the
user model and structural model views.
Implementation Model View
In this view, the structural and behavioral parts of the system are represented as
they are to be built.
Environmental Model View
In this view, the structural and behavioral aspects of the environment in which the
system is to be implemented are represented.
● UML analysis modeling, which focuses on the user model and structural model
views of the system.
● UML design modeling, which focuses on the behavioral modeling,
implementation modeling and environmental model views.
Use case diagrams represent the functionality of the system from a user's point of
view. Use cases are used during requirements elicitation and analysis to represent the
functionality of the system. Use cases focus on the behavior of the system from an
external point of view.
Actors are external entities that interact with the system. Examples of actors include
users like an administrator or a bank customer, or another system like a central
database.
4.2.1. CLASS DIAGRAM
The class diagram is the main building block of object-oriented modeling. It is used
both for general conceptual modeling of the application and for detailed modeling,
translating the models into programming code. Class diagrams can also be used for data
modeling. The classes in a class diagram represent both the main objects and interactions
in the application and the classes to be programmed.
In the diagram, classes are represented with boxes which contain three parts: the class
name, its attributes, and its operations (methods).
Fig: 4.2.1 Class Diagram
4.2.2. USE CASE DIAGRAM
A use case diagram at its simplest is a representation of a user's interaction with the
system and depicting the specifications of a use case. A use case diagram can portray
the different types of users of a system and the various ways that they interact with the
system. This type of diagram is typically used in conjunction with the textual use case
and will often be accompanied by other types of diagrams as well.
4.2.3. SEQUENCE DIAGRAM
A sequence diagram is a kind of interaction diagram that shows how processes
operate with one another and in what order. It is a construct of a Message Sequence
Chart. A sequence diagram shows object interactions arranged in time sequence. It
depicts the objects and classes involved in the scenario and the sequence of messages
exchanged between the objects needed to carry out the functionality of the scenario.
Sequence diagrams are typically associated with use case realizations in the logical view
of the system under development.
Fig: 4.2.3.2 Sequence Diagram-2
5. CODING TEMPLATES/CODE:
APP CODE:
# The import paths below assume a project layout of views/AuthView.py and
# views/DetectionView.py, matching the controllers/ package used elsewhere.
from views.AuthView import AuthView
from views.DetectionView import DetectionView

class MyApp:
    def run(self):
        av = AuthView()
        av.transfer_control = self.detector   # called by the view after a successful login
        av.load()

    def detector(self):
        dv = DetectionView()
        dv.load()

app = MyApp()
app.run()
CONTROLLERS
AUTHCONTROLLER
from models.AuthModel import AuthModel   # assumed module path

class AuthController:
    def login(self, username, password):
        if len(username) == 0:
            message = "Username cannot be empty"
            return message
        if len(password) == 0:
            message = "Password cannot be empty"
            return message
        am = AuthModel()
        result = am.getUser(username, password)
        if result:
            message = 1          # 1 signals a successful login to the view
        else:
            message = "Invalid username or password"   # message text assumed
        return message

    def register(self, name, phone, email, username, password, role):
        am = AuthModel()
        result = am.createUser(name, phone, email, username, password, role)
        if result:
            print("Successfully inserted")
            message = "Registered successfully"        # message text assumed
        else:
            print("Some problem")
            message = "Registration failed"            # message text assumed
        return message
DETECT
# USAGE
# detect().load("capic.jpg")

# import the necessary packages
import numpy as np
import argparse
import time
import cv2
import os

# (missing portions of this listing are reconstructed from the YOLO/OpenCV example
#  cited in the references)
class detect:
    def load(self, img):
        #self.l=c.capture(self)
        #img = cv2.imread("capic.jpg")
        #ap = argparse.ArgumentParser()
        #ap.add_argument("-y", "--yolo", required=True, help="base path to YOLO directory")
        #ap.add_argument("-c", "--confidence", type=float, default=0.5)
        #ap.add_argument("-t", "--threshold", type=float, default=0.3)
        #args = vars(ap.parse_args())

        args = {}
        args['yolo'] = 'yolo-coco'
        args["image"] = img
        args['confidence'] = 0.5
        args['threshold'] = 0.3

        # load the COCO class labels our YOLO model was trained on
        labelsPath = "C:\\Users\\SharatKumar\\OneDrive\\Desktop\\pro\\controllers\\yolo-coco\\coco.names"
        LABELS = open(labelsPath).read().strip().split("\n")

        # initialize a list of colors to represent each possible class label
        np.random.seed(42)
        COLORS = np.random.randint(0, 255, size=(len(LABELS), 3), dtype="uint8")

        # derive the paths to the YOLO weights and model configuration
        weightsPath = "C:\\Users\\vrrre\\OneDrive\\Desktop\\pro\\controllers\\yolo-coco\\yolov3.weights"
        configPath = "C:\\Users\\vrrre\\OneDrive\\Desktop\\pro\\controllers\\yolo-coco\\yolov3.cfg"

        # load our YOLO object detector trained on the COCO dataset (80 classes)
        net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)

        # load our input image and grab its spatial dimensions
        image = cv2.imread(args["image"])
        (H, W) = image.shape[:2]

        # determine only the *output* layer names that we need from YOLO
        ln = net.getLayerNames()
        ln = [ln[i - 1] for i in net.getUnconnectedOutLayers().flatten()]

        # construct a blob from the input image and then perform a forward
        # pass of the YOLO object detector, giving us our bounding boxes and
        # associated probabilities
        blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
            swapRB=True, crop=False)
        net.setInput(blob)
        start = time.time()
        layerOutputs = net.forward(ln)
        end = time.time()
        #print("[INFO] YOLO took {:.6f} seconds".format(end - start))

        # initialize lists of detected bounding boxes, confidences, class IDs
        # and detected object labels
        boxes = []
        confidences = []
        classIDs = []
        objects = []

        # loop over each of the layer outputs and each of the detections
        for output in layerOutputs:
            for detection in output:
                # extract the class ID and confidence of the current detection
                scores = detection[5:]
                classID = np.argmax(scores)
                confidence = scores[classID]

                # filter out weak predictions
                if confidence > args["confidence"]:
                    # YOLO returns the center (x, y)-coordinates of the bounding
                    # box followed by its width and height; scale them back
                    # relative to the size of the image
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")
                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))
                    boxes.append([x, y, int(width), int(height)])
                    confidences.append(float(confidence))
                    classIDs.append(classID)

        # apply non-maxima suppression to suppress weak, overlapping boxes
        idxs = cv2.dnn.NMSBoxes(boxes, confidences, args["confidence"],
            args["threshold"])

        # ensure at least one detection exists
        if len(idxs) > 0:
            for i in idxs.flatten():
                # extract the bounding box coordinates, draw the box and label
                (x, y) = (boxes[i][0], boxes[i][1])
                (w, h) = (boxes[i][2], boxes[i][3])
                color = [int(c) for c in COLORS[classIDs[i]]]
                cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
                text = "{}: {:.4f}".format(LABELS[classIDs[i]], confidences[i])
                cv2.putText(image, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, color, 2)
                objects.append(LABELS[classIDs[i]])

        cv2.imwrite("saved.jpg", image)
        return objects
        #cv2.waitKey(0)

#d=detect()
MODEL
AUTHMODEL:
from db import connect, insert, fetch   # assumed helper module for SQLite access

class AuthModel:
    def __init__(self):
        self.conn = connect('app.db')

    def getUser(self, username, password):
        # query text assumed; the original listing omits it
        query = ("SELECT * FROM users WHERE username='%s' AND password='%s'"
                 % (username, password))
        result = fetch(self.conn, query)
        print(result)
        return result

    def createUser(self, name, phone, email, username, password, role):
        try:
            # query text assumed; the original listing omits it
            query = ("INSERT INTO users (name, phone, email, username, password, role) "
                     "VALUES ('%s','%s','%s','%s','%s','%s')"
                     % (name, phone, email, username, password, role))
            insert(self.conn, query)
            return 1
        except:
            return 0

am = AuthModel()
am.createUser('Rajesh', 7777777777, '[email protected]', 'rajesh', 'rajesh123', 'student')
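The AuthModel relies on a small database helper (connect, insert and a fetch-style function)
that is not shown in the listing above. A minimal sketch of what such a helper could look like
using sqlite3 is given below; the module name, function names and table schema are assumptions:

# db.py -- hypothetical SQLite helper assumed by AuthModel
import sqlite3

def connect(db_name):
    # Open (or create) the SQLite database file and make sure the users table exists
    conn = sqlite3.connect(db_name)
    conn.execute("""CREATE TABLE IF NOT EXISTS users (
                        name TEXT, phone TEXT, email TEXT,
                        username TEXT, password TEXT, role TEXT)""")
    return conn

def insert(conn, query):
    # Execute an INSERT/UPDATE statement and persist it
    conn.execute(query)
    conn.commit()

def fetch(conn, query):
    # Execute a SELECT statement and return all matching rows
    return conn.execute(query).fetchall()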
VIEW
AUTHVIEW:
from tkinter import *
from tkinter import ttk, messagebox
from controllers.AuthController import AuthController

class AuthView:
    def load(self):
        self.window = Tk()
        self.window.geometry('320x300')
        tab_control = ttk.Notebook(self.window)
        login_tab = Frame(tab_control, bg="lightpink", width=350, height=300, padx=10, pady=10)
        register_tab = Frame(tab_control, bg="lightpink", padx=10, pady=10)
        tab_control.add(login_tab, text="Login")
        tab_control.add(register_tab, text="Register")
        self.login(login_tab)
        self.register(register_tab)
        tab_control.grid()
        self.window.mainloop()

    def login(self, login_tab):
        window = login_tab
        ul = Label(window, text="Username", bg="lightblue", fg="darkblue", padx=10)
        ul.grid(row=0, column=0, padx=10, pady=10)
        ue = Entry(window, width=20)
        ue.grid(row=0, column=1)
        ue.focus()
        pl = Label(window, text="Password", bg="lightblue", fg="darkblue", padx=10)
        pl.grid(row=1, column=0)
        pe = Entry(window, show='*', width=20)
        pe.grid(row=1, column=1)
        b = Button(window, text="Login", bg="lightblue", fg="red", command=lambda:
            self.loginControl(ue.get(), pe.get()), padx=10, pady=10)
        b.grid(row=2, column=1, padx=10, pady=20)

    def loginControl(self, username, password):
        ac = AuthController()
        message = ac.login(username, password)
        if message == 1:
            self.window.destroy()
            self.transfer_control()
        else:
            messagebox.showinfo('Alert', message)

    def register(self, register_tab):
        window = register_tab
        nl = Label(window, text="Name", bg="lightblue", fg="darkblue", padx=10)
        nl.grid(row=0, column=0, padx=10, pady=10)
        ne = Entry(window, width=20)
        ne.grid(row=0, column=1)
        ne.focus()
        el = Label(window, text="Email", bg="lightblue", fg="darkblue", padx=10)
        el.grid(row=1, column=0, padx=10, pady=10)
        ee = Entry(window, width=20)
        ee.grid(row=1, column=1)
        phl = Label(window, text="Phone", bg="lightblue", fg="darkblue", padx=10)
        phl.grid(row=2, column=0, padx=10, pady=10)
        phe = Entry(window, width=20)            # Entry widget for the phone number
        phe.grid(row=2, column=1)
        ul = Label(window, text="Username", bg="lightblue", fg="darkblue", padx=10)
        ul.grid(row=3, column=0, padx=10, pady=10)
        ue = Entry(window, width=20)
        ue.grid(row=3, column=1)
        pl = Label(window, text="Password", bg="lightblue", fg="darkblue", padx=10)
        pl.grid(row=4, column=0, padx=10, pady=10)
        pe = Entry(window, show='*', width=20)   # Entry widget for the password
        pe.grid(row=4, column=1)
        b = Button(window, text="Register", bg="lightblue", fg="red", command=lambda:
            self.registerControl(ne.get(), phe.get(), ee.get(), ue.get(), pe.get()),
            padx=10, pady=10)
        b.grid(row=5, column=1, padx=10, pady=20)

    def registerControl(self, name, phone, email, username, password):
        ac = AuthController()
        message = ac.register(name, phone, email, username, password, 'student')
        if message:
            messagebox.showinfo('Alert', message)

av = AuthView()
DETECTIONVIEW
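The detection view ties the GUI to the detect controller shown in Section 5.2. The listing
below is a minimal hypothetical sketch based on the behaviour described in the abstract (a
Start button that runs detection and writes "saved.jpg", and a Capture button that stores the
frame in the images folder and pops up an alert); the class layout, module path and file names
are assumptions rather than the project's actual code:

from tkinter import Tk, Button, messagebox
import cv2
import os

from controllers.detect import detect   # assumed module path

class DetectionView:
    def load(self):
        self.window = Tk()
        self.window.geometry('320x200')
        Button(self.window, text="Start", bg="lightblue", fg="red", padx=10, pady=10,
            command=self.start_detection).grid(row=0, column=0, padx=10, pady=10)
        Button(self.window, text="Capture", bg="lightblue", fg="red", padx=10, pady=10,
            command=self.capture).grid(row=0, column=1, padx=10, pady=10)
        self.window.mainloop()

    def grab_frame(self):
        # Read a single frame from the default webcam
        cam = cv2.VideoCapture(0)
        ok, frame = cam.read()
        cam.release()
        return ok, frame

    def start_detection(self):
        # Capture a frame and run YOLO detection on it; detect() writes "saved.jpg"
        ok, frame = self.grab_frame()
        if ok:
            cv2.imwrite("capic.jpg", frame)
            objects = detect().load("capic.jpg")
            messagebox.showinfo('Alert', "Detected: " + ", ".join(objects))

    def capture(self):
        # Save the current frame into the images folder and report the status
        ok, frame = self.grab_frame()
        os.makedirs("images", exist_ok=True)
        saved = ok and cv2.imwrite("images/captured.jpg", frame)
        messagebox.showinfo('Alert', "Image saved" if saved else "Capture failed")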
6. SYSTEM TESTING
6.1. INTRODUCTION
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, sub-assemblies, assemblies and/or a finished product. It
is the process of exercising software with the intent of ensuring that the software
system meets its requirements and user expectations and does not fail in an
unacceptable manner. There are various types of tests; each test type addresses a
specific testing requirement.
6.2.3 Functional Test
Functional tests provide systematic demonstrations that functions tested are available
as specified by the business and technical requirements, system documentation, and
user manuals.
6.2.4 System Test
System testing ensures that the entire integrated software system meets requirements.
It tests a configuration to ensure known and predictable results. An example of system
testing is the configuration oriented system integration test. System testing is based on
process descriptions and flows, emphasizing pre-driven process links and integration
points.
6.2.5 White Box Testing
White Box Testing is testing in which the software tester has knowledge of the inner
workings, structure and language of the software, or at least its purpose. It is used
to test areas that cannot be reached from a black box level.
6.2.6 Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests, as most
other kinds of tests, must be written from a definitive source document, such as a
specification or requirements document. It is a testing in which the software under test
is treated as a black box: you cannot “see” into it. The test provides inputs and responds
to outputs without considering how the software works.
6.3 Test Approach
Testing can be done in two ways:
Bottom up approach
Top down approach
Bottom up Approach
Testing can be performed starting from the smallest and lowest-level modules and
proceeding one at a time. For each module in bottom-up testing, a short program
executes the module and provides the needed data, so that the module is asked to
perform the way it will when embedded within the larger system. When bottom-level
modules are tested, attention turns to those on the next level that use the lower-level
ones; they are tested individually and then linked with the previously examined
lower-level modules.
Top down Approach
This type of testing starts from the upper-level modules. Since the detailed activities
usually performed in the lower-level routines are not provided, stubs are written. A
stub is a module shell called by an upper-level module that, when reached, returns a
message to the calling module indicating that proper interaction occurred.
No attempt is made to verify the correctness of the lower-level module.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
7. RESULTS AND VALIDATION
7.1 OUTPUT SCREENS
Fig: 7.1.2 Registration
Fig: 7.1.3 Login Credentials
Fig: 7.1.4 Results
8. Conclusion
This project shows that Object Detection is a technology that falls under the broader
domain of computer vision. It deals with identifying and tracking objects present in
images and videos. Object detection has multiple applications such as face detection,
vehicle detection, pedestrian counting, self-driving cars, etc.
The proposed system is object detection using OpenCV with Python, and the deep
learning approach used is YOLO.
It rules out the problems faced during installation, as OpenCV is very easy to install in
an IDE.
The code is simple and can be understood by anyone who has a basic knowledge of Python.
9. Bibliography
https://www.python.org/
This link is for the required package installation setup.
https://www.pyimagesearch.com/2018/11/12/yolo-object-detection-with-opencv/
This link is for the code information.
10. References
https://www.pyimagesearch.com/2018/11/12/yolo-object-detection-with-opencv/
https://circuitdigest.com/tutorial/real-life-object-detection-using-opencv-python-detecting-objects-in-live-video
https://www.diva-portal.org/smash/get/diva2:1414033/FULLTEXT02