
Real-Time Speed Estimation and Traffic Management System Using OpenCV

ABSTRACT

For real-time speed tracking of moving vehicles, there have been multiple previous
implementations. Although the prediction results achieved are promising, these traditional
approaches are still far from being highly accurate and efficient. This can lead to inaccurate
predictions, improper reports, a higher risk of road accidents, and many drivers who are at fault for
violating traffic rules going completely unnoticed. The existing system only counts vehicles, and it
does so with an object detection mechanism. In our proposed system, real-time speed estimation,
vehicle counting, and lane management functionalities are added. When an input video is given,
the speed of the moving vehicles is detected, and the lane in which each vehicle is travelling is also
detected and tracked. All the details of speed estimation and lane management are updated in a
separate CSV file. The proposed system is trained with real-time datasets and achieves high
accuracy and efficiency.

Keywords: Traffic management, lane detection, deep learning, object detection

1. INTRODUCTION
Throughout the last several decades, surveillance cameras have been installed all over the
transportation network to aid in traffic management, ensure the safety of drivers, and spot
anomalies, and software for detecting cars is becoming more crucial for smart cities. Many cities
have installed cameras along their streets, but it is impossible for a single person to keep track of
all of them at once. Thus, an intelligent traffic monitoring system is required to accomplish
intelligent traffic management. Accurate vehicle detection is the initial step of intelligent traffic
monitoring. Real-time road monitoring, intelligent tracking, and intelligent traffic management are
only some of the many uses for this method. The purpose of this system is, thus, to identify vehicle
categories and detect vehicles in real time. Over the last decade, many vehicle detection benchmarks
with varying degrees of difficulty have been presented. Meanwhile, advances in deep learning-based
techniques have led to remarkable progress in the realm of vehicle detection and object detection;
these techniques may be broken down into one-stage and two-stage detection algorithms.
1.2 MODULE DESCRIPTION
LIST OF MAIN MODULES

1.2.1 Module 1: Data Pre-Processing


The data pre-processing module is a crucial component of a real-time speed estimation and traffic
management system using OpenCV. This module is responsible for cleaning, transforming and
preparing the raw data for analysis and interpretation. The data pre-processing stage involves
several tasks such as image correction, image enhancement, image segmentation, and feature
extraction. Image correction includes correcting lens distortion, perspective distortion, and
brightness and contrast. Image enhancement involves sharpening
the images to improve their visual quality. Image segmentation is the process of dividing the image
into multiple segments to separate the objects of interest. Finally, feature extraction involves
identifying and extracting important features from the segmented images, such as the shape and
size of objects, and their position and orientation. The output of the data pre-processing module is
a set of clean and optimized images that are ready for further analysis and interpretation, which
helps in accurate speed estimation and efficient traffic management.
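
The report does not include an implementation for this module; the following is a minimal sketch of how the pre-processing steps listed above could be chained with OpenCV. The camera calibration inputs (camera_matrix, dist_coeffs), the Otsu-threshold segmentation, and the 500-pixel noise cutoff are illustrative assumptions, not values taken from the project.

import cv2
import numpy as np

def preprocess_frame(frame, camera_matrix, dist_coeffs):
    """Clean up a raw video frame before detection (illustrative sketch)."""
    # Image correction: undo lens distortion using known calibration data
    corrected = cv2.undistort(frame, camera_matrix, dist_coeffs)

    # Brightness/contrast correction: equalize the luminance channel
    ycrcb = cv2.cvtColor(corrected, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    enhanced = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    # Image enhancement: sharpen with a simple convolution kernel
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(enhanced, -1, kernel)

    # Image segmentation: separate objects of interest with Otsu thresholding
    gray = cv2.cvtColor(sharpened, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Feature extraction: position, size, and orientation of each segmented object
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    features = []
    for c in contours:
        if cv2.contourArea(c) < 500:  # assumed noise cutoff
            continue
        x, y, w, h = cv2.boundingRect(c)
        (_, _), (_, _), angle = cv2.minAreaRect(c)
        features.append({"position": (x, y), "size": (w, h), "orientation": angle})

    return sharpened, mask, features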

1.2.2 Module 2: Training and testing


The processed data is split into two parts. The training data is fed into the system’s architecture to
predict the output, and the model is trained on this dataset using different algorithms, whose
efficiency is then evaluated. An algorithm’s efficiency and accuracy play an important role in
determining which algorithm is suitable for the model to predict output with high efficiency.
R-CNN is found to have better accuracy than the other object recognition algorithms.
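
The report does not show how the split or the evaluation is carried out; the sketch below is one possible way to compare detection algorithms on a held-out test set, assuming the pre-processed frames have already been labelled as (sample, label) pairs and that each candidate model exposes a hypothetical predict(sample) method.

from sklearn.model_selection import train_test_split

def compare_models(samples, labels, models, test_size=0.2):
    """Score each candidate model on the same held-out split (illustrative sketch).

    `models` maps an algorithm name (e.g. "R-CNN") to an already-trained object
    with a hypothetical predict(sample) method; training itself is omitted here.
    """
    x_train, x_test, y_train, y_test = train_test_split(
        samples, labels, test_size=test_size, random_state=42
    )
    # x_train / y_train would be used to train or fine-tune each model here.

    scores = {}
    for name, model in models.items():
        correct = sum(1 for x, y in zip(x_test, y_test) if model.predict(x) == y)
        scores[name] = correct / len(x_test)
    return scores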

1.2.3 Module 3: Prediction of output


The model is then fine-tuned to recognize lane boundaries and a speed-estimation threshold based
on the object recognition training. The output generated by the model is displayed in a web
application, which lets users share their video stream and see the predictions in real time. A
separate .csv file is generated for each group of predictions for future monitoring purposes.
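
The exact layout of the CSV file is not given in the report; the sketch below assumes each record carries a vehicle identifier, the detected lane, and the estimated speed, and appends one row per prediction using Python's standard csv module.

import csv
import os
from datetime import datetime

FIELDNAMES = ["timestamp", "vehicle_id", "lane", "speed_kmph"]  # assumed schema

def log_predictions(csv_path, records):
    """Append a group of speed/lane predictions to a CSV file for later monitoring."""
    is_new = not os.path.exists(csv_path) or os.path.getsize(csv_path) == 0
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        if is_new:
            writer.writeheader()  # write the header only once, for a fresh file
        for rec in records:
            writer.writerow({
                "timestamp": datetime.now().isoformat(timespec="seconds"),
                "vehicle_id": rec["vehicle_id"],
                "lane": rec["lane"],
                "speed_kmph": rec["speed_kmph"],
            })

# Example: log_predictions("speed_log.csv", [{"vehicle_id": 3, "lane": 2, "speed_kmph": 57.4}])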

2. SYSTEM STUDY
2.1 EXISTING SYSTEM
These systems typically employ computer vision techniques and algorithms to analyze video
feed from cameras placed on roads and highways. The captured video frames are processed in real-
time to detect and track vehicles, and their speed is estimated using image processing and pattern
recognition techniques. The collected speed data can be used for traffic management purposes,
such as detecting congestion, controlling traffic signals, and managing road accidents. These
systems can provide valuable information for traffic management agencies to improve the
efficiency of transportation systems.

2.2 PROPOSED SYSTEM


A proposed real-time speed estimation and traffic management system using OpenCV would
involve the following steps:

● Video capture through a camera placed on the road or highway.

● Image processing using OpenCV to detect and track vehicles in real time.

● Speed estimation by analyzing the motion of the detected vehicles using techniques such as
optical flow, background subtraction, or feature matching.

● Traffic management using the estimated speed data to control traffic signals, detect congestion,
and manage road accidents.

● Data analysis of the collected speed data to gain insights into traffic patterns and improve the
efficiency of transportation systems.

The specifics of the system would depend on the particular requirements and constraints of the
application.
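
The report does not commit to one of these motion-analysis techniques; the sketch below illustrates the background-subtraction option with OpenCV's MOG2 subtractor and a naive centroid-displacement speed estimate. The metres-per-pixel factor, the minimum blob area, and the single-object tracking are placeholder assumptions; a real deployment would calibrate the camera and track multiple vehicles.

import cv2
import numpy as np

METRES_PER_PIXEL = 0.05   # placeholder calibration; depends on camera geometry
MIN_AREA = 800            # ignore blobs smaller than this (assumed noise)

def estimate_speeds(video_path, fps=30.0):
    """Rough frame-to-frame speed estimates via background subtraction (sketch)."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    prev_centroid = None
    speeds = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Foreground mask of moving objects, cleaned up with a morphological opening
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours = [c for c in contours if cv2.contourArea(c) > MIN_AREA]
        if not contours:
            prev_centroid = None
            continue

        # Follow only the largest moving blob for brevity
        m = cv2.moments(max(contours, key=cv2.contourArea))
        centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])

        if prev_centroid is not None:
            pixels = np.hypot(centroid[0] - prev_centroid[0],
                              centroid[1] - prev_centroid[1])
            speeds.append(pixels * METRES_PER_PIXEL * fps * 3.6)  # km/h
        prev_centroid = centroid

    cap.release()
    return speeds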

2.3 FEASIBILITY STUDY


The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a
very general plan for the project and some cost estimates. During system analysis, the feasibility
study of the proposed system is to be carried out. For feasibility analysis, some understanding of
the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are

▪ Economic feasibility

▪ Technical feasibility

▪ Operational feasibility

2.3.1 ECONOMIC FEASIBILITY


This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development
of the system is limited. The expenditures must be justified. The developed system is thus well
within the budget, which was achieved because most of the technologies used are freely available.
Only the customized products had to be purchased.

2.3.2 TECHNICAL FEASIBILITY


This study is carried out to check the technical feasibility, that is, the technical requirements of
the system. Any system developed must not place a high demand on the available technical
resources, as this would lead to high demands being placed on the client. The developed system
must have modest requirements, as only minimal or no changes are required for implementing this
system.

2.3.3 OPERATIONAL FEASIBILITY


This aspect of the study checks the level of acceptance of the system by the user. This includes
the process of training the user to use the system efficiently. The user must not feel threatened by
the system, but must instead accept it as a necessity. The level of acceptance by the users solely
depends on the methods employed to educate the user about the system and to make the user
familiar with it. The user's confidence must be raised so that the user can also offer constructive
criticism, which is welcomed, as the user is the end user of the system.
3. REQUIREMENT ANALYSIS
3.1. BUSINESS SYSTEM

4. SYSTEM ANALYSIS

4.1. DATA FLOW DIAGRAM

A data-flow diagram (DFD) is a way of representing the flow of data through a process or a
system (usually an information system). The DFD also provides information about the outputs and
inputs of each entity and the process itself. A data-flow diagram has no control flow; there are no
decision rules and no loops. Specific operations based on the data can be represented by
a flowchart. There are several notations for displaying data-flow diagrams. For each data flow, at
least one of the endpoints (source and/or destination) must exist in a process. The refined
representation of a process can be done in another data-flow diagram, which subdivides this
process into sub-processes. The data-flow diagram is part of the structured-analysis modeling
tools. When using UML, the activity diagram typically takes over the role of the data-flow
diagram. A special form of data-flow plan is a site-oriented data-flow plan. Data-flow diagrams
can be regarded as inverted Petri nets, because places in such networks correspond to the semantics
of data memories. A DFD consists of processes, flows, warehouses (data stores), and terminators.

Data Flow Diagram Symbols

● Data Flow

Data flows are pipelines through which packets of information flow. Label the arrows with
the name of the data that moves through them.

● External Entity

External entities are objects outside the system, with which the system communicates.
These are sources and destinations of the system’s inputs and outputs.
DATA FLOW DIAGRAM

Level 0

Level 1

4.2. ENTITY RELATIONSHIP DIAGRAM


● Definition

An entity-relationship diagram depicts the relationships between data objects. The attributes of
each data object noted in the entity-relationship diagram can be described using a data object
description. In software engineering, an entity-relationship model (ERM) is an abstract and
conceptual representation of data. Entity-relationship modeling is a database modeling method
used to produce a type of conceptual schema or semantic data model of a system, often a relational
database, and its requirements in a top-down fashion. Diagrams created by this process are
called entity-relationship diagrams, ER diagrams, or ERDs. A data flow diagram serves two
purposes:

1. To provide an indication of how data is transformed as it moves through the system.


2. To depict the functions that transform the data flow.

1. One-to-One

One instance of entity (A) is associated with one other instance of another entity (B).

For example, in a sign-in database, each customer name (A) is associated with only one
security mobile number (B).

2. One-to-Many

One instance of an entity (A) is associated with zero, one or many instances of another entity
(B), but for one instance of entity B there is only one instance of entity A.

For example, for a company with all employees working in one building, the building name (A) is
associated with many different employees (B), but those employees all share the same singular
association with entity A.
3. Many-to-Many

One instance of an entity (A) is associated with one, zero or many instances of another entity (B), and one instance of entity (B) is associated with one, zero or many instances of entity (A).

For example, for a company in which all of its employees work on multiple projects, each instance
of an employee (A) is associated with many instances of a project (B), and at the same time, each
instance of a project (B) has multiple employees (A) associated with it.

● SYMBOLS USED

External entity –

Attribute –

Relationship –

Data flow –
● E-R DIAGRAM

4.3 Use Case Diagram

A use case diagram is used to represent the dynamic behavior of a system. It encapsulates the
system's functionality by incorporating use cases, actors, and their relationships. It models the
tasks, services, and functions required by a system/subsystem of an application. It depicts the high-
level functionality of a system and also tells how the user handles a system. Use case diagrams can
summarize the details of your system's users (also known as actors) and their interactions with the
system. To build one, you'll use a set of specialized symbols and connectors. A use case diagram
doesn't go into a lot of detail—for example, don't expect it to model the order in which steps are
performed. Instead, a proper use case diagram depicts a high-level overview of the relationship
between use cases, actors, and systems. Experts recommend that use case diagrams be used to
supplement a more descriptive textual use case. An effective use case diagram can help your team
discuss and represent:

● Scenarios in which your system or application interacts with people, organizations, or


external systems

● Goals that your system or application helps those entities (known as actors) achieve

● The scope of your system

● Symbols Used

Actor –

Data flow–

System Function–
Use Case Diagram

Fig 4.4 Use case Diagram


4.4 Sequence Diagram

A sequence diagram is a type of interaction diagram because it describes how—and in what


order—a group of objects works together. These diagrams are used by software developers and
business professionals to understand requirements for a new system or to document an existing
process. Sequence Diagrams are interaction diagrams that detail how operations are carried out.
They capture the interaction between objects in the context of a collaboration. Sequence diagrams
are time-focused; they show the order of the interaction visually by using the vertical axis of the
diagram to represent time, indicating what messages are sent and when. Sequence diagrams capture:

● the interaction that takes place in a collaboration that either realizes a use case or an
operation (instance diagrams or generic diagrams)

● high-level interactions between users of the system and the system, between the system and
other systems, or between subsystems (sometimes known as system sequence diagrams)

Object symbol -

Activation Box -

Lifeline Symbol -

Asynchronous create message symbol -


Reply message symbol -

● Sequence Diagram
Fig 4.5 Sequence Diagram

4.5. Activity Diagram


The basic purpose of activity diagrams is similar to that of the other four diagrams: to capture the
dynamic behavior of the system. The other four diagrams are used to show the message flow from
one object to another, but the activity diagram is used to show the flow from one activity to
another. An activity is a particular operation of the system. Activity diagrams are not only used for
visualizing the dynamic nature of a system, but they are also used to construct the executable
system by using forward and reverse engineering techniques. The only missing thing in the activity
diagram is the message part: it does not show any message flow from one activity to another.
An activity diagram is sometimes considered a flowchart. Although the diagrams look like a
flowchart, they are not. They show different flows such as parallel, branched, concurrent, and
single flows. The purpose of an activity diagram can be described as follows:

● Draw the activity flow of a system.

● Describe the sequence from one activity to another.

● Describe the parallel, branched and concurrent flow of the system

Symbols Used
Initial State

Activity State

Action Flow

Decision Node
Activity Diagram

Fig 4.6 Activity Diagram


5. SYSTEM REQUIREMENT SPECIFICATION

5.1. HARDWARE REQUIREMENTS

Processor : Pentium Dual Core 2.00 GHz


Hard disk : 120 GB
Mouse : Logitech
RAM : 2 GB (minimum)
Keyboard : 110 keys enhanced

5.2. SOFTWARE REQUIREMENTS

Operating system : Windows 7 (with Service Pack 1), 8, 8.1, and 10


IDE : Anaconda
Backend : Python
Frontend : HTML, CSS

5.3. SOFTWARE SPECIFICATIONS - ANACONDA

Anaconda is an open-source package manager for Python and R. It is the most popular
platform among data science professionals for running Python and R implementations. There are
over 300 libraries in data science, so having a robust distribution system for them is a must for any
professional in this field. Anaconda simplifies package deployment and management. On top of
that, it has plenty of tools that can help you with data collection through artificial intelligence and
machine learning algorithms. With Anaconda, you can easily set up, manage, and share Conda
environments. Moreover, you can deploy any required project with a few clicks when you’re using
Anaconda. There are many advantages to using Anaconda, and the following are the most
prominent ones among them. Anaconda is free and open-source, which means you can use it without
spending any money. In the data science sector, Anaconda is an industry staple. It is open-source
too, which has made it widely popular. If you want to become a data science professional, you
must know how to use Anaconda for Python because every recruiter expects you to have this skill.
It is a must-have for data science.

It has more than 1,500 Python and R data science packages, so you don’t face
compatibility issues while collaborating with others. For example, suppose your colleague sends
you a project which requires packages called A and B, but you only have package A. Without
package B, you wouldn’t be able to run the project. Anaconda mitigates the chances of
such errors, so you can easily collaborate on projects without worrying about compatibility
issues. It gives you a seamless environment which simplifies deploying projects: you can deploy
any project with just a few clicks and commands while Anaconda manages the rest. Anaconda has
a thriving community of data scientists and machine learning professionals who use it regularly. If
you encounter an issue, chances are the community has already answered it; you can also ask
people in the community about the issues you face, as it is a very helpful community that is ready
to help new learners. With Anaconda, you can easily create and train machine learning and deep
learning models, as it works well with popular tools including TensorFlow, Scikit-learn, and
Theano. You can create visualizations using Bokeh, HoloViews, Matplotlib, and Datashader while
using Anaconda.

How to Use Anaconda for Python


Now that we have discussed all the basics in our Python Anaconda tutorial, let’s discuss some
fundamental commands you can use to start using this package manager.
Listing All Environments
To begin using Anaconda, you’d need to see how many Conda environments are present in your
machine.
conda env list
It will list all the available Conda environments in your machine.
Creating a New Environment
You can create a new Conda environment by going to the required directory and using this command:
conda create -n <your_environment_name>
You can replace <your_environment_name> with the name of your environment. After entering
this command, conda will ask you if you want to proceed, to which you should reply with y:
Proceed ([y]/n)?
On the other hand, if you want to create an environment with a particular version of Python, you
should use the following command:
conda create -n <your_environment_name> python=3.6
Similarly, if you want to create an environment with a particular package, you can use the
following command:
conda create -n <your_environment_name> <pack_name>
Here, you can replace <pack_name> with the name of the package you want to use.
If you have a .yml file, you can use the following command to create a new Conda environment
based on that file:
conda env create -n <your_environment_name> -f <file_name>.yml
We have also discussed how you can export an existing Conda environment to a .yml file later in
this article.

Activating an Environment
You can activate a Conda environment by using the following command:
conda activate <environment_name>
You should activate the environment before you start working in it. Also, replace the term
<environment_name> with the name of the environment you want to activate. On the other hand, if
you want to deactivate an environment, use the following command:
conda deactivate

Installing Packages in an Environment


Now that you have an activated environment, you can install packages into it by using the
following command:
conda install <pack_name>
Replace the term <pack_name> with the name of the package you want to install in your Conda
environment while using this command.
Updating Packages in an Environment
If you want to update all the packages present in a particular Conda environment, you should use
the following command:
conda update --all
The above command will update all the packages present in the environment. However, if you
want to update a package to a certain version, you will need to use the following command:
conda install <package_name>=<version>

Exporting an Environment Configuration


Suppose you want to share your project with someone else (a colleague, a friend, etc.). While you
could share the directory on GitHub, it would contain many Python packages, making the transfer
process very challenging. Instead, you can create an environment configuration .yml file and share
it with that person, who can then create an environment like yours by using the .yml file.
To export the environment to a .yml file, first activate the environment and then run the
following command:
conda env export > <file_name>.yml
The person you want to share the environment with only has to use the exported file by using the
‘Creating a New Environment’ command we shared before.

Removing a Package from an Environment


If you want to uninstall a package from a specific Conda environment, use the following command:
conda remove -n <env_name> <package_name>
On the other hand, if you want to uninstall a package from an activated environment, you’d have
to use the following command:
conda remove <package_name>

Deleting an Environment
Sometimes, you don’t need to add a new environment but remove one. In such cases, you must
know how to delete a Conda environment, which you can do by using the following command:
conda env remove --name <env_name>
The above command would delete the Conda environment right away.

5.3.1. FRONT END SPECIFICATIONS

Front-end web development is the process of transforming data into a graphical interface, through
the use of CSS, HTML, and JavaScript, so that users can view and interact with that data. The part
of a website that the user interacts with directly is termed the front end. It is also referred to as the
‘client side’ of the application. It includes everything that users experience directly: text colors and
styles, images, graphs and tables, buttons, colors, and the navigation menu. HTML, CSS, and
JavaScript are the languages used for front-end development. The structure, design, behavior, and
content of everything seen on browser screens when websites, web applications, or mobile apps
are opened is implemented by front-end developers. Responsiveness and performance are two
main objectives of the front end. The developer must ensure that the site is responsive, i.e. it
appears correctly on devices of all sizes, and no part of the website should behave abnormally
irrespective of the size of the screen. The front-end portion is built using the languages discussed
below.

HTML: HTML stands for Hypertext Markup Language. It is used to design the front-end portion
of web pages using a markup language. HTML is the combination of Hypertext and Markup
language. Hypertext defines the link between the web pages. The markup language is used to
define the text documentation within the tag which defines the structure of web pages.

CSS: Cascading Style Sheets fondly referred to as CSS is a simply designed language intended to
simplify the process of making web pages presentable. CSS allows you to apply styles to web
pages. More importantly, CSS enables you to do this independent of the HTML that makes up
each web page.
Some other libraries and frameworks are Semantic-UI, Foundation, Materialize, Backbone.js,
Ember.js, etc.

5.3.2.BACKEND SPECIFICATION

Advantages of Python
1. Easy to Read, Learn and Write
Python is a high-level programming language that has English-like syntax. This makes it easier to
read and understand the code.
Python is really easy to pick up and learn, which is why a lot of people recommend Python to
beginners. You need fewer lines of code to perform the same task compared to other major
languages like C/C++ and Java.
2. Improved Productivity
Python is a very productive language. Due to the simplicity of Python, developers can focus on
solving the problem. They don’t need to spend too much time in understanding
the syntax or behavior of the programming language. You write less code and get more things
done.
3. Interpreted Language
Python is an interpreted language which means that Python directly executes the code line by line.
In case of any error, it stops further execution and reports back the error which has occurred.
Python shows only one error even if the program has multiple errors. This makes debugging easier.
4. Dynamically Typed
Python doesn’t know the type of variable until we run the code. It automatically assigns the data
type during execution. The programmer doesn’t need to worry about declaring variables and their
data types.
5. Free and Open-Source
Python comes under the OSI approved open-source license. This makes
it free to use and distribute. You can download the source code, modify it and even distribute your
version of Python. This is useful for organizations that want to modify some specific behavior and
use their version for development.
6. Vast Libraries Support
The standard library of Python is huge; you can find almost all the functions needed for your task,
so you don’t have to depend on external libraries.
But even if you do, the Python package manager (pip) makes it easy to install other great
packages from the Python Package Index (PyPI), which consists of over 200,000 packages.
7. Portability
In many languages like C/C++, you need to change your code to run the program on different
platforms. That is not the case with Python: you write your code once and run it anywhere.
However, you should be careful not to include any system-dependent features.

6. SYSTEM DESIGN
6.1 Table design

FIELD      DATATYPE    CONSTRAINT

Age        Integer     Not Null

Gender     Integer     Not Null

Emotion    String      Not Null
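
The report does not specify a database engine for this table; the following is a minimal sketch using Python's built-in sqlite3 module, reading the constraint column as NOT NULL on each field. The table and database names are illustrative assumptions.

import sqlite3

def create_table(db_path="records.db"):
    """Create the table described above (engine and names assumed, not from the report)."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS records (
            Age     INTEGER NOT NULL,
            Gender  INTEGER NOT NULL,
            Emotion TEXT    NOT NULL
        )
        """
    )
    conn.commit()
    conn.close()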

You might also like