ABSTRACT
There have been more and more applications of traffic rule violation monitoring in many
countries in recent years. The present work analyzes vehicle speeds near road traffic violation
monitoring areas on main urban roads and determines the impact of road traffic violation monitoring
on vehicle speeds. A representative urban main road section was selected, and the camera method
was used to record the traffic flow. Vehicle speeds before, within, and after the road traffic violation
monitoring area were obtained by calculation.
The speed data were classified and processed with SPSS software and mathematical methods to
establish vehicle speed probability density models before, within, and after the road traffic
violation monitoring area. The results show that the average speed and maximum speed within the
traffic violation monitoring area are significantly lower than those before and after the traffic
violation monitoring area. Before the road traffic violation monitoring area, 70.1% of the vehicles were
speeding, and after it, 80.2% of the cars were speeding.
At the same time, within the road traffic violation monitoring area, the proportion of speeding
vehicles fell to 15.9%. As cars pass through the road traffic violation monitoring area, the vehicle
speeds tend to decrease and then increase again. Within its active site, road traffic violation monitoring
can effectively regulate driving behavior and reduce speeding, but this effect is limited to the vicinity
of the traffic violation monitoring area. The distribution of vehicle speeds can be calculated from the
vehicle speed probability density model.
2. OBJECTIVES OF RESEARCH WORK
The goal of the project is to automate the traffic rule violation detection system and make it easy
for the traffic police department to monitor the traffic and take action against violating vehicle
owners in a fast and efficient way. Detecting and tracking vehicles and their activities accurately is
the main priority of the system.
1. Enhance Road Safety: Develop an automated system to detect and deter traffic violations, thereby
reducing accidents and improving overall road safety.
2. Automated Detection: Implement algorithms and technologies capable of automatically
identifying various types of violations such as speeding, red light running, and illegal parking.
3. Real-time Monitoring: Enable real-time surveillance of roadways and intersections using cameras
and sensors to promptly detect and address violations.
4. Efficient Enforcement: Provide law enforcement agencies with accurate violation data and
evidence to streamline enforcement efforts and ensure compliance with traffic regulations.
5. Data Analysis for Insights: Analyze collected violation data to identify patterns, hotspots, and
peak violation times, enabling informed decision-making for future traffic management strategies.
3. SYSTEM ANALYSIS
3.1 INTRODUCTION
In the realm of traffic management and enforcement, the existing systems heavily lean on
manual intervention, where law enforcement officers play a pivotal role in detecting and addressing
traffic violations. However, this manual approach poses several challenges, including delayed response
times, limited coverage due to sparse surveillance infrastructure, and the considerable operational
expenses associated with maintaining a large workforce. In response to these shortcomings, a proposed
Traffic Violation Detection System aims to revolutionize the landscape by introducing automated
monitoring and detection capabilities.
The current traffic violation monitoring infrastructure predominantly relies on the vigilance of
law enforcement officers supplemented by sporadic camera surveillance at select locations. This
manual enforcement paradigm entails officers stationed at various points along roadways and
intersections to visually identify and report violations. However, this approach suffers from inherent
limitations.
The Traffic Violation Detection System represents a paradigm shift from manual enforcement
to automated monitoring and detection, leveraging cutting-edge technologies such as computer vision
and machine learning. Key features of the proposed system include automated detection technologies:
advanced algorithms and computer vision systems will be deployed to automatically detect various
types of traffic violations, including speeding, red light running, and illegal parking, ensuring prompt
identification and action. In addition, a network of surveillance cameras and sensors will be strategically
deployed across roadways and intersections to achieve comprehensive coverage, eliminating blind spots
and ensuring continuous monitoring.
4. SYSTEM SPECIFICATION
PROCESSOR : Intel Core i5 9th gen, Intel Xeon, or AMD Ryzen 3600
RAM : 32 GB or 64 GB
5. SYSTEM DESIGN
5.1 INTRODUCTION
The increasing number of cars in cities causes high volumes of traffic, which means that
traffic violations are becoming more critical nowadays. This causes severe destruction of property and
more accidents that may endanger people's lives. To solve this alarming problem and prevent such
consequences, traffic violation detection systems are needed, in which the system enforces proper
traffic regulations at all times and apprehends those who do not comply. A traffic violation detection
system must operate in real time, as the authorities monitor the roads at all times. Hence, traffic
enforcers can implement safe roads not only accurately but also efficiently, as the traffic detection
system detects violations faster than humans. Consequently, a traffic violation detection system may
first be realized by detecting the most common traffic violations, which are swerving and blocking the
pedestrian lane. White line barriers are drawn on the road to specify the lanes and their corresponding
directions.
PYTHON
Python is a high-level, general-purpose programming language. Its design philosophy
emphasizes code readability with the use of significant indentation.
Python is dynamically typed and garbage-collected. It supports multiple programming
paradigms, including structured, object-oriented and functional programming.
PYCHARM
PyCharm is an integrated development environment used for programming in Python. It
provides code analysis, a graphical debugger, an integrated unit tester, integration with version control
systems, and support for web development with Django. PyCharm is developed by the Czech company
JetBrains.
QT CREATOR
Qt is a cross-platform framework that is used for the development of GUIs and applications. It
runs across operating systems like Linux, Windows, iOS, and Android. Qt enables simultaneous work
within one framework using such tools as Qt Creator, Qt Quick, Qt Design Studio, and others.
Qt is a C++ framework that supports the WOCA (Write Once, Compile Anywhere) principle,
which means Qt is a cross-platform framework. It is mainly used to develop applications and graphical
user interfaces (GUIs) that can run across different operating systems.
OPEN CV
Open CV is a huge open-source library for computer vision, machine learning, and image
processing. Open CV supports a wide variety of programming languages such as Python, C++, and
Java. It can process images and videos to identify objects, faces, or even the handwriting of a human.
Open CV stands for Open Source Computer Vision. To put it simply, it is a library used for
image processing. In fact, it is a huge open-source library used for computer vision applications, in
areas powered by Artificial Intelligence or Machine Learning algorithms, and for completing tasks that
need image processing.
Pixel data
Ten bits are used to represent the row and, similarly, ten bits are used to represent the column,
for a total of 20 bits. Correspondingly, a concatenation is done to simplify the matrix to a single row
vector, and the sequence of concatenation is done consistently by having the first 10 bits be the row
index of the image, followed by the 10 bits for the column index. Note that the image chromosome
represented by 20 bits is actually a location within the whole image, since it is predefined in this
system that a constant cropped picture of 116 by 161 pixels is gathered with respect to that location.
When we see through our eyes, they provide a great deal of information about the objects we
look at. Machines differentiate between objects based on features such as the shape and colour of the
object. Machines are capable of seeing and processing the vision into numbers and storing them in
memory. We use pixel values to convert images into numbers. The pixel is the basic unit of
programmable colour on a computer display or in a computer image.
The picture intensity at a particular location is represented by a number. For a grayscale
image, the pixel value consists of only one number: the intensity of the black colour at that location.
There are two common ways to represent images:
A. Grayscale: Grayscale images contain only shades ranging from black to white. In this contrast
measurement of intensity, black is treated as the weakest intensity and white as the strongest.
When we use a grayscale image, the computer assigns each pixel a value based on its level of
darkness.
B. RGB: RGB is a combination of the red, green, and blue colour channels, which together make a
new colour. The computer retrieves these values from each pixel and puts the results in an array.
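As a minimal sketch of these two representations (the file name is a placeholder, assuming OpenCV is installed):

import cv2

# Load the same image in the two common representations.
# "sample.jpg" is a placeholder path used for illustration.
gray = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)  # 2-D array: one intensity per pixel
bgr = cv2.imread("sample.jpg", cv2.IMREAD_COLOR)       # 3-D array: blue, green, red per pixel

print(gray.shape)     # (height, width)
print(bgr.shape)      # (height, width, 3)
print(gray[50, 100])  # one intensity, 0 (black) to 255 (white)
print(bgr[50, 100])   # [blue, green, red] triplet for the same pixel (OpenCV stores BGR)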
The framework involves the Open CV library, used in VS Code, for recognising images. Open CV
algorithms work by analysing an image and extracting features (i.e., shape, colour, density) of the
image. Firstly, the goal is to convert an image into a form suitable for analysis, mostly binary. After
that we perform operations such as exposure correction, colour balancing, image noise reduction, or
increasing image sharpness. We use various thresholding techniques to process a colour
image. Binary thresholding sets pixel values to either zero or 255 based on the threshold value
provided. Image smoothing is an important image processing technique that performs blurring and
noise filtering on an image.
A. Open CV Image Filters - Image filtering is the process of modifying an image by changing the
shades or colours of its pixels. It is also used to increase brightness and contrast.
B. Bilateral Filter - Open CV provides the bilateralFilter() function to apply the bilateral filter to an
image. The bilateral filter can reduce unwanted noise very well while keeping edges sharp; a short
sketch follows this list.
C. Output
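As a hedged sketch of the thresholding and bilateral filtering described above (the input file and parameter values are illustrative assumptions, not the system's exact settings):

import cv2

img = cv2.imread("frame.png")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binary thresholding: pixels above 127 become 255, the rest become 0.
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Bilateral filter: smooths noise while keeping edges sharp.
# 9 is the neighbourhood diameter; 75, 75 are the colour and space sigmas.
smooth = cv2.bilateralFilter(img, 9, 75, 75)

cv2.imwrite("binary.png", binary)
cv2.imwrite("smooth.png", smooth)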
There are so many options provided by Python to develop GUI applications, and PyQt5 is one of them.
PyQt5 is a cross-platform GUI toolkit, a set of Python bindings for Qt v5. One can develop an
interactive desktop application with great ease because of the tools and simplicity provided by this
library. A GUI application consists of a front-end and a back-end. PyQt5 provides a tool called
'QtDesigner' to design the front-end by drag and drop, so that development becomes faster and one
can give more time to the back-end work. PyQt5 can be installed with pip (pip install PyQt5).
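A minimal PyQt5 sketch of a main window, analogous in structure to the MainWindow class in the source code section (the window title and label text are illustrative assumptions):

import sys
from PyQt5.QtWidgets import QApplication, QLabel, QMainWindow

class DemoWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Traffic Violation Detection")       # assumed title
        self.setCentralWidget(QLabel("Live preview goes here"))  # placeholder widget

app = QApplication(sys.argv)
win = DemoWindow()
win.show()
sys.exit(app.exec_())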
5.2 TABLE DESIGN
Field Name      Data Type    Description
Rule            Integer      Identifier of the rule that was broken
Camera          Varchar(10)  Identifier of the camera that recorded the violation
Time            Datetime     Date and time when the violation was detected
First sighted   Varchar(10)  Where the car was first sighted
Cleared         Boolean      Current status of the violation (e.g., pending, resolved)
Location        Varchar(10)  Identifier for the location where the violation occurred
5.3 DATA FLOW DIAGRAM
The feasibility of the project is analyzed in this phase and a business proposal is put forth with
a very general plan for the project and some cost estimates. During system analysis the feasibility
study of the proposed system is to be carried out. This is to ensure that the proposed system is not a
burden to the college/organization. For feasibility analysis, some understanding of the major
requirements for the system is essential.
1. Technical Feasibility
Technical feasibility evaluates the technical complexity of the Violation Detection System
and often involves determining whether it can be implemented with state-of-the-art
techniques and tools. In the case of a Violation Detection System, an important aspect of
technical feasibility is determining the model with which the GUI application will be integrated.
The model used for image computation is an important determinant of the software's quality,
making it vital to the application's success.
2. Operational Feasibility
Operational feasibility refers to the ability of a system to perform all its operations effectively
and efficiently. Our project provides an easy-to-use, user-friendly GUI, so users can perform
all their tasks with a few clicks.
3. Economic Feasibility
Our project is economically feasible because we plan to buy only the bare minimum hardware
needed for vehicle violation detection.
• Initially, the user feeds the input data into our software, which is then passed on to the model for
computation.
• If the provided data is insufficient or a hardware error occurs, the operation is retried.
• On successful completion of the task, the data is sent back to the source.
Fig : 5.3.1 Main flow chart
Level – 0
The data flow is multi-directional as the data flows from all the blocks in a pre-defined manner. The
data majorly flows between the user, software, and the model.
Sequence Diagram
Main Steps - The camera feeds the input data into the software; the source can be local or a connected
VM. Once the software receives the input, it performs some scientific computation to make sure the
data is valid for the model. The model then performs its intended work and returns the result to the
source.
Software Design - Detailed design of software components and modules within the system.
Specification of algorithms, data structures, and interfaces used for vehicle detection and
violation classification. Design of user interfaces and interaction workflows for system operators and
administrators.
Hardware Design - Specification of hardware requirements for implementing the automated traffic
violation detection system. Selection of cameras, sensors, processors, and storage devices suitable for
roadside deployment. Consideration of environmental factors such as weatherproofing and power
supply requirements.
Security Design - Implementation of security measures to protect data integrity, confidentiality, and
availability. Authentication and authorization mechanisms for system access and control. Encryption
techniques for securing communication channels and data storage.
Scalability and Performance Considerations - Strategies for scaling the system to handle increasing
volumes of traffic data and users. Performance benchmarks and optimization techniques for
improving detection accuracy and response time.
Flow chart - The procedure of violation detection is given below. As shown, the system includes image
processing, vehicle tracking, violation identification, and information storage. The most important step
in image processing is the background update; we propose a real-time adaptive background
update algorithm to improve vehicle detection. We track the vehicles by their centres and decide
whether they violate or not by thresholding: for example, a violation is flagged when the centre
coordinate crosses a predefined line.
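As a hedged sketch of this centre-based check (the stop-line position and light state are illustrative assumptions, not values from the actual system):

# Hedged sketch: decide a violation from a tracked centre coordinate.
def is_violation(centre, stop_line_y, light):
    """Flag a violation when the vehicle centre crosses the stop line on red."""
    x, y = centre
    return light == "Red" and y < stop_line_y  # centre has passed above the line

print(is_violation((320, 140), 150, "Red"))  # True: crossed the line on red
print(is_violation((320, 180), 150, "Red"))  # False: still behind the line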
Fig: 5.3.4 Main flow chart
6. SYSTEM TESTING AND IMPLEMENTATION
6.1 Testing
Testing is the phase where the errors remaining from the earlier phases must be
detected. Hence, testing plays a very critical role in quality assurance and in ensuring the
reliability of software. During testing, the program to be tested is executed with a set of test cases, and
the output of the program for the test cases is evaluated to determine whether the program is
performing as expected.
We define software reliability as the probability that software will not undergo a failure for a
specified time under specified conditions. Reliability is essentially the level of confidence one can
place in the software; it can also be defined in terms of mean time to failure, which specifies the time
between the occurrences of two failures.
Types of testing:
Statement coverage:
In this criterion, each statement in a program is executed at least once. This is done by running the
program in debug mode and verifying each statement.
Branch coverage:
In this criterion, every branch is tested with its condition both true and false. It is a stronger test
criterion than statement coverage testing; a small illustration follows.
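A small sketch of the difference, using a hypothetical function: one test case executes every statement, but branch coverage also needs the false case:

# Hypothetical function used to illustrate the two coverage criteria.
def classify_speed(speed, limit):
    label = "ok"
    if speed > limit:  # the branch under test
        label = "speeding"
    return label

# Statement coverage: this single call executes every statement above.
assert classify_speed(80, 60) == "speeding"
# Branch coverage additionally requires the condition to be false once.
assert classify_speed(50, 60) == "ok"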
System testing:
System tests are designed to validate a fully developed system with a view to assuring that it
meets its requirements.
• Alpha testing
• Beta testing
• Acceptance testing
Alpha Testing:
Alpha testing refers to system testing that is carried out by the customer within the developing
organization, with the developers present.
Beta Testing:
Beta testing is system testing performed by a select group of customers. The developers
are not present at the site, and the users report the problems they encounter. As a result of the
problems reported during the beta test, the software developers make modifications and then prepare
to release the software to the customers.
Acceptance Testing:
Acceptance testing is system testing performed by the customer to determine whether or not to
accept delivery of the system.
6.2 SYSTEM IMPLEMENTATION
Data collection is the process of gathering and measuring information on targeted variables
in an established system, which then enables one to answer relevant questions and evaluate outcomes.
Data collection is a research component in all study fields, including physical and social sciences,
humanities, and business. While methods vary by discipline, the emphasis on ensuring accurate and
honest collection remains the same. The goal for all data collection is to capture quality evidence that
allows analysis to lead to the formulation of convincing and credible answers to the questions that
have been posed. We have collected data from various resources for the best possible training of our
model.
Data Pre-processing
There are multiple variants of violation detection; for our model we only take images from the videos
we gather from various resources, and we discard all blurry videos from the databases. Since the
datasets contain colour images, we convert the images to grayscale.
1. Grayscaling and blurring - As part of pre-processing the input frame obtained from the CCTV
footage, the image is grayscaled and blurred with the Gaussian blur method (a consolidated sketch of
steps 1-4 follows this list).
2. Background Subtraction - The background subtraction method is used to subtract the current frame
from the reference frame to get the desired object's area; equation (1) shows the method.
3. Binary Threshold - The binarization method is used to remove all the holes and noise from the
frame and get the desired object area accurately. Equation (2) shows how the binary threshold works.
4. Dilation and contour finding - After getting the image, it is dilated to fill the holes, and the contours
are found from the image. A rectangular box is then drawn over the desired contours.
5. Background Difference Method - The background difference method is commonly used in video
processing. A background image without vehicles is first obtained. This image is then subtracted
from the current input image to obtain the difference image. One can determine whether
vehicles exist in the input image by binarization of the difference image. This method is
computationally fast; however, it needs to update the background image in real time when the
environment changes.
6. Inter-frame Difference Method - The inter-frame difference method subtracts the current frame
from the previous frame, and the changing area is then found by setting a threshold value. When
the target is in motion, there will be residual images in the direction of motion, by which the moving
vehicles can be detected.
7. Edge Detection Method - The edge detection method applies edge detection to the input image,
and the resulting edge image is matched against a template image; if they match, a vehicle is present.
This method is little affected by the environment and outperforms the background difference method.
However, it has high computational complexity for vehicle model matching, so it is not suitable for
real-time processing.
8. Optical Flow Method - The optical flow method [6] is an effective way to detect moving targets. It
detects moving objects by the change in the time domain of pixel intensity of an image sequence, and
the relationship between the structure of objects and movement. This method is, however,
computationally intensive and is susceptible to noise.
9. Block Matching Method - The block matching algorithm (BMA) [7] is a video detection method
based on motion vectors. It splits an image into M × N macro blocks. We can get the motion vectors
by searching for optimal matching of a macro block of the current frame in the next frame. A moving
vehicle is composed of many macro blocks which perform the same movement. This method is often
used in motion estimation.
10. Motion detection - It is very difficult to identify moving vehicles and then track and classify them
in real time within a complex environment. There are a variety of approaches to vehicle detection in
video streams, including the background difference, inter-frame difference, inter-frame correspondence,
and edge detection methods.
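A minimal sketch of pre-processing steps 1-4 above, assuming OpenCV; the file names, kernel sizes, and threshold value are illustrative assumptions, not the system's exact parameters:

import cv2
import numpy as np

frame = cv2.imread("frame.png")            # placeholder CCTV frame
background = cv2.imread("background.png")  # placeholder empty-road reference frame

# 1. Grayscaling and Gaussian blurring
gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
ref = cv2.GaussianBlur(cv2.cvtColor(background, cv2.COLOR_BGR2GRAY), (5, 5), 0)

# 2. Background subtraction: current frame minus reference frame
diff = cv2.absdiff(gray, ref)

# 3. Binary threshold to isolate the object area (threshold value assumed)
_, binary = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# 4. Dilation to fill holes, then contour finding and bounding boxes
dilated = cv2.dilate(binary, np.ones((5, 5), np.uint8), iterations=2)
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)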
Database Structure
We have used an SQLite database with Python to manage all the data of our application.
Here, in the relational database, we have used five tables normalized to BCNF. The tables are:
1. Cars
2. Rules
3. Cameras
4. Violations
5. Groups
Cars:
This table will hold the cars recorded by the camera. A car entity is a car with a unique
identifier (id), a colour (colour), the license number of the car (license), where the car was first sighted
(first_sighted), an image of the license number (license_image), an image of the car (car_image), the
number of rules broken so far (num_rules_broken), and the owner of the car (owner).
Rules:
This table holds all the rules, their description (name), and the fine for breaking those
rules (fine).
Camera:
The camera table holds a unique identifier for the camera (id), a location description
(location), the longitude (coordinate_x) and latitude (coordinate_y) of the camera's location,
the video feed of the camera (feed), and the group the camera is in (group).
Camera group:
This table simply holds the unique group names of the camera groups (name).
Violations:
This table takes the ids of the other tables as foreign keys and creates a semantic record like this: a car
with this id has broken that rule at this time, which is captured by this camera.
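A hedged SQLite schema sketch consistent with the tables described above; the exact column names and types are assumptions inferred from the queries in the source code section:

import sqlite3

# Hedged schema sketch; column names follow the queries in the source code section.
con = sqlite3.connect("database/traffic.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS cars (
    id INTEGER PRIMARY KEY, color TEXT, license_number TEXT, license_image TEXT,
    car_image TEXT, first_sighted TEXT, num_rules_broken INTEGER, owner TEXT);
CREATE TABLE IF NOT EXISTS rules (
    id INTEGER PRIMARY KEY, name TEXT, fine REAL);
CREATE TABLE IF NOT EXISTS camera_group (
    name TEXT PRIMARY KEY);
CREATE TABLE IF NOT EXISTS camera (
    id TEXT PRIMARY KEY, location TEXT, coordinate_x REAL, coordinate_y REAL,
    feed TEXT, cam_group TEXT REFERENCES camera_group(name));
CREATE TABLE IF NOT EXISTS violations (
    camera TEXT REFERENCES camera(id), car INTEGER REFERENCES cars(id),
    rule INTEGER REFERENCES rules(id), time TEXT, cleared BOOLEAN DEFAULT 0);
""")
con.commit()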
7. FORMS
Form 3: Record page
8. SOURCE CODE
import time

import cv2
import qdarkstyle
from PyQt5 import QtCore, QtWidgets
from PyQt5.QtCore import QTimer
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QMainWindow, QStatusBar, QListWidget, QAction, qApp, QMenu
from PyQt5.uic import loadUi

from Archive import ArchiveWindow
from Database import Database
from processor.MainProcessor import MainProcessor
from processor.TrafficProcessor import TrafficProcessor
from ViolationItem import ViolationItem
from add_windows.AddCamera import AddCamera
from add_windows.AddCar import AddCar
from add_windows.AddRule import AddRule
from add_windows.AddViolation import AddViolation
class MainWindow(QMainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        loadUi("./UI/MainWindow.ui", self)

        self.live_preview.setScaledContents(True)
        from PyQt5.QtWidgets import QSizePolicy
        self.live_preview.setSizePolicy(QSizePolicy.Ignored, QSizePolicy.Ignored)

        self.cam_clear_guard = False
        self.statusBar = QStatusBar()
        self.setStatusBar(self.statusBar)
        self.statusBar.showMessage("Welcome")

        self.search_button.clicked.connect(self.search)
        self.clear_button.clicked.connect(self.clear)
        self.refresh_button.clicked.connect(self.refresh)

        self.database = Database.getInstance()
        self.database.deleteAllCars()
        self.database.deleteAllViolations()

        cam_groups = self.database.getCamGroupList()
        self.camera_group.clear()
        self.camera_group.addItems(name for name in cam_groups)
        self.camera_group.setCurrentIndex(0)
        self.camera_group.currentIndexChanged.connect(self.camGroupChanged)

        cams = self.database.getCamList(self.camera_group.currentText())
        self.cam_selector.clear()
        self.cam_selector.addItems(name for name, location, feed in cams)
        self.cam_selector.setCurrentIndex(0)
        self.cam_selector.currentIndexChanged.connect(self.camChanged)

        self.processor = MainProcessor(self.cam_selector.currentText())

        self.log_tabwidget.clear()
        self.violation_list = QListWidget(self)
        self.search_result = QListWidget(self)
        self.log_tabwidget.addTab(self.violation_list, "Violations")
        self.log_tabwidget.addTab(self.search_result, "Search Result")

        self.feed = None
        self.vs = None
        self.updateCamInfo()
        self.updateLog()
        self.initMenu()

        # Poll the video source every 50 ms to refresh the live preview.
        self.timer = QTimer(self)
        self.timer.timeout.connect(self.update_image)
        self.timer.start(50)

        # trafficLightTimer = QTimer(self)
        # trafficLightTimer.timeout.connect(self.toggleLight)
        # trafficLightTimer.start(5000)
    def toggleLight(self):
        self.processor.setLight('Green' if self.processor.getLight() == 'Red' else 'Red')

    def initMenu(self):
        menubar = self.menuBar()
        fileMenu = menubar.addMenu('&File')
        # File menu
        # Add records manually
        addRec = QMenu("Add Record", self)

        act = QAction('Add Car', self)
        act.setStatusTip('Add Car Manually')
        act.triggered.connect(self.addCar)
        addRec.addAction(act)

        act = QAction('Add Rule', self)
        act.setStatusTip('Add Rule Manually')
        act.triggered.connect(self.addRule)
        addRec.addAction(act)

        act = QAction('Add Violation', self)
        act.setStatusTip('Add Violation Manually')
        act.triggered.connect(self.addViolation)
        addRec.addAction(act)

        act = QAction('Add Camera', self)
        act.setStatusTip('Add Camera Manually')
        act.triggered.connect(self.addCamera)
        addRec.addAction(act)

        fileMenu.addMenu(addRec)

        # Check archived records (window with a button to restore them)
        act = QAction('&Archives', self)
        act.setStatusTip('Show Archived Records')
        act.triggered.connect(self.showArch)
        fileMenu.addAction(act)

        settingsMenu = menubar.addMenu('&Settings')
        themeMenu = QMenu("Themes", self)
        settingsMenu.addMenu(themeMenu)

        act = QAction('Dark', self)
        act.setStatusTip('Dark Theme')
        # The lambda bodies were lost in extraction; since qdarkstyle is imported
        # above, applying its stylesheet is the assumed intent.
        act.triggered.connect(lambda: self.setStyleSheet(qdarkstyle.load_stylesheet_pyqt5()))
        themeMenu.addAction(act)

        act = QAction('White', self)
        act.setStatusTip('White Theme')
        act.triggered.connect(lambda: self.setStyleSheet(""))
        themeMenu.addAction(act)

        # Add Exit
        fileMenu.addSeparator()
        act = QAction('&Exit', self)
        act.setShortcut('Ctrl+Q')
        act.setStatusTip('Exit application')
        act.triggered.connect(qApp.quit)
        fileMenu.addAction(act)
    def keyReleaseEvent(self, event):
        if event.key() == QtCore.Qt.Key_G:
            self.processor.setLight("Green")
        elif event.key() == QtCore.Qt.Key_R:
            self.processor.setLight("Red")
        elif event.key() == QtCore.Qt.Key_S:
            self.toggleLight()

    def addCamera(self):
        addWin = AddCamera(parent=self)
        addWin.show()

    def addCar(self):
        addWin = AddCar(parent=self)
        addWin.show()

    def addViolation(self):
        addWin = AddViolation(parent=self)
        addWin.show()

    def addRule(self):
        addWin = AddRule(parent=self)
        addWin.show()

    def showArch(self):
        addWin = ArchiveWindow(parent=self)
        addWin.show()

    def updateSearch(self):
        pass

    def update_image(self):
        _, frame = self.vs.read()
        packet = self.processor.getProcessedImage(frame)
        cars_violated = packet['list_of_cars']  # list of cropped images of violating cars
        if len(cars_violated) > 0:
            for c in cars_violated:
                carId = self.database.getMaxCarId() + 1
                car_img = 'car_' + str(carId) + '.png'
                cv2.imwrite('car_images/' + car_img, c)
                self.database.insertIntoCars(car_id=carId, car_img=car_img)
                # The rule id and timestamp arguments were lost in extraction;
                # placeholder values are assumed here.
                self.database.insertIntoViolations(camera=self.cam_selector.currentText(),
                                                   car=carId, rule=1, time=time.time())
            self.updateLog()
        qimg = self.toQImage(packet['frame'])
        self.live_preview.setPixmap(QPixmap.fromImage(qimg))
    def updateCamInfo(self):
        count, location, self.feed = self.database.getCamDetails(self.cam_selector.currentText())
        self.feed = 'videos/' + self.feed
        self.processor = MainProcessor(self.cam_selector.currentText())
        self.vs = cv2.VideoCapture(self.feed)
        self.cam_id.setText(self.cam_selector.currentText())
        self.address.setText(location)
        self.total_records.setText(str(count))

    def updateLog(self):
        self.violation_list.clear()
        rows = self.database.getViolationsFromCam(str(self.cam_selector.currentText()))
        for row in rows:
            listWidget = ViolationItem()
            listWidget.setData(row)
            listWidgetItem = QtWidgets.QListWidgetItem(self.violation_list)
            listWidgetItem.setSizeHint(listWidget.sizeHint())
            self.violation_list.addItem(listWidgetItem)
            self.violation_list.setItemWidget(listWidgetItem, listWidget)

    @QtCore.pyqtSlot()
    def refresh(self):
        self.updateCamInfo()
        self.updateLog()
    @QtCore.pyqtSlot()
    def search(self):
        from SearchWindow import SearchWindow
        searchWindow = SearchWindow(self.search_result, parent=self)
        searchWindow.show()

    @QtCore.pyqtSlot()
    def clear(self):
        qm = QtWidgets.QMessageBox
        prompt = qm.question(self, '', "Are you sure to reset all the values?", qm.Yes | qm.No)
        if prompt == qm.Yes:
            self.database.clearCamLog()
            self.updateLog()

    def toQImage(self, raw_img):
        from numpy import copy
        img = copy(raw_img)
        qformat = QImage.Format_Indexed8
        if len(img.shape) == 3:
            if img.shape[2] == 4:
                qformat = QImage.Format_RGBA8888
            else:
                qformat = QImage.Format_RGB888
        outImg = QImage(img.tobytes(), img.shape[1], img.shape[0], img.strides[0], qformat)
        outImg = outImg.rgbSwapped()  # OpenCV images are BGR; Qt expects RGB
        return outImg

    @QtCore.pyqtSlot()
    def camChanged(self):
        if not self.cam_clear_guard:
            self.updateCamInfo()
            self.updateLog()

    @QtCore.pyqtSlot()
    def camGroupChanged(self):
        cams = self.database.getCamList(self.camera_group.currentText())
        self.cam_clear_guard = True
        self.cam_selector.clear()
        self.cam_selector.addItems(name for name, location, feed in cams)
        self.cam_selector.setCurrentIndex(0)
        self.cam_clear_guard = False  # re-enable camChanged handling
DATABASE CODE
import sqlite3 as lite
from enum import Enum

from PyQt5.QtGui import QPixmap


class KEYS(Enum):
    LOCATION = 'location'
    CARID = 'carid'
    CARCOLOR = 'carcolor'
    FIRSTSIGHTED = 'firstsighted'
    CARIMAGE = 'carimage'
    LICENSENUMBER = 'licensenumber'
    LICENSEIMAGE = 'licenseimage'
    NUMRULESBROKEN = 'numrulesbroken'
    CAROWNER = 'carowner'
    RULENAME = 'rulename'
    RULEFINE = 'rulefine'
    TIME = 'time'
    RULEID = 'ruleid'


class Database():
    __instance = None

    @staticmethod
    def getInstance():
        if Database.__instance is None:
            Database()
        return Database.__instance

    def __init__(self):
        if Database.__instance is not None:
            raise Exception("This class is a singleton!")
        else:
            Database.__instance = self
            self.con = lite.connect("database/traffic.db")
    def getCarColorsList(self):
        command = "select distinct(color) from cars"
        rows = self.con.cursor().execute(command).fetchall()
        return [row[0] for row in rows]

    def getLicenseList(self):
        command = "select license_number from cars"
        rows = self.con.cursor().execute(command).fetchall()
        return [row[0] for row in rows]

    def insertIntoCars(self, car_id='', color='', lic_num='', lic_img='', car_img='', owner=''):
        sql = '''INSERT INTO cars(id, color, license_number, license_image, car_image, owner)
                 VALUES(?,?,?,?,?,?) '''
        car_img = car_img.split('/')[-1]
        lic_img = lic_img.split('/')[-1]
        cur = self.con.cursor()
        cur.execute(sql, (car_id, color, lic_num, lic_img, car_img, owner))
        cur.close()
        self.con.commit()

    def getMaxCarId(self):
        sql = '''select max(id) from cars'''
        carid = self.con.cursor().execute(sql).fetchall()[0][0]
        if carid is None:
            carid = 1
        return carid
    def insertIntoViolations(self, camera, car, rule, time):
        sql = '''INSERT INTO violations(camera, car, rule, time)
                 VALUES(?,?,?,?) '''
        cur = self.con.cursor()
        cur.execute(sql, (camera, car, rule, self.convertTimeToDB(time)))
        cur.close()
        self.con.commit()

    def insertIntoRules(self, rule, fine):
        sql = '''INSERT INTO rules(name, fine)
                 VALUES(?,?) '''
        cur = self.con.cursor()
        cur.execute(sql, (rule, fine))
        cur.close()
        self.con.commit()

    def insertIntoCamera(self, id, location, x, y, group, file):
        sql = '''INSERT INTO camera(id, location, coordinate_x, coordinate_y, feed, cam_group)
                 VALUES(?,?,?,?,?,?) '''
        file = file.split('/')[-1]
        cur = self.con.cursor()
        cur.execute(sql, (id, location, x, y, file, group))
        cur.close()
        self.con.commit()
    def search(self, cam=None, color=None, license=None, time=None):
        cur = self.con.cursor()
        # cars.license_image is selected as well so that the column indices
        # below line up with the dictionary keys.
        command = "SELECT camera.location, cars.id, cars.color, cars.first_sighted," \
                  " cars.license_number, cars.license_image, cars.car_image," \
                  " cars.num_rules_broken, cars.owner," \
                  " rules.name, rules.fine, violations.time, rules.id" \
                  " FROM violations, rules, cars, camera" \
                  " where rules.id = violations.rule" \
                  " and violations.camera = camera.id" \
                  " and cars.id = violations.car"
        if cam is not None:
            command = command + " and violations.camera = '" + str(cam) + "'"
        if color is not None:
            command = command + " and cars.color = '" + str(color) + "'"
        if license is not None:
            command = command + " and cars.license_number = '" + str(license) + "'"
        if time is not None:
            command = command + " and violations.time >= " + str(self.convertTimeToDB(time[0])) + \
                      " and violations.time <= " + str(self.convertTimeToDB(time[1]))
        cur.execute(command)
        rows = cur.fetchall()
        ret = []
        for row in rows:
            record = {}
            record[KEYS.LOCATION] = row[0]
            record[KEYS.CARID] = row[1]
            record[KEYS.CARCOLOR] = row[2]
            record[KEYS.FIRSTSIGHTED] = row[3]
            record[KEYS.LICENSENUMBER] = row[4]
            record[KEYS.LICENSEIMAGE] = QPixmap("license_images/" + row[5])
            record[KEYS.CARIMAGE] = QPixmap("car_images/" + row[6])
            record[KEYS.NUMRULESBROKEN] = row[7]
            record[KEYS.CAROWNER] = row[8]
            record[KEYS.RULENAME] = row[9]
            record[KEYS.RULEFINE] = row[10]
            record[KEYS.TIME] = row[11]
            record[KEYS.RULEID] = row[12]
            ret.append(record)
        cur.close()
        return ret
    def getViolationsFromCam(self, cam, cleared=False):
        cur = self.con.cursor()
        command = "SELECT camera.location, cars.id, cars.color, cars.first_sighted," \
                  " cars.license_number, cars.license_image, cars.car_image," \
                  " cars.num_rules_broken, cars.owner," \
                  " rules.name, rules.fine, violations.time, rules.id" \
                  " FROM violations, rules, cars, camera" \
                  " where rules.id = violations.rule" \
                  " and cars.id = violations.car" \
                  " and violations.camera = camera.id"
        if cam is not None:
            command = command + " and violations.camera = '" + str(cam) + "'"
        cur.execute(command)
        rows = cur.fetchall()
        ret = []
        for row in rows:
            record = {}
            record[KEYS.LOCATION] = row[0]
            record[KEYS.CARID] = row[1]
            record[KEYS.CARCOLOR] = row[2]
            record[KEYS.FIRSTSIGHTED] = row[3]
            record[KEYS.LICENSENUMBER] = row[4]
            record[KEYS.LICENSEIMAGE] = QPixmap("license_images/" + row[5])
            record[KEYS.CARIMAGE] = QPixmap("car_images/" + row[6])
            record[KEYS.NUMRULESBROKEN] = row[7]
            record[KEYS.CAROWNER] = row[8]
            record[KEYS.RULENAME] = row[9]
            record[KEYS.RULEFINE] = row[10]
            record[KEYS.TIME] = row[11]
            record[KEYS.RULEID] = row[12]
            ret.append(record)
        cur.close()
        return ret
    def deleteViolation(self, carid, ruleid, time):
        cur = self.con.cursor()
        command = "update violations set cleared = true " \
                  "where car = " + str(carid) + " and rule = " + str(ruleid) + " and time = " + str(time)
        rowcount = cur.execute(command).rowcount
        print("Deleted " + str(rowcount) + " rows")
        cur.close()
        self.con.commit()

    def getCamDetails(self, cam_id):
        command = "select count(*) from violations where camera = '" + str(cam_id) + "'"
        cur = self.con.cursor()
        count = cur.execute(command).fetchall()[0][0]
        cur.close()
        command = "select location, feed from camera where id = '" + str(cam_id) + "'"
        cur = self.con.cursor()
        res = cur.execute(command).fetchall()
        location, feed = res[0]
        cur.close()
        return count, location, feed
    def deleteAllCars(self):
        command = "delete from cars"
        cur = self.con.cursor()
        cur.execute(command)
        cur.close()
        self.con.commit()

    def deleteAllViolations(self):
        command = "delete from violations"
        cur = self.con.cursor()
        cur.execute(command)
        cur.close()
        self.con.commit()

    def getCamList(self, group):
        if group is not None:
            command = "select id, location, feed from camera where cam_group = '" + str(group) + "'"
        else:
            command = "select id, location, feed from camera"
        cur = self.con.cursor()
        cur.execute(command)
        rows = cur.fetchall()
        ret = [(row[0], row[1], row[2]) for row in rows]
        cur.close()
        return ret

    def getCamGroupList(self):
        command = "select name from camera_group"
        cur = self.con.cursor()
        cur.execute(command)
        rows = cur.fetchall()
        ret = [row[0] for row in rows]
        cur.close()
        return ret

    def clearCamLog(self):
        command = "update violations set cleared = true"
        cur = self.con.cursor()
        cur.execute(command)
        cur.close()
        self.con.commit()
MAIN PROCESSOR
from Database import Database
from processor.TrafficProcessor import TrafficProcessor
from processor.violation_detection import DirectionViolationDetection


class MainProcessor:
    def __init__(self, camera_id):
        self.cam_id = camera_id
        self.cam_violation_count, self.cam_location, self.cam_feed = \
            Database.getInstance().getCamDetails(camera_id)
        if camera_id == 'cam_01' or camera_id == 'cam_03':
            self.processor = TrafficProcessor()
            self.processor.zone1 = (100, 150)
            self.processor.zone2 = (450, 145)
            self.processor.thres = 30
        elif camera_id == 'cam_02':
            self.processor = TrafficProcessor()
            self.processor.zone1 = (100, 150)
            self.processor.zone2 = (450, 145)
            self.processor.thres = 6
            self.processor.dynamic = True
        elif camera_id == 'cam_04':
            self.processor = DirectionViolationDetection(self.cam_feed)

    def getProcessedImage(self, frame=None, cap=None):
        if self.cam_id in ['cam_01', 'cam_02', 'cam_03']:
            dicti = self.processor.cross_violation(frame)
        elif self.cam_id == 'cam_04':
            dicti = self.processor.feedCap(frame)
        else:
            dicti = {'frame': frame, 'list_of_cars': []}  # default packet for unknown cameras
        return dicti

    def setLight(self, color):
        self.processor.light = color

    def getLight(self):
        # Accessor for the current light state, used by MainWindow.toggleLight
        return self.processor.light
VIOLATION DETECTION
import time

import cv2
import imutils
import numpy as np

from processor import Vehicle


class DirectionViolationDetection:
    def __init__(self, vid_file):  # vid_file = 'videos/traffic.avi'
        self.cnt_up = 0
        self.cnt_down = 0
        self.zone1 = (100, 200)
        self.zone2 = (450, 100)
        self.cap = cv2.VideoCapture(vid_file)

        # Frame geometry and the minimum contour area treated as a vehicle
        self.w = self.cap.get(3)   # CAP_PROP_FRAME_WIDTH
        self.h = self.cap.get(4)   # CAP_PROP_FRAME_HEIGHT
        self.frameArea = self.h * self.w
        self.areaTH = self.frameArea / 200
        print('Area Threshold', self.areaTH)

        # Input/output lines
        self.line_up = int(2 * (self.h / 5))
        self.line_down = int(3 * (self.h / 5))
        self.up_limit = int(1 * (self.h / 5))
        self.down_limit = int(4 * (self.h / 5))
        self.line_down_color = (255, 0, 0)
        self.line_up_color = (0, 0, 255)
        self.pt1 = [0, self.line_down]
        self.pt2 = [self.w, self.line_down]
        self.pts_L1 = np.array([self.pt1, self.pt2], np.int32).reshape((-1, 1, 2))
        self.pt3 = [0, self.line_up]
        self.pt4 = [self.w, self.line_up]
        self.pts_L2 = np.array([self.pt3, self.pt4], np.int32).reshape((-1, 1, 2))
        self.pt5 = [0, self.up_limit]
        self.pt6 = [self.w, self.up_limit]
        self.pts_L3 = np.array([self.pt5, self.pt6], np.int32).reshape((-1, 1, 2))
        self.pt7 = [0, self.down_limit]
        self.pt8 = [self.w, self.down_limit]
        self.pts_L4 = np.array([self.pt7, self.pt8], np.int32).reshape((-1, 1, 2))

        # Background subtractor and morphology kernels
        self.fgbg = cv2.createBackgroundSubtractorMOG2()
        self.kernelOp = np.ones((3, 3), np.uint8)
        self.kernelOp2 = np.ones((5, 5), np.uint8)
        self.kernelCl = np.ones((11, 11), np.uint8)

        # Variables
        self.font = cv2.FONT_HERSHEY_SIMPLEX
        self.vehicles = []
        self.max_p_age = 5
        self.pid = 1
    def feedCap(self, frame):
        retDict = {'image_threshold': None,
                   'image_threshold_2': None,
                   'mask_image': None,
                   'mask_image_2': None,
                   'frame': None,
                   'list_of_cars': []}
        # (When run standalone, frames were read in a loop from self.cap instead.)
        for i in self.vehicles:
            i.age_one()  # age every tracked vehicle by one frame
        # PREPROCESSING #
        fgmask = self.fgbg.apply(frame)
        fgmask2 = self.fgbg.apply(frame)

        # Binarize to remove shadows (MOG2 marks shadows with gray values)
        _, imBin = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)
        _, imBin2 = cv2.threshold(fgmask2, 200, 255, cv2.THRESH_BINARY)

        # Opening (erode -> dilate) to remove noise
        mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, self.kernelOp)
        mask2 = cv2.morphologyEx(imBin2, cv2.MORPH_OPEN, self.kernelOp)

        # Closing (dilate -> erode) to join white regions
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, self.kernelCl)
        mask2 = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, self.kernelCl)

        retDict['image_threshold'] = cv2.resize(fgmask, (400, 300))
        retDict['image_threshold_2'] = cv2.resize(fgmask2, (400, 300))
        retDict['mask_image'] = cv2.resize(mask, (400, 300))
        retDict['mask_image_2'] = cv2.resize(mask2, (400, 300))

        ## FIND CONTOURS ##
        contours0 = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        contours0 = imutils.grab_contours(contours0)  # handles OpenCV 2/3/4 return formats
        for cnt in contours0:
            area = cv2.contourArea(cnt)
            if self.areaTH < area < 20000:
                # TRACKING #
                M = cv2.moments(cnt)
                cx = int(M['m10'] / M['m00'])
                cy = int(M['m01'] / M['m00'])
                x, y, w, h = cv2.boundingRect(cnt)

                # Is this object near one that was already detected before?
                new = True
                for i in self.vehicles:
                    if abs(x - i.getX()) <= w and abs(y - i.getY()) <= h:
                        new = False
                        i.updateCoords(cx, cy)  # update the coordinates and reset age
                        if i.going_UP(self.line_down, self.line_up):
                            self.cnt_up += 1
                            print("ID:", i.getId(), 'crossed going up at', time.strftime("%c"))
                            # cv2.putText(frame, str(i.getId()), (x, y - 2), cv2.FONT_HERSHEY_SIMPLEX, 5, 255)
                        elif i.going_DOWN(self.line_down, self.line_up):
                            # Going down is the violating direction: crop and report the car
                            roi = frame[y:y + h, x:x + w]
                            retDict['list_of_cars'].append(roi)
                            print("Area equal to ::::", area)
                            self.cnt_down += 1
                            print("ID:", i.getId(), 'crossed going down at', time.strftime("%c"))
                            # cv2.putText(frame, str(i.getId()), (x, y - 2), cv2.FONT_HERSHEY_SIMPLEX, 5, 255)
                        break
                    if i.getState() == '1':
                        if i.getDir() == 'down' and i.getY() > self.down_limit:
                            i.setDone()
                        elif i.getDir() == 'up' and i.getY() < self.up_limit:
                            i.setDone()
                    if i.timedOut():
                        # Remove timed-out vehicles from the tracking list
                        index = self.vehicles.index(i)
                        self.vehicles.pop(index)
                        del i
                if new:
                    p = Vehicle.MyVehicle(self.pid, cx, cy, self.max_p_age)
                    self.vehicles.append(p)
                    self.pid += 1
                # DRAWING #
                cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)
                img = cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # IMAGE #
        str_up = 'UP: ' + str(self.cnt_up)
        str_down = 'DOWN: ' + str(self.cnt_down)
        frame = cv2.polylines(frame, [self.pts_L1], False, self.line_down_color, thickness=2)
        frame = cv2.polylines(frame, [self.pts_L2], False, self.line_up_color, thickness=2)
        frame = cv2.polylines(frame, [self.pts_L3], False, (255, 255, 255), thickness=1)
        frame = cv2.polylines(frame, [self.pts_L4], False, (255, 255, 255), thickness=1)
        # cv2.putText(frame, str_up, (10, 40), self.font, 2, (0, 0, 255), 1, cv2.LINE_AA)
        # cv2.putText(frame, str_down, (10, 90), self.font, 2, (255, 0, 0), 1, cv2.LINE_AA)

        time.sleep(0.04)  # pace playback to roughly 25 fps
        retDict['frame'] = cv2.resize(frame, (400, 300))
        return retDict
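A hedged standalone driver for the class above, mirroring the commented-out capture loop in the original listing (the file path and window name are assumptions):

# Hypothetical standalone driver for DirectionViolationDetection.
if __name__ == '__main__':
    detector = DirectionViolationDetection('videos/traffic.avi')  # assumed video path
    while detector.cap.isOpened():
        ok, frame = detector.cap.read()
        if not ok:
            break
        result = detector.feedCap(frame)
        cv2.imshow('Frame', result['frame'])
        # Abort and exit with 'q' or ESC
        if cv2.waitKey(10) & 0xff in (27, ord('q')):
            break
    detector.cap.release()
    cv2.destroyAllWindows()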
9. REPORT
The GUI is made mainly for this purpose: there will always be a supervisor for a group of
cameras. He can see the list of rule violations and the details of the cars that violated the rules
(fig. 8). If he clicks on the detail button, a new window will appear where the user can file
the report or send/print a ticket for the car owner.
Form 1: Home
Also, the admin/user can delete a record if he gets a false positive, but no record is ever truly
deleted: the database keeps a marker of which records have been archived. If we want to retrieve a
deleted record, the admin needs to go to the archive window, where he can restore any record he
wants. The user can also search for a vehicle by its license number, its colour, or the date of a rule
violation. The license number field has text prediction, so while typing a license number the user can
be sure that it exists. For signal violation, we have used a straight line in the picture: when the
traffic light is red and a car crosses the straight line, a picture of that car is registered in the database
along with some environmental values. The user can see in the live preview which cars are being
detected in real time and tested for whether they are crossing the line.
Form 2: Detected vehicles
For direction violation detection, some lines are drawn to divide the frame into regions. Then, when a
car moves from one region to another, its direction is measured. If the direction is wrong, the violation
is registered as in the previous case.
Form 3: Verifying
The process begins with Open CV background processing to detect an object trained by AI,
proceeding step by step through the training and object-finding process.
10. CONCLUSION
The Traffic Violation Detection System represents a crucial advancement in modern traffic
management and law enforcement. Through the integration of cutting-edge technologies such as
computer vision, machine learning, and real-time data processing, the system has demonstrated
significant effectiveness in improving road safety, enhancing enforcement efficiency, and fostering
compliance with traffic regulations. By leveraging Open CV for background processing and object
detection trained by AI, the system can accurately identify and classify various traffic violations,
including speeding, red light running, and illegal parking. This capability enables law enforcement
authorities to promptly respond to violations, issue citations, and mitigate potential hazards on
roadways.
11. BIBLIOGRAPHY
[2] Y. Artan, O. Bulan, R. P. Loce, and P. Paul, "Passenger Compartment Violation Detection in
HOV/HOT Lanes," 2015.
[3] Mukremin Ozkul and Ilir Capuni, "Police-less Multiparty Traffic Violation Detection and
Reporting System with Privacy Preservation," IET Intelligent Transport Systems, vol. 12, no. 5,
pp. 351-358, 2018.
[4] Rhen Anjerome Bedruz, Aaron Christian P. Uy, Ana Riza Quiros, Robert Kerwin Billones, Edwin
Sybingco, Argel Bandala, and Elmer P. Dadios, "A Robotic Model Approach of an Automated
Traffic Violation Detection System with Apprehension," 2018 IEEE 10th International Conference on
Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and
Management (HNICEM), pp. 1-4, 2019.
[5] P. Srinivas Reddy and O. Ramesh, "A Video-Based Traffic Violation Detection System," Sep. 2019.
[6] S. Madhuravani, N. B. Deepthi, S. Umar, and Gouse, "Passenger Compartment Violation Detection
in HOV/HOT Lanes," Aug. 2019.
[7] Dat Tran, "How to Train Your Own Object Detector with TensorFlow's Object Detector API,"
Towards Data Science, 2021.
[8] G. Sowmya, G. Divya Jyothi, N. Shirisha, and K. Navya, "Active Learning Strategies in Engineering
Education," Journal of Advanced Research in Dynamical and Control Systems, Jan. 2021.