
International Journal of Technical Research and Applications e-ISSN: 2320-8163,
www.ijtra.com Special Issue 43 (March 2017), PP. 83-85

VEHICLE DETECTION, COUNTING AND CLASSIFICATION

1Vishakha Dumbre, 2Prachi Birhade, 3Samruddhi Talankar, 4Akash Jagtap, 5Sonali Dhamele
1,2,3,4,5Computer Department, Terna Engineering College, Navi Mumbai, India.
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract— Detecting moving objects in videos is an important task in several computer vision applications, such as human interaction, traffic monitoring and structural health monitoring. When a stationary camera is used, a basic method to detect the objects of interest is background subtraction. However, precise moving object detection with such a method is an extremely difficult task in a varying environment. This paper introduces a new technique for detecting, counting and classifying vehicles. As more vehicles continuously appear on the roads, congestion and accidents increase. A traffic monitoring system capable of detecting, counting and classifying the passing vehicles is needed to provide relevant authorities with advance information on road traffic demand. Vehicle detection is the key task in this area, and vehicle counting and classification are two important applications. This paper introduces the proposed method and its efficiency for use in traffic monitoring systems.

Index Terms— Vehicle detection, Background subtraction, Vehicle classification, Vehicle counting.

I. INTRODUCTION
A traffic surveillance system is a very important part of an Intelligent Transport System (ITS). An ITS is an application that integrates electronic, computer and communication technologies into vehicles and roadways for the analysis of traffic conditions, reducing congestion and enhancing flexibility [1]. Many traffic monitoring systems use three major stages to estimate the required traffic parameters: vehicle detection, counting and classification.
Videos or images are provided by traffic cameras installed over the roads or at the roadside. Different traffic parameters, such as vehicle type, the number of vehicles, traffic density and even traffic accident information, can be extracted from traffic videos or images in a short time.
Vehicle detection is often one of the first tasks in a computer vision application with a stationary camera. After a vehicle is detected, other applications can be applied more easily [2].
In this system, the object is detected by pixel-wise subtraction between the current frame and the background frame. Using a threshold, all pixels belonging to the object (those not present in the background image) are detected. After vehicles have been detected, they can be counted by drawing an imaginary line on the video: whenever a vehicle crosses the line, the count is automatically increased. The main goal of vehicle classification is to categorize the detected vehicles into their respective classes.

II. BACKGROUND SUBTRACTION
Identifying moving objects in a video sequence is a basic and important task in many computer vision applications. Background subtraction, also known as foreground detection, is a technique within image processing and computer vision in which the foreground of an image is extracted for further processing (object recognition, etc.). Generally, the regions of interest are the objects (humans, cars, text, etc.) in the foreground. A common approach is to perform background subtraction, which identifies moving objects as the portion of a video frame that differs significantly from a background model.
The principle of the approach is to detect moving objects from the difference between the current frame and a reference frame, typically known as the "background image" or "background model". Background subtraction provides necessary clues for various applications in computer vision, for example surveillance, tracking or human pose estimation. However, background subtraction is usually based on a static-background hypothesis [3].
There are many challenges in developing a good background subtraction algorithm. First, it should be robust against changes in illumination. Second, it should avoid detecting non-stationary background objects [4].
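As a concrete illustration of the background subtraction principle described above, the following minimal sketch maintains a running-average background model and thresholds the frame difference. It is an assumed, illustrative implementation (OpenCV, a hypothetical input file traffic.mp4 and tuning constants chosen for the example), not necessarily the exact procedure used in the paper.

```python
import cv2
import numpy as np

# Minimal background-subtraction sketch: the background model is a running
# average of frames; moving objects are the pixels of the current frame that
# differ from the model by more than a threshold.

cap = cv2.VideoCapture("traffic.mp4")   # assumed input video path
ret, frame = cap.read()
background = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

THRESHOLD = 30          # assumed intensity-difference threshold
LEARNING_RATE = 0.01    # how quickly the background model adapts

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Foreground = |current frame - background model| > threshold
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    _, foreground = cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)

    # Slowly update the background so gradual illumination changes are absorbed
    cv2.accumulateWeighted(gray, background, LEARNING_RATE)

cap.release()
```

The slow update of the background addresses the illumination-change challenge mentioned above, at the cost of eventually absorbing vehicles that stop for a long time.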
III. VEHICLE DETECTION
Object detection is typically accomplished by a simple background/current-frame differencing followed by thresholding. The main idea is to subtract the pixels of the background frame from those of the current frame. Pixels whose difference is greater than a certain threshold are considered to belong to the object (a detected vehicle), as shown in Figure 1. We use four distinct steps to detect the pixels belonging to moving vehicles.


Figure 1: Vehicle detection: (a) averaged background, (b) masked current frame containing objects in the detection zone, (c) blobs of detected vehicles.

The first step in the proposed vehicle detection method is to construct a binary mask that defines the detection range. The mask M has the same size as the video frames, with ones in the region corresponding to the detection area and zeros elsewhere. Next, we multiply both the background and the current image by the mask (Figure 1(b)); this way the areas outside the detection region are simply eliminated (dark areas) [4]. Then the background subtraction technique is applied to detect locations within the current image whose values are greater than the set threshold.
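A compact sketch of this masked detection step is given below. It is illustrative only, assumes OpenCV, and treats the averaged background, the detection-zone mask and the threshold value as given inputs.

```python
import cv2

def detect_vehicles(frame_gray, background_gray, mask, threshold=30):
    """Return a binary image of vehicle blobs inside the detection zone.

    frame_gray, background_gray : uint8 grayscale images of equal size
    mask : binary detection-zone mask M (non-zero inside the zone, 0 outside)
    threshold : assumed intensity-difference threshold
    """
    # Restrict both images to the detection zone (areas outside become dark)
    frame_roi = cv2.bitwise_and(frame_gray, frame_gray, mask=mask)
    background_roi = cv2.bitwise_and(background_gray, background_gray, mask=mask)

    # Background subtraction inside the zone
    diff = cv2.absdiff(frame_roi, background_roi)

    # Keep only pixels that differ from the background by more than the threshold
    _, blobs = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return blobs
```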
IV. VEHICLE CLASSIFICATION
In the classification step, we classify vehicles into three classes: small (e.g. car), medium (e.g. van) and large (e.g. bus and truck). To reach this goal, two features are extracted to differentiate between vehicle types. First, a length-based feature is computed, which is very useful for classifying vehicles according to their size [9]. In the field of pattern recognition, a classifier is used to identify the correct class of a given object based on classification rules and characteristics of the object, which are also called feature vectors. The main goal of this unit is to determine which category a passing vehicle belongs to. The local binary pattern (LBP) operator is a powerful feature extractor that transforms an image into a statistic of integer labels. Prior to the feature extraction process, some image pre-processing steps are applied to standardize and enhance the images.
The local binary pattern is used to extract unique attributes of each object. LBP was first introduced as a grey-scale and rotation-invariant texture descriptor. The basic LBP operator labels the pixels Pn (n = 0, 1, ..., 7) of an image by thresholding the 3 x 3 neighbourhood of each pixel with the value of the centre pixel Pc, and the resulting binary number is taken as the label. Given a pixel at location (xc, yc), the resulting LBP at that location can be expressed as

LBP(xc, yc) = Σ_{n=0}^{7} S(Pn − Pc) 2^n      (1)

where Pc is the grey-level value of the centre pixel, Pn are its eight surrounding neighbours, and S is given as

S(x) = 1 if x ≥ 0, and 0 otherwise      (2)

The histogram of the labels computed over each region can be used as a local primitive descriptor, which describes local changes in the region such as flat areas, curves, edges and spots. The procedure for extracting the LBP descriptors for vehicle representation is implemented as follows: first, each processed image is partitioned into 36 regions. Second, the LBP histogram of every region is computed. Third, the histogram ratio hr and the maximum histogram value hm of each region are determined using equations (3) and (4). Finally, the 36 hr and 36 hm values from all the regions are concatenated into one feature vector:

hm = j, j = arg max_i h(i)      (4)

VLBP = [hr1, hr2, ..., hr36, hm1, hm2, ..., hm36, a, b1, b2, b3, b4, c]      (5)
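The sketch below illustrates this kind of per-region LBP feature extraction. It is an illustrative reading of the text, not the authors' code: scikit-image's local_binary_pattern is used for equations (1) and (2), the image is split into a 6 x 6 grid to obtain 36 regions, and, because equation (3) for the histogram ratio hr is not reproduced above, hr is approximated here as the share of the dominant label in each region's histogram (an assumption made only for illustration).

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature_vector(image_gray, grid=(6, 6), n_points=8, radius=1):
    """Build a per-region LBP feature vector in the spirit of equation (5).

    image_gray : 2-D uint8 array (a pre-processed vehicle image)
    Returns the concatenation of 36 hr values and 36 hm values.
    NOTE: hr uses an assumed definition (dominant-label share), since the
    paper's equation (3) is not reproduced; hm follows equation (4).
    """
    lbp = local_binary_pattern(image_gray, n_points, radius)  # integer label per pixel
    h_step = image_gray.shape[0] // grid[0]
    w_step = image_gray.shape[1] // grid[1]

    hr_values, hm_values = [], []
    for r in range(grid[0]):
        for c in range(grid[1]):
            region = lbp[r * h_step:(r + 1) * h_step,
                         c * w_step:(c + 1) * w_step]
            hist, _ = np.histogram(region, bins=np.arange(2 ** n_points + 1))

            hm = int(np.argmax(hist))             # eq. (4): label with the largest count
            hr = hist.max() / max(hist.sum(), 1)  # assumed stand-in for eq. (3)

            hr_values.append(hr)
            hm_values.append(hm)

    return np.array(hr_values + hm_values)
```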
The formed feature vectors are then passed to a linear classifier to categorize them into their respective classes. Linear discriminant analysis (LDA) attempts to assign a given input object to one of two or more categories based on the features that characterize it; LDA seeks linear decision boundaries in the feature space that best separate the object classes.


This is achieved by maximizing the between-class scatter matrix while minimizing the within-class scatter matrix. Mathematically, the scatter matrices are defined as follows:

SW = Σ_{i=1}^{c} Σ_{xj ∈ xi} (xj − mi)(xj − mi)^T
SB = Σ_{i=1}^{c} ni (mi − m)(mi − m)^T      (8)

where SW and SB stand for the within-class and between-class scatter matrices respectively, ni is the number of training samples in class i, c is the number of class labels, mi is the sample mean of class i, m is the overall sample mean, xi stands for the i-th class sample set and xj represents the j-th object of that class. |SW| and |SB| denote the determinants of the within-class and between-class matrices.
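As an illustration of this classification stage, the sketch below computes the two scatter matrices of equation (8) from labelled feature vectors and then trains a linear discriminant classifier with scikit-learn. It is a generic example on placeholder data (the feature dimension of 72 is assumed to match the 36 hr and 36 hm values), not the authors' implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def scatter_matrices(X, y):
    """Within-class (SW) and between-class (SB) scatter matrices, as in equation (8)."""
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    SW = np.zeros((d, d))
    SB = np.zeros((d, d))
    for label in np.unique(y):
        Xi = X[y == label]                    # samples of class i
        mi = Xi.mean(axis=0)                  # class mean
        SW += (Xi - mi).T @ (Xi - mi)         # spread of class i around its mean
        diff = (mi - overall_mean).reshape(-1, 1)
        SB += len(Xi) * (diff @ diff.T)       # spread of class means around the overall mean
    return SW, SB

# Placeholder training data: LBP feature vectors with class labels
# 0 = small, 1 = medium, 2 = large (as in Section IV)
X_train = np.random.rand(60, 72)
y_train = np.random.randint(0, 3, size=60)

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
predicted_class = lda.predict(np.random.rand(1, 72))
```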
V. VEHICLE COUNTING
For counting vehicles, only one counting line is used. A line is defined in the frame where vehicles are to be detected. Whenever a vehicle crosses the line, the count is automatically increased.
The counting accuracy is calculated as shown in equation (9):

Accuracy (%) = (Recognition number / Actual number) x 100      (9)

where Recognition number is the number of vehicles counted by the system and Actual number is the number of vehicles observed in the frame.
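A minimal sketch of line-based counting and the accuracy of equation (9) follows. It assumes vehicle blobs have already been detected (for example with the detection function sketched in Section III), pairs blobs between consecutive frames in a very simple way instead of using a real tracker, and counts a vehicle when its centroid moves from one side of the counting line to the other.

```python
import cv2

COUNT_LINE_Y = 300          # assumed vertical position of the counting line (pixels)

def blob_centroids(binary_blobs, min_area=500):
    """Centroids of detected vehicle blobs above a minimum area."""
    contours, _ = cv2.findContours(binary_blobs, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_area:
            continue
        m = cv2.moments(cnt)
        centroids.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    return centroids

def update_count(prev_centroids, curr_centroids, count):
    """Increase the count when a centroid crosses the counting line downwards.

    Pairing centroids by order is a crude stand-in for proper tracking and is
    only meant to illustrate the line-crossing rule.
    """
    for (px, py), (cx, cy) in zip(prev_centroids, curr_centroids):
        if py < COUNT_LINE_Y <= cy:     # crossed the line between two frames
            count += 1
    return count

def counting_accuracy(recognition_number, actual_number):
    """Accuracy in percent, as in equation (9)."""
    return 100.0 * recognition_number / actual_number
```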
CONCLUSION
The proposed system consists of vehicle detection, vehicle classification and counting of vehicles. The system is applied to a video sequence of frames recorded by a static camera. Vehicles are detected using a background subtraction and thresholding approach, classified into different types with an LDA classifier, and counted by drawing an imaginary line on the input frame.

REFERENCES
[1] Jun-Wei, H., et al., "Automatic traffic surveillance system for vehicle tracking and classification," IEEE Transactions on Intelligent Transportation Systems, 2006, 7(2), pp. 175-187.
[2] Wu, K., Xu, T., Zhang, H., "Overview of video-based vehicle detection technologies," 6th Int. Conf. on Computer Science & Education (ICCSE), August 2011, pp. 821-825.
[3] Habibu Rabiu, "Vehicle Detection and Classification for Cluttered Urban Intersection," International Journal of Computer Science, Engineering and Applications (IJCSEA), Vol. 3, No. 1, 2013, DOI:10.5121/ijcsea.2013.3103.
[4] G. N. Swamy, S. Srilekha, "Vehicle Detection and Counting Based On Color Space Model," IEEE, 2015.
[5] Cheung, S.-C. and C. Kamath, "Robust techniques for background subtraction in urban traffic video," Video Communications and Image Processing, SPIE Electronic Imaging, San Jose, January 2004, UCRL-JC-153846-ABS, UCRL-CONF-200706.
[6] Jin-Cyuan Lai, Shih-Shinh Huang, and Chien-Cheng Tseng, "Image-Based Vehicle Tracking and Classification on the Highway," IEEE, 2010.
[7] Jun-Wei Hsieh, Shih-Hao Yu, Yung-Sheng Chen, and Wen-Fong Hu, "Automatic Traffic Surveillance System for Vehicle Tracking and Classification," IEEE, 2006.
[8] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: A review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 694-711, 2006.
[9] Kamkar, "Vehicle detection, classification and counting in various conditions," IET Intell. Transp. Syst., 2016, Vol. 10, Iss. 6, pp. 406-413.
