Project Report
On
“Vehicle Speed and Distance Detection”
DECLARATION
I hereby declare that the project work presented to the Department of Computer
Science and Engineering, Rajasthan Technical University Kota, entitled “Vehicle Speed and
Distance Detection”, was carried out and written by us to the best of our knowledge, under the
guidance of Dr. Gowri Choudhary, Assistant Professor, Department of Computer Science and
Engineering, Rajasthan Technical University Kota.
The results contained in this report have not been submitted, in part or in full, to any other
university or institute for the award of any degree or diploma, to the best of our knowledge.
CERTIFICATE
This is to certify that the final semester project, entitled “Vehicle Speed and Distance
Detection”, has been successfully carried out by Tanvi Sharm (Enrolment No. 20EUCCS061),
Madan Palsaniya (Enrolment No. 20EUCCS033), Kumar Tarun Sundaram (Enrolment No.
20EUCCS031) and Tejas Prajapati (Enrolment No. 20EUCCS063) under my guidance, in partial
fulfilment of the requirements for the degree of “Bachelor of Technology in Computer Science
and Engineering” from the Department of Computer Science and Engineering, Rajasthan
Technical University Kota, for the academic year 2023-24.
ACKNOWLEDGEMENT
I am thankful to my project supervisor, Dr. Gowri Choudhary, for the continuous support,
conviction, encouragement, and invaluable advice throughout this project work. I would also
like to thank Prof. C. P. Gupta for helping me throughout the literature review and presentation
preparation process.
ABSTRACT
Vehicle speed detection and tracking play an important role in the safety of civilian lives,
preventing many mishaps. This module is significant in traffic monitoring, where efficient
management and the safety of citizens are the main concerns. In this report, we discuss potential
methods for detecting a vehicle and its speed. Considerable research has already been conducted,
and many papers have been published in this area. The proposed method consists of three main
steps: background subtraction, feature extraction, and vehicle tracking. The speed is determined
from the distance travelled by the vehicle over a number of frames and the frame rate of the
video. For vehicle detection, we use techniques and algorithms such as the background
subtraction method, the feature-based method, frame differencing and motion-based methods,
the Gaussian mixture model, and the blob detection algorithm. Vehicle detection is a part of
speed detection: the vehicle is first located using these algorithms, and its speed is then
determined. The process for speed detection is as follows: 1) input video, 2) pre-processing,
3) moving vehicle detection, 4) feature extraction, 5) vehicle tracking, 6) speed detection. Many
accidents and mishaps can be avoided if vehicle detection and speed tracking techniques are
implemented.
CONTENTS
CHAPTER 7: NODE JS…………………………………………………………………………..….22-24
7.1 Introduction to Node.js…………………………………………………………………………....22
7.2 Node.js Environment Setup…………………………………………………………………….…22
7.3 Node.js Modules and CommonJS………………………………………………………………....22
7.4 Asynchronous JavaScript and Callbacks…………………………………………………….…….23
7.5 Working with npm (Node Package Manager)……………………………………………….…….23
7.6 Express.js Framework……………………………………………………………………….…….23
7.7 RESTful APIs with Express…………………………………………………………………….…24
7.8 Deploying Node.js Applications…………………………………………………………………..24
CHAPTER 8: MONGODB DATABASE ……………………………………………………….…..25-27
List of Figures
List of Tables
CHAPTER 1: INTRODUCTION ABOUT THE PROJECT
The motivation behind this project stems from the increasing need for effective traffic management
solutions to address the growing challenges of congestion, accidents, and pollution on roadways.
By developing an automatic vehicle speed and distance detection system, we aim to contribute to
the improvement of road safety, traffic flow optimization, and overall transportation efficiency.
The problem statement for this project revolves around the need to develop a robust and reliable
automatic system for detecting vehicle speed and distance. This system should be capable of
accurately measuring the speed of moving vehicles and calculating the distance between them in
real-time, without relying on manual intervention or external factors that may compromise
accuracy.
Design and develop a prototype automatic vehicle speed and distance detection system.
Implement algorithms for speed detection and distance measurement using sensor data.
Identify potential challenges and limitations of the system and propose solutions for improvement.
Explore potential applications and implications of the developed system in traffic management
and automotive safety.
The prototype system may not achieve the same level of accuracy and reliability as commercial-
grade solutions.
Environmental factors such as weather conditions and terrain may impact the performance of the
system.
The system may have limitations in detecting vehicles with certain characteristics or in complex
traffic scenarios.
Despite these limitations, the project aims to provide valuable insights into the feasibility and
effectiveness of automatic vehicle speed and distance detection systems.
Chapter 3 outlines the methodology employed in the design and development of the system,
including the hardware and software components used.
Chapter 4 presents the results of experiments and tests conducted to evaluate the performance of
the system.
Chapter 5 discusses the implications of the results, addresses challenges, and suggests avenues for
future research. Finally,
Chapter 6 summarizes the key findings of the project and provides concluding remarks.
CHAPTER 2: LITERATURE REVIEW
Real-Time Performance: Optical flow algorithms can provide speed estimates in real-time,
making them suitable for applications requiring immediate feedback, such as traffic
management and surveillance.
Non-Intrusive: Optical flow methods are non-intrusive and can be deployed without the
need for physical infrastructure or sensors embedded in the road surface, reducing
installation and maintenance costs.
Suitability for Various Environments: Optical flow algorithms are versatile and can be
applied in various environmental conditions, including daylight, low light, and adverse
weather, making them suitable for outdoor traffic monitoring applications.
Complexity in Crowded Scenes: Optical flow algorithms may face challenges in accurately
estimating vehicle speeds in crowded traffic scenarios with overlapping motion patterns
and occlusions, leading to inaccuracies and false detections.
Sensitivity to Lighting Conditions: Changes in lighting conditions, such as shadows, glare,
and reflections, can affect the performance of optical flow methods, leading to variations
in speed estimates and reduced accuracy.
2.3 Background Subtraction Method
Another widely used technique in automatic vehicle speed and distance detection systems is the
background subtraction method. This method involves subtracting a reference background image
from the current frame to isolate moving objects, including vehicles, on the roadway. By analyzing
the motion of foreground objects over time, the background subtraction method can estimate
vehicle speeds and distances with high accuracy.
Feature-Based Fusion: Integration techniques based on feature extraction and fusion allow
for the combination of motion information extracted from optical flow with foreground
detection results obtained from background subtraction, enhancing the accuracy and
robustness of speed and distance estimation.
Machine Learning Approaches: Machine learning algorithms, such as neural networks and
support vector machines, have been employed to learn the relationships between optical
flow features and background subtraction results, enabling more effective integration and
adaptation to varying traffic conditions.
2.5 Summary
This chapter provided an in-depth exploration of automatic vehicle speed and distance detection
systems, focusing on the optical flow method and background subtraction method as key
techniques. By examining the advantages, challenges, and integration possibilities of these
methods, this literature review sets the stage for the subsequent chapters, where we will discuss
the methodology employed in this project to leverage these techniques for the development of an
effective automatic speed and distance detection system.
CHAPTER 3: METHODOLOGY
1) Background Subtraction Method:
The retrieval of a moving object from an image with a fixed background is called background
subtraction, and the retrieved object results from thresholding the image difference. This
technique is predominantly used to detect vehicles in an image frame. However, the results are
affected by poor lighting or bad climatic conditions, which is a drawback of this method.
Background subtraction calculates the foreground mask by performing a subtraction between the
current frame and a background model containing the static part of the scene or, more generally,
everything that can be considered background given the characteristics of the observed scene.
Studies have suggested that statistical and parametric methodologies are primarily used for
background subtraction, and some of these techniques use a Gaussian distribution model for
every pixel in the image. Every pixel (i, j) is then classified into one of two classes, foreground
(moving vehicles, also known as blobs) or background, based on the knowledge procured from
the model, using equation (i):

|I(i, j) − μ(i, j)| > C · σ(i, j)   …(i)

where I(i, j) is the current pixel intensity, μ(i, j) and σ(i, j) are the per-pixel Gaussian mean and
standard deviation, and C is a constant; pixels satisfying the inequality are labelled foreground.
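A minimal NumPy sketch of this per-pixel classification rule (the function name and the value C = 2.5 are illustrative assumptions, not from the report):

```python
import numpy as np

def classify_pixels(frame, bg_mean, bg_std, C=2.5):
    """Label a pixel foreground when its intensity deviates from the
    per-pixel background mean by more than C standard deviations."""
    # Equation (i): |I(i, j) - mu(i, j)| > C * sigma(i, j)
    return np.abs(frame.astype(float) - bg_mean) > C * bg_std
```

Pixels for which the mask is True are treated as blob (foreground) pixels; the rest belong to the background model.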
Figure 1: Background Subtraction Methods
2) Feature-Based Method:
Feature-based modelling is a technique for identifying the image displacements that are easiest
to interpret. It identifies edges, corners, and other structures that are well localized in the
two-dimensional image plane, and traces these features as they move between frames. The
technique comprises two stages: finding the features in multiple images and matching these
features between the frames:
Stage 1: The features are found in a series of two or more images. If carried out well, this stage
works efficiently with little overhead and reduces the amount of extraneous information to be
processed.
Stage 2: The features found in stage 1 are matched between the frames. In the most common
scenario, two frames are used and two sets of features are matched to produce a single set of
motion vectors. The features in one frame are used as seed points, from which other techniques
determine the flow.
Despite this, both stages of feature-based modelling have drawbacks. In the feature detection
stage, features must be located with precision and good reliability; this is of immense
significance, and considerable research has been devoted to feature detectors. Matching is also
ambiguous, unless it is known in advance that the image displacement is smaller than the
distance between features.
3) Frame Differencing and motion-based methods:
Frame differencing is a method of finding the difference between two consecutive images in a
sequence to segregate the moving object (vehicle) from the background. A change in pixel
values implies a change in position between the two image frames. The motion-based detection
step identifies a vehicle in a sequence of images by isolating the moving objects, known as
blobs, based on their speed, movement, and orientation.
Let Ik be the kth frame in the sequence of images and Ik+1 be the (k+1)th frame. The absolute
difference image is then calculated as

ΔI(x, y) = | Ik+1(x, y) − Ik(x, y) |

The resulting picture contains holes in the areas of non-stationary objects, and its mapped area is
not closed. The transformation of the absolute difference image into a binary image can be
defined as

B(x, y) = 1 if ΔI(x, y) > T, otherwise B(x, y) = 0

where T is a chosen threshold.
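The differencing and binarization steps above can be sketched in NumPy (a minimal illustration; the function name and the default threshold are assumptions):

```python
import numpy as np

def frame_difference_mask(frame_k, frame_k1, T=25):
    """Binary motion mask from two consecutive grayscale frames:
    1 where the absolute difference exceeds the threshold T, else 0."""
    # Cast to a signed type so the uint8 subtraction cannot wrap around.
    diff = np.abs(frame_k1.astype(np.int16) - frame_k.astype(np.int16))
    return (diff > T).astype(np.uint8)
```

Applied to a video, the nonzero regions of the mask mark where pixel positions changed between the two frames.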
Limitations:
This approach is not effective in windy conditions, as the technique also detects motion caused
by wind. The possibility that the camera does not remain fixed in position due to wind cannot be
neglected, which results in spurious motion and the formation of holes in the binary image.
4) Gaussian Mixture Model:
The Gaussian mixture model is a probabilistic model used to represent normally distributed
subpopulations within an overall set of data points. These models do not require prior knowledge
of which subpopulation cluster a data point belongs to, which allows the model to learn in an
unsupervised manner. Gaussian mixture models are typically used for extracting features when
tracking numerous objects, using the number of mixture components and their means to estimate
the location of an object in every frame of the image series or video. [6]
The primary aim of this approach is vehicle detection and a tracking algorithm that can be used
to monitor traffic. This model uses a customary observation-change pattern for each pixel in the
image matrix. Further, the Mahalanobis distance of the Gaussian is calculated based on the
customary observation-change factor, the colour intensity, and the Gaussian component mean.
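The per-pixel modelling can be sketched with a single-Gaussian online update, a simplified one-component version of the mixture model (the function name, learning rate α, and threshold C are illustrative assumptions):

```python
import numpy as np

def update_pixel_model(frame, mean, var, alpha=0.05, C=2.5):
    """One online update step of a per-pixel Gaussian background model
    (a single-component simplification of the Gaussian mixture model).
    Returns the updated mean/variance and a foreground mask."""
    frame = frame.astype(float)
    diff = frame - mean
    # A pixel is foreground when it deviates by more than C standard deviations.
    foreground = np.abs(diff) > C * np.sqrt(var)
    # Exponential running updates of the background statistics.
    new_mean = (1 - alpha) * mean + alpha * frame
    new_var = (1 - alpha) * var + alpha * diff ** 2
    return new_mean, new_var, foreground
```

A full mixture model keeps several (mean, variance, weight) triples per pixel and matches each new observation to the nearest component, but the update rule per component has this same form.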
5) Blob Detection Algorithm:
The blob detection algorithm is a technique that can track the motion of non-stationary objects
in the frame. A blob is defined as a collection of pixels that are identified as one object. The
algorithm determines the location of the blob in consecutive frames. Pixels with similar intensity
values or colour codes are grouped to form a blob. The algorithm is capable of detecting
multiple blobs in the same image and differentiating their speed and motion. The method
estimates factors such as size, location, and colour to determine whether a new blob corresponds
to a previously seen blob, so that the blob keeps the same label.
(Blob-labelling pseudocode: a pixel is labelled 1 if it satisfies the intensity condition, otherwise
0; a labelled pixel is then assigned the label of a connected, previously seen blob if one matches,
otherwise a new label.)
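The labelling idea above can be sketched as a simple 4-connected component pass over the binary mask (a minimal pure-Python/NumPy illustration; the function name is an assumption):

```python
import numpy as np
from collections import deque

def label_blobs(binary):
    """4-connected component labelling: foreground pixels (value 1) that
    touch each other receive the same blob label (1, 2, ...)."""
    labels = np.zeros(binary.shape, dtype=int)
    next_label = 0
    for r in range(binary.shape[0]):
        for c in range(binary.shape[1]):
            if binary[r, c] and labels[r, c] == 0:
                next_label += 1           # start a new blob
                q = deque([(r, c)])
                labels[r, c] = next_label
                while q:                  # flood-fill the connected region
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
    return labels, next_label
```

Each labelled region can then be summarized by its size, centroid, and colour, and matched against blobs from the previous frame so a vehicle keeps the same label across frames.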
D. Speed Tracking:
The presented methodology determines the speed of a vehicle moving towards a camera situated
at a considerable distance by tracking the motion of the vehicle through a series of images. The
proposed methodology consists of the steps shown in the figure below.
1) Pre-Processing:
First, the video is converted into individual frames, and parameters such as the number of
frames, frame rate, colour format, and frame size are extracted. A background subtraction
algorithm is used, which subtracts the background from the primary feature/image: an average of
all frames is obtained, and subtracting it leaves only the main feature/image. Thresholding and
morphological operations are then applied to the output. The object and its centroid are detected
with the help of the connected component method, and the centroid is obtained for all frames.
The velocity of the vehicle is calculated from the distance travelled by the vehicle and the frame
rate of the input video.
2) Detection of Moving Vehicle:
The main challenge faced during vehicle detection and speed tracking is detecting the main
object (in our case, the vehicle). There are various approaches to detecting a moving object, such
as the temporal differencing method, the optical flow algorithm, and the background subtraction
algorithm. [7] In the temporal difference method, the background image is extracted from two
adjacent frames, with the drawback that it performs poorly when motion in the video is slow.
The optical flow algorithm detects the object independently of camera motion but becomes
complex for real-time applications. In background subtraction, the absolute difference between
the background model and each instantaneous frame is taken to detect the moving object; the
background model is an image with no moving object in it. In this work, we use the background
subtraction algorithm, which consists of three stages.
a) Background Extraction:
The video recorded on the highway contains objects along with the background, and it is very
difficult to capture an image without any object; thus, the background extraction method is used
to obtain such an image. The average of all frames is taken, and the objects are averaged out,
leaving the background alone. This extracted image is known as the ROI (Region of Interest).
Each frame is then converted from RGB to a grayscale image, and each individual frame is
multiplied by the ROI obtained. This suppresses other unwanted noise, such as waving trees,
which helps increase accuracy. The absolute difference of each instantaneous frame and the
background model, after multiplying both by the extracted ROI, is taken to detect only the
moving vehicles.
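The averaging step can be sketched as follows (a minimal NumPy illustration; a per-pixel median over the frames is a common, more robust alternative to the mean):

```python
import numpy as np

def extract_background(frames):
    """Temporal average of a list of grayscale frames; transient moving
    objects are suppressed because each pixel is dominated by its
    background value in most frames."""
    return np.mean(np.stack(frames), axis=0)
```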
b) Thresholding:
Image segmentation is done using thresholding, in which grayscale images are converted into
binary images; selecting the threshold value is very important. Thresholding is used here to
separate the foreground vehicle from the static background:

g(x, y) = 1 if f(x, y) > T, otherwise g(x, y) = 0

where g(x, y) is the thresholded image, T is the selected threshold value, and f(x, y) is the
instantaneous frame. In this work, the result contains the vehicle as the object, along with some
noise.
c) Morphological Operations:
Morphological operations are used to remove noise from imperfect segmentations and are well
suited to binary images. They are performed on the output image obtained from the thresholding
phase. Opening, closing, and dilation are performed; opening and closing are used to remove
holes in the detected foreground. Dilation consists of the interaction of a structuring element
with the foreground pixels, where the structuring element is a small binary image. After this
process, the selected object pixels are passed to connected component analysis. Connected
component analysis is applied to binary and grayscale images and identifies connected pixel
regions by scanning the image pixel by pixel, using either 8-pixel or 4-pixel connectivity.
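A minimal NumPy sketch of opening and closing with a 3 × 3 structuring element (function names are illustrative; out-of-image neighbours are treated as background):

```python
import numpy as np

def dilate(binary):
    """3x3 dilation: a pixel becomes 1 if any 8-neighbour (or itself) is 1."""
    p = np.pad(binary, 1)
    out = np.zeros_like(binary)
    h, w = binary.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def erode(binary):
    """3x3 erosion: a pixel stays 1 only if its whole 3x3 neighbourhood is 1."""
    p = np.pad(binary, 1)
    out = np.ones_like(binary)
    h, w = binary.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def open_then_close(binary):
    """Opening (erode, then dilate) removes isolated noise pixels;
    closing (dilate, then erode) fills small holes in the foreground."""
    opened = dilate(erode(binary))
    return erode(dilate(opened))
```

In the sketch below, a solid blob survives opening and closing while an isolated noise pixel is removed, which is exactly the clean-up wanted after thresholding.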
4) Detection of Vehicle:
The vehicle detection process is based on feature detection. The extracted features are tracked
over sequential frames. [Mohit] A matching algorithm is used to determine whether an
observation corresponds to the same object or a different one; the Mahalanobis distance is used
in the object matching algorithm.
During the past decade, Mahalanobis distance learning has attracted a lot of interest. The
Mahalanobis distance between two d-dimensional numerical vectors x and x′ is defined by

d(x, x′) = √( (x − x′)ᵀ M (x − x′) )

where M is a positive semi-definite matrix.
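A minimal NumPy sketch of this distance, d(x, x′) = √((x − x′)ᵀ M (x − x′)) for a positive semi-definite matrix M; choosing M as the identity reduces it to the Euclidean distance:

```python
import numpy as np

def mahalanobis(x, x_prime, M):
    """Mahalanobis distance sqrt((x - x')^T M (x - x'))."""
    d = np.asarray(x, dtype=float) - np.asarray(x_prime, dtype=float)
    return float(np.sqrt(d @ M @ d))
```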
5) Speed Determination:
A vehicle with a particular ID is observed over a series of sequential frames, and the number of
frames in which the vehicle appears is noted. Here, frame 0 is the first frame in which the object
enters the region of interest and frame n is the last frame before the object leaves the region of
interest; the real-world distance covered is mapped onto the image. The total number of frames is
multiplied by the duration of one frame, which is calculated from the frame rate of the video, to
obtain the travel time. The distance is fixed and is mapped from the real world onto the image.
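The arithmetic above can be sketched as follows (a minimal illustration; the function name and the km/h output unit are assumptions):

```python
def estimate_speed_kmh(n_frames, fps, distance_m):
    """Speed from the number of frames a vehicle spends crossing a
    region of interest of known real-world length."""
    travel_time_s = n_frames / fps      # each frame lasts 1/fps seconds
    speed_ms = distance_m / travel_time_s
    return speed_ms * 3.6               # convert m/s to km/h
```

For example, a vehicle that takes 50 frames at 25 fps to cross a 20 m region spends 2 s in it, i.e. 10 m/s or 36 km/h.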
Figure 2: Raw Frame Before Optical Flow Method
Thus, from the distance and travel time of the detected vehicle, the speed of that vehicle is
determined using the above formula.
CHAPTER 4: OPTICAL FLOW METHOD AND BACKGROUND
SUBTRACTION FOR VEHICLE DETECTION AND SPEED
TRACKING
4.1 Introduction
In this chapter, we delve into the utilization of the optical flow method and background subtraction
technique for vehicle detection and speed tracking in real-world traffic scenarios. Optical flow
captures the apparent motion of pixels between consecutive frames, enabling the estimation of
vehicle speeds, while background subtraction isolates moving objects from the static background,
facilitating vehicle detection. We explore the principles, implementation, and integration of these
methods to develop a robust system for traffic monitoring and management.
Optical flow is based on the assumption of spatial and temporal coherence, where neighboring
pixels exhibit similar motion patterns over time.
The optical flow equation models the relationship between image brightness variations and the
motion field, seeking to minimize the difference between observed and predicted pixel intensities.
Optical flow algorithms can be categorized into dense and sparse methods. Dense methods
estimate flow vectors for every pixel in the image, while sparse methods focus on key points or
features.
Figure 3: Vehicle Speed Detection at Night
Common optical flow algorithms include Lucas-Kanade, Horn-Schunck, and Farneback. These
algorithms differ in their assumptions, optimization criteria, and computational complexity.
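As an illustration of the sparse approach, a minimal Lucas-Kanade estimate for a single point can be sketched in NumPy (the function name and window size are assumptions; real trackers add pyramids and corner selection):

```python
import numpy as np

def lucas_kanade_point(I1, I2, center, win=7):
    """Estimate the optical flow (u, v) at one point by solving the
    brightness-constancy equation Ix*u + Iy*v + It = 0 by least squares
    over a small window around the point."""
    Iy, Ix = np.gradient(I1.astype(float))   # spatial gradients (rows, cols)
    It = I2.astype(float) - I1.astype(float) # temporal gradient
    r, c = center
    h = win // 2
    sl = (slice(r - h, r + h + 1), slice(c - h, c + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

On a synthetic intensity ramp shifted by one pixel, the recovered horizontal flow is close to 1, matching the imposed displacement.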
Integration with Vehicle Speed Tracking:
In the context of vehicle speed tracking, optical flow is utilized to measure the displacement of
vehicles between consecutive frames.
By analyzing the motion vectors of vehicle pixels, the speed of individual vehicles can be estimated
based on the distance traveled over time.
Background subtraction relies on the assumption that the background scene remains relatively
static over time, while foreground objects introduce temporal changes in pixel values.
Background subtraction algorithms typically consist of three main steps: background modeling,
foreground segmentation, and post-processing.
Background modeling involves estimating the static background scene from input video frames
using techniques such as temporal averaging or statistical modeling.
Foreground segmentation identifies moving objects by detecting pixels that deviate significantly
from the background model.
Post-processing steps, such as morphological operations or contour analysis, are applied to refine
the segmentation results and extract meaningful object boundaries.
Figure 5: Shadow detection results based on HSV color space.
In vehicle detection applications, background subtraction is used to segment moving vehicles from
the background scene.
Detected foreground regions are further processed and analyzed to extract vehicle features, such
as size, shape, and motion characteristics, enabling robust vehicle detection and localization.
Integration Framework:
Optical flow provides fine-grained motion estimation, capturing subtle movements of vehicles
within the scene.
Background subtraction offers scene segmentation, isolating moving vehicles from the static
background and reducing false detections.
In the integrated framework, optical flow and background subtraction methods are applied
sequentially or concurrently to process input video frames.
Optical flow estimates the motion vectors of pixels, while background subtraction segments
moving objects from the background scene.
The results from both methods are fused or combined to refine vehicle detection and speed
estimation, leveraging the complementary information provided by each technique.
The integrated system requires careful optimization and calibration to ensure coherent operation
and accurate results.
Parameters such as flow regularization, background modeling thresholds, and object tracking
criteria are fine-tuned to achieve optimal performance in different traffic scenarios and
environmental conditions.
Quantitative Metrics:
Detection accuracy is measured by comparing system outputs with ground truth annotations,
calculating metrics such as precision, recall, and F1 score.
Speed estimation error quantifies the difference between estimated and ground truth vehicle
speeds, providing insights into the accuracy of speed tracking.
Qualitative Assessments:
Visual inspection of system outputs allows for qualitative evaluation of detection and tracking
results.
The system's robustness to challenging scenarios, such as occlusions, lighting variations, and
complex motion patterns, is assessed through real-world testing and validation.
Deployment Scenario:
Surveillance cameras are strategically positioned along the roadway to provide comprehensive
coverage of the traffic scene.
The integrated system is deployed on dedicated hardware platforms, enabling real-time processing
of video streams and extraction of vehicle speed and position information.
System Performance:
The performance of the deployed system is evaluated under varying traffic conditions, including
different vehicle speeds, densities, and environmental factors.
Real-world testing allows for validation of the system's accuracy, reliability, and robustness in
practical traffic monitoring scenarios.
4.7 Conclusion
In conclusion, the integration of optical flow and background subtraction techniques offers a
powerful approach for vehicle detection and speed tracking in real-world traffic environments. By
leveraging the complementary strengths of both methods, we can achieve accurate, robust, and
efficient systems for traffic monitoring and management. The systematic integration framework,
optimization strategies, and performance evaluation methodologies outlined in this chapter
provide a comprehensive guide for the development and deployment of advanced traffic
surveillance systems. Through continued research and innovation, we can further enhance the
capabilities and effectiveness of these systems in addressing the challenges of modern
transportation.
CHAPTER 5: RESULTS
Figure 6: Shadow detection results based on HSV color space. (a) Frame #12. (b) Frame #23.
The tracking experiments encompassed various scenarios, including normal traveling conditions
of vehicles on the highway, as well as abnormal conditions such as illumination changes, similar
neighboring objects, occlusions, and scale variations. These diverse scenarios aimed to validate
the effectiveness of the proposed tracking algorithm under different environmental conditions and
challenges.
Figure 7: Detection result of the 100th frame. (a) The original image. (b) Grayscale image. (c)
Optical flow vector. (d) Morphological filtering. (e) Test results.
To ensure consistency and fairness in the comparison, the proposed tracking algorithm was
benchmarked against two typical target tracking algorithms: the Kalman filter algorithm and the
Camshift algorithm. Parameter settings for the particle filter, a key component of the proposed
algorithm, were configured with a particle number of 100. Other relevant parameters, including
system noise and observation noise, were selected to be as consistent as possible across all three
algorithms.
Given that the tracking results of the test videos were manually labeled, they serve as the ground
truth against which the performance of the tracking algorithms is evaluated. The comparison
focuses on metrics such as tracking accuracy, robustness to environmental variations, and
computational efficiency.
5.4 Conclusion
The tracking experiments conducted in this study provide valuable insights into the performance
of the proposed tracking algorithm for vehicle tracking in challenging real-world scenarios. By
leveraging the optical flow method and a particle filter-based tracking framework, the proposed
algorithm demonstrates promising capabilities in accurately tracking moving vehicles despite
varying environmental conditions and challenges.
Comparison with traditional tracking algorithms such as the Kalman filter and Camshift algorithm
highlights the effectiveness and advantages of the proposed approach. With consistent parameter
settings and rigorous evaluation against ground truth labels, the proposed algorithm showcases
superior tracking accuracy, robustness, and efficiency, particularly in scenarios involving
illumination changes, occlusions, and scale variations.
Overall, the tracking results underscore the potential of the proposed algorithm to enhance the
capabilities of vehicle tracking systems in real-world applications, contributing to improved traffic
monitoring, surveillance, and safety on roadways. Continued research and refinement of the
algorithm can further enhance its performance and applicability in diverse traffic environments.
CHAPTER 6: CONCLUSION
This study introduces a novel approach for moving vehicle detection and tracking in complex
transportation environments, leveraging optical flow and immune particle filter algorithms. The
proposed method begins by utilizing the optical flow method to initially detect moving vehicles.
Subsequently, a shadow detection algorithm based on the HSV color space is employed to
accurately identify moving vehicles, overcoming the challenges posed by shadow interference.
Finally, the moving vehicles are robustly tracked using the proposed immune particle filter
algorithm.
Experimental evaluations conducted under complex traffic scenes with shadow interference
demonstrate the efficacy of the proposed method in mitigating the impact of shadows on moving
vehicle detection. The method achieves accurate detection and robust tracking of moving vehicles,
leading to higher accuracy compared to existing algorithms such as Camshift and Kalman filter.
However, it's worth noting that the proposed method is currently limited to daytime conditions
with varying illumination levels, both good and poor. Future research will explore extending the
method to nighttime conditions by considering the utilization of infrared images of moving
vehicles.
Furthermore, the study envisions transferring the experimental results to a cloud computing
platform via a wireless sensor network. This would enable policymakers to access and analyze the
data, enhancing vehicle management strategies. The availability of data used in this study is subject
to request from the corresponding author, facilitating transparency and reproducibility of the
findings.
In summary, the proposed method presents a promising approach to address the challenges of
moving vehicle detection and tracking in complex transportation environments. With further
research and integration into cloud-based platforms, it has the potential to significantly improve
vehicle management.
REFERENCES
[1] Raad Ahmed Hadi, Ghazali Sulong and Loay Edwar George, “Vehicle detection and tracking
techniques: A concise review”, Signal & Image Processing: An International Journal (SIPIJ),
Vol. 5, No. 1, February 2014.
[2] https://docs.opencv.org/master/d1/dc5/tutorial_background_subtraction.html
[3] https://users.fmrib.ox.ac.uk/~steve/review/review/node2.html
[4] Z. Wei, et al., "Multilevel Framework to Detect and Handle Vehicle Occlusion," IEEE
Transactions on Intelligent Transportation Systems, vol. 9, pp. 161-174, 2008.
[5] Nishu Singla, “Motion Detection Based on Frame Difference Method”, International Journal
of Information & Computation Technology, ISSN 0974-2239, Volume 4, Number 15 (2014), pp.
1559-
[6] B. Suresh, K. Triveni, Y. V. Lakshmi, P. Saritha, K. Sriharsha, D. Srinivas Reddy,
“Determination of Moving Vehicle Speed using Image Processing”, International Journal of
Engineering Research & Technology (IJERT), ISSN: 2278-0181, NCACSPV - 2016 Conference
Proceedings, www.ijert.org.
[7] Y. Ma, X. Song, X. Li, and J. Liu, “Research and implementation of real-time monitoring
algorithm based on embedded traffic flow,” LCD & Display, vol. 33, no. 9, pp. 787–792, 2018.
[8] T. Gao, Z.-g. Liu, S.-h. Yue, J. Zhang, J.-q. Mei, and W.-c. Gao, “Robust background
subtraction in traffic video sequence,” Journal of Central South University of Technology,
vol. 17, no. 1, pp. 187–195, 2010.
[9] J. Xu, M. Fang, and H. Yang, Motion Detection and Tracking in Computer Vision, National
Defense Industry Press, Beijing, China, 2012.
[10] D. Forsyth, “Object detection with discriminatively trained part-based models,” IEEE
Transactions on Pattern Analysis & Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2014.