Video-Based Detection and Tracking Model For Traffic Surveillance
Word Count
Abstract: 250
Text: 5,239
Tables (3 x 250): 750
Figures (6 x 250): 1,500
Total: 7,739

* Corresponding Author
ABSTRACT
The continuous increase of traffic congestion in urban areas demands a highly reliable traffic management system for monitoring traffic flows and providing key input parameters for predicting traffic conditions. Video sequences of road scenes are increasingly used in several contexts with an emphasis on automation, notably for tracking moving objects against a static background. This paper presents a multiple-vehicle surveillance model developed in Matlab for detecting and tracking moving vehicles and collecting traffic data for different lengths of the region of interest (ROI), ranging between 5 and 30 m. The model was validated using simulated video scenes designed in VISSIM with known traffic data. Measurements from the model were compared with actual measurements reported by VISSIM, and the results confirmed an exact match of vehicle counts. Statistical t-tests of mean speed differences confirmed the model validity at the 5% significance level, especially with ROI lengths of 10 and 15 m. Validation of headway measurements was also confirmed for these optimum ROI lengths. The model processes one second of a 20 frames/sec video clip in 0.96 sec, which makes it appropriate for real-time applications yielding traffic parameters such as vehicle speed, headway, count, incident detection, and queue detection. However, the model was validated assuming no lane changes and no overlap of vehicles, and, hence, its validity is limited to these assumptions. It is recommended that the model be validated using real-world videos containing noise sources such as light variation, shadows, vibrations due to wind, skewed views, lane changes, and/or trucks that obscure the full view of vehicles.

Keywords: Matlab, Image Processing, Traffic Surveillance, Vehicle Detection, Vehicle Tracking, Speed.
INTRODUCTION
The daily life of people encounters more problems as the population of urban areas continuously increases, and road traffic becomes more congested because of high demand and insufficient road capacity and infrastructure. Since the effects of these problems are significant in daily life, it is important to seek efficient solutions that reduce congestion and provide a secure transportation system.

In recent years, much research has been devoted to developing real-time traffic monitoring systems for managing the traffic flow of roadways, preventing accidents, providing secure transportation, etc. Within these works, one aim is to realize applications such as estimating vehicle speeds on roadways, determining traffic intensity and, if necessary, directing vehicles to less dense roads, managing the timing of traffic lights automatically, etc. (1-4).

The common method of obtaining information on traffic flow is to utilize buried induction loops. Although this existing technique is not affected by weather and light conditions, it suffers from high installation and maintenance costs (5, 6). To overcome this limitation, vehicle tracking using image processing techniques has been adopted for traffic monitoring systems to yield traffic parameters including queue detection, incident detection, lane changes, vehicle classification, vehicle counting, and vehicle speed (5-9). More reliable traffic flow modeling and an improved understanding of driver behavior can be attained, since a vehicle tracking system can provide individual vehicle data such as spacing, velocity, and acceleration (5).

The literature is abundant with studies that dealt with detection and tracking of moving objects in video sequences, and numerous mathematical models have resulted from these studies. Parekh et al. (10) reported that object detection techniques can be subdivided into three main categories, namely, (a) Background Subtraction, (b) Temporal Differencing, and (c) Optical Flow. Similarly, object tracking can be divided into three main categories: Point tracking, Kernel tracking, and Silhouette tracking.

Among object detection techniques, Background Modeling (Background Subtraction) detects moving objects in an image by taking the variations between the current image and a reference background image in a pixel-by-pixel fashion. The background subtraction method uses a simple algorithm; however, it is very sensitive to changes in the external environment. The Temporal Differencing method calculates the absolute differences between two consecutive images to extract moving regions and applies a threshold function to determine changes. Temporal differencing adapts well to a variety of dynamic environments, but it is generally difficult for this method to recover the complete outline of a moving object. The Optical Flow method uses the optical flow distribution characteristics of moving objects over time in an image sequence. Optical flow computation methods cannot be applied to video streams in real time because they are very complex and very sensitive to noise (11, 12).

Among object tracking techniques, the Point tracking method is based on monitoring and comparing the positions of detected points from one frame to another. The Kernel tracking method tracks objects by calculating the motion of an object's shape and appearance in successive frames. The Silhouette tracking method uses information inside the silhouette's region, in the form of edge maps, to track the object using a shape matching approach (13, 14).

As previously mentioned, background subtraction methods are very sensitive to changes in the scene. This method also requires a training period absent of foreground objects and can be too slow to be practical. Stauffer and Grimson (15) modeled each pixel in an image sequence as a mixture of Gaussians and used an on-line approximation to update the model. The Gaussian distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process. Finally, each pixel is classified based on whether the Gaussian distribution that represents it most effectively is considered part of the background model. Kaewtrakulpong and Bowden (16) improved this adaptive background mixture model by updating its equations and applying different equations at different phases, making the system learn faster and more accurately and adapt effectively to changing environments. Matlab (17) adopted these two studies in a system object that detects the foreground using Gaussian Mixture Models (GMMs).

Nowadays, detection and tracking of moving objects are becoming more essential to traffic engineers, since available systems such as video image processing (VIP) are successfully used in traffic data collection and traffic surveillance. According to Martin et al. (18), Klein et al. (19), and Klein (20), all detector technologies and particular devices have limitations, specializations, and individual capabilities. Among these technologies, only microwave radar, active infrared, and VIP systems are capable of supporting multiple-lane and multiple-detection-zone applications. In comparison with all other technologies, the VIP system is considered the best in terms of installation, maintenance, and portability. Moreover, this technology allows users to check the results visually by watching previously recorded videos.

A VIP system typically consists of one or more cameras, a microprocessor-based computer for digitizing and analyzing the imagery, and software for analyzing the imagery of the traffic stream to determine changes between successive frames and convert them into traffic flow data (Leduc (21); Mimbela and Klein (22)).

Several techniques of vehicle tracking systems have been investigated for traffic monitoring. The review of the literature revealed a lack of studies dealing with the impact of the vehicle detection zone (ROI) on the accuracy of detections and measured speeds of individual vehicles. In fact, most of the reviewed studies focused only on vehicle speed as the prime traffic datum (23-27).

It is, therefore, the objective of this research to collect traffic data of detected vehicles and assess the impact of the size of the ROI on the accuracy of such collected data. This paper presents the development of a multiple-vehicle surveillance model based on the features of the Matlab programming language, especially the image processing toolbox. A Video-Based Detection And Tracking Model called "VB-DATM" was developed in the course of this research. The paper also summarizes the main findings of the model applications, features, and limitations.
DEVELOPMENT OF THE PROPOSED MODEL
This section presents the efforts devoted to developing the proposed model. The model is developed using Matlab and consists of three sequential modules, namely, Detection, Tracking, and Traffic Data Collection, as outlined below. It should be mentioned that other software such as Visual Basic and C Sharp could be used; however, Matlab is used in this work because of its image processing toolbox.

Detection Module: in this module, a moving vehicle is detected as it enters the ROI, and the exact time and location are registered while the vehicle travels in the ROI. This is repeated in each frame until the vehicle leaves the ROI, and these steps are repeated for all vehicles in the video clip.

Tracking Module: in this module, the data of detected vehicles recorded in frames are checked to segregate all frames belonging to each vehicle and store them in a separate sheet. These data are simply the spatial and temporal data of the vehicle trajectory while traveling in the ROI. Each data sheet belongs to only one vehicle.

Traffic Data Collection Module: with the spatial and temporal data of each vehicle available, traffic data such as flow, speed, headway, and possibly density can be computed in this module (see the sketch after this list).
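As a minimal illustration of what the Traffic Data Collection module computes, the sketch below estimates one vehicle's speed from its trajectory sheet. The function name and variable names are illustrative assumptions, not the authors' code; it assumes the Tracking module stored the frame times t (sec) and travel distances td (m) inside the ROI.

    % Illustrative sketch: speed of one vehicle from its trajectory sheet.
    % t and td are column vectors logged while the vehicle was in the ROI.
    function v_kmh = vehicleSpeed(t, td)
        % A least-squares slope of distance vs. time is more robust to
        % per-frame centroid jitter than a two-point difference.
        p = polyfit(t, td, 1);      % p(1) is the speed in m/sec
        v_kmh = p(1) * 3.6;         % convert m/sec to km/h
    end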
The methodology executed in each module is discussed in some detail in the following sections.
Vehicle Detection Methodology
Vehicle detection is the first step prior to performing more sophisticated tasks such as tracking (17). In this paper, an interactive code was written to detect and track moving vehicles in a video sequence using foreground detection based on Gaussian Mixture Models. The code for detection consists of three main parts: initialization, external functions, and the stream processing loop. These are elaborated in the following subsections.

Initialization
The initialization is made at the beginning of this module to perform the following steps (sketched after this list):
1. Read the video from an *.avi file,
2. Read and store the height and width of the video frames and the frame rate,
3. Display the first frame of the video clip,
4. Read the limits of the region of interest (ROI) and lane separators as interactively defined by the user on the screen using the mouse,
5. Assign variables to store the coordinates of these limits and the length of the region in pixels,
6. Read the length of the ROI in meters. This step is performed to create a Conversion Factor, which is used to convert dimensions in pixels to meters and vice versa, and
7. Read the expected maximum speed (km/h). The Conversion Factor and the maximum speed are used to calculate the maximum distance that a vehicle can travel in a single frame (step/frame).
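The following is a hedged sketch of this initialization in Matlab. The file name, the four-corner ROI selection, and all variable names are illustrative assumptions; only the steps themselves come from the list above.

    % Illustrative initialization sketch (names are assumptions).
    vid        = VideoReader('traffic.avi');          % step 1: open the clip
    frameRate  = vid.FrameRate;                       % steps 2-3: properties
    firstFrame = readFrame(vid);
    imshow(firstFrame);                               % show first frame
    [xROI, yROI] = ginput(4);                         % step 4: user clicks ROI corners
    roiLenPix  = max(yROI) - min(yROI);               % step 5: ROI length in pixels
    roiLenM    = input('ROI length in meters: ');     % step 6
    convFactor = roiLenM / roiLenPix;                 % meters per pixel
    vMax       = input('Expected max speed (km/h): ');% step 7
    maxStepPix = (vMax/3.6) / frameRate / convFactor; % max travel per frame (pixels)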
External Functions
To facilitate the vehicle detection and tracking process, three main functions are written (sketched after this list):
1. Checking whether the center of the bounding box around the detected vehicle is inside the polygon of the ROI (or detection area) of a certain lane.
2. Calculating the travel distance between two given frames. The distance (D) traveled by a vehicle during two frames is calculated using Eq. 1:

    D = sqrt[(x2 - x1)^2 + (y2 - y1)^2]   (1)

where (x1, y1) are the coordinates of the vehicle center in the first frame, and (x2, y2) are the coordinates in the second frame.

3. Calculating the vehicle travel distance inside the ROI. If the position of a detected vehicle is currently at Point P (xp, yp) and the start line (BC) of the ROI connects the two points B (xB, yB) and C (xC, yC), then the slope (m) of Line BC is given by Eq. 2:

    m = (yB - yC) / (xB - xC)   (2)

The travel distance (TD) can be calculated as the perpendicular distance from the vehicle position (P) to the start line of the ROI, as shown in Eq. 3:

    TD = |m (xp - xB) + yB - yp| / sqrt(m^2 + 1)   (3)
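A minimal Matlab sketch of the three external functions follows, implementing Eqs. 1-3 directly; the function names are illustrative.

    % Function 1: is the bounding-box center inside the lane's ROI polygon?
    function inside = inROI(cx, cy, xPoly, yPoly)
        inside = inpolygon(cx, cy, xPoly, yPoly);
    end

    % Function 2: travel distance between two frames (Eq. 1).
    function D = travelStep(x1, y1, x2, y2)
        D = hypot(x2 - x1, y2 - y1);
    end

    % Function 3: perpendicular distance from P to start line BC (Eqs. 2-3).
    function TD = travelDistance(xp, yp, xB, yB, xC, yC)
        m  = (yB - yC) / (xB - xC);                        % Eq. 2
        TD = abs(m*(xp - xB) + yB - yp) / sqrt(m^2 + 1);   % Eq. 3
    end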
Stream Processing Loop
The main purpose of this loop is to detect vehicles in the input video. The processing loop uses the variables created in the initialization stage and the external functions mentioned above. The loop includes three stages. Stage 1 reads the input video frame and detects the foreground. Foreground detection means converting the colored frame, as shown in Figure 1-a, to a binary (black-and-white) image with the same dimensions based on a conversion threshold. Black areas refer to the fixed background of the scene, whereas white areas (blobs) refer to the moving objects, as shown in Figure 1-b. The foreground segmentation process does not produce perfect moving objects and often includes undesirable noise due to the use of an improper conversion threshold. In Stage 2, the noise is removed from the foreground image using morphological filters, and a cleaner binary image is produced, as shown in Figure 1-c. Stage 3 begins by establishing the bounding box around every blob in the foreground image and then determining the center coordinates of each box, as shown in Figure 1-d. The external functions can then be used to determine the travel lane and the travel distance inside the ROI. A sketch of this loop follows.
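The sketch below shows the three stages using the Computer Vision Toolbox GMM foreground detector and blob analysis objects; all parameter values (number of Gaussians, training frames, blob area, structuring element) are illustrative assumptions, not the authors' settings.

    % Hedged sketch of the three-stage stream processing loop.
    detector = vision.ForegroundDetector('NumGaussians', 3, ...
        'NumTrainingFrames', 50);                 % GMM background model
    blobber  = vision.BlobAnalysis('AreaOutputPort', false, ...
        'CentroidOutputPort', true, 'BoundingBoxOutputPort', true, ...
        'MinimumBlobArea', 400);
    se = strel('rectangle', [3, 3]);              % morphological filter element
    while hasFrame(vid)
        frame = readFrame(vid);
        mask  = detector(frame);                  % Stage 1: binary foreground
        mask  = imopen(mask, se);                 % Stage 2: remove noise
        [centers, boxes] = blobber(mask);         % Stage 3: blobs and centers
        % ... pass centers to the external functions to log lane and TD ...
    end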
FIGURE 2 Flowchart depicting the logic of the proposed model in vehicle tracking
clarity, and (b) VISSIM can report the exact time and speed of each vehicle when it crosses a certain data collection station. This feature eliminates the possibility of human errors when data are extracted manually from played video clips.
Preparation of Video Clips for Use in Validation
Mimbela and Klein (22) reported that a higher camera mounting position allows for a better angle and a wider view, covering all lanes on the road. On the contrary, lower mounting heights do not provide effective images, as some vehicles may hide parts of other vehicles from the scene; as a result, video image processing recognizes overlapped vehicles as single objects. Due to this fact, the camera location was selected to provide a sufficient field of vision and prevent overlapping of vehicles. The camera height was set to 30 m with a 1 m lateral offset from the edge of the road and a 65° pitch angle.

A 6-lane divided highway with 3.5 m lane width was created in VISSIM, and traffic was simulated for 330 sec. Figure 3 shows the layout of this road section and the directions (Far or Near) with respect to the camera location. Lanes in each direction are labeled shoulder lane, middle lane, and median lane.

FIGURE 3 Layout of road section considered in VISSIM for producing video clips

Before the start of simulation, a 30 m long section was considered for data collection. In particular, 13 lines were drawn across the 6 lanes at 2.5 m intervals, so the total distance between the first and last lines is 30 m. The intersection points of these lines with each lane give 13 stations for data collection along each lane, making a total of 78 data collection stations (or points) in the 6 lanes. The numbering and arrangement of these stations across the 6 lanes and lines is shown in Figure 4.

At the end of simulation, VISSIM creates two files. The first file is in AVI format and contains the video clip of the simulated traffic motion; the second file is in text format and contains the speed and passing time of each vehicle at the data collection points.
[Figure 4 shows the Far direction lanes (top) and Near direction lanes (bottom) with the data collection points.]
FIGURE 4 Locations of the 78 data collection points across the 6 lanes
Running the Developed Model for Data Collection
In the Matlab environment, the developed model was used to analyze the video clip (AVI file) for different lengths of ROI, namely, 5, 10, 15, 20, 25, and 30 m. For each ROI length, the model was set up to detect, track, and collect traffic data for that length, and the model data were then compared with the actual values provided by VISSIM. As such, when the 5 m ROI is used, the model considers the ROI to be the length between the first and third lines marked in the video clip. For comparing results of the shoulder lane of the Far direction, for example, traffic data obtained from the model are compared with the data obtained from VISSIM at Collection Points no. 1, 2, and 3. Similarly, when the 30 m ROI is considered, the model data for the same lane are compared with VISSIM data collected at Points no. 1, 7, and 13, and so on.
Comparative Analysis of Model Results
The developed model was run 6 times to detect and track moving vehicles in the video clip for the 6 different lengths of the region of interest (ROI). Results obtained from the model are compared with their counterparts in VISSIM to check model accuracy, as presented in the following sections.
Comparison of Traffic Counts
The model succeeded in detecting and tracking all vehicles simulated in the video clip for all lanes and all ROI lengths. These counts were the same as the actual counts reported by VISSIM, which were also easy to verify manually in the played video clip. As expected, the number of vehicles counted in each lane is identical for all ROI lengths.
Comparison of Vehicle Speeds
This comparison was established to check the model accuracy in estimating vehicle speeds for all ROI lengths. The check was made in three steps. First, the speeds obtained from the model were plotted versus the actual speeds obtained from VISSIM. As an example, Figure 5 depicts this comparison for the shoulder lane of the Near direction and an ROI of 10 m. The figure clearly shows that model speeds are very close to actual speeds (refer to the 45° line in the figure). This accuracy is applicable for all lanes and all ROI lengths.
[Scatter plot: speed obtained from model (km/h) versus actual speed (km/h) for individual vehicles, with a 45-degree reference line.]
FIGURE 5 Comparison of model speed accuracy
The second step of the comparison was to compute the mean and standard deviation of the speeds obtained from the model and from VISSIM. The accuracy of the model estimates was expressed as the ratio of the model mean speed to the actual mean speed. Computed speed ratios for all traffic lanes and ROI lengths range between 96.7% and 99.5%, with higher values for Near direction lanes and ROI lengths of 10 and 15 m. Similar ratios were computed for the standard deviations, and all were greater than 1.0. This means that the developed model produces relatively higher variation in individual speeds, and this variation is higher for Far direction lanes than for Near direction lanes. The minimum variation is reported with ROI lengths of 10 and 15 m, where the standard deviation ratios are close to 1.0.
The third step of the comparison, to conclude the model validation, was to perform t-tests on the two data sets for each ROI case. Before conducting this test, however, the Kolmogorov-Smirnov test was performed on the data sets to check whether the observations of each data set are normally distributed. The test results confirmed that all data are normally distributed. Results of the t-tests are shown in Table 3 for the different lanes and ROI lengths (these tests are sketched below).
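A hedged Matlab sketch of this two-part check is shown below; vModel and vActual are illustrative vectors of model and VISSIM speeds for one lane and ROI case. Note that kstest compares against a standard normal distribution, so each sample is standardized first.

    % Sketch: normality check, then two-sample t-test at the 5% level.
    zM = (vModel  - mean(vModel))  / std(vModel);    % standardize for kstest
    zA = (vActual - mean(vActual)) / std(vActual);
    if ~kstest(zM) && ~kstest(zA)                    % 0 = normality not rejected
        [h, p] = ttest2(vModel, vActual);            % h = 0: means not
    end                                              % significantly different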
TABLE 3 T-Test Results for Comparison of Model Speeds (P-value)

              Near Direction                   Far Direction
ROI Length    Shoulder   Middle   Median       Shoulder   Middle   Median
5 m           0.049      0.108    0.004        0.008      0.005    0.002
10 m          0.534      0.499    0.470        0.170      0.136    0.159
15 m          0.431      0.351    0.324        0.163      0.131    0.363
20 m          0.021      0.063    0.017        0.032      0.007    0.036
25 m          0.086      0.083    0.299        0.058      0.002    0.033
30 m          0.117      0.011    0.018        0.000      0.001    0.000
Based on the results of the t-tests, it can be concluded that the model produces speeds that are not significantly different from actual speeds for ROI lengths of 10 and 15 m, and this is valid for all 6 lanes. It is interesting to note that although the model produces mean speeds greater than 96.5% of the actual mean speed for ROIs of 5, 20, 25, and 30 m, the difference between the two means was significant at the 5% level.
Comparison of Measured Headways
Measuring the individual headways (or gaps) between successive vehicles is essential because they indicate how vehicles are dispersed and whether they follow too closely behind each other. It can also help detect the presence of queues on the road. Besides safety applications, the headways can be directly used to compute the traffic flow rate (Q). With the average speed (V) and flow rate (Q) available, one can readily calculate the average traffic density (K) as the quotient of Q and V (K = Q/V), as sketched below.
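A minimal sketch of this computation, assuming headways (sec) and speeds (km/h) are vectors of per-vehicle measurements collected by the model:

    % Macroscopic parameters from per-vehicle measurements.
    Q = 3600 / mean(headways);   % flow rate (veh/h) from the mean headway
    V = mean(speeds);            % average speed (km/h)
    K = Q / V;                   % average density (veh/km), K = Q/V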
The model accuracy was also checked for the optimum lengths of ROI, namely, 10 and 15 m. Figure 6 depicts a comparison of the model accuracy in measuring headways. The t-test was also run on the measured headways, and the results confirm the model validity: all test results confirm that the differences between the mean headway produced by the model and the actual mean headway are not significant at the 5% significance level. Furthermore, the t-test for matched pairs confirmed the same finding.
[Scatter plot: headway obtained from model (sec) versus actual headway (sec).]
FIGURE 6 Comparison of model headway accuracy
MODEL PROCESSING TIME
The execution time for processing 20 fps video clips with a resolution of 1280×720 pixels was about 1.48 sec/frame, or 29.6 sec/sec for processing the 20 frames. The time taken to detect and identify the foreground comprises about 92% of the total execution time. Attempts were made to minimize the total execution time while maintaining almost the same accuracy. These included processing every other frame, which reduces the execution time by half (14.8 sec/sec), and reducing the image resolution to 640×360 pixels, which further reduced the execution time by a factor of 0.206 (3.04 sec/sec). To match real-time processing, the image was also cropped to the ROI zone only; cropping means processing the ROI and neglecting the rest of the image. After cropping, the execution time was reduced to 0.96 sec/sec with all 20 frames processed. Since this execution time is less than 1.0 sec/sec, the model is capable of real-time processing. T-tests were performed to compare the values of the original data set with their counterparts after reducing the execution time (i.e., after reducing resolution and cropping the image). The results confirmed that there are no significant differences between the means of the two data sets at the 5% significance level. The sketch below illustrates these measures.
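The following sketch combines the time-saving measures in one loop; roiRect (the crop rectangle in pixels) is an assumed variable derived from the ROI limits in the initialization stage, and the detector object is the one created earlier.

    % Hedged sketch of the processing-time reductions.
    frameIdx = 0;
    while hasFrame(vid)
        frame = readFrame(vid);
        frameIdx = frameIdx + 1;
        if mod(frameIdx, 2) == 0
            continue;                          % process every other frame
        end
        small  = imresize(frame, [360 640]);   % reduce resolution to 640x360
        roiImg = imcrop(small, roiRect);       % process the ROI zone only
        mask   = detector(roiImg);             % foreground detection as before
    end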
CONCLUSIONS AND RECOMMENDATIONS
This paper presents a multiple-vehicle surveillance model, written in the Matlab programming language, for vehicle detection, tracking, and collection of traffic data. The model was validated by processing video clips created in VISSIM. Based on the analysis presented in this paper, it can be concluded that the proposed model is a valuable tool for collecting essential traffic data such as speed, flow, and headways, which can save time and resources. The model produces its best results with the optimum ROI lengths, namely, 10 and 15 m. The model processes each second of a 20 fps video clip in 0.96 second, a rate that qualifies the developed model for real-time applications such as incident detection, detection of queues, intelligent transportation systems, etc. However, the model was validated using ideal conditions in video clips of simulated traffic streams; accordingly, the model applications might be limited to such assumptions. It is recommended that the model be validated using real-life video clips containing noise sources such as lane changes, light variation, shadows, vibration due to wind, skewed views, and/or trucks that obscure the full view of vehicles.
ACKNOWLEDGMENT
This research is part of a Ph.D. research conducted in the Faculty of Engineering, Cairo University, Egypt. Simulation modeling was performed using licensed VISSIM software in the Transportation Laboratory, Faculty of Engineering, Ain Shams University.
REFERENCES
1. Cathey, F. W., and D. J. Dailey. A Novel Technique to Dynamically Measure Vehicle Speed Using Uncalibrated Roadway Cameras. Proceedings of the IEEE Intelligent Vehicles Symposium, Las Vegas, NV, USA, June 6-8, 2005, pp. 777-782.
2. Grammatikopoulos, L., G. Karras, and E. Petsa. Automatic Estimation of Vehicle Speed from Uncalibrated Video Sequences. Proceedings of the International Symposium on Modern Technologies, Education and Professional Practice in Geodesy and Related Fields, Sofia, Bulgaria, November 3-4, 2005, pp. 332-338.
3. Melo, J., A. Naftel, A. Bernardino, and J. S. Victor. Viewpoint Independent Detection of Vehicle Trajectories and Lane Geometry from Uncalibrated Traffic Surveillance Cameras. Proceedings of the International Conference on Image Analysis and Recognition, Porto, Portugal, September 29-October 1, 2004, pp. 454-462.
4. Morris, B., and M. Trivedi. Improved Vehicle Classification in Long Traffic Video by Cooperating Tracker and Classifier Modules. Proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance, Sydney, Australia, November 22-24, 2006.
5. Coifman, B., D. Beymer, P. McLauchlan, and J. Malik. A Real-Time Computer Vision System for Vehicle Tracking and Traffic Surveillance. Transportation Research Part C, Vol. 6, No. 4, 1998, pp. 271-288.
24. Rahim, H. A., U. U. Sheikh, R. B. Ahmad, and A. S. Zain. Vehicle Velocity Estimation for Traffic Surveillance System. World Academy of Science, Engineering and Technology, Vol. 4, 2010, pp. 686-689.
25. Grammatikopoulos, L., G. Karras, and E. Petsa. Automatic Estimation of Vehicle Speed from Uncalibrated Video Sequences. International Symposium on Modern Technologies, Education and Professional Practice in Geodesy and Related Fields, Sofia, November 3-4, 2005, pp. 332-338.
26. Temiz, M. S., S. Kulur, and S. Dogan. Real Time Speed Estimation from Monocular Video. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIX-B3, XXII ISPRS Congress, Melbourne, Australia, August 25-September 1, 2012.
27. Dailey, D. J., and L. Li. Algorithm for Estimating Mean Traffic Speed with Uncalibrated Cameras. In Transportation Research Record: Journal of the Transportation Research Board, No. 1719, Transportation Research Board of the National Academies, Washington, D.C., 2000, pp. 27-32.