
Conference Paper · January 2003. Source: DLR.


A Traffic Object Detection System for Road Traffic
Measurement and Management
Carsten Dalaff
Institute for Transport Research,
German Aerospace Center (DLR), Berlin, Germany
[email protected]
Ralf Reulke
Institute for Photogrammetry, Stuttgart University, Germany
[email protected]
Axel Kroen
ISSP Consult, Stuttgart, Germany
[email protected]
Thomas Kahl
ASIS GmbH, Berlin, Germany
[email protected]
Martin Ruhe, Adrian Schischmanow, Gerald Schlotzhauer, Wolfram Tuchscheerer
German Aerospace Center (DLR), Berlin, Germany
[email protected]

Abstract
OIS is a new Optical Information System for road traffic observation and management. The complete system architecture, from the sensor for automatic traffic detection up to traffic light management for a wide area, is designed to the requirements of an intelligent transportation system. Particular features of this system are vision sensors with integrated computational and real-time capabilities, real-time algorithms for image processing, and a new approach to dynamic traffic light management for a single intersection as well as for a wide area. The developed real-time image processing algorithms extract traffic data even at night and under bad weather conditions. This approach opens the opportunity to identify and specify each traffic object, its location, its speed and other important object information. Furthermore, the algorithms are able to identify accidents and non-motorized traffic such as pedestrians and bicyclists. By combining all these individual pieces of information, the system creates new, derived and consolidated information. This leads to a new and more complete view of the traffic situation at an intersection. Only in this way is a dynamic and near real-time traffic light management possible. To optimize wide area traffic management it is necessary to improve the modelling and forecasting of traffic flow. For this, information about the current Origin-Destination (OD) flow is essential. Taking this into account, OIS also includes an approach for anonymous vehicle recognition. This approach is based on single object characteristics, the order of objects and forecast information, which is obtained from intersection to intersection.

Keywords: Traffic observation, traffic control, sensor network, sensor fusion

1 Introduction

Traffic observation, control and real-time management is one of the major components of future intelligent transportation systems (ITS). One central demand of the European governments is to increase road safety, so that the number of people killed should be halved by the year 2010. There are nearly 41,900 road casualties and more than 1.7 million seriously injured persons each year in the European Union (EU). This causes about 45 billion Euro of direct and approximately 160 billion Euro of external costs per year. Every day 4,000 km of traffic congestion burden the European highways alone; this amounts to 10% of the complete European highway system. The economic damage is tremendous, but until now there is no common approach to calculate the real amount, so the official numbers differ. The necessary investments in the European transportation field will reach more than 10% of the EU gross national product (GNP). The financial requirements for the transport infrastructure of the acceding countries (e.g. Poland, Estonia) will increase this expense enormously. To realize just the priority projects in these countries, the EU has to spend 91 billion Euro up to 2015. Taking this into account, it seems effective to use part of this money for innovative approaches to traffic management in future intelligent transportation systems. With such an intelligent transportation system, increased road safety can be realized and the economic damage reduced. One appropriate approach could be the use of the traffic object detection system for road traffic measurement and management presented here. This system differs from state-of-the-art traffic measurement equipment, e.g. induction loops, which no longer meets the growing demands of transport research and traffic control.

The project OIS [1] uses optical and informational enabling technologies for automatic traffic data generation with an image processing approach. Its main purpose is to autonomously acquire and evaluate traffic image sequences from roadside cameras. Traffic parameters are obtained from extracted and characterized objects of these image sequences. To meet these requirements, numerous image processing algorithms have been developed over more than 20 years (e.g. the special issue [2]), with simple web-cameras as well as more complex systems (e.g. [3]).

Traffic scene information can be used to optimize traffic flow at intersections during busy periods, to identify stalled vehicles and accidents, and to identify non-motorized traffic like pedestrians and bicyclists. Additional contributions can be obtained for the determination of the Origin-Destination (OD) matrix. The OD matrix contains the information where and when the traffic participants start and end their trips and which routes they have chosen. The OD matrix is one basic element for optimized modelling and forecasting of traffic flow. The estimated traffic flow is necessary for dynamic wide area traffic control, management and travel guidance.

Furthermore, recent advances in computational hardware can provide high computational power with fast networking facilities at an affordable price. The availability of specific solutions in the low-cost general-purpose range allows special image processing and avoids some basic bottlenecks. A couple of traffic data measurement systems already exist. Best known is the induction loop. Induction loops are embedded in the pavement. They are able to measure the presence of a vehicle, its speed and a rough classification. This is only local information, but for wide area traffic management a large coverage of the area is needed.

Another approach is based on the idea that moving vehicles transmit information about their position and velocity via mobile communication, e.g. GSM, to a traffic management center. These data are called Floating Car Data (FCD). To get an overview of the traffic situation of a complete city at any time, a huge number of vehicles has to be equipped with hardware and mobile communication units. To get reliable data, the current position and velocity of the vehicles is needed every minute. This causes enormous costs for the mobile communication. The FCD approach provides spatial traffic information, but the spatial and time resolution does not fit the requirements of a traffic signal control. OIS is a new and innovative traffic observation system that opens the opportunity to deliver all necessary input for a local traffic signal control as well as for a dynamic wide area traffic management system. The next challenge is the implementation of OIS in a wide city area.

2 System Requirements

A modern system for traffic control and real-time management has to meet the following requirements:

• reliability under all illumination and weather conditions
• non-stop operation, 24 hours a day, 7 days a week
• complete overview of the intersection, from at least 20 m in front to 20 m behind
• operation in real-time
• a complete data set of the traffic situation at the intersection every half second

These requirements should be taken into account for the design of all parts of the system, which covers all procedures and processes from image data acquisition, image processing and traffic data retrieval up to traffic control. To realize a system operating 24 hours a day and under different weather conditions, infrared cameras should be used. Algorithms had to be developed for a special camera arrangement (with real-time demands) for vehicle detection and for deriving relevant parameters for traffic description and control.

3 System Overview

To get a complete overview of the intersection from at least 20 m in front to 20 m behind, it is necessary to have more than one camera. The number of required cameras depends on the intersection geometry and the installation possibilities for the cameras on house walls or lampposts. The time-synchronous image data acquisition from one observation point with different cameras is done in a so-called camera node. The camera node is part of the OIS philosophy and consists of different sensors as a combination of VIS and IR cameras. To fit the real-time processing requirement, time-consuming image processing parts are implemented in a special real-time unit.
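The paper does not specify how object features are encoded for transmission from a camera node to the junction computer. As a rough illustration of the idea that only compact object records, not video, leave the node, here is a sketch; the record layout and all field names are invented for this example:

```python
import struct
from dataclasses import dataclass

# Hypothetical per-object record a camera node might transmit instead of
# raw video: a few tens of bytes per object versus megabytes per frame.
@dataclass
class TrafficObject:
    obj_id: int      # label assigned by the camera node
    obj_class: int   # e.g. 0=car, 1=truck, 2=bicycle, 3=pedestrian
    x: float         # position in world coordinates [m]
    y: float
    speed: float     # [m/s]
    heading: float   # [deg]

    FMT = "<IB4f"    # little-endian: uint32, uint8, four float32

    def pack(self) -> bytes:
        return struct.pack(self.FMT, self.obj_id, self.obj_class,
                           self.x, self.y, self.speed, self.heading)

    @classmethod
    def unpack(cls, buf: bytes) -> "TrafficObject":
        return cls(*struct.unpack(cls.FMT, buf))

obj = TrafficObject(17, 0, 12.5, -3.2, 8.3, 95.0)
wire = obj.pack()
print(len(wire))   # 21 bytes per object
```

At 21 bytes per object and a handful of objects per half-second sample, such a stream fits easily into the limited transfer rate the text mentions.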

Image and Vision Computing NZ, Palmerston North, November 2003


This approach is currently being implemented at two intersections in Berlin, Germany, which are equipped with camera systems. The architecture of the complete system is shown in the first figure.

Figure 1: System design for the test bed.

As shown in fig. 1, the camera nodes are part of a hierarchy and are linked via Internet or Wireless LAN with the computer systems at the intersection and the management center. Due to the limited data transfer rate and the possible failure of a system at an intersection, the camera node works independently. Image processing is decentralized and is done in the camera node, so that only objects and object features are transmitted. Together with error information, the object data are collected at the next level of the hierarchy. For synchronization purposes, time signals can be incorporated over a network or from independently received signals (like GPS). The camera node is part of a hierarchy starting with camera nodes, junctions, sub-regions, etc. The next, so-called junction level unifies all cameras observing the same junction. Starting with the information from the camera level, relevant data sets are fused in order to determine the same objects in different images. After that, these objects are tracked as long as they are visible, so traffic flow parameters (e.g. velocity, traffic jams, car tracks) can be retrieved. Level number 3, the so-called region level, uses the traffic flow parameters extracted at the lower level and feeds these data into traffic models in order to control the traffic (e.g. switching the traffic lights). Additional levels can be inserted. For test applications the camera node will generate a compressed data stream. In the typical working mode only object features in an image should be transmitted and collected in a computer of the hierarchy.

Generally, optical sensor systems in the visible and near-infrared range of the electromagnetic spectrum have reached a very high quality standard, which meets the requirements even of high-level scientific and commercial tasks, above all concerning radiometric and geometric resolution and data rate. In contrast, sensors working in the thermal infrared range (TIR) are still a research topic for traffic applications. Technology development for the next few years will not be focused on higher resolutions or faster read-outs, because for most applications the performance of these sensors is sufficient. The emphasis will be put on smart, intelligent sensor systems with different measurement parameters (e.g. different resolutions, different spectral sensitivities), which are connected within a network similar to the internet and are able to convert the incoming physical signals not only to digital data but to process them into the information users need. Therefore, image fusion as well as fast and reliable algorithms are needed, preferably near the sensor itself. Real-time processing and programmable circuits will play an important role.

4 Hardware Concept

The hardware concept is oriented on the requirements described above. The system has to operate 24 hours a day and 7 days a week and to observe the whole junction and at least 20 m of all related streets. To improve the opportunities for image acquisition, more than one camera system per observation standpoint should be in use. Such a camera node fits the first requirement with a high (spatial) resolution camera and a low resolution thermal infrared camera. Stereo and distance measurements are also possible with two identical cameras. The camera node is able to acquire data from up to four cameras in a synchronized mode. It has real-time data processing capabilities and allows synchronous capture of GPS and INS (inertial navigation system) data from externally mounted devices. Due to limitations of camera observation positions and possible occlusions (e.g. from buildings, cars and other disturbing objects), mostly more than one standpoint for a camera node (camera system) is necessary for intersection observation. Therefore communication or data transmission between camera nodes and the computer at the next higher hierarchy level becomes critical. To reduce the data volume in the network, the real-time data processing capability of the camera node is used to speed up image processing and to transmit only object data. For the real-time unit a hardware implementation was chosen: large free programmable logic gate arrays (FPGA) are available now, a programming language is available (VHDL), and different image processing algorithms have been implemented.

5 Data Processing

The processing of data in the sensor web follows successive steps. In the first step image data are generated and pre-processed. After that, the objects are extracted from the images. In the last step all the object features from the different camera nodes are collected, unified and processed into traffic information. The data processing is optimized to the logical design of the system. There are operations for the system configuration, for the operational working, and for data mining and visualization. The results are new data and information. A complex data management system regulates the access, the transmission, the handling and the analysis of the data.
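The object-extraction step (an on-going background update with subtraction and labelling, detailed in section 5.1) can be sketched as follows. This is an illustrative reconstruction, not the OIS implementation; the update rate `alpha` and the threshold are invented values:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """On-going background update: exponential moving average per pixel."""
    return (1 - alpha) * bg + alpha * frame

def extract_objects(bg, frame, thresh=30):
    """Subtract the background, threshold, and label connected blobs."""
    fg = np.abs(frame.astype(float) - bg) > thresh
    labels = np.zeros(fg.shape, dtype=int)
    cur = 0
    for i in range(fg.shape[0]):
        for j in range(fg.shape[1]):
            if fg[i, j] and labels[i, j] == 0:
                cur += 1
                stack = [(i, j)]          # flood-fill one blob (4-connected)
                while stack:
                    a, b = stack.pop()
                    if (0 <= a < fg.shape[0] and 0 <= b < fg.shape[1]
                            and fg[a, b] and labels[a, b] == 0):
                        labels[a, b] = cur
                        stack += [(a + 1, b), (a - 1, b),
                                  (a, b + 1), (a, b - 1)]
    return labels, cur

# Toy 8x8 "frame": a flat background of 10 and one bright 2x2 "vehicle".
bg = np.full((8, 8), 10.0)
frame = bg.copy()
frame[2:4, 3:5] = 200
labels, n = extract_objects(bg, frame)
print(n)   # 1 blob found
bg = update_background(bg, frame)  # background slowly absorbs lasting changes
```

The on-going update makes stalled vehicles detectable for a while, exactly the trade-off discussed in section 5.1; a production system would add the morphological cleanup step on the foreground mask before labelling.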



5.1 Image Processing

The essential processing steps are the elimination of noise and systematic errors, compression/decompression, higher-level image processing, and spatial (or geo-) and time referencing. The last point is necessary to determine space and time coordinates of observed objects as an essential feature for data fusion. Most of the work on vehicle detection or recognition has been done on ground images, mainly as pre-processing before tracking for surveillance or traffic applications [4]. Different approaches, e.g. finding edges [5] or deformable vehicle models [?], can be found. Generally, there are problems in image processing with car occlusion [6] and shadows [7]. Besides stationary image acquisition from the ground, moving platforms and aerial images [8] are also used. Image processing approaches are also used for detecting lanes and obstacles by fusing information [9]. Image processing for OIS was described in detail in [1]. The main processing is a classical object detection and identification task. The following major problems have to be solved:

- object discrimination from a spatially and temporally variable background (crossing, street, buildings, etc.),
- removal of disturbing structures between object and camera, as well as of shadow regions around the object,
- identification of cars in a row, which are occluded by other vehicles.

The first problem can be solved in at least two different ways:

1. working with image sequences, from which the preceding image is subtracted, or
2. following the changing background over time.

With the first approach the background can be eliminated very simply, but stalled vehicles are invisible. The other approach is an on-going update of the background. This requires much more effort, but allows a detection of moving and also stalled cars. As a result of this procedure, the background can be subtracted from the current image and objects can be derived as shown in fig. 2. An additional morphological operation removes clutter and closes object structures. The right image shows the grey-coded objects after labelling.

Figure 2: Object detection in a traffic scene (left: original image, right: processed image).

The other major problem occurs after detecting the object. To determine size and shape, disturbing effects like shadows have to be removed. A straightforward way is the analysis of grey and color values, as well as texture, within the found object boundaries.

5.2 Data and object fusion

To ensure a system operating 24 hours a day even under bad weather conditions, e.g. rain and fog, and at night, the fusion of a visible and an IR sensor is a promising approach. The fusion is possible at different stages of information processing:

- data level (e.g. image data from different cameras),
- object level (objects extracted from the image data),
- information level.

Image matching and registration, as one part of the data fusion, is a procedure that determines the best spatial fit between two or more images acquired at the same time and depicting the same scene, by identical or different sensors. To fuse the different images and/or object data, a synchronization of time is necessary. Time synchronization can be realized by internal clocks (e.g. of a computer) or by external time information (e.g. GPS). For the application of OIS we merge images at camera node level and fuse the position information and object features at junction or region level. The object information is fed in from different camera nodes. Both procedures are explained in more detail below.

Fusion on data level: For 24-hour observations, typical CCD cameras fail because of the limited illumination of the objects; car headlights and rear lamps do not seem to be sufficient. To overcome this problem the self-radiation of the cars, which has its maximum in the thermal spectral region (TIR), can be used. There is a number of different detectors sensitive in this spectral range. Most of these sensors are expensive and need an additional cooler. Recent developments show that bolometer arrays are a candidate for cheaper and uncooled detector arrays. Therefore, a camera development was started which gives full access to the sensor, control, data correction and dataflow. First experiments were done with a commercial system. An example of the data and of their fusion on data level is shown in fig. 3.

The observation was done in the late afternoon. The left image is a typical CCD image. The contrast becomes small; only reflections of sun glitter on car roofs are visible. The middle image was taken with a bolometer sensor. The whole intersection and the street are visible. The right image is the fusion of both. For visualization the grey level image was put in the green channel and the TIR image in the red channel. For merging the visible image and the IR image an affine transformation was used.
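The channel composition and the affine mapping between the two sensors might be sketched as follows. The point correspondences and the scale/offset between the frames are invented for illustration; the paper itself derives the transformation from manually picked equivalent points:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points onto dst points.

    Solves dst ~ A @ [x, y, 1] for the 2x3 matrix A, e.g. from manually
    picked equivalent points in the VIS and TIR images.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    M = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) design matrix
    sol, *_ = np.linalg.lstsq(M, dst, rcond=None)  # (3, 2) solution
    return sol.T                                   # (2, 3) affine matrix

# Made-up correspondences: the "TIR" frame equals the "VIS" frame scaled
# by 2 and shifted by (10, 5), so the true affine is [[2,0,10],[0,2,5]].
vis_pts = [(0, 0), (100, 0), (0, 100), (100, 100)]
tir_pts = [(10, 5), (210, 5), (10, 205), (210, 205)]
A = fit_affine(vis_pts, tir_pts)
print(np.round(A, 3))

def fuse(vis_grey, tir_resampled):
    """Put the grey-level image in the green channel and the TIR in the red."""
    rgb = np.zeros(vis_grey.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = tir_resampled   # red   <- thermal
    rgb[..., 1] = vis_grey        # green <- visible grey level
    return rgb
```

With four or more well-distributed point pairs the least-squares fit is overdetermined, which averages out small picking errors in the manual procedure.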



To fuse data from different sources is obviously desirable, but it needs spatial and time synchronization because of the different imaging systems and observation conditions. For this example, the synchronization task is based on manual procedures, like finding equivalent points in both the IR and the VIS image and calculating the necessary transformation. Especially the automatic spatial synchronization is a research topic. All these operations are done in each camera node.

Figure 3: Sensor fusion of VIS and IR images (I).

Another advantage of TIR images is shown by the following image pair, which was taken from the same observation point, but at daytime.

Figure 4: Sensor fusion of VIS and IR images (II).

Fig. 4 shows an example of fusing RGB and TIR images at daytime. After coregistration of the IR image to the RGB image and application of the affine transform, a direct comparison is possible. In contrast to figure 3, the infrared image was fit directly into the grey level image (a color separation of the RGB image). The RGB image has a much higher resolution (720x576 pixels) than the TIR image; spatial and true color object data can be derived from it. The TIR image has a smaller resolution (320x240 pixels). The combination of both images gives a richer result because of the thermal features, which can be observed on the engine bonnet and as radiation reflected from under the car. These are also new features for object detection and description within the image processing task.

Object data fusion: Occluded regions can only be analyzed with additional views of these objects. Because camera nodes always transmit object information, a data fusion process at object level is necessary. The object lists from the different camera nodes have to be analyzed and unified. Object features like position, size, and shape vary with the view angle. Therefore the object features from the different perspectives have to be compared. The result of this operation is one traffic object with an exact position, size, shape, etc. The prerequisite for this operation is time- and spatially-synchronized image data. The sequential processing of this list allows the derivation of more detailed information, e.g. the tracks of the traffic participants, the removal of erroneous objects, etc.

5.3 Data Acquisition and Georeferencing

The transformation from image to world coordinates is essential in order to calculate metric traffic data. Standard photogrammetric procedures for the transformation of coordinates within monocular images are used. Basic assumptions are well distributed and accurately measured ground control points (GCP) in object and image space as well as an exact camera calibration. The GCPs are determined via DGPS within WGS84 and UTM projection. The calculated camera calibration parameters (interior and exterior orientation) and the image coordinates are the input for the transformation equations. Due to the monocular image acquisition, the object or vehicle positions have to be projected onto an XY-plane in object space. The vehicle positions projected onto that plane depend on the camera distance, the camera declination and the position of the point within the vehicle representing its position. Once the camera calibration is set, the vehicle positions can be transformed to world coordinates within image sequences until the camera position changes. Additionally, intersection geometry, for example lanes etc., can be transformed from image into object space.

5.4 Calculation of Traffic Characteristics

Figure 5 shows the general traffic characteristics calculation process.

Figure 5: General traffic characteristics calculation process.

The image processing (not shown in the figure) cyclically delivers the parameters of all identified traffic objects, such as cars, trucks, cyclists and pedestrians, in traffic object lists (TO lists), containing type, size, speed, direction, geographic location etc. of the objects for a certain sample time. A storage procedure (TO Storage) writes these raw traffic data into a data base for further processing. A tracking procedure (TO Tracking) marks traffic objects appearing in consecutive time samples with a unique object identification. Thus, traffic objects can be pursued throughout the observed traffic area. The tracked traffic objects and their parameters are stored in the Tracked TO Data area of the data base.
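The paper gives no algorithmic details for TO Tracking or the characteristics calculation; a minimal nearest-neighbour sketch, with an invented distance threshold, could look like this:

```python
import math

def track(prev, curr, max_jump=5.0):
    """Give each current detection the id of the nearest previous object
    within max_jump metres; otherwise issue a new unique id.

    prev/curr are lists of dicts with keys 'id' (prev only), 'x', 'y'.
    """
    next_id = max([o["id"] for o in prev], default=-1) + 1
    used = set()
    for obj in curr:
        best, best_d = None, max_jump
        for cand in prev:
            d = math.hypot(obj["x"] - cand["x"], obj["y"] - cand["y"])
            if d < best_d and cand["id"] not in used:
                best, best_d = cand["id"], d
        if best is None:
            obj["id"] = next_id
            next_id += 1
        else:
            obj["id"] = best
            used.add(best)
    return curr

def density_per_km(objs, seg_len_m):
    """Vehicles on a road segment, scaled to a one-kilometre segment."""
    return len(objs) * 1000.0 / seg_len_m

# Two consecutive sample times, half a second apart:
t0 = [{"id": 0, "x": 0.0, "y": 0.0}, {"id": 1, "x": 20.0, "y": 0.0}]
t1 = [{"x": 4.0, "y": 0.0}, {"x": 24.0, "y": 0.0}, {"x": 60.0, "y": 0.0}]
t1 = track(t0, t1)
print([o["id"] for o in t1])      # [0, 1, 2] -- the third car is new
print(density_per_km(t1, 200.0))  # 15.0 vehicles/km on a 200 m segment
```

Flow rates follow the same pattern: count the tracked ids whose positions cross a defined traverse section between samples and scale the count to vehicles per hour.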



Using these data, the Tracking Characteristics Calculation module computes traffic characteristics, e.g. traffic density or flow rates. To compute the traffic density, the number of motor vehicles on a certain road segment is needed. Using their geographical coordinates and direction, all motor vehicles moving along a road segment are selected from the data base. By simply counting these vehicles and scaling their number to a one-kilometre segment, the traffic density is obtained. Flow rates can be determined by counting the number of cars crossing a defined traverse section. Based on the traffic object parameter set over a number of sample times and using the tracking information, the number of vehicles crossing the section is counted and a traffic flow measure in vehicles per hour can be obtained.

6 An Adaptive and Highly Dynamic Network Control

The video-based traffic sensor developed in this project creates possibilities for new concepts of traffic control for intersections and wide area networks. This approach also includes the development of new dynamic and adaptive traffic control models for traffic lights. State of the art in traffic observation is the induction loop. Induction loops are embedded in the pavement and cover an area of about 1 x 2 m. This kind of sensor is able to measure the presence of a vehicle, its speed and a rough classification. In a next processing step it is possible to calculate time intervals between vehicles. These data are needed to control traffic light signals at intersections. For a really dynamic and demand-based traffic signal control, the data of numerous induction loops at one single intersection would be needed. This is neither efficient nor realizable. The OIS sensor web offers a new kind of traffic data and information. It is based on so-called traffic-actuated signals. This means the system automatically detects information about the real-time traffic situation at an intersection, for example the queue length on all the different lanes, the traffic flow at the intersection, the traffic density and the current velocity of each vehicle. The OIS sensor web automatically processes these data: position and speed vectors of each vehicle, queue lengths, as well as other relevant features. This leads to a complete and real-time overview at least 20 m before and behind an intersection. Based on this new quality of data, new approaches are being developed to control traffic light signals on dynamic demand. At the moment most traffic lights are controlled in one of two ways:

1. control by fixed-time signals,
2. control by so-called actuated signals.

Fixed-time signals means: green and red times are fixed and independent of the actual situation at the intersection. Actuated signals means: a number of fixed-time signal plans are used for different demands and situations. At this time there are nearly no sensor webs or measurement systems available that are able to measure each object at an intersection and that provide all the necessary spatial information for a really dynamic traffic control system. OIS is a system that is able to acquire features like size and shape and other object features to classify and identify traffic objects. Summarizing, OIS gives complete data and information (an overview) of a whole intersection.

The project is funded by the German government (Federal Ministry of Education and Research, registration number: 03WKJ02B).

References

[1] R. Reulke, A. Bœrner, H. Hetzheim, A. Schischmanow, and H. Venus. A sensor web for road-traffic observation. In Image and Vision Computing New Zealand 2002, pages 293–298, 2002.

[2] A. Broggi and E. D. Dickmanns. Applications of computer vision to intelligent vehicles. Image and Vision Computing, 18(5):365–366, April 2000.

[3] F. Pedersini, A. Sarti, and S. Tubaro. Multi-camera parameter tracking. IEE Proceedings - Vision, Image and Signal Processing, 148(1):70–77, February 2001.

[4] M. Betke, E. Haritaoglu, and L. S. Davis. Real-time multiple vehicle detection and tracking from a moving vehicle. Machine Vision and Applications, 12(2):69–83, 2000.

[5] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679–698, 1986.

[6] I. Ikeda, S. Ohnaka, and M. Mizoguchi. Traffic measurement with a roadside vision system - individual tracking of overlapped vehicles. In Proceedings of the 13th International Conference on Pattern Recognition, pages 859–864, 1996.

[7] G. S. K. Fung, N. H. C. Yung, G. K. H. Pang, and A. H. S. Lai. Effective moving cast shadow detection for monocular color traffic image sequences. Optical Engineering, 41:1425–1440, June 2002.

[8] R. Chellappa. An integrated system for site model supported monitoring of transportation activities in aerial images. In Proceedings of the 1996 ARPA Image Understanding Workshop, pages 275–304, 1996.

[9] M. Beauvais and S. Lakshmanan. CLARK: a heterogeneous sensor fusion method for finding lanes and obstacles. Image and Vision Computing, 18(5):397–4, April 2000.

