A Traffic Object Detection System For Road Traffic
Abstract
OIS is a new Optical Information System for road traffic observation and management. The complete system
architecture, from the sensor for automatic traffic detection up to the traffic light management for a wide area,
is designed to meet the requirements of an intelligent transportation system. Particular features of this system
are the vision sensors with integrated computational and real-time capabilities, real-time algorithms for image
processing, and a new approach to dynamic traffic light management for a single intersection as well as for a wide
area. The developed real-time image processing algorithms extract traffic data even at night and under bad
weather conditions. This approach opens the opportunity to identify and specify each traffic object, its location,
its speed and other important object information. Furthermore, the algorithms are able to identify accidents and
non-motorized traffic such as pedestrians and bicyclists. By combining all these individual pieces of information,
the system creates new derived and consolidated information. This leads to a new and more complete view of the
traffic situation at an intersection; only in this way is dynamic, near real-time traffic light management possible.
To optimize wide-area traffic management it is necessary to improve the modelling and forecasting of traffic flow,
for which information on the current Origin-Destination (OD) flow is essential. Taking this into account, OIS also
includes an approach for anonymous vehicle recognition. This approach is based on individual object characteristics,
the order of objects and forecast information, which is passed on from intersection to intersection.
Figure 1: System design for the test bed.

As shown in fig. 1, the camera nodes are part of a hierarchy and are linked via Internet or wireless LAN to the computer systems at the intersection and in the management center. Due to the limited data transfer rate and the possible failure of a system at an intersection, each camera node works independently. Image processing is decentralized and is done in the camera node, so that only objects and object features are transmitted. Together with error information, the object data are collected at the next level in the hierarchy. For synchronization purposes, time signals can be obtained over a network or from independently received signals (such as GPS). The hierarchy starts with camera nodes, followed by junctions, sub-regions, etc. The next so-called junction level unifies all cameras observing the same junction. Starting from the information of the camera level, relevant data sets are fused in order to identify the same objects in different images. After that, these objects are tracked as long as they are visible, so that traffic flow parameters (e.g. velocity, traffic jams, car tracks) can be retrieved. Level 3, the so-called region level, uses the traffic flow parameters extracted at the lower level and feeds this data into traffic models in order to control the traffic (e.g. by switching the traffic lights). Additional levels can be inserted. For test applications the camera node generates a compressed data stream; in typical working mode only the object features in an image are transmitted and collected in a computer of the hierarchy. Generally, optical sensor systems in the visible and near-infrared range of the electromagnetic spectrum have reached a very high quality standard, which meets the requirements even of high-level scientific and commercial tasks, above all with regard to radiometric and geometric resolution and data rate. In contrast, sensors working in the thermal infrared range (TIR) are still a research topic for traffic applications. Technology development in the next few years will not focus on higher resolutions or faster read-outs, because the performance of these sensors is sufficient for most applications; the emphasis will be put on smart, intelligent sensor systems with different measurement principles.

4 Hardware Concept

The hardware concept follows the requirements described above. The system has to operate 24 hours a day, 7 days a week, and to observe the whole junction plus at least 20 m of all connecting streets. To improve the opportunities for image acquisition, more than one camera system should be used per observation standpoint. Such a camera node meets the first requirement with a high (spatial) resolution camera and a low-resolution thermal infrared camera. Stereo and distance measurements are also possible with two identical cameras. The camera node is able to acquire data from up to four cameras in synchronized mode. It has real-time data processing capabilities and allows synchronous capture of data from GPS and INS (inertial navigation system) units, which should be mounted externally. Owing to the limited choice of camera observation positions and possible occlusions (e.g. by buildings, cars and other obstructing objects), more than one camera-node standpoint is usually necessary for intersection observation. Communication and data transmission between the camera nodes and the computer at the next higher level of the hierarchy therefore become critical. To reduce the data volume in the network, the real-time processing capability of the camera node is used to speed up image processing and to transmit only object data. A hardware implementation was chosen for the real-time unit: large field-programmable gate arrays (FPGAs) are now available, a programming language (VHDL) exists for them, and different image processing algorithms have been implemented.

5 Data Processing

The processing of data in the sensor web follows successive steps. In the first step, image data are generated and pre-processed. After that, the objects are extracted from the images. In the last step, all the object features from the different camera nodes are collected, unified and processed into traffic information. The data processing is optimized for the logical design of the system. There are operations for system configuration, for operational work, and for data mining and visualization; the results are new data and information. A complex data management system regulates the access, transmission, handling and analysis of the data.
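The decentralized flow described here, in which camera nodes transmit only object features that are then collected per junction at the next hierarchy level, can be sketched as follows. This is a minimal illustration only; all class names and fields are assumptions for the sketch, not part of OIS:

```python
from dataclasses import dataclass, field

# Hypothetical shape of the object data a camera node might transmit
# instead of raw images (fields chosen to mirror the text: type, location,
# size, speed, synchronized sample time).

@dataclass
class TrafficObject:
    obj_type: str        # e.g. "car", "truck", "pedestrian", "bicycle"
    position: tuple      # geographic location (x, y) in metres
    size: tuple          # bounding-box width and length in metres
    speed: float         # m/s
    heading: float       # degrees
    timestamp: float     # sample time, synchronized via network or GPS

@dataclass
class CameraNodeMessage:
    node_id: str
    timestamp: float
    objects: list
    errors: list = field(default_factory=list)  # error info travels with the object data

def collect_at_junction(messages):
    """Junction level: group the object lists of all camera nodes that
    observed the junction at the same (synchronized) sample time, so
    that later fusion can match the same objects across views."""
    by_time = {}
    for msg in messages:
        by_time.setdefault(msg.timestamp, []).append(msg)
    return by_time

# Two nodes report the (probably) same car at the same sample time.
msgs = [
    CameraNodeMessage("node-a", 10.0,
                      [TrafficObject("car", (5.0, 2.0), (1.8, 4.2), 8.3, 90.0, 10.0)]),
    CameraNodeMessage("node-b", 10.0,
                      [TrafficObject("car", (5.1, 2.1), (1.7, 4.1), 8.1, 91.0, 10.0)]),
]
grouped = collect_at_junction(msgs)
print(len(grouped[10.0]))  # 2: both nodes reported for sample time 10.0
```

The time-synchronized grouping is the precondition the text states for object-level fusion: only messages with the same sample time may be compared.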
Figure 4: Sensor fusion of VIS and IR images (II).

Fig. 4 shows an example of fusing RGB and TIR images in daytime. After coregistering the IR image to the RGB image and applying an affine transform, a direct comparison is possible. In contrast to figure 1, the infrared image was fitted directly into the grey-level image (a color separation of the RGB image). The RGB image has a much higher resolution (720x576 pixels) than the TIR image (320x240 pixels); spatial and true-color object data can be derived from it. The combination of both images gives a richer result because of the thermal features, which can be observed on the engine bonnet and as radiation reflected from under the car. These are also new features for object detection and description within the image-processing task.

Object Data Fusion: Occluded regions can only be analyzed with additional views of these objects. Because the camera nodes always transmit object information, a data fusion process at the object level is necessary. The object lists from the different camera nodes have to be analyzed and unified. Object features such as position, size and shape vary with the viewing angle; therefore the object features from different perspectives have to be compared. The result of this operation is one traffic object with an exact position, size, shape, etc. The precondition for this operation is temporally and spatially synchronized image data. The sequential processing of this list allows the derivation of more detailed information, e.g. the tracks of traffic participants, the removal of erroneous objects, etc.

5.4 Calculation of Traffic Characteristics

Figure 5 shows the general traffic characteristics calculation process.

Figure 5: General traffic characteristics calculation process.

The image processing (not shown above) cyclically delivers the parameters of all identified traffic objects, such as cars, trucks, cyclists and pedestrians, in traffic object lists (TO Lists) containing the type, size, speed, direction, geographic location, etc. of the objects for a certain sample time. A storage procedure (TO Storage) writes this raw traffic data into a database for further processing. A tracking procedure (TO Tracking) marks traffic objects appearing in consecutive time samples with a unique object identification. Thus, traffic objects can be pursued throughout the observed traffic area. The tracked traffic objects and their parameters are stored in the Tracked TO Data area of the database.