

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B4, 2012
XXII ISPRS Congress, 25 August – 01 September 2012, Melbourne, Australia

AUTOMATED 3D ROAD SIGN MAPPING WITH STEREOVISION-BASED


MOBILE MAPPING EXPLOITING DISPARITY INFORMATION FROM
DENSE STEREO MATCHING

S. Cavegn, S. Nebiker

Institute of Geomatics Engineering


FHNW University of Applied Sciences and Arts Northwestern Switzerland, Muttenz, Switzerland
(stefan.cavegn, stephan.nebiker)@fhnw.ch

Commission IV, WG IV/2

KEY WORDS: Mobile, Mapping, Point Cloud, Extraction, Classification, Matching, Infrastructure, Inventory

ABSTRACT:

This paper presents algorithms and investigations on the automated detection, classification and mapping of road signs which
systematically exploit depth information from stereo images. This approach was chosen due to recent progress in the development of
stereo matching algorithms enabling the generation of accurate and dense depth maps. In comparison to mono imagery-based
approaches, depth maps also allow 3D mapping of the objects. This is essential for efficient inventory and for future change
detection purposes. Test measurements with the mobile mapping system by the Institute of Geomatics Engineering of the FHNW
University of Applied Sciences and Arts Northwestern Switzerland demonstrated that the developed algorithms for the automated 3D
road sign mapping perform well, even under difficult to poor lighting conditions. Approximately 90% of the relevant road signs with
predominantly red, blue and yellow colors in Switzerland can be detected, and 85% can be classified correctly. Furthermore, fully
automated mapping with a 3D accuracy of better than 10 cm is possible.

1. INTRODUCTION

A great many road signs can be found along the streets in Western Europe; in Switzerland alone, for example, approximately five million signs are in existence. In many cases, there is no digital information available concerning the position and state of these road signs, and several of them are believed to be unnecessary. For analysis purposes and to overcome these issues, a road sign inventory could be the solution. To establish such an inventory, attribute data and images of the road signs are traditionally captured in situ and the position is determined using a GNSS receiver with meter to decimeter accuracy. In recent years, mapping and inventory have increasingly been carried out on the basis of data recorded with mobile mapping systems, as they permit efficient mapping of 3D road infrastructure assets without disrupting the traffic flow and endangering the surveying staff. In Belgium, road signs over the whole country could be mapped by means of laserscanning data; attribute data was mostly obtained by user interaction (Trimble 2009). In The Netherlands, road sign mapping was carried out manually on the basis of panorama imagery which was collected every 5 m (de With et al. 2010). If road signs can largely be extracted automatically from georeferenced images, the manual effort can be reduced significantly. This paper introduces algorithms for the automated road sign detection and classification from mobile stereo image sequences as well as for the determination of the 3D position and other attribute data. These algorithms were primarily optimized for road signs in Switzerland with mainly red, blue and yellow colors, which can appear in the shapes circle, triangle, rectangle, square and diamond as well as in four different dimensions depending on the road type (see Figure 1). However, the algorithms can be adapted to road signs of other countries. Since driver assistance systems or intelligent autonomous vehicles are not the focus of this work, real-time execution is not of top priority. Instead, the emphasis is on completeness, correctness and geometric accuracy.

[Figure 1: example sign shapes by color. Red: circle, triangle. Blue: circle, rectangle, square. Yellow: diamond.]

Figure 1. Road signs in Switzerland which can automatically be detected, classified and mapped with the developed algorithms


The image-based road sign extraction process can typically be subdivided into two main steps. First, a detection of the road signs is carried out aiming at localizing potential candidates. Second, a classification is necessary to identify the type of road sign. If the absolute position of the detected road signs is of interest, mapping of the signs is performed in a third step. A comprehensive overview of different approaches for road sign detection and classification is given in Nguwi & Kouzani (2008); the most relevant are documented in the following chapters and at length in Cavegn & Nebiker (2012).

1.1 Detection of road signs

In many cases, road sign detection is based on color information. Color segmentation with thresholds allows fast focusing on search regions. As the RGB color space is sensitive to changes of lighting conditions due to shadows, illumination and view geometry as well as strong reflections, segmentation is usually carried out in the HSV color space based on the hue and saturation components (Fleyeh 2006, Maldonado-Bascón et al. 2008). Madeira et al. (2005) use the hue and the chromatic RGB component for color segmentation. In comparison to the chromatic RGB component, the saturation component is very sensitive to noise in case of small values.

1.2 Classification of road signs

Road signs are frequently classified by means of neural networks (de la Escalera et al. 2003, Nguwi & Kouzani 2008). Since the algorithms have to be trained on many images appearing in different scaling, orientation and illumination contexts, they are usually implemented for only a few types such as speed signs (Ren et al. 2009). Another method for the classification process is template matching. This intensity-based image correlation approach is, for example, used by Piccioli et al. (1996) and Malik et al. (2007). In its basic form, it is not robust regarding scaling, rotation or affine transformations in general and is sensitive to illumination changes (Ren et al. 2009).

1.3 Further approaches for the detection and classification of road signs

Many approaches are not designed to exclusively detect or classify road signs, but are able to perform both tasks. A few of them are mentioned in the following.

The Hough transform tolerates gaps and is not very sensitive to noise. However, due to different dimensions and shapes of road signs, many scales have to be considered, which negatively influences the computation time and memory requirements. Therefore, real-time applications need faster modified methods. Chutatape & Guo (1999) proposed a modified version of the Hough transform which is utilized by Kim et al. (2006) for road sign detection following the extraction of edges from image data by means of the Canny operator. Barrile et al. (2007) detect shapes based on the standardized Hough transform. For the classification, they use the generalized Hough transform, which is also utilized by Habib et al. (1999) on edges extracted with the Canny filter.

The approaches of Support Vector Machines (SVM) and Scale Invariant Feature Transform (SIFT) are increasingly applied to both road sign detection and classification. If the SIFT approach by Lowe (2004) is used, the extracted features are invariant in terms of translation, rotation and scaling as well as insensitive to illumination changes, image noise and small geometric deformations (Reiterer et al. 2009, Ren et al. 2009). Maldonado-Bascón et al. (2007) implemented two types of SVM which enable their algorithms to handle translations, rotations, scaling and mostly partial occlusions.

2. EXPLOITATION OF DEPTH INFORMATION FROM STEREOVISION GEOMETRY

For the designed and subsequently presented approach aiming at detection, classification and mapping of road signs, the exploitation of depth maps from stereovision imagery is the core element. Although depth information has an enormous potential, earlier and related work on vision-based road sign extraction was primarily focused on utilizing mono imagery. Only Cyganek (2008) incorporated depth data from stereo imagery as an optional contribution for search space reduction in the extraction process. Furthermore, previous investigations in general did not focus on establishing the 3D position of the extracted road signs. Exceptions are Madeira et al. (2005), Kim et al. (2006) and Baró et al. (2009), who determine the absolute 3D object point coordinates based on stereo imagery, as well as Shi et al. (2008), who use a combined approach of image and laserscanning data. While Shi et al. (2008) are able to achieve an accuracy of approximately 30 cm, Madeira et al. (2005) only obtain point coordinates with meter accuracy. However, precise determination of infrastructure objects in all three dimensions in a global geodetic reference system is crucial and has become increasingly important with respect to traffic planning, automated change detection, simulations and visual inspection in mixed reality environments.

For efficient data capturing, a stereovision-based mobile mapping system (MMS) has to be employed (see Figure 2). The generation of depth maps is advantageously based on normalized images. Therefore, the distortion of the collected stereo images has to be corrected and the imagery subsequently transformed into the stereo normal case. Based on the resulting normalized images, the disparity for each pixel is determined by means of a stereo matching algorithm. The stereo geometry allows computing a depth value for each disparity, and all values of an image constitute a depth map. For the investigations described in this paper, dense matching was performed with the semi-global block matching algorithm implemented in OpenCV (OpenCV 2012), which differs in a few points from the SGM algorithm by Hirschmüller (2008) (e.g. computation of matching costs).

For the subsequent automated detection and mapping of road signs, both normalized images and depth maps are required (see Figure 2). The classification process additionally needs templates of all possible road signs. After successful detection, classification and mapping, the regions of interest, the attribute data and the 3D position of the road signs are known.

The developed object extraction algorithms exploit the stereo disparities and the derived depth maps, respectively, for the following tasks (see the sketch after this list):
• Search space reduction using a predefined distance range interval
• Definition of distance-related criteria for the color segments
• Generation of regions with similar depth values (planar segments)
• Computation of 3D coordinates
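The following is a minimal sketch of the depth map generation and distance-based search space reduction described above, using OpenCV's semi-global block matcher. It is an illustration rather than the authors' implementation: the focal length, stereo base, SGBM parameters and file names are assumed values.

```python
# Minimal sketch (not the authors' implementation): dense disparities with
# OpenCV's semi-global block matcher and conversion into a metric depth map.
# Assumes rectified ("normalized") stereo images; focal length, stereo base,
# SGBM parameters and file names are illustrative assumptions.
import cv2
import numpy as np

FOCAL_LENGTH_PX = 2000.0   # assumed value, would come from the camera calibration
STEREO_BASE_M = 0.9        # approx. 90 cm stereo base as reported for the IVGI MMS

left = cv2.imread("left_normalized.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_normalized.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,    # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,          # smoothness penalties of the semi-global optimization
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Stereo normal case: depth Z = f * B / d, valid where the disparity is positive.
depth = np.full(disparity.shape, np.nan, dtype=np.float32)
valid = disparity > 0
depth[valid] = FOCAL_LENGTH_PX * STEREO_BASE_M / disparity[valid]

# Search space reduction with a predefined distance range interval,
# e.g. the 4 to 14 m range used for the investigations in this paper.
search_mask = (depth >= 4.0) & (depth <= 14.0)
```

With a stereo base of about 0.9 m, the base-depth ratio of 0.06 to 0.25 mentioned in Section 3.1 corresponds roughly to depths between 3.6 m and 15 m, which is consistent with the 4 to 14 m range used for the investigations.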


Figure 2. Input and output data for the automated detection, classification and 3D mapping of road signs

3. DEVELOPED ALGORITHMS AND SOFTWARE MODULES

The presented approach, which is based on stereo images and depth maps, was implemented in Matlab with several algorithms and software modules. They cover the whole workflow from the automated detection and classification through to the mapping of road signs (see Figure 3) and are explained below.

[Figure 3 workflow: input: normalized image and depth map; transformation into HSV color space; reduction to predefined distance range interval; color segmentation; evaluation of color segments (criteria for dimensions); generation of planar segments; shape determination using roundness and fill factor; computation of standardized dimensions; computation of detection indicator (DETECTION); template matching with predefined templates (CLASSIFICATION); determination of 3D position (MAPPING); output: attribute data per road sign]

Figure 3. Developed algorithms for the automated detection, classification and 3D mapping of chromatic road signs (gray fields: operations exploiting the disparity and depth information respectively)

3.1 Automated detection of road signs

The input to the detection process consists of the left normalized image and the corresponding depth map for each stereo image pair (see Figure 4). Since no permanent road signs are expected to occur in the lower third of the normalized image, this region is colored black. As the hue and saturation components are relatively insensitive to the varying lighting conditions typical of vision-based mobile mapping, the RGB normalized image is transformed into the HSV color space. Afterwards, the depth map is used to restrict the subsequent search space in the imagery by applying a predefined distance range interval. To enable the detection of road signs on an adjacent lane, a base-depth ratio from 0.06 to 0.25 was chosen. In addition, with a high image acquisition frequency, the same road sign can be detected and classified redundantly. The segmentation of red, blue and yellow color segments is carried out using thresholds for the hue and saturation components, which were determined empirically based on images from different measuring campaigns. For blue segments, the hue values have to be between 0.52 and 0.72 and the saturation range is from 0.20 to 0.80. Pixels featuring a hue value between 0.04 and 0.19 as well as a saturation value higher than 0.50 and smaller than 0.98 cover yellow segments. If the area of the color segment corresponds to distance-related criteria, its shape is described by the two features roundness and fill factor:

roundness = 4 · π · segment area / (segment circumference)²    (1)

fill factor = segment area / minimum bounding rectangle area    (2)

The extents of a segment must match the standardized road sign dimensions within a certain tolerance. Again, the dense depth maps are used in determining the metric heights and widths of segments in object space. The depth maps are also utilized in the detection of planar segments. These are regions with similar depth values. The ratio between the area of the planar segment in the color segment (intersection of Figure 4f and 4j) and the full area of the color segment (Figure 4f) serves as detection indicator which is used to assess the detection process.
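As an illustration of the detection criteria above, the following sketch reimplements the color segmentation with the reported blue and yellow hue/saturation intervals and the two shape features (Eqs. 1 and 2) in Python/OpenCV. The original software was written in Matlab; the function names, the use of OpenCV and the largest-contour simplification are assumptions made for this example.

```python
# Minimal sketch (not the authors' Matlab implementation) of the color
# segmentation and shape criteria described in Section 3.1. The hue and
# saturation intervals are the values reported above; everything else is assumed.
import cv2
import numpy as np

def color_masks(bgr_image):
    """Binary masks for blue and yellow segments (hue/saturation scaled to 0..1)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    hue = hsv[:, :, 0] / 179.0      # OpenCV stores hue in 0..179 for 8-bit images
    sat = hsv[:, :, 1] / 255.0
    blue = (hue >= 0.52) & (hue <= 0.72) & (sat >= 0.20) & (sat <= 0.80)
    yellow = (hue >= 0.04) & (hue <= 0.19) & (sat > 0.50) & (sat < 0.98)
    return blue.astype(np.uint8), yellow.astype(np.uint8)

def shape_features(segment_mask):
    """Roundness (Eq. 1) and fill factor (Eq. 2) of the largest color segment."""
    found = cv2.findContours(segment_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = found[0] if len(found) == 2 else found[1]   # OpenCV 4 vs. 3 return values
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(contour)
    circumference = cv2.arcLength(contour, True)
    (_, _), (w, h), _ = cv2.minAreaRect(contour)           # minimum bounding rectangle
    if circumference == 0 or w * h == 0:
        return None
    roundness = 4.0 * np.pi * area / circumference ** 2
    fill_factor = area / (w * h)
    return roundness, fill_factor
```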


Figure 4. Automated detection of a road sign with predominantly yellow colors (a: left normalized image, b: hue component,
c: saturation component, d: distance reduced hue component, e: distance reduced saturation component, f: yellow color segments,
g: depth map, h: distance reduced depth map, i: planar segments, j: planar segments after morphological operations)

3.2 Automated classification and mapping of road signs

The classification process for a detected road sign is performed using cross-correlation-based template matching with predefined reference templates. Since the hierarchical classification approach uses the properties color and shape to considerably reduce the candidate set, not all road signs have to be tested. Road signs which do not occur on the captured roads can also be excluded from the classification process.

The template with the highest normalized cross-correlation coefficient within the search image is determined. This value also serves as classification indicator. If it exceeds a predefined threshold, the classification is considered successful. Candidates with a detection or classification indicator below this predefined threshold are assigned to a list of uncertain objects for a subsequent user-controlled verification and (re-)classification.

The dimensions of the search image are defined by the road sign dimensions in the normalized image plus a margin on each side (e.g. 10 pixels). The template is scaled to the dimensions of the color segment within the search image. The correlation is computed based on the channel which empirically showed the highest similarity between a real road sign image and the corresponding synthetic template. This is the blue channel for red road signs, the red channel for blue signs and the saturation component for yellow road signs. Depending on the image quality, pixels of the search image which are white in the template can have a gray value that is too low. Hence, to improve the matching results, all pixel values are binarized to white (maximal value) or black (zero). The required threshold is computed dynamically based on the gray value distribution of the search image. Further details can be found in Cavegn & Nebiker (2012).

When a road sign could be detected and classified, the 3D object coordinates of the sign are determined. For the computation of the model coordinates, the image coordinates of the sign's center of gravity, the corresponding depth value as well as the parameters of the interior orientation are needed. The subsequent transformation to the desired geodetic reference system requires that the exterior orientation parameters of the left normalized image are known.

The detection, classification and mapping processes automatically yield a number of attribute data such as the 3D coordinates, the template number and the standardized side lengths. They can further be used for creating or updating a GIS database. This is essential, because even in highly developed countries, GIS-based digital road sign inventories either do not yet exist at all or were derived from analogue maps and are normally not up-to-date.
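The following is a minimal sketch of the two operations described in this section: normalized cross-correlation as classification indicator, and the derivation of a 3D model point from the segment's center of gravity, its depth value and the interior orientation. It is not the authors' implementation; the function names, the pinhole parameterization (fx, fy, cx, cy) and the exterior orientation (R, t) applied as a rigid-body transformation are assumptions.

```python
# Minimal sketch (not the authors' implementation) of the classification
# indicator and the 3D point computation; names and parameters are assumed.
import cv2
import numpy as np

def classification_indicator(search_image, template):
    """Highest normalized cross-correlation coefficient of the (scaled) template."""
    # search_image and template must share the same data type; the template is
    # assumed to be already scaled to the dimensions of the color segment.
    result = cv2.matchTemplate(search_image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    return max_val   # compared against the dynamically computed threshold

def model_point_from_depth(u, v, depth, fx, fy, cx, cy):
    """3D point in the camera (model) frame from image coordinates and depth."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])

def to_world(point_cam, rotation, translation):
    """Apply the exterior orientation of the left normalized image (R, t)."""
    return np.asarray(rotation) @ point_cam + np.asarray(translation)
```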


4. INVESTIGATIONS AND RESULTS

The implemented algorithms were evaluated based on two field test campaigns in the city of Muttenz near Basel with the stereovision-based mobile mapping system of the FHNW Institute of Geomatics Engineering (IVGI). Currently, this MMS features two pairs of stereo systems, each with a stereo base of approximately 90 cm, and with industry cameras at different geometric resolutions (Full HD and 11 MP). Direct georeferencing of the stereo imagery is provided by an entry-level GNSS/IMU system in combination with a distance measuring indicator. Earlier empirical tests of the IVGI MMS in multiple test campaigns demonstrate accuracies in object coordinate space for well-defined points of 3-4 cm in along-track and cross-track and 2-3 cm in the vertical dimension, given good GNSS availability (Burkhard et al. 2012).

The first test campaign was carried out in winter (November 2010) with difficult to poor lighting conditions, the second in summer (July 2011) in sunny conditions. In both cases, about 2500 stereo image pairs were captured on residential roads at five frames per second and at a driving speed of approximately 40 km/h, resulting in about one Full HD stereo frame every two meters. For the subsequent evaluation of the detection and classification quality, all relevant road signs with predominantly red, blue and yellow colors were identified. These relevant signs were all road signs adjacent to the driving lane on the right-hand side facing the driver, i.e. with a road sign plane roughly perpendicular to the road axis, thus covering the vast majority of road signs. The group of relevant road signs did not include road signs for cross-roads which were not facing the mapped roads. The designed algorithms were applied in the distance range from 4 to 14 m. The first test with winter imagery yielded an automatic detection of 89% of all relevant road signs and a correct automatic classification of 82% (see Table 1). Based on the summer imagery, 91% of all relevant road signs could automatically be detected, and 89% could be classified correctly. With an additional user-supported step, the classification accuracy could be increased by another 5%. Due to this user-supported approach and some further built-in constraints, there were hardly any false positives.

There are different reasons for an incorrect detection or classification of road signs. The detection process yields many red segments close to construction areas due to safety fences and warning devices. If the areas of these color segments are not too big, they can lead to false positives. Although the road signs in Switzerland generally appear in good condition, a few of them are yellowed. Thus, there are very low values for the saturation component. The same is also the case for road signs which are located in shadows. Since the defined threshold for this component cannot be exceeded, the detection of such road signs is not possible. In addition, there are some difficulties in automatically detecting road signs if the depth maps are poor or incomplete. For several road signs, no predefined template exists, which leads to no or a wrong classification. A suboptimal threshold for the search image binarization can cause a too low correlation coefficient.

                 Quantity   Detection   Classification   False positives
Winter   all          152        55%              47%                 2
         relevant      65        89%              82%
Summer   all           96        71%              64%                 4
         relevant      46        91%              89%

Table 1. Detection and classification quality of the developed algorithms for two test campaigns

For the evaluation of the geometric accuracy, 3D positions for 22 reference road signs were determined using precise tachymetric observations. For the first test campaign, the differences between the 3D positions which were automatically derived by the described algorithms and the reference positions were computed. The maximal residual for a component is 16 cm; however, most differences are in the range of 5 cm (see Table 2). For the empirical standard deviation of the 3D position difference, a value of 9.5 cm was calculated.

in mm      ∆across   ∆along   ∆height   ∆3D
Mean           -36       23       -36     86
Maximum        152      146       157    159
mdiff           46       64        53     95

Table 2. Mapping accuracy of the developed algorithms for the winter test campaign
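The following short sketch shows how the difference statistics reported in Table 2 could be derived from per-sign residuals (automatically mapped minus reference positions). It assumes that mdiff denotes the empirical (root mean square) value of the differences; the input file name is hypothetical.

```python
# Sketch of the difference statistics in Table 2, computed from per-sign
# residuals in mm; "mdiff" is assumed to be the root mean square of the
# differences, and the input file name is hypothetical.
import numpy as np

residuals = np.loadtxt("residuals_mm.txt")       # N x 3: across, along, height
diff_3d = np.linalg.norm(residuals, axis=1)      # 3D position differences

mean_per_component = residuals.mean(axis=0)
max_per_component = np.abs(residuals).max(axis=0)
mdiff_per_component = np.sqrt((residuals ** 2).mean(axis=0))

mean_3d = diff_3d.mean()                         # about 86 mm for the winter data
max_3d = diff_3d.max()
mdiff_3d = np.sqrt((diff_3d ** 2).mean())        # about 95 mm for the winter data
```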
5. CONCLUSIONS AND OUTLOOK

The investigations demonstrate the potential, in terms of automation and accuracy, offered by stereovision-based mobile mapping if dense depth information is exploited. Approximately 90% of the relevant road signs with predominantly red, blue and yellow colors in Switzerland can be detected, and 85% can be classified correctly. By means of a user-supported approach (Cavegn & Nebiker 2012), these rates can be increased by another 5%. Therefore, only 5 to 10% of the road signs have to be digitized either interactively in the stereo imagery or on site. Moreover, due to various constraints built into the algorithms, there are hardly any false positives.

The presented approach is robust in terms of scaling, translations and small rotations. Although better results are expected for nearby road signs, signs can be detected in the whole predefined distance range interval. Road signs can be arbitrarily positioned in the image and small rotations are tolerated. Furthermore, it is possible to detect multiple road signs in the same image, appearing in the shapes circle, rectangle, square, triangle and diamond.

Not only depth maps of good quality but also sufficient color segmentation is crucial for the detection success. For this purpose, appropriate thresholds have to be applied. For the presented investigations, the interval for each component was chosen to be quite large. However, this was only possible since the search space could significantly be reduced due to depth information and false positives could be rejected using certain built-in constraints.

Since a detection and classification quality of 100% is unlikely, it is possible to overlay the automatically mapped road signs in a georeferenced 3D video. The 3D videos can be viewed with a stereovision client (e.g. Burkhard et al. 2011), the results visually verified and the missing road signs quickly digitized. In the future, a first implementation of the algorithms for white and gray road signs, which uses the depth information in combination with the Hough transform (Cavegn & Nebiker 2012), will further be improved. The detection of other complex road signs and the identification of text (Wu et al. 2005) are also planned. An increase of the geometric accuracy and reliability could be achieved by matching in stereo image sequences (Huber et al. 2011). Tracking of road signs over multiple stereo image pairs would particularly enhance the semantic quality.

The goal of related work in progress is to determine the impact of different camera resolutions on the detection and classification quality. First investigations with a stereo system composed of industry cameras with a higher resolution of eleven megapixels show a slight improvement of the results. For the identification of text, the higher geometric resolution is mandatory. Current investigations also show that the depth map quality can significantly be increased using both image sensors with a higher resolution and adequate radiometric adjustments, which again positively affects the automated road sign mapping.


REFERENCES

Baró, X., Escalera, S., Vitrià, J., Pujol, O. & Radeva, P., 2009. Traffic Sign Recognition Using Evolutionary Adaboost Detection and Forest-ECOC Classification. IEEE Transactions on Intelligent Transportation Systems, 10(1), pp. 113-126.

Barrile, V., Cacciola, M., Meduri, G. M. & Morabito, F. C., 2007. Automatic Recognition of Road Signs by Hough Transform. 5th International Symposium on Mobile Mapping Technology, Padua, Italy.

Burkhard, J., Nebiker, S. & Eugster, H., 2011. Stereobildbasiertes Mobile Mapping: Technologie und Anwendungen. Geomatik Schweiz, 109(6), pp. 295-298.

Burkhard, J., Cavegn, S., Barmettler, A. & Nebiker, S., 2012. Stereovision Mobile Mapping: System Design and Performance Evaluation. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, XXII ISPRS Congress, Melbourne, Australia.

Cavegn, S. & Nebiker, S., 2012. Automatisierte Verkehrszeichenkartierung aus mobil erfassten Stereobilddaten unter Verwendung der Tiefeninformation aus Dense-Stereo-Matching. Photogrammetrie – Fernerkundung – Geoinformation (PFG), (accepted in March 2012).

Chutatape, O. & Guo, L., 1999. A modified Hough transform for line detection and its performance. Pattern Recognition, 32(2), pp. 181-192.

Cyganek, B., 2008. Road-Signs Recognition System for Intelligent Vehicles. Second International Workshop, RobVis 2008, Auckland, New Zealand, pp. 219-233.

de la Escalera, A., Armingol, J. M. & Mata, M., 2003. Traffic sign recognition and analysis for intelligent vehicles. Image and Vision Computing, 21(3), pp. 247-258.

de With, P., Hazelhoff, L., Creusen, I. & Bruinsma, H., 2010. Efficient Road Maintenance. Automatic Detection and Positioning of Traffic Signs. GEOInformatics, 13(7), pp. 10-12.

Fleyeh, H., 2006. Shadow And Highlight Invariant Colour Segmentation Algorithm For Traffic Signs. IEEE Conference on Cybernetics and Intelligent Systems, Bangkok, Thailand.

Habib, A. F., Uebbing, R. & Novak, K., 1999. Automatic Extraction of Road Signs from Terrestrial Color Imagery. Photogrammetric Engineering & Remote Sensing, 65(5), pp. 597-601.

Hirschmüller, H., 2008. Stereo Processing by Semiglobal Matching and Mutual Information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2), pp. 328-341.

Huber, F., Nebiker, S. & Eugster, H., 2011. Image Sequence Processing in Stereovision Mobile Mapping – Steps towards Robust and Accurate Monoscopic 3D Measurements and Image-Based Georeferencing. Photogrammetric Image Analysis 2011, Lecture Notes in Computer Science 6952. Springer, Berlin, pp. 85-95.

Kim, G.-H., Sohn, H.-G. & Song, Y.-S., 2006. Road Infrastructure Data Acquisition Using a Vehicle-Based Mobile Mapping System. Computer-Aided Civil and Infrastructure Engineering, 21(5), pp. 346-356.

Lowe, D. G., 2004. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2), pp. 91-110.

Madeira, S. R., Bastos, L. C., Sousa, A. M., Sobral, J. F. & Santos, L. P., 2005. Automatic Traffic Signs Inventory Using a Mobile Mapping System. International Conference and Exhibition on Geographic Information GIS Planet, Estoril, Portugal.

Maldonado-Bascón, S., Lafuente-Arroyo, S., Gil-Jiménez, P., Gómez-Moreno, H. & López-Ferreras, F., 2007. Road-Sign Detection and Recognition Based on Support Vector Machines. IEEE Transactions on Intelligent Transportation Systems, 8(2), pp. 264-278.

Maldonado-Bascón, S., Lafuente-Arroyo, S., Siegmann, P., Gómez-Moreno, H. & Acevedo-Rodríguez, J., 2008. Traffic Sign Recognition System for Inventory Purposes. IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, pp. 590-595.

Malik, R., Khurshid, J. & Ahmad, S. N., 2007. Road Sign Detection and Recognition using Colour Segmentation, Shape Analysis and Template Matching. 6th International Conference on Machine Learning and Cybernetics, Hong Kong, pp. 3556-3560.

Nguwi, Y.-Y. & Kouzani, A. Z., 2008. Detection and classification of road signs in natural environments. Neural Computing and Applications, 17(3), pp. 265-289.

OpenCV, 2012. OpenCV v2.1 Documentation. Camera Calibration and 3d Reconstruction. StereoSGBM. http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html#stereosgbm (13.4.2012).

Piccioli, G., De Micheli, E., Parodi, P. & Campani, M., 1996. Robust method for road sign detection and recognition. Image and Vision Computing, 14(3), pp. 209-223.

Reiterer, A., Hassan, T. & El-Sheimy, N., 2009. Robust Extraction of Traffic Signs from Georeferenced Mobile Mapping Images. 6th International Symposium on Mobile Mapping Technology, Presidente Prudente, São Paulo, Brazil.

Ren, F., Huang, J., Jiang, R. & Klette, R., 2009. General Traffic Sign Recognition by Feature Matching. 24th International Conference on Image and Vision Computing New Zealand, Wellington, pp. 409-414.

Shi, Y., Shibasaki, R. & Shi, Z. C., 2008. Towards Automatic Road Mapping by Fusing Vehicle-Borne Multi-Sensor Data. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 37(B5), pp. 867-872.

Trimble, 2009. Signs of Change in Belgium. Technology & more, 2009(3), pp. 12-13.

Wu, W., Chen, X. & Yang, J., 2005. Detection of Text on Road Signs From Video. IEEE Transactions on Intelligent Transportation Systems, 6(4), pp. 378-390.
