
http://www.diva-portal.org

This is the published version of a paper presented at IEEE Conference on Cybernetics and
Intelligent Systems.

Citation for the original published paper:

Fleyeh, H. (2004)
COLOR DETECTION AND SEGMENTATION FOR ROAD AND TRAFFIC SIGNS
In: IEEE (ed.), CIS-RAM 2004 (pp. 809-814). Singapore

N.B. When citing this work, cite the original published paper.

Permanent link to this version:


http://urn.kb.se/resolve?urn=urn:nbn:se:du-30870
COLOR DETECTION AND SEGMENTATION FOR ROAD AND
TRAFFIC SIGNS
Hasan Fleyeh
[email protected]
Department of Computer Engineering, Dalarna University, Sweden
Guest Researcher, Transportation Research Institute, Napier University, Scotland

Abstract

This paper presents three new methods for color detection and segmentation of road signs. The images are taken by a digital camera mounted in a car. The RGB images are converted into the IHLS color space, and new methods are applied to extract the colors of the road signs under consideration. The methods are tested on hundreds of outdoor images in different light conditions, and they show high robustness. This project is part of the research taking place at Dalarna University, Sweden, in the field of ITS.

Keywords: Color segmentation, color detection, road signs, outdoor images.

1. Introduction

Road signs and traffic signals define a visual language that can be interpreted by drivers. They represent the current traffic situation on the road, show danger and difficulties around the drivers, give them warnings, and help them with their navigation by providing useful information that makes driving safe and convenient [1, 2].

Human visual perception abilities depend on the individual's physical and mental condition. In certain circumstances, these abilities can be affected by many factors, such as fatigue and observational skills. Providing this information to drivers in good time can prevent accidents, save lives, increase driving performance, and reduce the pollution caused by vehicles [3-5].

Colors represent an important part of the information provided to the driver to ensure the objectives of the road sign. Therefore, road signs and their colors are selected to be different from nature and from the surroundings in order to be distinguishable. Detection of these signs in outdoor images from a moving vehicle helps the driver to take the right decision in good time, which means fewer accidents, less pollution, and better safety.

About 62% of the reviewed literature used color as the basic cue for road sign detection; the remaining work used shape. RGB images are converted by Vitabile and Sorbello [6] into the HSV color space, which is divided into a number of subspaces (regions); the S and V components are used to find in which region the hue is located. Paclik et al. [7] segmented color images using the HSV color space; colors like red, blue, green, and yellow were segmented by the H component and a certain threshold. Vitabile et al. [8] proposed a dynamic, optimized HSV sub-space, according to the s and v values of the processed images. Color segmentation was achieved by Vitabile et al. [9, 10] by using a priori knowledge about sign colors in the HSV system. De la Escalera et al. [11] built a color classifier based on two look-up tables derived from the hue and saturation of an HSI color space. Fang et al. [2] developed a road sign detection and tracking system in which the color images from a video camera are converted into the HSI system; color features are extracted from the hue using a two-layer neural network.

The remainder of the paper is organized as follows. Section 2 describes the properties of road signs. Section 3 shows the difficulties of working with outdoor scenes and the effect of different factors on the perceived images. Section 4 describes the improved HLS color space, and section 5 describes how color varies in outdoor images and the parameters affecting this. Section 6 describes the properties of the hue and how it changes due to light variations. Section 7 presents the segmentation methods, and section 8 shows the results and future research.

2. Road and Traffic Signs

Road and traffic signs have been designed using special shapes and colors, very different from the natural environment, which makes them easily recognizable by drivers [12]. They are principally distinguishable from natural and/or man-made backgrounds [13]. They are designed, manufactured and installed according to stringent regulations [6]. They have fixed 2-D shapes such as triangles, circles, octagons, or rectangles [14, 15]. The colors are regulated according to the sign category (red = stop, yellow = danger) [16].
The information on the sign has one color and the rest of the sign has another color. The tint of the paint which covers the sign should correspond to a specific wavelength in the visible spectrum [6, 10]. The signs are located in well-defined positions with respect to the road, so that the driver can, more or less, expect where these signs will appear [16]. They may contain a pictogram, a string of characters, or both [10]. Road signs are characterized by fixed text fonts and character heights. They can appear in different conditions, including partly occluded, distorted, damaged, and clustered in a group of more than one sign [10, 14].

3. Difficulties

Due to the complex environment of roads and the scenes around them, the detection and recognition of road and traffic signs may face many difficulties, such as:
o The color of the sign fades with time as a result of long exposure to sunlight and the reaction of the paint with the air [1, 5].
o Visibility is affected by weather conditions such as fog, rain, clouds and snow [1]. Other parameters, like local light variations (the direction and strength of the light, depending on the time of day and the season) and the shadows generated by other objects [9, 10, 17], can also affect visibility.
o The color information is very sensitive to variations in the light conditions, such as shadows, clouds, and the sun [1, 5, 8]. It can be affected by the illuminant color (daylight), illumination geometry, and viewing geometry [18].
o Objects similar in color and/or shape to the road signs may be present in the scene under consideration, like buildings or vehicles [5, 17].
o Signs may be found disoriented, damaged or occluded.
o If the image is acquired from a moving car, it often suffers from motion blur and car vibration [19].
o Obstacles may be present in the scene, like trees, buildings, vehicles and pedestrians [8, 17].
o Another drawback is the absence of a standard database for evaluating the existing classification methods [7].

4. The Improved HLS Color Space

Hanbury and Serra [20] introduced an improved version of the HLS color space, later called IHLS. This color space is very similar to the other color spaces, but it avoids the inconveniences of color spaces designed for computer graphics rather than image processing. It provides independence between the chromatic and achromatic components [21]. The conversion from RGB to this color space is calculated as follows:

H = θ if B ≤ G
H = 360° − θ if B > G

where:

θ = cos⁻¹[ (R − G/2 − B/2) / √(R² + G² + B² − RG − RB − GB) ]

The other two parameters are calculated as follows:

S = max(R, G, B) − min(R, G, B)
L = 0.212R + 0.715G + 0.072B

Figure 1 shows the hue, saturation, and luminance of this color space.

Figure 1 (A) Original image, (B) Normalized Hue, (C) Normalized Saturation, (D) Luminance.
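The conversion above can be expressed compactly in code. The following is a minimal NumPy sketch (not the paper's implementation; the function name and the 8-bit input assumption are illustrative):

```python
import numpy as np

def rgb_to_ihls(rgb):
    """Convert an RGB image (H x W x 3, uint8) to IHLS hue (degrees),
    saturation and luminance, following the formulas above."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]

    # Luminance: weighted sum of the channels.
    lum = 0.212 * r + 0.715 * g + 0.072 * b

    # Saturation: max - min of the channels (independent of luminance).
    sat = rgb.max(axis=-1).astype(np.float64) - rgb.min(axis=-1)

    # Hue: the angle theta, mirrored when B > G so that H lies in [0, 360).
    num = r - g / 2.0 - b / 2.0
    den = np.sqrt(r**2 + g**2 + b**2 - r * g - r * b - g * b)
    ratio = np.clip(num / np.maximum(den, 1e-12), -1.0, 1.0)  # guard gray pixels
    theta = np.degrees(np.arccos(ratio))
    hue = np.where(b <= g, theta, 360.0 - theta)
    return hue, sat, lum
```

For a pure red pixel this yields H = 0° and S = 255; a pure blue pixel gives H = 240°, matching the usual hue circle.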
5. Color Variations in Outdoor Images

One of the most difficult problems in using color in outdoor images is the chromatic variation of daylight. As a result of this variation, the apparent color of an object changes as the daylight changes.

The irradiance of any object in a color image depends on three parameters:

The color of the incident light, its intensity, and the position of the light source: The color of daylight varies along a characteristic curve in the CIE model, given by the following equation:

y = 2.87x − 3.0x² − 0.275, for 0.25 ≤ x ≤ 0.38

According to this equation, the variation of the daylight's color is a change in a single variable, called the temperature of the daylight, which is independent of the intensity.

The reflectance properties of the object: The reflectance of an object s(λ) is a function of the wavelength λ of the incident light. It is given by s(λ) = e(λ)φ(λ), where e(λ) is the intensity of the light at wavelength λ, and φ(λ) is the object's albedo function, giving the percentage of the light reflected at each wavelength. This model does not take into consideration extended light sources, inter-reflectance effects, shadowing or specularities, but it is the best available working model of color reflectance.

The camera properties: Given the radiance of an object L(λ), the observed intensities depend on the lens diameter d, the focal length f of the camera, and the image position of the object, measured as the angle a off the optical axis. This is given by the standard irradiance equation:

E(λ) = L(λ) · (π/4) · (d/f)² · cos⁴(a)

According to this equation, the radiance L(λ) is multiplied by a constant function of the camera parameters, which means that the observed color of the object is not affected. Assuming that the chromatic aberration of the camera's lens is negligible, only the density of the observed light is affected.

As a result, the color of the light reflected by an object located outdoors is a function of the temperature of the daylight and the object's albedo, and the observed irradiance is the reflected light scaled by the irradiance equation [18, 22].

6. Hue Properties and Drift due to Illumination Changes

In some color spaces, the hue plays a central role in color detection. This is because it is largely invariant to variations in light conditions: it is multiplicative/scale invariant, additive/shift invariant, and invariant under saturation changes. But the hue coordinate is unstable, and small changes in the RGB values can cause strong variations in hue [16]. It suffers from three problems. Firstly, when the intensity is very low or very high, the hue is meaningless. Secondly, when the saturation is very low, the hue is meaningless. Thirdly, when the saturation is less than a certain threshold, the hue becomes unstable.

Vitabile et al. [10] defined three different areas in the HSV color space. The achromatic area is characterized by s ≤ 0.25 or v ≤ 0.2 or v ≥ 0.9. The unstable chromatic area is characterized by 0.25 ≤ s ≤ 0.5 and 0.2 ≤ v ≤ 0.9. The chromatic area is characterized by s ≥ 0.5 and 0.2 ≤ v ≤ 0.9. In order to obtain robustness against changes in external light conditions, these areas should be taken into consideration in any color segmentation system.

7. Color Image Segmentation

7.1 Method 1

Color segmentation is carried out by converting the RGB image into the IHLS system; three images are generated, as shown in figure 1. The global mean of the luminance image is calculated by:

mean = (1/mn) Σᵢ Σⱼ L(i, j),  i = 0 … m−1,  j = 0 … n−1
Nmean = mean / 256

where m and n are the image dimensions, L(i, j) is the luminance of the current pixel, and Nmean is the normalized mean, which lies in the range [0, 1]. The normalized mean specifies the threshold against which the Euclidean distance is compared. The hue angle and saturation are affected by the light conditions under which the image is taken. Therefore, the threshold is calculated as:

thresh = e^(−Nmean)

The reference color and the unknown color are represented by two vectors, using the hue and saturation of these two colors, as shown in figure 2. The Euclidean distance between the two vectors is then calculated by:

d = [ (S₂ cos H₂ − S₁ cos H₁)² + (S₂ sin H₂ − S₁ sin H₁)² ]^(1/2)

A pixel is considered an object pixel if the Euclidean distance is less than or equal to the threshold; otherwise it is considered background.

The main idea here is to develop a dynamic threshold related to the brightness of the image. When the brightness of the image is high, the threshold is small, and vice versa. This allows the luminance image to control the relation between the reference pixel and the unknown pixel.
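The steps of Method 1 can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the function name is assumed, and the saturation is assumed to be normalized to [0, 1] (cf. the normalized saturation in Figure 1) so that the distance and the threshold share a common scale:

```python
import numpy as np

def segment_method1(hue_deg, sat, lum, ref_hue_deg, ref_sat):
    """Method 1: dynamic-threshold segmentation in the hue/saturation plane.

    hue_deg, sat, lum : IHLS hue (degrees), normalized saturation in [0, 1],
                        and luminance in [0, 255].
    ref_hue_deg, ref_sat : hue and saturation of the reference sign color.
    Returns a boolean mask of candidate sign pixels.
    """
    # Dynamic threshold from the normalized global luminance mean:
    # a bright image gives a small threshold, a dark image a large one.
    nmean = lum.mean() / 256.0
    thresh = np.exp(-nmean)

    # Represent each color as the vector (S*cos(H), S*sin(H)) and take the
    # Euclidean distance to the reference vector.
    h = np.radians(hue_deg)
    href = np.radians(ref_hue_deg)
    d = np.hypot(sat * np.cos(h) - ref_sat * np.cos(href),
                 sat * np.sin(h) - ref_sat * np.sin(href))

    # Object pixel if the distance is within the dynamic threshold.
    return d <= thresh
```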

Figure 2 The Vector Model of Hue and Saturation.

7.2 Method 2

This method is based on segmentation by region growing. First, the RGB image is converted into the IHLS color space. Then, the hue image is segmented according to the color of the sign under consideration: a range of hue angles is specified in which the color of the sign can be found. The output of this step is a binary image of the probable candidates.

This binary image is also used to calculate the seeds for the saturation image. It is divided into 16x16-pixel sub-regions, and a seed is set at the center of each sub-region if enough hue pixels are found in the binary image generated by hue segmentation. The number of sufficient hue pixels is specified as one third of the area of each sub-region.

The seeds generated by the former step, together with the saturation image, are used as input to the region growing algorithm. The saturation image is segmented from these seeds to generate another binary image representing the candidate objects of road signs.

The final step is to apply a logical AND to the two binary images, i.e. the hue binary image and the saturation binary image. The result is a segmented image containing the road sign with the specified color.
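The seed-selection step of Method 2 can be sketched as follows. This is a simplified sketch: only the seed computation is shown, and the hue-band limits and the region-growing routine itself are assumed to be supplied elsewhere:

```python
import numpy as np

def hue_seeds(hue_binary, block=16):
    """Place a seed at the center of each block x block sub-region that
    contains at least one third 'hue' pixels (the seed step of Method 2).

    hue_binary : 2-D array of 0/1 values from the hue segmentation.
    Returns a list of (row, col) seed coordinates.
    """
    h, w = hue_binary.shape
    min_count = (block * block) // 3  # one third of the sub-region area
    seeds = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            # Count hue pixels inside this sub-region.
            if hue_binary[y:y + block, x:x + block].sum() >= min_count:
                seeds.append((y + block // 2, x + block // 2))
    return seeds
```

Each returned coordinate would then be fed to the region-growing algorithm on the saturation image.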

7.3 Method 3

This is a modified version of the method described by de la Escalera et al. [11]. In this method, the RGB image is converted into the IHLS color space, and both the saturation and the hue are normalized to [0, 255]. To avoid the achromatic area defined by Vitabile et al. [10], the minimum and maximum values of the saturation are chosen as S_min = 51 and S_max = 170, and the saturation is then calculated as follows:

S_out = 0 for 0 ≤ S_in ≤ S_min
S_out = S_in for S_min < S_in < S_max
S_out = 255 for S_max ≤ S_in ≤ 255

The hue is calculated by:

H_out = 255 for H_min ≤ H_in ≤ H_max
H_out = 0 otherwise

A logical AND between S_out and H_out generates a binary image containing the road sign with the desired color.

Figure 3 The Saturation Transfer Function.
Figure 4 The Hue Transfer Function of Red.
Figure 5 The Hue Transfer Function of Green.
Figure 6 The Hue Transfer Function of Blue.
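Method 3 can be sketched as follows, under the thresholds stated above. The hue band [h_min, h_max] for each sign color is an assumed parameter, as is the function name:

```python
import numpy as np

def segment_method3(hue, sat, h_min, h_max, s_min=51, s_max=170):
    """Method 3: threshold hue and saturation (both normalized to [0, 255])
    and AND the two binary maps into a sign mask."""
    # Saturation transfer function: zero out the achromatic band,
    # pass mid-range saturations, saturate the top of the range.
    s_out = np.where(sat <= s_min, 0, np.where(sat < s_max, sat, 255))

    # Hue transfer function: pass only the band of the target color.
    h_out = np.where((hue >= h_min) & (hue <= h_max), 255, 0)

    # Logical AND of the two maps gives the binary road-sign mask.
    return (s_out > 0) & (h_out > 0)
```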
8. Results and Conclusions

This paper presents three new methods for color segmentation of traffic signs. The methods are based on the IHLS color space, and all of them use hue and saturation to generate a binary image containing the road sign of a certain color. The IHLS color space showed very good stability in representing the hue and saturation of outdoor images taken in different light conditions.

The methods were tested on more than a hundred images under different light conditions (sunny, cloudy, foggy, and snowy) and different backgrounds, and they established very good robustness. Method 1 showed the best detection results, followed by method 2 and method 3, respectively. This is because only a single color is specified as a reference color on the color circle shown in figure 2, while a range of hues and saturations is specified in methods 2 and 3, which allows objects with similar colors to be detected by those methods. Some results are shown in figure 7, where different colors are detected under different light conditions.

Combining these results with shape recognition of the road signs and pictogram recognition, which are parts of the future work, will provide a good means to build a complete system that gives drivers information about the signs in real time, as part of the intelligent vehicle. Other key points for further study are the effect of shadows on the stability of the hue in outdoor images, and how to deal with the reflections generated by the sign, which change the characteristics of the hue perceived by the camera. This paper is part of the sign recognition project conducted by Dalarna University, Sweden, jointly with the Transportation Research Institute, Napier University, Scotland, to apply digital image processing and computer vision in the ITS field.

Dedication:
In memory of my mother and father. I will never forget you.

References:

[1] C. Fang, C. Fuh, S. Chen, and P. Yen, "A road sign recognition system based on dynamic visual model," presented at Proc. 2003 IEEE Computer Society Conf. Computer Vision and Pattern Recognition, Madison, Wisconsin, 2003.
[2] C. Fang, S. Chen, and C. Fuh, "Road-sign detection and tracking," IEEE Trans. on Vehicular Technology, vol. 52, pp. 1329-1341, 2003.
[3] L. Estevez and N. Kehtarnavaz, "A real-time histographic approach to road sign recognition," presented at Proc. IEEE Southwest Symposium on Image Analysis and Interpretation, San Antonio, Texas, 1996.
[4] A. de la Escalera, L. Moreno, E. Puente, and M. Salichs, "Neural traffic sign recognition for autonomous vehicles," presented at Proc. 20th Inter. Conf. on Industrial Electronics Control and Instrumentation, Bologna, Italy, 1994.
[5] J. Miura, T. Kanda, and Y. Shirai, "An active vision system for real-time traffic sign recognition," presented at Proc. 2000 IEEE Intelligent Transportation Systems, Dearborn, MI, USA, 2000.
[6] S. Vitabile and F. Sorbello, "Pictogram road signs detection and understanding in outdoor scenes," presented at Proc. Conf. Enhanced and Synthetic Vision, Orlando, Florida, 1998.
[7] P. Paclik, J. Novovicova, P. Pudil, and P. Somol, "Road sign classification using Laplace kernel classifier," Pattern Recognition Letters, vol. 21, pp. 1165-1173, 2000.
[8] S. Vitabile, G. Pollaccia, G. Pilato, and F. Sorbello, "Road sign recognition using a dynamic pixel aggregation technique in the HSV color space," presented at Proc. 11th Inter. Conf. Image Analysis and Processing, Palermo, Italy, 2001.
[9] S. Vitabile, A. Gentile, G. Dammone, and F. Sorbello, "Multi-layer perceptron mapping on a SIMD architecture," presented at Proc. 2002 IEEE Signal Processing Society Workshop, 2002.
[10] S. Vitabile, A. Gentile, and F. Sorbello, "A neural network based automatic road sign recognizer," presented at Proc. 2002 Inter. Joint Conf. on Neural Networks, Honolulu, HI, USA, 2002.
[11] A. de la Escalera, J. Armingol, and M. Mata, "Traffic sign recognition and analysis for intelligent vehicles," Image and Vision Comput., vol. 21, pp. 247-258, 2003.
[12] G. Jiang and T. Choi, "Robust detection of landmarks in color image based on fuzzy set theory," presented at Proc. Fourth Inter. Conf. on Signal Processing, Beijing, China, 1998.
[13] N. Hoose, Computer Image Processing in Traffic Engineering: John Wiley & Sons Inc., 1991.
[14] P. Parodi and G. Piccioli, "A feature-based recognition scheme for traffic scenes," presented at Proc. Intelligent Vehicles '95 Symposium, Detroit, USA, 1995.
[15] J. Plane, Traffic Engineering Handbook: Prentice Hall, 1992.
[16] M. Lalonde and Y. Li, "Road sign recognition. Technical report, Centre de recherche informatique de Montréal, Survey of the state of the art for sub-project 2.4, CRIM/IIT," 1995.
[17] M. Blancard, "Road sign recognition: A study of vision-based decision making for road environment recognition," in Vision-based Vehicle Guidance, I. Masaki, Ed. Berlin, Germany: Springer-Verlag, 1992, pp. 162-172.
[18] S. Buluswar and B. Draper, "Color recognition in outdoor images," presented at Proc. Inter. Conf. Computer Vision, Bombay, India, 1998.
[19] P. Paclik and J. Novovicova, "Road sign classification without color information," presented at Proc. Sixth Annual Conf. of the Advanced School for Computing and Imaging, Lommel, Belgium, 2000.
[20] A. Hanbury and J. Serra, "A 3D-polar coordinate colour representation suitable for image analysis," Computer Vision and Image Understanding, 2002.
[21] J. Angulo and J. Serra, "Color segmentation by ordered mergings," presented at Proc. Int. Conf. on Image Processing, Barcelona, Spain, 2003.
[22] S. Buluswar and B. Draper, "Non-parametric classification of pixels under varying outdoor illumination," presented at ARPA Image Understanding Workshop, 1994.

Figure 7 Results of applying the color segmentation methods (rows: original images, Method 1, Method 2, Method 3).
