Perceptual Color Differences Using Basic Color Terms
Ofir Pele
The University of Pennsylvania
[email protected]
Michael Werman
The Hebrew University of Jerusalem
[email protected]
Abstract
We suggest a new color distance based on two observations. First, perceptual color differences were designed for comparing very similar colors; they do not capture human perception well for medium and large color differences. Thresholding was proposed to solve the problem for large color differences, i.e. two totally different colors are always assigned the same distance. We show that thresholding alone cannot improve medium color differences, and we suggest alleviating this problem using basic color terms. Second, when a color distance is used for edge detection, many small distances around the just noticeable difference may produce false edges. We suggest reducing the effect of these small distances.
1 Introduction
The perception of just noticeably different colors and the definition of distances between very similar colors have received considerable attention [22, 29, 21, 35, 46]. In the CIE (International Commission on Illumination) community, distances of up to 7 CIELAB units, where 1 CIELAB unit approximately corresponds to 1 just noticeable difference, are considered medium distances [17]. In this paper we refer to 0-7 CIELAB distances as very-similar, as they capture only a small fraction of similar colors. See Fig. 1.
The CIEDE2000 color difference is considered the state of the art perceptual color
difference [21, 47, 17]. CIEDE2000’s recommended range for use is 0 to 5 CIELAB
units [2]. The COM dataset was used to train and perceptually test the CIEDE2000
color difference. More than 95 percent of the distances between color pairs in the
COM dataset are below 5 CIELAB units apart.
It was pointed out that the resulting color differences do not correspond well with
human perception for medium to large distances. Rubner et al. [32] and Ruzon and
Tomasi [33] used a negative exponent on the color difference. Namely, all totally
different colors are essentially assigned the same large distance. Pele and Werman [27]
noted that a negative exponent changes the values in the small distance range. Pele and
Werman [27] and Rubner et al. [32] observed a reduction in performance due to this
change. Pele and Werman suggested thresholding the color difference as it does not
change the small distances. Thresholding color distances is justified by the fact that if
people are directly asked for a judgment of the dissimilarity of colors far apart in color
space, subjects typically find themselves unable to express a more precise answer than
“totally different” [18]. An additional advantage of thresholding color distances is that it allows fast computation of cross-bin distances such as the Earth Mover's Distance [27] and, via the transformation to a similarity measure (one minus the distance divided by the threshold), the Quadratic-Chi [28].
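As a concrete illustration, here is a minimal sketch of this thresholded-distance-to-similarity transform (a sketch of ours, assuming a distance d and threshold T):

```python
def thresholded_similarity(d, T):
    """Map a color distance to a similarity in [0, 1]:
    one minus the thresholded distance divided by the threshold T."""
    return 1.0 - min(d, T) / T


# Example: with T = 20, a distance of 5 gives similarity 0.75,
# and any distance >= 20 ("totally different") gives similarity 0.
print(thresholded_similarity(5.0, 20.0))   # 0.75
print(thresholded_similarity(35.0, 20.0))  # 0.0
```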
This paper shows that CIEDE2000 is not a good distance for the medium range
and using any monotonic function of CIEDE2000 (including a thresholding function)
cannot solve the problem. For example, no thresholding function can make DarkSkyBlue more similar to Blue than to HotPink. See Fig. 2 for more examples.
We suggest an improvement based on basic color terms. Specifically, we use Berlin and Kay's eleven English basic color terms [6]; however, the generalization to other color terms is straightforward. We suggest adding to the color difference the distance between the colors' basic color term probability vectors. As basic color terms are correlated (e.g. red and orange), we suggest using a cross-bin distance for these probability vectors, that is, a distance which takes the relationships between the bins (each bin represents a basic color term) into account. Specifically, we use the Earth Mover's Distance [32]
as it was used successfully in many applications (e.g. [32, 31, 33, 27] and references within).
The probability vectors are obtained with the color naming method developed by van de Weijer et al. [44]. Other methods for color naming, such as [9, 19, 34, 3, 16, 26, 4, 24, 23, 5], can also be used. We chose the van de Weijer et al. method because it performs well on real-world images and its code is publicly available. However, CIEDE2000 was learned under calibrated conditions, while the van de Weijer et al. method was learned from natural images. Thus, other color naming methods might produce better results; this is left for future work.
Our proposed solution is not equivalent to increasing the weight of the hue component in the color difference. Color names are not equivalent to hue. For example, although a rainbow spans a continuous spectrum of colors, people see in it distinct bands which correspond to basic color terms: red, orange, yellow, green, blue and purple.
Figure 3: This figure should be viewed in color, preferably on a computer screen; use the PDF viewer's zoom to see the colors. We show colors sorted by their distance to the color on the left; COLDIST is our new perceptual color difference. Several observations can be drawn from these graphs. First, our distance is perceptually better in the medium distance range: the group of similar colors (left side of each color legend) is more similar to the color on the left under our distance. For example, in the top row light blues are close to blue, while under CIEDE2000 they are very different (and thus appear on the right). Note that our distance uses a sigmoid function, so very similar colors on the left are essentially assigned the same small distance and totally different colors on the right are essentially assigned the same large distance. Finally, although our distance is perceptually more meaningful, it is still far from perfect.
In addition, some basic color terms do not differ in their hue component, e.g. achromatic colors such as white, gray and black, or orange and brown, which share the same hue.
A second problem occurs when color differences are used for edge detection: many small distances around the just noticeable difference may produce false edges. We suggest using a sigmoid function to reduce the effect of small distances. As mentioned above, using a negative-exponent function in order to assign the same distance to all totally different color pairs reduced performance [32, 27]. We explain this by the fact that the negative exponent is a concave function, whereas a convex function should be applied to small differences.
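For concreteness, a small numeric sketch (ours, with illustrative constants) contrasting a concave negative-exponent transform with a sigmoid of the form defined in Section 3, both applied to a color distance scaled to [0, 1]; the concave transform is steepest at zero and stretches small, near-JND distances apart, while the sigmoid is convex below one half and squashes them:

```python
import math


def neg_exp(d, gamma=0.7):
    # Concave transform 1 - exp(-d / gamma): steepest at d = 0, so small
    # (near-JND) distances are stretched apart. gamma is illustrative.
    return 1.0 - math.exp(-d / gamma)


def sigmoid(d, Z=10.0):
    # Sigmoid 1 / (1 + exp(-(Z*d - Z/2))): convex for d < 1/2, so small
    # distances are squashed toward 0. Z = 10 is an assumed value.
    return 1.0 / (1.0 + math.exp(-(Z * d - Z / 2.0)))


for d in (0.0, 0.05, 0.1, 0.2, 0.5, 1.0):
    print(f"d={d:.2f}  neg_exp={neg_exp(d):.3f}  sigmoid={sigmoid(d):.3f}")
```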
We present experimental results for color edge detection. We show that by using
our new color difference the results are perceptually more meaningful.
Our solution is just a first step towards designing a perceptual color difference for the full range of distances. Our major contribution is highlighting the shortcomings of current state-of-the-art color differences in the small and medium distance range.
This paper is organized as follows. Section 2 is an overview of related work. Sec-
tion 3 introduces the new color difference. Section 4 presents the results. Finally,
conclusions are drawn in Section 5.
2 Related Work
MacAdam's [22] pioneering work on chromaticity discrimination ellipses, which measured human perception of just noticeable differences, led the way to the development of the L*a*b* space [29], which is considered perceptually uniform; i.e. for very similar colors, the Euclidean distance in the L*a*b* space corresponds well to the human perception of color difference. Luo et al. [21] developed the CIEDE2000 color difference, which is now considered the state-of-the-art perceptual color difference [21, 47, 17].
Although color is commonly experienced as an indispensable quality in describing
the world around us, state-of-the-art computer vision methods are mostly based on
shape description and ignore color information. Recently this has changed with the
introduction of new color descriptors [39, 14, 25, 40, 42, 8, 36]. However, although
color is a point-wise property (e.g. bananas are yellow), most of these features capture
geometric relations such as color edges.
Wertheimer [1] suggested that among perceptual stimuli there are “ideal types”
that are anchor points for perception. Rosch [30] proposed that in certain perceptual
domains, such as color, salient prototypes develop non-arbitrarily. An influential paper by Berlin and Kay [6] defined basic color terms as color names in a language that are applied to diverse classes of objects, whose meaning is not subsumable under that of another basic color name, and which are used consistently and by consensus by most speakers of the language. In their pioneering anthropological study, they found that color was usually partitioned into a maximum of eleven basic color categories, of which three are achromatic (black, white, grey) and eight chromatic (red, green, yellow, blue, purple, orange, pink and brown). This partitioning reflects a universal tendency to group colors around specific focal points, as conjectured by Wertheimer [1] and Rosch [30].
Considerable work has been carried out in the field of computational color nam-
ing, see e.g. [9, 19, 34, 3, 16, 26, 4, 24, 23, 5, 44] and references within. Recently
van de Weijer et al. [44] presented a new color naming method based on real-world
images. The color names are Berlin and Kay’s [6] eleven English basic color terms.
Van de Weijer and Schmid [43] showed that a color description based on these color
names outperforms descriptions based on photometric invariants. The explanation is
that photometric invariance reduces the discriminative power of the descriptor.
Inspired by van de Weijer and Schmid's work, we suggest using the basic color names to correct the state-of-the-art color difference, CIEDE2000, in the medium distance range.
3 COLDIST: The New Color Difference

Each color $C^n$ is represented by its CIELAB coordinates $S^n = [L^n, a^n, b^n]$ (computed with, as the white point, the default illuminant specified in the International Color Consortium specifications) and by a probability vector $P^n = [P^n_1, \ldots, P^n_{11}]$, where $P^n_i$ is the probability that the color is the basic color term $i$ (i.e. black, blue, brown, grey, green, orange, pink, purple, red, white or yellow). These probability vectors are computed using the van de Weijer et al. color naming method [44]. Now each color $C^n$ is represented by a 14-dimensional vector $V^n = [S^n, P^n] = [L^n, a^n, b^n, P^n_1, \ldots, P^n_{11}]$. The distance between two colors (parameterized by $T$, $\mathbf{D}$, $\alpha$ and $Z$) is defined as:
$$d_1(S^1, S^2) = \frac{\min\!\big(\mathrm{CIEDE2000}(S^1, S^2),\, T\big)}{T} \qquad (1)$$

$$d_2(P^1, P^2) = \mathrm{EMD}(P^1, P^2, \mathbf{D}) \qquad (2)$$

$$d_3(V^1, V^2) = \alpha\, d_1 + (1 - \alpha)\, d_2 \qquad (3)$$

$$\mathrm{COLDIST}(V^1, V^2) = \frac{1}{1 + e^{-\left(Z d_3 - \frac{Z}{2}\right)}} \qquad (4)$$
In Eq. 1, $d_1$ is a thresholded and scaled CIEDE2000 color difference. We threshold it because CIEDE2000 is recommended for use only on small distances [2]. We used $T = 20$, as in Pele and Werman [27], and we divide by $T$ so that $d_1$ lies between 0 and 1.
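For concreteness, a minimal sketch of $d_1$ (Eq. 1), assuming CIELAB inputs and using scikit-image's CIEDE2000 implementation (the library choice is ours; any conforming CIEDE2000 implementation would do):

```python
import numpy as np
from skimage.color import deltaE_ciede2000  # CIEDE2000 implementation


def d1(lab1, lab2, T=20.0):
    """Thresholded and scaled CIEDE2000 (Eq. 1); the result lies in [0, 1]."""
    lab1 = np.asarray(lab1, dtype=float)
    lab2 = np.asarray(lab2, dtype=float)
    de = float(deltaE_ciede2000(lab1, lab2))
    return min(de, T) / T


# Example with two nearby CIELAB colors (illustrative values).
print(d1([50.0, 10.0, -30.0], [52.0, 12.0, -28.0]))  # well below 1
```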
In Eq. 2, $d_2$ is the distance between the two basic color term probability vectors. As the bins of these eleven-dimensional probability vectors are correlated (e.g. orange and red), we use the Earth Mover's Distance, which takes this correlation into account. The correlation is encoded in $\mathbf{D}$, an $11 \times 11$ matrix where $D_{ij}$ is the distance between basic color term $i$ and basic color term $j$. We estimated $\mathbf{D}$ using the joint distribution of the basic color terms. That is, given the matrix $M$ of the probability vectors of all colors in the RGB cube (a $2^{15} \times 11$ matrix, as each dimension of the RGB cube was quantized with jumps of 8 [44]), we define $D_{ij}$ as:
$$\hat{D}_{ij} = 1 - 2\,\frac{\sum_n \min(M_{ni}, M_{nj})}{\sum_n \left(M_{ni} + M_{nj}\right)} \qquad (5)$$

$$D_{ij} = \frac{\min(\hat{D}_{ij},\, t)}{t} \qquad (6)$$
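For concreteness, a sketch of Eqs. 5 and 6, assuming the matrix M of probability vectors has already been computed (e.g. by running the van de Weijer et al. color naming code over the quantized RGB cube); the threshold t is left as a parameter since its value is not reproduced here:

```python
import numpy as np


def ground_distance_matrix(M, t):
    """Eqs. 5-6: M is an (n_colors x 11) matrix whose rows are basic-color-term
    probability vectors; returns the 11 x 11 thresholded ground distance D."""
    n_terms = M.shape[1]
    D_hat = np.zeros((n_terms, n_terms))
    for i in range(n_terms):
        for j in range(n_terms):
            overlap = np.minimum(M[:, i], M[:, j]).sum()   # sum_n min(M_ni, M_nj)
            total = (M[:, i] + M[:, j]).sum()              # sum_n (M_ni + M_nj)
            D_hat[i, j] = 1.0 - 2.0 * overlap / total      # Eq. 5
    return np.minimum(D_hat, t) / t                        # Eq. 6
```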
[Figure: the ground distance matrix between the eleven basic color terms (black, blue, brown, grey, green, orange, pink, purple, red, white, yellow); values range from 0 to 1.]
The Earth Mover's Distance used in Eq. 2 is defined as:

$$\mathrm{EMD}(P^1, P^2, \mathbf{D}) = \min_{\{F_{ij}\}} \sum_{i,j} F_{ij} D_{ij} \quad \text{s.t.} \quad \sum_j F_{ij} = P^1_i, \quad \sum_i F_{ij} = P^2_j, \quad \sum_{i,j} F_{ij} = 1, \quad F_{ij} \geq 0 \qquad (7)$$
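For concreteness, a sketch of Eqs. 2-4 and 7: the EMD between two 11-bin probability vectors is solved here as a small linear program with SciPy (a dedicated EMD solver, as in [27, 32], would be faster), and the values of alpha and Z are left as parameters since they are not reproduced in this text:

```python
import numpy as np
from scipy.optimize import linprog


def emd(P1, P2, D):
    """Eq. 7: Earth Mover's Distance between probability vectors P1, P2
    (each summing to 1) with ground distance matrix D, via a linear program."""
    n = len(P1)
    c = np.asarray(D, dtype=float).reshape(-1)     # objective: sum_ij F_ij * D_ij
    A_eq, b_eq = [], []
    for i in range(n):                             # sum_j F_ij = P1_i
        row = np.zeros((n, n)); row[i, :] = 1.0
        A_eq.append(row.reshape(-1)); b_eq.append(P1[i])
    for j in range(n):                             # sum_i F_ij = P2_j
        col = np.zeros((n, n)); col[:, j] = 1.0
        A_eq.append(col.reshape(-1)); b_eq.append(P2[j])
    # The constraint sum_ij F_ij = 1 is implied since P1 and P2 each sum to 1.
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun                                 # = d2 (Eq. 2)


def coldist(d1_val, d2_val, alpha, Z):
    """Eqs. 3-4: combine d1 and d2 and pass the result through the sigmoid."""
    d3 = alpha * d1_val + (1.0 - alpha) * d2_val
    return 1.0 / (1.0 + np.exp(-(Z * d3 - Z / 2.0)))
```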
4 Results
In this section we present color edge detection results. We used Ruzon and Tomasi's generalized compass edge detector [33] for two reasons. First, its code is publicly available. Second, it uses only color cues for edge detection, which lets us isolate the performance of the color difference.
Ruzon and Tomasi's method [33] divides a circular window around each pixel in half with a line segment. It then computes a sparse color histogram (coined signature
in their paper) for each half and computes the Earth Mover’s Distance (EMD) [32]
between the two histograms. The EMD uses a ground distance matrix D between the
colors. Ruzon and Tomasi converted the images to L*a*b* and then used a negative exponent of the Euclidean distance as the ground distance between colors (with a scale parameter $\gamma = 14$ in their experiments). We compare the edge detection results using this distance with our proposed COLDIST. We also compared with $d_1$, the thresholded CIEDE2000 distance used by Pele and Werman for image retrieval [27] (see Eq. 1), and with variants of COLDIST without the sigmoid function, without the color correction ($\alpha = 1$), and without the CIEDE2000 term ($\alpha = 0$); the results using the full COLDIST were the best. Results are presented in Figs. 5-8. They show that the new color difference detects color edges much better than the state of the art, and the resulting edge maps are much cleaner. See the figure captions for more details.
5 Conclusions
We presented a new color difference, COLDIST, and showed that it is perceptually more meaningful than the state-of-the-art color difference, CIEDE2000. We believe this is just the first step in designing perceptual color differences that perform well in the medium range.
It is easy to generalize our method to other sets of color names (such as the Russian set, which separates blue into goluboi and siniy). All one needs to do is to calculate the ground distance between all the color terms, which can be done using the joint distribution of the new set of color terms. In future work it will be interesting to examine other color naming methods such as [9, 19, 34, 3, 16, 26, 4, 24, 23, 5].
A major difficulty of analyzing color images is the illumination variability of scenes.
Color invariants are often used to overcome this problem. However, van de Weijer et al. [44, 43] showed that invariants are not discriminative enough; for example, invariants usually do not distinguish between achromatic colors (black, gray and white). Using color constancy or partial normalization algorithms [12, 13, 15, 41, 11, 10, 20, 7, 37], which do not necessarily remove all distinctiveness, may partially alleviate this problem; such an approach was used to improve color naming by Benavente et al. [3].
Color perception is also affected by spatial and texture cues. It will be interesting to combine COLDIST with spatial and texture models [48, 45, 38].
References
[1] M. Wertheimer. Numbers and numerical concepts in primitive peoples.
[2] CIE Publication 142, 2001.
[3] R. Benavente, R. Baldrich, M. C. Olivé, and M. Vanrell. Colour naming considering the
colour variability problem. Computación y Sistemas, 4(1):30–43, 2000.
Figure 5: Edge detection with the generalized compass edge detector [33] using the following color differences: (NE) a negative exponent applied to the Euclidean distance in L*a*b* space (used in [33]); (TC) a thresholded CIEDE2000 distance (used in [27] for image retrieval), see Eq. 1; (COLDIST) our proposed COLDIST; (IM) the original image.
Our result is much cleaner. Note that our method detects the right boundary of the basket without detecting many false edges, while in (NE) and (TC) the magnitude of the false edges is larger than that of the right boundary of the basket.
Figure 6: Edge detection with the generalized compass edge detector [33] using the following color differences: (NE) a negative exponent applied to the Euclidean distance in L*a*b* space (used in [33]); (TC) a thresholded CIEDE2000 distance (used in [27] for image retrieval), see Eq. 1; (COLDIST) our proposed COLDIST; (IM) the original image.
Our results are much cleaner. Note, in the top image, the clean detection of the boundaries of the bushes.
Figure 7: Edge detection with the generalized compass edge detector [33] using the following color differences: (NE) a negative exponent applied to the Euclidean distance in L*a*b* space (used in [33]); (TC) a thresholded CIEDE2000 distance (used in [27] for image retrieval), see Eq. 1; (COLDIST) our proposed COLDIST; (IM) the original image.
Our results are much cleaner.
Figure 8: Edge detection with the generalized compass edge detector [33] using the following color differences: (NE) a negative exponent applied to the Euclidean distance in L*a*b* space (used in [33]); (TC) a thresholded CIEDE2000 distance (used in [27] for image retrieval), see Eq. 1; (COLDIST) our proposed COLDIST; (IM) the original image.
Our results are much cleaner. Note the strong responses on the fur of the bear and on the T-shirt of the person on the left in the top image when using (NE) and (TC). The spots around the swimmer in all methods are due to successful detection of the water drops.
[4] R. Benavente, M. Vanrell, and R. Baldrich. A data set for fuzzy colour naming. COLOR
research and application, 31(1):48, 2006.
[5] R. Benavente, M. Vanrell, and R. Baldrich. Parametric fuzzy sets for automatic color
naming. Journal of the Optical Society of America A, 25(10):2582–2593, 2008.
[6] B. Berlin and P. Kay. Basic Color Terms: Their Universality and Evolution. University of
California Press Berkeley, 1969.
[7] S. Bianco, G. Ciocca, C. Cusano, and R. Schettini. Improving Color Constancy Us-
ing Indoor–Outdoor Image Classification. IEEE Transactions on Image Processing,
17(12):2381–2392, 2008.
[8] G. J. Burghouts and J. M. Geusebroek. Performance evaluation of local colour invariants.
Computer Vision and Image Understanding, 113:48–62, 2009.
[9] D. Conway. An experimental comparison of three natural language colour naming mod-
els. In Proc. east-west int. conf. on human-computer interaction, pages 328–339. Citeseer,
1992.
[10] G. Finlayson, B. Schiele, and J. Crowley. Comprehensive colour image normalization. In
ECCV, volume 1406, page 1406, 1998.
[11] G. D. Finlayson, S. D. Hordley, and P. M. Morovic. Colour constancy using the chroma-
genic constraint. In CVPR, pages 1079–1086, 2005.
[12] D. Forsyth. A novel algorithm for color constancy. In Color, page 271. Jones and Bartlett
Publishers, Inc., 1992.
[13] P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp. Bayesian color constancy
revisited. 2008.
[14] T. Gevers and H. Stokman. Robust histogram construction from color invariants for object
recognition. IEEE transactions on pattern analysis and machine intelligence, 26(1):113–
118, 2004.
[15] A. Gijsenij and T. Gevers. Color constancy using natural image statistics. In CVPR, 2007.
[16] L. Griffin. Optimality of the basic colours categories. Journal of Vision, 4(8):309, 2004.
[17] H. Wang, G. Cui, M. R. Luo, and H. Xu. Evaluation of colour-difference formulae for different colour-difference magnitudes. In CGIV, 2008.
[18] T. Indow. Metrics in Color Spaces: Im Kleinen und im Großen. Contributions to mathematical psychology, psychometrics, and methodology, page 3, 1994.
[19] J. Lammens. A computational model of color perception and color naming. PhD thesis,
Citeseer, 1994.
[20] R. Lu, A. Gijsenij, T. Gevers, D. Xu, V. Nedovic, and J. Geusebroek. Color Constancy
Using 3D Stage Geometry. In IEEE International Conference on Computer Vision.
[21] M. Luo, G. Cui, and B. Rigg. The Development of the CIE 2000 Colour-Difference For-
mula: CIEDE2000. Color Research & Application, 26(5):340–350, 2001.
[22] D. MacAdam. Visual sensitivities to color differences in daylight. Journal of the Optical
Society of America, 32(5):247–273, 1942.
[23] G. Menegaz, A. Le Troter, J. Boi, and J. Sequeira. Semantics driven resampling of the OSA-UCS. In Image Analysis and Processing Workshops, 2007 (ICIAPW 2007), 14th International Conference on, pages 216–220, 2007.
[24] G. Menegaz, A. Le Troter, J. Sequeira, and J. Boi. A discrete model for color naming.
EURASIP Journal on Advances in Signal Processing, 2007, 2006.
[25] F. Mindru, T. Tuytelaars, L. Gool, and T. Moons. Moment invariants for recognition under
changing viewpoint and illumination. Computer Vision and Image Understanding, 94(1-
3):3–27, 2004.
[26] A. Mojsilovic. A computational model for color naming and describing color composition
of images. IEEE Transactions on Image Processing, 14(5):690–699, 2005.
[27] O. Pele and M. Werman. Fast and robust earth mover’s distances. In ICCV, 2009.
[28] O. Pele and M. Werman. The quadratic-chi histogram distance family. In ECCV, 2010.
[29] A. Robertson. Historical development of CIE recommended color difference equations.
Color Res. Appl, 15:167–170, 1990.
[30] E. Rosch. Cognitive reference points. Cognitive Psychology, 7(4):532–547, 1975.
[31] Y. Rubner, J. Puzicha, C. Tomasi, and J. Buhmann. Empirical evaluation of dissimilarity
measures for color and texture. CVIU, 2001.
[32] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover’s distance as a metric for image
retrieval. International Journal of Computer Vision, 40(2):99–121, 2000.
[33] M. Ruzon and C. Tomasi. Edge, Junction, and Corner Detection Using Color Distributions.
IEEE Trans. Pattern Analysis and Machine Intelligence., pages 1281–1295, 2001.
[34] M. Seaborn, L. Hepplewhite, and J. Stonham. Fuzzy colour category map for content based
image retrieval. In BMVC, 1999.
[35] G. Sharma, W. Wu, and E. Dalal. The CIEDE2000 color-difference formula: implemen-
tation notes, supplementary test data, and mathematical observations. Color Research &
Application, 30(1):21–30, 2005.
[36] X. Song, D. Muselet, and A. Trémeau. Local Color Descriptor for Object Recognition
across Illumination Changes. In Advanced Concepts for Intelligent Vision Systems, pages
598–605. Springer.
[37] R. Tan, K. Ikeuchi, and K. Nishino. Color constancy through inverse-intensity chromaticity
space. Digitally Archiving Cultural Objects, pages 323–351.
[38] O. Tulet, M.-C. Larabi, and C. Fernandez-Maloigne. Image rendering based on a spatial
extension of the ciecam02. Applications of Computer Vision, IEEE Workshop on, 2008.
[39] K. E. A. van de Sande, T. Gevers, and C. G. M. Snoek. Evaluation of color descriptors for
object and scene recognition. In CVPR, 2008.
[40] J. van de Weijer, T. Gevers, and J. Geusebroek. Edge and corner detection by photo-
metric quasi-invariants. IEEE transactions on pattern analysis and machine intelligence,
27(4):625–630, 2005.
[41] J. van de Weijer, T. Gevers, and A. Gijsenij. Edge-based color constancy. IEEE Transac-
tions on Image Processing, 16(9):2207–2214, 2007.
[42] J. van de Weijer and C. Schmid. Coloring local feature extraction. In ECCV, page 334,
2006.
[43] J. van de Weijer and C. Schmid. Applying Color Names to Image Description. In ICIP,
volume 3, 2007.
[44] J. van de Weijer, C. Schmid, J. Verbeek, and D. Larlus. Learning Color Names for Real-World Applications. IEEE Transactions on Image Processing, 2009.
[45] M. Vanrell, R. Baldrich, A. Salvatella, R. Benavente, and F. Tous. Induction operators for a
computational colour-texture representation. Computer Vision and Image Understanding,
94(1-3):92–114, 2004.
[46] G. Wyszecki and W. S. Stiles. Color Science: Concepts and Methods, Quantitative Data
and Formulae. Wiley, 1982.
[47] H. Xu, H. Yaguchi, and S. Shioiri. Estimation of Color-Difference Formulae at Color
Discrimination Threshold Using CRT-Generated Stimuli. Optical Review, 8(2):142–147,
2001.
[48] X. Zhang and B. Wandell. A spatial extension of CIELAB for digital color-image repro-
duction. Journal of the Society for Information Display, 5:61, 1997.