This document proposes two color space learning models for face representation and recognition using color images. The first model learns an optimal color space by finding coefficients to combine RGB channels to maximize between-class and minimize within-class scatter. The second model jointly learns the color space and discriminant projection subspace by maximizing a generalized Rayleigh quotient. An iterative algorithm is described to solve the second model. Experimental results on the FRGC database show the learned discriminant color space (DCS) improves over RGB for face verification, especially on uncontrolled images.

Color Space Learning for Face Representation and Recognition

Jian Yang
School of Computer Science
Nanjing University of Science and Technology
Email: [email protected]
Color Cue in Vision
• Color provides useful and important
information for object detection (e.g. face
detection) and tracking, image (or video)
segmentation, indexing and retrieval, etc.
• Different color spaces (or color models)
possess different characteristics as
applied to different visual tasks.
Role of Color in Face Recognition
• Does color help face recognition?
• A previous answer:
Color appears to confer no significant face
recognition advantage beyond the
luminance information

Kemp, R. et al. (1996). Perception and recognition of normal and negative faces: the role of shape from shading and pigmentation cues. Perception, 25, 37-52.
Role of Color in Face Recognition
• Recent research efforts, however, reveal that
color may provide useful information for face
recognition.
• Color cues do play a role in face recognition and
their contribution becomes evident when shape
cues are degraded (e.g. blurred images)
Role of Color in Face Recognition

[Figure: face images at lower resolution vs. higher resolution]

A. Yip and P. Sinha, "Contribution of color to face recognition", Perception, 2002, volume 31, 995-1003.
Why does Color Aid Face
Recognition?
• Color provides discriminative information; e.g. the color of the eyes or skin may help identify the individual (in particular the race)
• Color might facilitate low-level image analysis (segmenting face features such as the eyes and lips), and thus indirectly aid face recognition
How should we represent color
images for the recognition purpose?
• A common way is to linearly combine the three color components into one intensity image:

E = (1/3)R + (1/3)G + (1/3)B

The intensity image E is then used for recognition.
• This representation is not theoretically optimal:
(1) The color information is lost;
(2) The combination coefficients are not necessarily optimal
How should we represent color
images for the recognition purpose?
• The other research effort is to choose an
existing color space or to build a hybrid color
space by experience for achieving good
recognition performance
• Different color spaces used:
RGB (Rajapakse et al., 2004)
YUV (Torres et al., 1999)
Ig(r-g) (Kittler and Sadeghi, 2004)
YQCr (Shih and Liu, 2006), where Y and Q are from the YIQ color space and Cr is from the YCbCr color space
Which color space is the best for
face recognition?
From the previous research, we conclude:
• There is no consistent result for color space selection
• Color space selection seems to be data-dependent
Thus, for a given new database, we still don't know which color space to choose.
Motivation and Idea
• Our motivation is to learn an optimal color space
for a given face database
• Starting from the common RGB color space, our
goal is to find a set of optimal coefficients to
combine the R, G, and B color components. Let
D be the combined image given below:
D = x1R + x2G + x3B
• The remaining task is to find a set of optimal
coefficients with respect to a given criterion
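The fixed intensity image E and the combined image D differ only in the coefficient vector; a minimal sketch on a toy image (the vector x below is illustrative only, not learned from data):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4, 3)).astype(float)  # toy RGB image, channels last

# equal-weight intensity image: E = (1/3)R + (1/3)G + (1/3)B
E = img @ np.array([1/3, 1/3, 1/3])

# combined image D = x1*R + x2*G + x3*B with hypothetical coefficients
x = np.array([0.6, 0.3, 0.1])   # placeholder values; the models below learn these
D = img @ x

assert E.shape == D.shape == (4, 4)
```

The per-pixel matrix product `img @ x` is exactly the linear combination of the R, G, B channels.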
Discriminant Color Model I
• A model that focuses on color space learning
• Criterion: in the D-space, the between-class scatter is maximal and the within-class scatter is minimal, i.e.

J(X) = tr(S_b) / tr(S_w)

where X = [x1, x2, x3]^T, and S_b and S_w are the between-class scatter matrix and the within-class scatter matrix in the D-space.
Discriminant Color Model I
• The foregoing criterion is equivalent to the following criterion

J(X) = (X^T L_b X) / (X^T L_w X)

where L_b and L_w are the color-space between-class scatter matrix and the color-space within-class scatter matrix; both are 3 x 3 matrices.
Discriminant Color Model I
• Maximizing the criterion, we obtain a set of optimal combination coefficient vectors X_1, X_2 and X_3.
• The three discriminant color components of image A can be obtained by

D_i = A X_i = [R, G, B] X_i,  i = 1, 2, 3
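Model I on synthetic data can be sketched as follows. This is only an illustration under stated assumptions: equal class priors, synthetic class data, and the 3 x 3 matrices written in the trace form (tr(S_b) = X^T L_b X), so the identity replaces the φφ^T weighting:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
c, Mi, n = 3, 8, 50                      # classes, images per class, pixels per image
# toy color images A[i, j] of shape (n, 3), with a per-class color bias
A = rng.normal(size=(c, Mi, n, 3)) + rng.normal(size=(c, 1, 1, 3))

Abar_i = A.mean(axis=1)                  # class mean images, (c, n, 3)
Abar = Abar_i.mean(axis=0)               # global mean image (equal priors)

# 3 x 3 color-space scatter matrices L_b and L_w (trace form, equal priors P_i = 1/c)
Lb = sum((Abar_i[i] - Abar).T @ (Abar_i[i] - Abar) for i in range(c)) / c
Lw = sum((A[i, j] - Abar_i[i]).T @ (A[i, j] - Abar_i[i])
         for i in range(c) for j in range(Mi)) / (c * (Mi - 1))

# generalized eigenproblem Lb x = lambda Lw x; eigh returns ascending eigenvalues
vals, vecs = eigh(Lb, Lw)
X1, X2, X3 = vecs[:, ::-1].T             # coefficient vectors, descending criterion value

D1 = A[0, 0] @ X1                        # first discriminant color component of one image
assert D1.shape == (n,)
```

Because `eigh` normalizes the eigenvectors so that X^T L_w X = 1, the criterion value J(X_i) equals the corresponding eigenvalue, so X1 attains the maximum of the Rayleigh quotient.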
Illustration of three discriminant
color component images

[Figure: the R, G, B color component images and the three discriminant color component images D1, D2, D3 generated by the proposed method]
The FRGC Database (v2)
• The Face Recognition Grand Challenge (FRGC)
version 2 database contains 12,776 training
images, 16,028 controlled target images, and
8,014 uncontrolled query images for the FRGC
Experiment 4.
• The controlled images have good image quality,
while the uncontrolled images display poor
image quality, such as large illumination
variations, low resolution of the face region, and
possible blurring.
Sample images from FRGC

Images taken in controlled environment

Images taken in uncontrolled environment


Experimental Results

[Figure: ROC curves (Verification Rate vs. False Accept Rate, 10^-3 to 10^0) for the BEE Baseline, FLD on RGB, and FLD on DCS, each under ROC I, II and III]

ROCs corresponding to different color spaces using FLD and the image-level fusion strategy
Discriminant Color Model II
• A model that integrates color space and image subspace learning
• Use the following criterion

J(φ, X) = (φ^T S_b(X) φ) / (φ^T S_w(X) φ)

where φ is a discriminant projection vector and X a color component combination coefficient vector. S_b(X) and S_w(X) are the between-class scatter matrix and the within-class scatter matrix in the D-space, which are defined by
Discriminant Color Model II

S_b(X) = Σ_{i=1}^{c} P_i [(A_i − A) X X^T (A_i − A)^T]

S_w(X) = Σ_{i=1}^{c} P_i (1/(M_i − 1)) Σ_{j=1}^{M_i} [(A_ij − A_i) X X^T (A_ij − A_i)^T]

where A_i is the mean image of class i and A the global mean image. Maximizing the criterion is equivalent to solving the following optimization model:

max_{φ, X} φ^T S_b(X) φ   subject to   φ^T S_w(X) φ = 1
Discriminant Color Model II
To solve the model, we need to construct the general color-space between-class scatter matrix and the general color-space within-class scatter matrix as follows:

L_b(φ) = Σ_{i=1}^{c} P_i [(A_i − A)^T φ φ^T (A_i − A)]

L_w(φ) = Σ_{i=1}^{c} P_i (1/(M_i − 1)) Σ_{j=1}^{M_i} [(A_ij − A_i)^T φ φ^T (A_ij − A_i)]
Discriminant Color Model II
• Finding the optimal solutions φ* and X* of the optimization problem is equivalent to solving the following generalized eigen-equation set:

S_b(X) φ = λ S_w(X) φ
L_b(φ) X = λ L_w(φ) X
Iterative Algorithm for Model II

Step 0. Set k = 0 and choose an initial value X = X^[0].

Step 1. Construct S_b(X) and S_w(X) based on X = X^[k]. Calculate their generalized eigenvectors φ_1, φ_2, ..., φ_d corresponding to the d largest eigenvalues, and let P^[k+1] = [φ_1, φ_2, ..., φ_d].

Step 2. Construct L_b(P) and L_w(P) based on P = P^[k+1]. Calculate their generalized eigenvector X^[k+1] corresponding to the largest eigenvalue.

Step 3. If |J(P^[k+1], X^[k+1]) − J(P^[k], X^[k])| < ε, the iteration terminates; let P* = P^[k+1] and X* = X^[k+1]. Otherwise, let X = X^[k+1], set k = k + 1, and go to Step 1.


Flowchart of the iterative algorithm for Model II:

1. Choose an initial combination coefficient vector X = X^[0]; set k = 0.
2. Construct S_b(X) and S_w(X) and calculate their generalized eigenvector φ^[k+1] corresponding to the largest eigenvalue.
3. Construct L_b(φ) and L_w(φ) and calculate their generalized eigenvector X^[k+1] corresponding to the largest eigenvalue.
4. If |J(φ^[k+1], X^[k+1]) − J(φ^[k], X^[k])| < ε, stop with X* = X^[k+1] and φ* = φ^[k+1]; otherwise set k = k + 1 and return to step 2.
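The alternating procedure above can be sketched in code. This follows the single-projection-vector flowchart (d = 1); the synthetic data, equal class priors, and the small ridge added for numerical positive-definiteness are assumptions of the sketch, not part of the original model:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
c, Mi, n = 3, 10, 20                      # classes, images per class, pixels per image
A = rng.normal(size=(c, Mi, n, 3)) + 2.0 * rng.normal(size=(c, 1, n, 3))

Pi = np.full(c, 1.0 / c)                  # equal class priors (an assumption)
Abar_i = A.mean(axis=1)                   # class mean images
Abar = np.tensordot(Pi, Abar_i, axes=1)   # global mean image

def S(X):
    # D-space scatter matrices S_b(X) and S_w(X), both n x n
    XX = np.outer(X, X)
    Sb = sum(Pi[i] * (Abar_i[i] - Abar) @ XX @ (Abar_i[i] - Abar).T for i in range(c))
    Sw = sum(Pi[i] / (Mi - 1) * (A[i, j] - Abar_i[i]) @ XX @ (A[i, j] - Abar_i[i]).T
             for i in range(c) for j in range(Mi))
    return Sb, Sw + 1e-8 * np.eye(n)      # ridge keeps Sw positive definite

def L(phi):
    # color-space scatter matrices L_b(phi) and L_w(phi), both 3 x 3
    pp = np.outer(phi, phi)
    Lb = sum(Pi[i] * (Abar_i[i] - Abar).T @ pp @ (Abar_i[i] - Abar) for i in range(c))
    Lw = sum(Pi[i] / (Mi - 1) * (A[i, j] - Abar_i[i]).T @ pp @ (A[i, j] - Abar_i[i])
             for i in range(c) for j in range(Mi))
    return Lb, Lw + 1e-8 * np.eye(3)

def J(phi, X):
    Sb, Sw = S(X)
    return (phi @ Sb @ phi) / (phi @ Sw @ phi)

X = np.array([1.0, 1.0, 1.0]) / 3.0       # Step 0: start from the intensity combination
prev = -np.inf
for k in range(100):
    Sb, Sw = S(X)
    phi = eigh(Sb, Sw)[1][:, -1]          # Step 1: top generalized eigenvector
    Lb, Lw = L(phi)
    X = eigh(Lb, Lw)[1][:, -1]            # Step 2: top generalized eigenvector
    cur = J(phi, X)
    if abs(cur - prev) < 1e-8:            # Step 3: convergence test on J
        break
    prev = cur
```

Each half-step maximizes the generalized Rayleigh quotient over one variable with the other fixed, so J is non-decreasing over the iterations, which is what makes the convergence test on |ΔJ| sensible.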


Illustration of three discriminant color component images

[Figure: the original image, its R, G, B color component images, and the three discriminant color component images D1, D2, D3 generated by the proposed method]
Experimental Results

[Figure: ROC curves (Verification Rate vs. False Accept Rate, 10^-3 to 10^0) for the BEE Baseline, FLD on RGB, and Extended GCID, each under ROC I, II and III]

ROC curves corresponding to the BEE baseline algorithm, FLD using the RGB images, and the extended GCID algorithm (for three color components) using the decision-level fusion strategy
Experimental Results

[Figure: ROC curves (Verification Rate vs. False Accept Rate, 10^-3 to 10^0) for the BEE Baseline, FLD on RGB, and Extended GCID, each under ROC I, II and III]

ROC curves corresponding to the BEE baseline algorithm, FLD using the RGB images, and the extended GCID algorithm (for three color components) using the image-level fusion strategy
Experimental Results

Table: Verification rate (%) comparison at a false accept rate of 0.1%, using all three color component images

Fusion strategy         Method              ROC I   ROC II  ROC III
Decision-level fusion   FLD on RGB images   59.75   59.14   58.34
                        Extended GCID       75.86   76.33   76.71
Image-level fusion      FLD on RGB images   66.68   66.85   66.89
                        Extended GCID       78.90   78.66   78.26
Related Publications
• Jian Yang, Chengjun Liu, "A General Discriminant Model for Color Face Recognition", Eleventh IEEE International Conference on Computer Vision (ICCV 2007), Rio de Janeiro, Brazil, October 14-20, 2007.
• Jian Yang, Chengjun Liu, "Color Image Discriminant Models and Algorithms for Face Recognition", IEEE Transactions on Neural Networks, 2008, 19(12), 2088-2098.
• Zhiming Liu, Jian Yang, Chengjun Liu, "Extracting Multiple Features in Discriminant Color Space for Face Recognition", IEEE Transactions on Image Processing, 2010, 19(9), 2502-2509.
• Su-Jing Wang, Jian Yang, Na Zhang, Chun-Guang Zhou, "Tensor Discriminant Color Space for Face Recognition", IEEE Transactions on Image Processing, 2011, 20(9), 2490-2501.
Thank you!!!
