
Principal Component Analysis of Image Gradient Orientations for Face Recognition

Georgios Tzimiropoulos, Stefanos Zafeiriou and Maja Pantic

Department of Computing, Imperial College London,
180 Queens Gate, London SW7 2AZ, UK.
EEMCS, University of Twente, 5 Drienerlolaan,
7522 NB Enschede, The Netherlands.
{gt204, s.zafeiriou, m.pantic}@imperial.ac.uk
Abstract - We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data population. We show that replacing intensities with gradient orientations and the l2 norm with a cosine-based distance measure offers, to some extent, a remedy to this problem. Our scheme requires the eigen-decomposition of a covariance matrix and is as computationally efficient as standard l2 intensity-based PCA. We demonstrate some of its favorable properties for the application of face recognition.
NOTATION

S, {.}          set
R               set of reals
C               set of complex numbers
x               scalar or complex number
j               imaginary unit, j^2 = -1
e^{j.theta}     Euler form: cos(theta) + j sin(theta)
x               column vector
X               matrix
I_{m x m}       m x m identity matrix
x(k)            k-th element of vector x
N(X)            cardinality of set X
||.||           l2 norm
||.||_F         Frobenius norm
X^H             conjugate transpose of X
Re[x], Im[x]    real and imaginary part of x
U[a, b]         uniform distribution in [a, b]
E[.]            mean value operator
x ~ U[a, b]     x follows U[a, b]
I. INTRODUCTION

Provision for mechanisms capable of handling gross errors caused by possible arbitrarily large model deviations is a typical prerequisite in computer vision. Such deviations are not unusual in real-world applications where data contains artifacts due to occlusions, illumination changes, shadows, reflections or the appearance of new parts/objects. In most cases, such phenomena cannot be described by a mathematically well-defined generative model and are usually referred to as outliers in learning and parameter estimation.

This work has been funded by the European Research Council under the ERC Starting Grant agreement no. ERC-2007-StG-203143 (MAHNOB). The work of Maja Pantic is also funded in part by the European Community's 7th Framework Programme [FP7/2007-2013] under the grant agreement no. 231287 (SSPNet).
In this paper, we propose a new avenue for Principal Component Analysis (PCA), perhaps the most classical tool for dimensionality reduction and feature extraction in pattern recognition. Standard PCA estimates the k-rank linear subspace of the given data population, which is optimal in a least-squares sense. Unfortunately, the l2 norm of pixel intensities enjoys optimality properties only when image noise is i.i.d. Gaussian; for data corrupted by outliers, the estimated subspace can be arbitrarily biased.

Robust formulations of PCA, such as robust covariance matrix estimators [1], [2], are computationally prohibitive for high dimensional data such as images. Robust approaches well-suited for computer vision applications include l1 minimization [3], [4], robust energy functions [5] and the weighted combination of nuclear norm and l1 minimization [6], [7]. l1-based approaches can be computationally efficient; however, the gain in robustness is not always significant. The M-estimation framework of [5] is robust but suitable only for relatively low dimensional data or off-line processing. Under weak assumptions [7], the convex optimization formulation of [6], [7] perfectly recovers the low dimensional subspace of a data population corrupted by sparse, arbitrarily large errors; nevertheless, efficient reformulations of standard PCA can be orders of magnitude faster.
In this paper, we look at robust PCA from a completely different perspective. Our scheme does not operate on pixel intensities. In particular, we replace pixel intensities with gradient orientations. We define a notion of pixel-wise image dissimilarity by looking at the distribution of gradient orientation differences; intuitively, this must be approximately uniform in [0, 2.pi). We then assume that local orientation mismatches caused by outliers can also be well-described by a uniform distribution which, under some mild assumptions, is canceled out when we apply the cosine kernel. This last observation has been noticed in recently proposed schemes for image registration [8]. Following this line of research, we show that a cosine-based distance measure has a functional form which enables us to define an explicit mapping from the space of gradient orientations into a high-dimensional sphere where essentially linear complex PCA is performed. The mapping is one-to-one and therefore PCA-based reconstruction in the original input space is direct and requires no further optimization. Similarly to standard PCA, the basic computational module of our scheme requires the eigen-decomposition of a covariance matrix, while high dimensional data can be efficiently analyzed following the strategy suggested in [9].
II. l2-BASED PCA OF PIXEL INTENSITIES

Let us denote by x_i in R^p the p-dimensional vector obtained by writing image I_i in R^{m1 x m2} in lexicographic ordering. We assume that we are given a population of n samples X = [x_1| ... |x_n] in R^{p x n}. Without loss of generality, we assume zero-mean data. PCA finds a set of k < n orthonormal bases B_k = [b_1| ... |b_k] in R^{p x k} by minimizing the error function

    e(B_k) = ||X - B_k B_k^T X||_F^2.    (1)

The solution is given by the eigenvectors corresponding to the k largest eigenvalues obtained from the eigen-decomposition of the covariance matrix X X^T. Finally, the reconstruction of X from the subspace spanned by the columns of B_k is given by X~ = B_k C_k, where C_k = B_k^T X is the matrix which gathers the set of projection coefficients.

For high dimensional data and Small Sample Size (SSS) problems (i.e. n << p), an efficient implementation of PCA in O(n^3) (instead of O(p^3)) was proposed in [9]. Rather than computing the eigen-analysis of X X^T, we compute the eigen-analysis of X^T X and make use of the following theorem.

Theorem I
Define matrices A and B such that A = Phi Phi^H and B = Phi^H Phi with Phi in C^{m x r}. Let U_A and U_B be the eigenvectors corresponding to the non-zero eigenvalues Lambda_A and Lambda_B of A and B, respectively. Then, Lambda_A = Lambda_B and U_A = Phi U_B Lambda_A^{-1/2}.
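The SSS trick of Theorem I translates into a few lines of NumPy. The sketch below is our own illustration (the function name and data layout are assumptions, not the authors' code): the eigen-problem is solved on the small n x n matrix X^T X and the eigenvectors are lifted back to R^p.

```python
import numpy as np

def pca_small_sample(X, k):
    """PCA for n << p via the n x n matrix X^T X (Theorem I, cf. [9]).

    X : p x n real data matrix with zero-mean columns as samples.
    Returns an orthonormal basis B_k (p x k) and projections C_k (k x n).
    """
    T = X.T @ X                        # n x n instead of p x p
    evals, U = np.linalg.eigh(T)       # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:k]  # keep the k largest
    L_k, U_k = evals[idx], U[:, idx]
    B_k = X @ U_k / np.sqrt(L_k)       # lift: U_A = Phi U_B Lambda^{-1/2}
    C_k = B_k.T @ X                    # projection coefficients
    return B_k, C_k
```

The division by sqrt(L_k) is exactly the Lambda^{-1/2} normalization of Theorem I, which makes the lifted columns orthonormal.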
III. RANDOM NUMBER GENERATION FROM GRADIENT ORIENTATION DIFFERENCES

We formalize an observation for the distribution of gradient orientation differences which does not appear to be well-known in the pattern recognition community [footnote 1]. Consider a set of images {J_i}. At each pixel location, we estimate the image gradients and the corresponding gradient orientation [footnote 2]. We denote by {Phi_i}, Phi_i in [0, 2.pi)^{m1 x m2}, the set of orientation images and compute the orientation difference image

    Delta Phi_il = Phi_i - Phi_l.    (2)

We denote by phi_i and Delta phi_il := phi_i - phi_l the p-dimensional vectors obtained by writing Phi_i and Delta Phi_il in lexicographic ordering, and by P = {1, ..., p} the set of indices corresponding to the image support. We introduce the following definition.

Definition 1: Images J_i and J_l are pixel-wise dissimilar if, for all k in P, Delta phi_il(k) ~ U[0, 2.pi).

Footnote 1: This observation has been somewhat noticed in [10] without any further comments on its implications.
Footnote 2: More specifically, we compute Phi_i = arctan(G_{i,y}/G_{i,x}), where G_{i,x} = h_x * I_i, G_{i,y} = h_y * I_i and h_x, h_y are filters used to approximate the ideal differentiation operator along the image horizontal and vertical direction respectively. Possible choices for h_x, h_y include central difference estimators of various orders and discrete approximations to the first derivative of the Gaussian.
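The orientation computation of footnote 2 can be sketched with NumPy; this is a hedged illustration (np.gradient's central differences stand in for one of the filter choices h_x, h_y mentioned there, and arctan2 resolves the quadrant that a plain arctan(G_y/G_x) would lose):

```python
import numpy as np

def orientation_image(I):
    """Gradient orientation Phi_i at each pixel, wrapped to [0, 2*pi)."""
    Gy, Gx = np.gradient(np.asarray(I, dtype=float))  # rows (y), columns (x)
    return np.mod(np.arctan2(Gy, Gx), 2 * np.pi)

def orientation_difference(I, J):
    """Orientation-difference image Delta Phi_il, wrapped to [0, 2*pi)."""
    return np.mod(orientation_image(I) - orientation_image(J), 2 * np.pi)
```

For example, a horizontal intensity ramp has gradient (1, 0) everywhere and thus a constant orientation of 0.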
Not surprisingly, nature is replete with images exemplifying Definition 1. This, in turn, makes it possible to set up a naive image-based random number generator. To confirm this, we used more than 70,000 pairs of image patches of resolution 200 x 200 randomly extracted from natural images [11]. For each pair, we computed Delta phi_il and formulated the following null hypothesis

    H_0: for all k in P, Delta phi_il(k) ~ U[0, 2.pi),

which was tested using the Kolmogorov-Smirnov test [12]. For a significance level equal to 0.01, the null hypothesis was accepted for 94.05% of the image pairs with mean p-value equal to 0.2848. In a similar setting, we tested Matlab's random generator. The null hypothesis was accepted for 99.48% of the cases with mean p-value equal to 0.501. Fig. 1 (a)-(b) show a typical pair of image patches considered in our experiment. Fig. 1 (c) and (d) plot the histograms of the gradient orientation differences and 40,000 samples drawn from Matlab's random number generator respectively.
[Figure 1: panels (a)-(b) show two image patches; panels (c)-(d) show histograms over [0, 2.pi) with vertical axis "Number of Pixels".]
Fig. 1. (a)-(b) An image pair used in our experiment, (c) Image-based random number generator: histogram of 40,000 gradient orientation differences and (d) Histogram of 40,000 samples drawn from Matlab's random number generator.
IV. PCA OF GRADIENT ORIENTATIONS

A. Cosine-based correlation of gradient orientations

Given the set of our images {I_i}, we compute the corresponding set of orientation images {Phi_i} and measure image correlation using the cosine kernel

    s(phi_i, phi_l) := Sum_{k in P} cos[Delta phi_il(k)] = c N(P)    (3)

where c in [-1, 1]. Notice that for highly spatially correlated images Delta phi_il(k) ~ 0 and c -> 1.

Assume that there exists a subset P_2 of P corresponding to the set of pixels corrupted by outliers. For P_1 = P - P_2, we have

    s_1(phi_i, phi_l) = Sum_{k in P_1} cos[Delta phi_il(k)] = c_1 N(P_1)    (4)

where c_1 in [-1, 1].

Not unreasonably, we assume that in P_2 the images are pixel-wise dissimilar according to Definition 1. For example, Fig. 2 (a)-(b) show an image pair where P_2 is the part of the face occluded by the scarf and Fig. 2 (c) plots the distribution of Delta phi in P_2.
[Figure 2: panels (a)-(b) show two face images; panel (c) shows a histogram over [0, 2.pi).]
Fig. 2. (a)-(b) An image pair used in our experiments. (c) The distribution of Delta phi for the part of the face occluded by the scarf.
Before proceeding for P_2, we need the following theorem.

Theorem II
Let u(.) be a random process with u(t) ~ U[0, 2.pi); then:
E[Int_X cos u(t) dt] = 0 for any non-empty interval X of R.
If u(.) is mean ergodic, then Int_X cos u(t) dt = 0.

Proof: Let us define the random process z(t) = cos u(t). Let also f_U(u) be the density of U[0, 2.pi); we assumed that u ~ f_U(u). The integral s = Int_a^b z(t) dt of the stochastic process z(t) is a random variable s [12]. By interpreting the above as a Riemannian integral and using the linearity of the expectation operator, we conclude that

    E{s} = Int_a^b E{z(t)} dt = Int_a^b [ Int cos(u) f_U(u) du ] dt
         = Int_a^b [ (1/2.pi) Int_0^{2.pi} cos(u) du ] dt = 0    (5)

which shows that the integral Int_X cos u(t) dt is equal to zero in mean value. By further assuming mean-ergodicity, the time average is equal to the mean, thus we get

    (1/|X|) Int_X z(t) dt = E{z(t)} = 0, i.e. Int_X cos u(t) dt = 0    (6)

which proves the Theorem.
We also make use of the following approximation

    Int_X cos[Delta phi_il(t)] dt ~ Sum_{k in P} cos[Delta phi_il(k)]    (7)

where, with some abuse of notation, Delta phi_il is defined in the continuous domain on the left hand side of (7). Completely analogously, the above theorem and approximation hold for the case of the sine kernel.
Using the above results, for P_2 we have

    s_2(phi_i, phi_l) = Sum_{k in P_2} cos[Delta phi_il(k)] ~ 0.    (8)

It is not difficult to verify that l2-based correlation, i.e. the inner product between two images, will be zero if and only if the images have interchangeably black and white pixels. Our analysis and (8) show that cosine-based correlation of gradient orientations allows for a much broader class of uncorrelated images. Overall, unlike l2-based correlation where the contribution of outliers can be arbitrarily large, s(.) measures correlation as s(phi_i, phi_l) = s_1(phi_i, phi_l) + s_2(phi_i, phi_l) ~ c_1 N(P_1), i.e. the effect of outliers is approximately canceled out.
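The cancellation s ~ c_1 N(P_1) in (8) is easy to reproduce numerically. Below is a small synthetic sketch; the Gaussian model for the clean orientation differences is our own illustrative assumption, while the uniform model for the outlier pixels is exactly Definition 1:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10_000                 # N(P): total number of pixels
n2 = 3_000                 # N(P2): pixels corrupted by outliers
n1 = p - n2                # N(P1): clean pixels

# Clean pixels: small, concentrated orientation differences (illustrative
# Gaussian model), so cos(.) stays close to 1.
dphi_clean = rng.normal(0.0, 0.2, n1)
# Outlier pixels: pixel-wise dissimilar, Delta phi ~ U[0, 2*pi).
dphi_out = rng.uniform(0.0, 2 * np.pi, n2)

s1 = np.sum(np.cos(dphi_clean))  # ~ c1 * N(P1) with c1 close to 1
s2 = np.sum(np.cos(dphi_out))    # ~ 0: the outlier term cancels (Theorem II)
s = s1 + s2

assert abs(s2) < 0.05 * n1       # outlier contribution is comparatively tiny
assert abs(s - s1) < 0.05 * abs(s1)
```

Even though 30% of the pixels are corrupted, the total correlation s is dominated by the clean term s1.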
B. The principal components of image gradient orientations

To show how (3) can be used as a basis for PCA, we first define the distance

    d^2(phi_i, phi_l) = Sum_{k=1}^{p} {1 - cos[Delta phi_il(k)]}.    (9)

We can write (9) as follows

    d^2(phi_i, phi_l) = (1/2) Sum_{k=1}^{p} {2 - 2 cos[phi_i(k) - phi_l(k)]}
                      = (1/2) ||e^{j.phi_i} - e^{j.phi_l}||^2    (10)

where e^{j.phi_i} = [e^{j.phi_i(1)}, ..., e^{j.phi_i(p)}]^T. The last equality makes the basic computational module of our scheme apparent. We define the mapping from [0, 2.pi)^p onto a subset of the complex sphere with radius sqrt(N(P))

    z_i(phi_i) = e^{j.phi_i}    (11)

and apply linear complex PCA to the transformed data z_i.
Using the results of the previous subsection, we can remark the following.

Remark I: If P = P_1 U P_2 with Delta phi_il(k) ~ U[0, 2.pi) for all k in P_2, then Re[z_i^H z_l] ~ c_1 N(P_1).

Remark II: If P_2 = P, then Re[z_i^H z_l] ~ 0 and Im[z_i^H z_l] ~ 0.
Further geometric intuition about the mapping z_i is provided by the chord between vectors z_i and z_l

    crd(z_i, z_l) = sqrt[(z_i - z_l)^H (z_i - z_l)] = sqrt[2 d^2(phi_i, phi_l)].    (12)

Using crd(.), the results of Remarks I and II can be reformulated as crd(z_i, z_l) ~ sqrt{2[(1 - c_1) N(P_1) + N(P_2)]} and crd(z_i, z_l) ~ sqrt[2 N(P)] respectively.
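The identity linking (10) and (12), i.e. that the chord between the mapped vectors equals sqrt(2 d^2), can be checked numerically; the following NumPy sketch is our own illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 1_000
phi_i = rng.uniform(0, 2 * np.pi, p)
phi_l = rng.uniform(0, 2 * np.pi, p)
z_i, z_l = np.exp(1j * phi_i), np.exp(1j * phi_l)  # the mapping of eq. (11)

d2 = np.sum(1.0 - np.cos(phi_i - phi_l))           # distance of eq. (9)
diff = z_i - z_l
crd = np.sqrt(np.real(np.vdot(diff, diff)))        # chord of eq. (12)

assert np.isclose(crd, np.sqrt(2.0 * d2))          # crd = sqrt(2 d^2)
```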
Overall, Algorithm 1 summarizes the steps of our PCA of gradient orientations.

Algorithm 1. Estimating the principal subspace
Inputs: A set of n orientation images Phi_i, i = 1, ..., n, of p pixels and the number k of principal components.
Step 1. Obtain phi_i by writing Phi_i in lexicographic ordering.
Step 2. Compute z_i = e^{j.phi_i}, form the matrix of the transformed data Z = [z_1| ... |z_n] in C^{p x n} and compute the matrix T = Z^H Z in C^{n x n}.
Step 3. Compute the eigen-decomposition T = U Lambda U^H and denote by U_k in C^{n x k} and Lambda_k in R^{k x k} the k-reduced set. Compute the principal subspace from B_k = Z U_k Lambda_k^{-1/2} in C^{p x k}.
Step 4. Reconstruct using Z~ = B_k B_k^H Z.
Step 5. Go back to the orientation domain using Phi~ = angle(Z~) (the element-wise angle).
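Algorithm 1 translates almost line-for-line into NumPy. The sketch below is our own illustrative implementation (the function name and the p x n data layout are assumptions):

```python
import numpy as np

def igo_pca(Phi, k):
    """Sketch of Algorithm 1: principal subspace of gradient orientations.

    Phi : p x n real matrix whose columns are the lexicographically
          ordered orientation images phi_i, with values in [0, 2*pi).
    Returns the complex principal subspace B_k (p x k) and the
    reconstructed orientation matrix Phi_tilde (p x n).
    """
    Z = np.exp(1j * Phi)                 # Step 2: map onto the complex sphere
    T = Z.conj().T @ Z                   # n x n matrix T = Z^H Z
    evals, U = np.linalg.eigh(T)         # Hermitian eigen-decomposition
    idx = np.argsort(evals)[::-1][:k]    # keep the k largest eigenvalues
    L_k, U_k = evals[idx], U[:, idx]
    B_k = Z @ U_k / np.sqrt(L_k)         # Step 3: B_k = Z U_k Lambda_k^{-1/2}
    Z_tilde = B_k @ (B_k.conj().T @ Z)   # Step 4: reconstruct
    Phi_tilde = np.mod(np.angle(Z_tilde), 2 * np.pi)  # Step 5: back to angles
    return B_k, Phi_tilde
```

As in the SSS formulation of Section II, the eigen-problem is n x n, so the cost is O(n^3) regardless of the number of pixels p.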
Let us denote by Q = {1, ..., n} the set of image indices and by Q_i any subset of Q. We can conclude the following.

Remark III: If Q = Q_1 U Q_2 with z_i^H z_l ~ 0 for all i in Q_2 and all l in Q with i != l, then there exists an eigenvector b_l of B_n such that b_l ~ (1/sqrt(N(P))) z_i.

A special case of Remark III is the following.

Remark IV: If Q = Q_2, then Lambda ~ N(P) I_{n x n} and B_n ~ (1/sqrt(N(P))) Z.
To exemplify Remark IV, we computed the eigen-spectrum of 100 natural image patches. In a similar setting, we computed the eigen-spectrum of samples drawn from Matlab's random number generator. Fig. 3 plots the two eigen-spectra.

[Figure 3: eigenvalues (vertical axis, 0 to 1) versus number of principal components (horizontal axis, 20 to 100); legend: "Sampled Uniform Distributions", "Natural Image Patches".]
Fig. 3. The eigen-spectrum of natural images and the eigen-spectrum of samples drawn from Matlab's random number generator.
Finally, notice that our framework also enables the direct embedding of new samples. Algorithm 2 summarizes the procedure.

Algorithm 2. Embedding of new samples
Inputs: An orientation image Phi of p pixels and the principal subspace B_k of Algorithm 1.
Step 1. Obtain phi by writing Phi in lexicographic ordering.
Step 2. Compute z = e^{j.phi} and reconstruct using z~ = B_k B_k^H z.
Step 3. Go back to the orientation domain using Phi~ = angle(z~).
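Algorithm 2 is essentially one projection on top of the subspace B_k; a hedged NumPy sketch (function name is our own):

```python
import numpy as np

def embed_new_sample(phi, B_k):
    """Sketch of Algorithm 2: embed and reconstruct a new orientation vector.

    phi : length-p orientation vector in [0, 2*pi).
    B_k : p x k complex principal subspace from Algorithm 1.
    """
    z = np.exp(1j * phi)                         # Step 2: map onto the sphere
    z_tilde = B_k @ (B_k.conj().T @ z)           # project onto the subspace
    return np.mod(np.angle(z_tilde), 2 * np.pi)  # Step 3: back to orientations
```

Because the mapping phi -> e^{j.phi} is one-to-one on [0, 2.pi), no further optimization is needed to return to the orientation domain.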
V. RESULTS

A. Face reconstruction

The estimation of a low-dimensional subspace from a set of highly-correlated images is a typical application of PCA [13]. As an example, we considered a set of 50 aligned face images of resolution 192 x 168 taken from the Yale B face database [14]. The images capture the face of the same subject under different lighting conditions. This setting usually induces cast shadows as well as other specularities. Face reconstruction from the principal subspace is a natural candidate for removing these artifacts.

We initially considered two versions of this experiment. The first version used the set of original images. In the second version, 20% of the images were artificially occluded by a 70 x 70 Baboon patch placed at random spatial locations. For both experiments, we reconstructed pixel intensities and gradient orientations with l2 intensity-based PCA and PCA of gradient orientations respectively, using the first 5 principal components.
Fig. 4 and Fig. 5 illustrate the quality of reconstruction for 2 examples of face images considered in our experiments. While PCA-based reconstruction of pixel intensities is visually appealing in the first experiment, Fig. 4 (g)-(h) clearly illustrate that, in the second experiment, the reconstruction suffers from artifacts. On the contrary, Fig. 5 (e)-(f) and (g)-(h) show that PCA-based reconstruction of gradient orientations not only reduces the effect of specularities but also reconstructs the gradient orientations corresponding to the face component only.

This performance improvement becomes more evident by plotting the principal components for each method and experiment. Fig. 6 shows the 5 dominant Eigenfaces of l2 intensity-based PCA. Observe that, in the second experiment, the last two Eigenfaces (Fig. 6 (i) and (j)) contain Baboon "ghosts" which largely affect the quality of reconstruction. On the contrary, a simple visual inspection of Fig. 7 reveals that, in the second experiment, the principal subspace of gradient orientations (Fig. 7 (f)-(j)) is artifact-free, which in turn makes dis-occlusion in the orientation domain feasible.

Finally, to exemplify Remark III, we considered a third version of our experiment where 20% of the images were replaced by the same 192 x 168 Baboon image. Fig. 8 (a)-(e) and (f)-(j) illustrate the principal subspace of pixel intensities and gradient orientations respectively. Clearly, we may observe that l2 PCA was unable to handle the extra-class outlier. On the contrary, PCA of gradient orientations successfully separated the face from the Baboon subspace, i.e. no eigenvector was corrupted by the Baboon image (the Baboon orientation image appeared as a separate eigenvector). Note that the face principal subspace is not the same as the one obtained in versions 1 and 2. This is because only 80% of the images in our dataset were used in this experiment.
Fig. 4. PCA-based reconstruction of pixel intensities. (a)-(b) Original images used in version 1 of our experiment. (c)-(d) Corrupted images used in version 2 of our experiment. (e)-(f) Reconstruction of (a)-(b) with 5 principal components. (g)-(h) Reconstruction of (c)-(d) with 5 principal components.
Fig. 5. PCA-based reconstruction of gradient orientations. (a)-(b) Original orientations used in version 1 of our experiment. (c)-(d) Corrupted orientations used in version 2 of our experiment. (e)-(f) Reconstruction of (a)-(b) with 5 principal components. (g)-(h) Reconstruction of (c)-(d) with 5 principal components.
Fig. 6. The 5 principal components of pixel intensities for (a)-(e) version 1 and (f)-(j) version 2 of our experiment.
Fig. 7. The 5 principal components of gradient orientations for (a)-(e) version 1 and (f)-(j) version 2 of our experiment.
B. Face recognition

PCA-based feature extraction for face recognition goes back to the classical work by Turk and Pentland [9] and still remains a standard benchmark for the performance evaluation of new algorithms. We considered a single-sample-per-class experiment using aligned frontal-view face images taken from the AR database [15]. The database consists of more than 4,000 frontal view facial images of 126 subjects. Each subject has up to 26 images taken in two sessions. The first session contains 13 images, numbered from 1 to 13, including different facial expressions (1-4), different lighting (5-7) and different occlusions under different lighting (8-13). The second session duplicated the first session two weeks later. We randomly selected a subset with 100 subjects. For training, we used 100 face images of 100 different subjects from session 1. We investigated the robustness of our scheme for the case of illumination variations and occlusions. In particular, we carried out the following experiments:
1) In experiment 1, we used images 5-7 of session 2 for testing (different illumination).
2) In experiment 2, we used images 8-13 of session 2 for testing (occlusion by scarf or glasses under different illumination).

Fig. 8. (a)-(e) The 5 principal components of pixel intensities for version 3 of our experiment and (f)-(j) the 5 principal components of gradient orientations for the same experiment.

Note that the second experiment is very challenging since our single-sample-per-class training set does not allow us to find discriminant projection bases by exploiting class-label information, while the presence of the scarf or glasses in the testing set occludes approximately 25-40% of the total face area.
Table I and Fig. 9 summarize the results for our single-sample-per-class experiments. The robustness of the proposed scheme is evident. As our results show, PCA of gradient orientations achieves almost 100% recognition rate for the case of illumination changes (experiment 1) and approximately 94% recognition rate for the case of occlusions (experiment 2). The latter result is approximately 20% better than the best reported recognition rate [18], which is obtained using as testing set a subset of the occluded images with no illumination variations.

Moreover, the presented results should not be compared with those achieved by recent approaches such as the ones in [16] or [17], which use for training 8 images per subject taken from both sessions, while the test images are also taken from both sessions (on the contrary, we used only one sample from the first session and tested on the second session). For our single-sample-per-class experiment, we applied the method in [16] using:
- features extracted using image resizing, intensity-based PCA and LaplacianFaces, as described in [16], for experiment 1;
- the robust extended l1 minimization formulation for experiment 2.
The third column of Table I shows the best results achieved by the method in [16]. As we may observe, our PCA of gradient orientations is not only significantly faster but also much more robust.
Recognition rate (%)           IGO-PCA   I-PCA   Best of [16]
Experiment 1 (illumination)     99.67     74      74
Experiment 2 (occlusions)       94        28      32.33

TABLE I
RECOGNITION RESULTS ON THE AR DATABASE FOR EXPERIMENTS 1 AND 2 (IGO-PCA STANDS FOR PCA OF IMAGE GRADIENT ORIENTATIONS AND I-PCA STANDS FOR INTENSITY-BASED PCA).
[Figure 9: two plots of recognition rate (0 to 1, vertical axis) versus number of features (0 to 100, horizontal axis); legend: "PCA-Intensity", "PCA-Orientation".]
Fig. 9. Single-sample-per-class face recognition experiment on the AR database. (a) Experiment 1 (illumination changes). (b) Experiment 2 (occlusions and illumination changes).
VI. CONCLUSIONS

We introduced a new concept: PCA of gradient orientations. Our framework is as simple as standard l2 intensity-based PCA, yet much more powerful for efficient subspace-based data representation. Central to our analysis are the distribution of gradient orientation differences and the cosine kernel, which provide us with a consistent way to measure image dissimilarity. We showed how this dissimilarity measure can be naturally used to formulate a robust version of PCA. We demonstrated some of the favorable properties of our framework for the application of face recognition. Extensions of our scheme span a wide range of theoretical topics and applications, from statistical machine learning and clustering to object recognition and tracking.
REFERENCES
[1] N.A. Campbell, Robust procedures in multivariate analysis I: Robust covariance estimation, Applied Statistics, 29 (1980), pp. 231-237.
[2] C. Croux and G. Haesbroeck, Principal component analysis based on robust estimators of the covariance or correlation matrix: influence functions and efficiencies, Biometrika, 87 (2000), pp. 603.
[3] Q. Ke and T. Kanade, Robust L1 norm factorization in the presence of outliers and missing data by alternative convex programming, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR (2005).
[4] N. Kwak, Principal component analysis based on L1-norm maximization, IEEE Transactions on Pattern Analysis and Machine Intelligence, 30 (2008), pp. 1672-1680.
[5] F.D.L. Torre and M.J. Black, A framework for robust subspace learning, International Journal of Computer Vision, 54 (2003), pp. 117-142.
[6] V. Chandrasekaran, S. Sanghavi, P.A. Parrilo and A.S. Willsky, Rank-sparsity incoherence for matrix decomposition, preprint, (2009).
[7] E.J. Candes, X. Li, Y. Ma, and J. Wright, Robust principal component analysis?, Arxiv preprint arXiv:0912.3599, (2009).
[8] G. Tzimiropoulos, V. Argyriou, S. Zafeiriou, and T. Stathaki, Robust FFT-based scale-invariant image registration with image gradients, IEEE Transactions on Pattern Analysis and Machine Intelligence, 32 (2010), pp. 1899-1906.
[9] M. Turk and A.P. Pentland, Eigenfaces for recognition, Journal of Cognitive Neuroscience, 3 (1991), pp. 71-86.
[10] A.J. Fitch, A. Kadyrov, W.J. Christmas, and J. Kittler, Orientation correlation, in British Machine Vision Conference, 1 (2002), pp. 133-142.
[11] H.P. Frey, P. Konig, and W. Einhauser, The role of first and second-order stimulus features for human overt attention, Perception and Psychophysics, 69 (2007), pp. 153-161.
[12] A. Papoulis and S.U. Pillai, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York (2004).
[13] M. Kirby and L. Sirovich, Application of the Karhunen-Loeve procedure for the characterization of human faces, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12 (1990), pp. 103-108.
[14] A.S. Georghiades, P.N. Belhumeur and D.J. Kriegman, From few to many: Illumination cone models for face recognition under variable lighting and pose, IEEE Transactions on Pattern Analysis and Machine Intelligence, 23 (2001), pp. 643-660.
[15] A.M. Martinez and R. Benavente, The AR face database, Tech. Rep., CVC Technical Report (1998).
[16] J. Wright, A.Y. Yang, A. Ganesh, S.S. Sastry and Y. Ma, Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31 (2009), pp. 210-227.
[17] Z. Zhou, A. Wagner, H. Mobahi, J. Wright and Y. Ma, Face recognition with contiguous occlusion using Markov random fields, in Proceedings of the International Conference on Computer Vision, ICCV (2009).
[18] H. Jia and A.M. Martinez, Face recognition with occlusions in the training and testing sets, in Proc. Conf. Automatic Face and Gesture Recognition, FG (2008).
[19] X. Tan, S. Chen, Z.H. Zhou and F. Zhang, Recognizing partially occluded, expression variant faces from single training image per person with SOM and soft k-NN ensemble, IEEE Transactions on Neural Networks, 16 (2005), pp. 875-887.
[20] A.M. Martinez, Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24 (2002), pp. 748-763.