Principal Component Analysis


Principal component analysis

Hypothesis: Hebbian synaptic plasticity enables perceptrons to perform principal component analysis
Outline
Variance and covariance
Principal components
Maximizing parallel variance
Minimizing perpendicular variance
Neural implementation
covariance rule
Principal component
direction of maximum variance in the input space
principal eigenvector of the covariance matrix
goal: relate these two definitions
Variance
A random variable fluctuating about its mean value.

Average of the square of the fluctuations.

$\delta x = x - \langle x \rangle$

$\langle (\delta x)^2 \rangle = \langle x^2 \rangle - \langle x \rangle^2$
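A quick numerical check of this identity (a minimal NumPy sketch; the sample distribution and names are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=100_000)  # samples of a random variable

var_fluct = np.mean((x - x.mean()) ** 2)       # average squared fluctuation
var_moments = np.mean(x ** 2) - x.mean() ** 2  # <x^2> - <x>^2

print(var_fluct, var_moments)  # both close to 4 (= 2^2)
```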
Covariance
Pair of random variables, each fluctuating about its mean value.



Average of product of fluctuations.

$\delta x_1 = x_1 - \langle x_1 \rangle, \qquad \delta x_2 = x_2 - \langle x_2 \rangle$

$\langle \delta x_1\, \delta x_2 \rangle = \langle x_1 x_2 \rangle - \langle x_1 \rangle \langle x_2 \rangle$
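The same kind of check for the covariance identity (again an illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=100_000)
x2 = x1 + rng.normal(size=100_000)  # x2 is correlated with x1

cov_fluct = np.mean((x1 - x1.mean()) * (x2 - x2.mean()))  # average product of fluctuations
cov_moments = np.mean(x1 * x2) - x1.mean() * x2.mean()    # <x1 x2> - <x1><x2>

print(cov_fluct, cov_moments)  # both close to 1
```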
Covariance examples
Covariance matrix
N random variables
N×N symmetric matrix


Diagonal elements are variances

$C_{ij} = \langle x_i x_j \rangle - \langle x_i \rangle \langle x_j \rangle$
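A sketch of building the covariance matrix this way in NumPy (data and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 1000))           # N = 3 variables, one sample per column

dX = X - X.mean(axis=1, keepdims=True)   # fluctuations about the mean
C = dX @ dX.T / X.shape[1]               # C_ij = <x_i x_j> - <x_i><x_j>

print(np.allclose(C, C.T))               # True: C is symmetric
print(np.diag(C))                        # diagonal elements are the variances
```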
Principal components
the eigenvectors of the covariance matrix with the k largest eigenvalues



Now you can calculate them, but what do they mean?
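One way to compute them, assuming the covariance matrix C from the sketch above (np.linalg.eigh returns eigenvalues in ascending order):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 1000))
dX = X - X.mean(axis=1, keepdims=True)
C = dX @ dX.T / X.shape[1]               # covariance matrix, as above

evals, evecs = np.linalg.eigh(C)         # eigenvalues in ascending order
k = 2
top = evecs[:, ::-1][:, :k]              # k eigenvectors with the largest eigenvalues
print(evals[::-1][:k])                   # the k largest eigenvalues
print(top.shape)                         # (3, k): one column per principal component
```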
Handwritten digits

Principal component in 2d
One-dimensional projection
Covariance to variance
From the covariance, the variance of any projection can be calculated.
Let w be a unit vector

$\langle (\mathbf{w}^T \mathbf{x})^2 \rangle - \langle \mathbf{w}^T \mathbf{x} \rangle^2 = \mathbf{w}^T C \mathbf{w} = \sum_{ij} w_i C_{ij} w_j$
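A numerical sanity check of this identity (illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(3, 100_000))
X[1] += 2 * X[0]                         # introduce correlations

dX = X - X.mean(axis=1, keepdims=True)
C = dX @ dX.T / X.shape[1]

w = rng.normal(size=3)
w /= np.linalg.norm(w)                   # unit vector

proj = w @ X                             # w^T x for every sample
print(proj.var())                        # variance of the one-dimensional projection
print(w @ C @ w)                         # w^T C w gives the same number
```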

Maximizing parallel variance


Principal eigenvector of C: the one with the largest eigenvalue.

$\mathbf{w}^* = \underset{\mathbf{w}:\,\|\mathbf{w}\|=1}{\operatorname{argmax}}\ \mathbf{w}^T C \mathbf{w}$

$\lambda_{\max}(C) = \max_{\mathbf{w}:\,\|\mathbf{w}\|=1} \mathbf{w}^T C \mathbf{w} = \mathbf{w}^{*T} C \mathbf{w}^*$
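A sketch comparing random unit vectors against the principal eigenvector (illustrative data; no random direction exceeds the largest eigenvalue):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(3, 100_000))
X[1] += 2 * X[0]
dX = X - X.mean(axis=1, keepdims=True)
C = dX @ dX.T / X.shape[1]

w_star = np.linalg.eigh(C)[1][:, -1]     # principal eigenvector (largest eigenvalue)

ws = rng.normal(size=(3, 10_000))
ws /= np.linalg.norm(ws, axis=0)         # many random unit vectors
variances = np.einsum('ik,ij,jk->k', ws, C, ws)  # w^T C w for each one

print(variances.max())                   # best random direction...
print(w_star @ C @ w_star)               # ...never beats lambda_max
```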
Orthogonal decomposition

Total variance is conserved
Maximizing parallel variance = minimizing perpendicular variance
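A quick check that the decomposition conserves total variance (illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(3, 100_000))
X[2] += X[0]
X -= X.mean(axis=1, keepdims=True)       # zero-mean data

w = rng.normal(size=3)
w /= np.linalg.norm(w)                   # unit vector

X_par = np.outer(w, w @ X)               # component of each x parallel to w
X_perp = X - X_par                       # remainder, perpendicular to w

total = np.mean(np.sum(X ** 2, axis=0))      # total variance
par = np.mean(np.sum(X_par ** 2, axis=0))    # parallel variance
perp = np.mean(np.sum(X_perp ** 2, axis=0))  # perpendicular variance
print(total, par + perp)                     # equal: total variance is conserved
```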
Rubber band computer

Correlation/covariance rule
presynaptic activity x
postsynaptic activity y
Hebbian

correlation rule: $\Delta \mathbf{w} = \eta\, y\, \mathbf{x}$
covariance rule: $\Delta \mathbf{w} = \eta\, (y - \langle y \rangle)(\mathbf{x} - \langle \mathbf{x} \rangle)$
Stochastic gradient ascent
Assume data has zero mean
Linear perceptron

$y = \mathbf{w}^T \mathbf{x}$

$\Delta \mathbf{w} = \eta\, \mathbf{x}\mathbf{x}^T \mathbf{w}$

$E = \tfrac{1}{2}\, \mathbf{w}^T C \mathbf{w}, \qquad C = \langle \mathbf{x}\mathbf{x}^T \rangle$

$\Delta \mathbf{w} = \eta\, \frac{\partial E}{\partial \mathbf{w}}$
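A minimal simulation of this rule (illustrative data and constants). It shows the gradient-ascent view: the weight direction aligns with the principal eigenvector, but since E has no maximum the norm grows without bound, motivating the next slide:

```python
import numpy as np

rng = np.random.default_rng(6)
C_true = np.array([[3.0, 1.0], [1.0, 2.0]])
L = np.linalg.cholesky(C_true)           # to sample zero-mean data with covariance C_true

eta = 0.01
w = rng.normal(size=2)
for _ in range(1000):
    x = L @ rng.normal(size=2)           # one zero-mean input
    y = w @ x                            # linear perceptron output
    w += eta * y * x                     # Hebbian update, on average eta * C w

print(np.linalg.norm(w))                 # huge: the rule diverges
evec = np.linalg.eigh(C_true)[1][:, -1]
print(abs(w @ evec) / np.linalg.norm(w)) # ~1: direction aligns with principal eigenvector
```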
Preventing divergence
Bound constraints
Oja's rule

$\Delta \mathbf{w} = \eta\, (y\mathbf{x} - y^2 \mathbf{w})$
Converges to principal component
Normalized to unit vector
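A minimal simulation of Oja's rule under the same assumed setup (illustrative constants):

```python
import numpy as np

rng = np.random.default_rng(7)
C_true = np.array([[3.0, 1.0], [1.0, 2.0]])
L = np.linalg.cholesky(C_true)

eta = 0.005
w = rng.normal(size=2)
for _ in range(20_000):
    x = L @ rng.normal(size=2)           # zero-mean input
    y = w @ x
    w += eta * (y * x - y ** 2 * w)      # Oja's rule

evec = np.linalg.eigh(C_true)[1][:, -1]  # principal eigenvector of C_true
print(np.linalg.norm(w))                 # ~1: the weight vector self-normalizes
print(abs(w @ evec))                     # ~1: w converges to the principal component
```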
Multiple principal components
Project out principal component
Find principal component of remaining variance
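A deflation sketch of this procedure (illustrative data; eigendecomposition stands in for the learned component):

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(3, 50_000))
X[1] += 2 * X[0]                         # correlated data
X -= X.mean(axis=1, keepdims=True)

C = X @ X.T / X.shape[1]
w1 = np.linalg.eigh(C)[1][:, -1]         # first principal component

X_defl = X - np.outer(w1, w1 @ X)        # project out the first component
C2 = X_defl @ X_defl.T / X.shape[1]      # covariance of the remaining variance
w2 = np.linalg.eigh(C2)[1][:, -1]        # its principal component = second PC of C

print(abs(w1 @ w2))                      # ~0: the two components are orthogonal
```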
clustering vs. PCA
Hebb: output × input
binary output: first-order statistics (clustering)
linear output: second-order statistics (PCA)
Data vectors
$\mathbf{x}^a$ means the $a$th data vector, the $a$th column of the matrix $X$
$X_{ia}$ means the matrix element: the $i$th component of $\mathbf{x}^a$
$\mathbf{x}$ is a generic data vector; $x_i$ means the $i$th component of $\mathbf{x}$
Correlation matrix
Generalization of second moment

$\langle x_i x_j \rangle = \frac{1}{m} \sum_{a=1}^{m} X_{ia} X_{ja}$

$\langle \mathbf{x}\mathbf{x}^T \rangle = \frac{1}{m} X X^T$

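A quick check that the elementwise and matrix forms agree (illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(9)
m = 1000
X = rng.normal(size=(3, m))              # column a is the data vector x^a

# Elementwise: <x_i x_j> = (1/m) sum_a X_ia X_ja
moment_sum = np.einsum('ia,ja->ij', X, X) / m
# Matrix form: <x x^T> = (1/m) X X^T
print(np.allclose(moment_sum, X @ X.T / m))  # True: the two forms agree
```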