Singular Value Decomposition

The document provides an overview of Singular Value Decomposition (SVD) and its applications in data science, particularly in low-rank approximations and Latent Semantic Analysis (LSA). It explains the mathematical foundations of eigenvalues, eigenvectors, and matrix decomposition, emphasizing the importance of SVD in reducing dimensionality while preserving essential information. The document also discusses the benefits of using SVD for improving text retrieval and addressing issues related to lexical semantics in natural language processing.


School of Computing Science and Engineering
Program: M.C.A.
Course Code: MCAS9220
Course Name: Data Science Fundamentals
Singular Value Decomposition (SVD)

Linear Algebra Background
Eigenvalues & Eigenvectors
 Eigenvectors (for a square m×m matrix S): a non-zero vector v such that Sv = λv,
  where v is a (right) eigenvector and λ is the corresponding eigenvalue.
 How many eigenvalues are there at most?
 Sv = λv, i.e. (S − λI)v = 0, only has a non-zero solution if |S − λI| = 0.
 This is an m-th order equation in λ which can have at most m distinct solutions
  (roots of the characteristic polynomial) – they can even be complex.
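A minimal numpy sketch of these definitions (not part of the original slides; the small matrix is chosen purely for illustration): np.linalg.eig returns the eigenpairs, and the eigenvalues coincide with the roots of the characteristic polynomial.

```python
import numpy as np

# A small square matrix used purely for illustration (not from the slides).
S = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# Right eigenpairs: S @ v = lam * v for each eigenvalue lam and eigenvector v.
eigenvalues, eigenvectors = np.linalg.eig(S)
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(S @ v, lam * v)

# The eigenvalues are the roots of the characteristic polynomial det(λI − S),
# an m-th order polynomial, so there are at most m of them (possibly complex).
print(np.roots(np.poly(S)))   # [3. 2.]
print(eigenvalues)            # same values, possibly in a different order
```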
Matrix-vector multiplication

    S = [ 3 0 0 ]
        [ 0 2 0 ]   has eigenvalues 3, 2, 0 with corresponding eigenvectors
        [ 0 0 0 ]

    v1 = (1, 0, 0)^T,   v2 = (0, 1, 0)^T,   v3 = (0, 0, 1)^T

 On each eigenvector, S acts as a multiple of the identity matrix: but as a different multiple on each.
 Any vector (say x = (2, 4, 6)^T) can be viewed as a combination of the eigenvectors:
  x = 2v1 + 4v2 + 6v3
Matrix vector multiplication
 Thus a matrix-vector multiplication such as Sx (S, x as in the previous slide)
  can be rewritten in terms of the eigenvalues/vectors:

    Sx = S(2v1 + 4v2 + 6v3)
    Sx = 2Sv1 + 4Sv2 + 6Sv3 = 2λ1v1 + 4λ2v2 + 6λ3v3

 Even though x is an arbitrary vector, the action of S on x is determined by the
  eigenvalues/vectors.
 Suggestion: the effect of “small” eigenvalues is small.
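A short numpy check of this expansion, using the S and x from the slide above (a sketch, not part of the original slides):

```python
import numpy as np

# S, x and the eigen-decomposition from the slide.
S = np.diag([3.0, 2.0, 0.0])
x = np.array([2.0, 4.0, 6.0])
lams = np.array([3.0, 2.0, 0.0])          # eigenvalues
V = np.eye(3)                             # columns v1, v2, v3 (standard basis)
coeffs = np.array([2.0, 4.0, 6.0])        # x = 2*v1 + 4*v2 + 6*v3

direct = S @ x                            # ordinary matrix-vector product
via_eigen = V @ (coeffs * lams)           # 2*λ1*v1 + 4*λ2*v2 + 6*λ3*v3
assert np.allclose(direct, via_eigen)
print(direct)                             # [6. 8. 0.]: the λ=0 component vanishes
```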
Eigenvalues & Eigenvectors
 For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal:
    Sv{1,2} = λ{1,2} v{1,2}, and λ1 ≠ λ2  ⇒  v1 · v2 = 0
 All eigenvalues of a real symmetric matrix are real:
    for complex λ, if |S − λI| = 0 and S = S^T  ⇒  λ ∈ ℝ
 All eigenvalues of a positive semidefinite matrix are non-negative:
    ∀ w ∈ ℝ^n, w^T S w ≥ 0; then if Sv = λv  ⇒  λ ≥ 0
Example
 Let S = [ 2 1 ]     (real, symmetric)
         [ 1 2 ]
 Then |S − λI| = | 2−λ   1  | = (2 − λ)² − 1 = 0.
                 |  1   2−λ |
 The eigenvalues are 1 and 3 (non-negative, real).
 The eigenvectors are orthogonal (and real):
    ( 1 )  and  ( 1 )     (plug in the eigenvalues and solve for the eigenvectors)
    (−1 )       ( 1 )
Eigen/diagonal Decomposition
 Let S be a square matrix with m linearly independent eigenvectors
  (a “non-defective” matrix)
 Theorem: there exists an eigen decomposition S = UΛU^-1, with Λ diagonal
  (unique for distinct eigenvalues)
 (cf. matrix diagonalization theorem)
 Columns of U are the eigenvectors of S
 Diagonal elements of Λ are the eigenvalues of S
Diagonal decomposition: why/how
 Let U have the eigenvectors as columns:  U = [ v1 ... vn ]
 Then SU can be written

    SU = S[ v1 ... vn ] = [ λ1v1 ... λnvn ] = [ v1 ... vn ] diag(λ1, ..., λn)

 Thus SU = UΛ, or U^-1SU = Λ
 And S = UΛU^-1.
Diagonal decomposition - example
 Recall S = [ 2 1 ] ;   λ1 = 1, λ2 = 3.
            [ 1 2 ]
 The eigenvectors ( 1 ) and ( 1 ) form U = [  1  1 ]
                  (−1 )     ( 1 )          [ −1  1 ]
 Inverting, we have U^-1 = [ 1/2  −1/2 ]      Recall UU^-1 = I.
                           [ 1/2   1/2 ]
 Then, S = UΛU^-1 = [  1  1 ] [ 1 0 ] [ 1/2  −1/2 ]
                    [ −1  1 ] [ 0 3 ] [ 1/2   1/2 ]
Example continued
 Let’s divide U (and multiply U^-1) by √2
 Then, S = [  1/√2  1/√2 ] [ 1 0 ] [ 1/√2  −1/√2 ]
           [ −1/√2  1/√2 ] [ 0 3 ] [ 1/√2   1/√2 ]
                 Q            Λ       Q^-1 (= Q^T)

 Why? Stay tuned …
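A quick numpy verification of this worked example (a sketch, not part of the slides): Q built from the normalized eigenvectors is orthogonal, and QΛQ^T reconstructs S.

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Normalized eigenvectors (1,-1)/sqrt(2) and (1,1)/sqrt(2) as columns, as above.
Q = np.array([[1.0, 1.0],
              [-1.0, 1.0]]) / np.sqrt(2)
Lam = np.diag([1.0, 3.0])

assert np.allclose(Q.T @ Q, np.eye(2))      # Q is orthogonal: Q^-1 = Q^T
assert np.allclose(Q @ Lam @ Q.T, S)        # S = Q Λ Q^T

# The same factors come out of numpy directly (eigh is for symmetric matrices).
eigenvalues, eigenvectors = np.linalg.eigh(S)
print(eigenvalues)                          # [1. 3.]
```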


Symmetric Eigen Decomposition
 If S is a symmetric matrix:
 Theorem: there exists a (unique) eigen decomposition S = QΛQ^T
 where Q is orthogonal:
 Q^-1 = Q^T
 Columns of Q are normalized eigenvectors
 Columns are orthogonal.
 (everything is real)
Exercise
 Examine the symmetric eigen decomposition, if any, for each of the following matrices:

    [ 0 1 ]    [ 0  1 ]    [ 1 2 ]    [ 2 2 ]
    [ 1 0 ]    [ −1 0 ]    [ 2 3 ]    [ 2 4 ]
Time out!
 I came to this class to learn about text retrieval and mining, not to have my
  linear algebra past dredged up again …
 But if you want to dredge, Strang’s Applied Mathematics is a good place to start.
 What do these matrices have to do with text?
 Recall m × n term-document matrices …
 But everything so far needs square matrices – so …

Singular Value Decomposition
For an m × n matrix A of rank r there exists a factorization
(Singular Value Decomposition = SVD) as follows:

    A = UΣV^T

 where U is m × m, Σ is m × n, and V is n × n.
 The columns of U are orthogonal eigenvectors of AA^T.
 The columns of V are orthogonal eigenvectors of A^TA.
 The eigenvalues λ1 ... λr of AA^T are the eigenvalues of A^TA.
    σi = √λi,   Σ = diag(σ1 ... σr)    (the σi are the singular values)
Singular Value Decomposition
 Illustration of SVD dimensions and sparseness
SVD example

 Let A = [ 1 −1 ]
         [ 0  1 ]
         [ 1  0 ]

 Thus m = 3, n = 2. Its SVD is

    [   0     2/√6    1/√3 ]   [ 1  0 ]
    [ 1/√2   −1/√6    1/√3 ]   [ 0 √3 ]   [ 1/√2    1/√2 ]
    [ 1/√2    1/√6   −1/√3 ]   [ 0  0 ]   [ 1/√2   −1/√2 ]

 Typically, the singular values are arranged in decreasing order.
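As a sanity check, the same factorization can be computed with numpy (a sketch, not from the slides). Note that np.linalg.svd returns the singular values in decreasing order (√3, then 1), and the signs of individual singular vectors may be flipped relative to the hand-worked factors above.

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [0.0,  1.0],
              [1.0,  0.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=True)
print(s)                          # [1.732... 1.0], i.e. sqrt(3) and 1

# Rebuild A = U Σ V^T with Σ of shape m x n (3 x 2) holding the singular values.
Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(s)
assert np.allclose(U @ Sigma @ Vt, A)

# The squared singular values are the non-zero eigenvalues of A^T A (and AA^T).
print(np.linalg.eigvalsh(A.T @ A))   # [1. 3.]
```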


Low-rank Approximation
 SVD can be used to compute optimal low-rank approximations.
 Approximation problem: find Ak of rank k such that

    Ak = argmin ||A − X||_F   over X with rank(X) = k      (Frobenius norm)

 Ak and X are both m × n matrices.
 Typically, we want k << r.
Low-rank Approximation
 Solution via SVD:

    Ak = U diag(σ1, ..., σk, 0, ..., 0) V^T
         (set the smallest r − k singular values to zero)

    Ak = sum_{i=1..k} σi ui vi^T
         (column notation: sum of rank-1 matrices)
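A small numpy sketch of the truncation (the matrix size and k are illustrative assumptions): zeroing the small singular values and summing k rank-1 terms give the same Ak.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))            # an arbitrary m x n matrix
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Zero out all but the k largest singular values, then recombine.
s_trunc = s.copy()
s_trunc[k:] = 0.0
A_k = U @ np.diag(s_trunc) @ Vt

# Equivalent column notation: sum of k rank-1 matrices sigma_i * u_i v_i^T.
A_k_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))
assert np.allclose(A_k, A_k_sum)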
Approximation error
 How good (bad) is this approximation?
 It’s the best possible, measured by the Frobenius norm of the error:

    min ||A − X||_F  over X with rank(X) = k   =   ||A − Ak||_F   =   √(σk+1² + ... + σr²)

 where the σi are ordered such that σi ≥ σi+1.
 (In the spectral norm, the error of the best rank-k approximation is exactly σk+1.)
 Suggests why the Frobenius error drops as k is increased.
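Continuing the sketch above (same assumed random matrix), the claimed error can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # best rank-k approximation

# Frobenius error of the best rank-k approximation = sqrt of the sum of the
# squared discarded singular values; it shrinks as k grows.
err = np.linalg.norm(A - A_k, ord='fro')
assert np.allclose(err, np.sqrt(np.sum(s[k:] ** 2)))
print(err)
```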
Recall random projection
 Completely different method for low-rank
approximation
 Was data-oblivious
 SVD-based approximation is data-dependent
 Error for random projection depended only on
start/finish dimensionality
 For every distance
 Error for SVD-based approximation is for the
Frobenius norm, not for individual distances
SVD Low-rank approximation
 Whereas the term-doc matrix A may have
m=50000, n=10 million (and rank close to 50000)
 We can construct an approximation A100 with rank
100.
 Of all rank 100 matrices, it would have the lowest
Frobenius error.
 Great … but why would we??
 Answer: Latent Semantic Indexing

C. Eckart, G. Young, The approximation of a matrix by another of lower rank. Psychometrika, 1, 211-218, 1936.
Latent Semantic Analysis via SVD
What it is
 From term-doc matrix A, we compute the
approximation Ak.
 There is a row for each term and a column for
each doc in Ak
 Thus docs live in a space of k<<r dimensions
 These dimensions are not the original axes
 But why?
Vector Space Model: Pros
 Automatic selection of index terms
 Partial matching of queries and documents
(dealing with the case where no document contains all
search terms)
 Ranking according to similarity score (dealing
with large result sets)
 Term weighting schemes (improves retrieval
performance)
 Various extensions
 Document clustering
 Relevance feedback (modifying query vector)
 Geometric foundation
Problems with Lexical Semantics
 Ambiguity and association in natural language
 Polysemy: Words often have a multitude of meanings and
different types of usage (more urgent for very
heterogeneous collections).
 The vector space model is unable to discriminate between
different meanings of the same word.
 Synonymy: Different terms may have an identical or a similar meaning
  (weaker: words indicating the same topic).
 No associations between words are made in the vector space representation.
Polysemy and Context
 Document similarity on the single-word level: polysemy and context
 [Figure: two senses of a polysemous word – meaning 1 among space-related terms
  (ring, jupiter, space, voyager, planet, saturn), meaning 2 among car-related
  terms (car, company, dodge, ford). The word contributes to similarity if used
  in the 1st meaning, but not if used in the 2nd.]
Latent Semantic Analysis
 Perform a low-rank approximation of document-term
matrix (typical rank 100-300)
 General idea
 Map documents (and terms) to a low-dimensional
representation.
 Design a mapping such that the low-dimensional space
reflects semantic associations (latent semantic space).
 Compute document similarity based on the inner product in
this latent semantic space
 Goals
 Similar terms map to similar location in low dimensional
space
 Noise reduction by dimension reduction
Latent Semantic Analysis
 Latent semantic space: illustrating example

courtesy of Susan Dumais


Performing the maps
 Each row and column of A gets mapped into the k-dimensional LSI space by the SVD.
 Claim – this is not only the mapping with the best (Frobenius error)
  approximation to A, but it in fact improves retrieval.
 A query q is also mapped into this space, by

    qk = q^T Uk Σk^-1

 Note: the mapped query is NOT a sparse vector.


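A compact numpy sketch of this mapping (the tiny term-document matrix and the query are made-up for illustration): documents land in the k-dimensional space as the columns of Vk^T, the query is folded in with qk = q^T Uk Σk^-1, and ranking uses cosine similarity in that latent space.

```python
import numpy as np

# Hypothetical term-document count matrix (5 terms x 4 docs) and a 2-term query.
A = np.array([[2.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 3.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 2.0],
              [1.0, 0.0, 2.0, 0.0]])
q = np.array([1.0, 0.0, 0.0, 0.0, 1.0])
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

docs_k = Vt_k                                # column j = doc j (Σk^-1 Uk^T A = Vk^T)
q_k = q @ U_k @ np.diag(1.0 / s_k)           # qk = q^T Uk Σk^-1 (a dense vector)

# Cosine similarity between the folded-in query and every document.
sims = (docs_k.T @ q_k) / (np.linalg.norm(docs_k, axis=0) * np.linalg.norm(q_k))
print(np.argsort(-sims))                     # documents ranked by similarity
```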
Empirical evidence
 Experiments on TREC 1/2/3 – Dumais
 Lanczos SVD code (available on netlib) due to Berry was used in these experiments
 Running times of ~ one day on tens of thousands
of docs
 Dimensions – various values 250-350 reported
 (Under 200 reported unsatisfactory)
 Generally expect recall to improve – what about
precision?
Empirical evidence
 Precision at or above median TREC precision
 Top scorer on almost 20% of TREC topics
 Slightly better on average than straight vector
spaces
 Effect of dimensionality:

Dimensions Precision
250 0.367
300 0.371
346 0.374
Failure modes
 Negated phrases
 TREC topics sometimes negate certain query terms/phrases – automatic
  conversion of topics to Boolean queries
 As usual, the freetext/vector space syntax of LSI queries precludes (say)
  “Find any doc having to do with the following 5 companies”
 See Dumais for more.
But why is this clustering?
 We’ve talked about docs, queries, retrieval and
precision here.
 What does this have to do with clustering?
 Intuition: Dimension reduction through LSI brings
together “related” axes in the vector space.
Intuition from block matrices
 [Figure: an m-terms × n-documents matrix made up of k diagonal blocks of
  non-zero entries (Block 1, Block 2, ..., Block k), with 0’s everywhere else.]
 What’s the rank of this matrix?
Intuition from block matrices
 [Figure: the same block-diagonal m-terms × n-documents matrix as before.]
 Vocabulary partitioned into k topics (clusters); each doc discusses only one topic.
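A toy numpy/scipy illustration of this intuition (the block sizes are arbitrary assumptions): a strictly block-diagonal term-document matrix with k all-ones blocks has rank exactly k, i.e. only k non-zero singular values.

```python
import numpy as np
from scipy.linalg import block_diag

# Three "topics": each topic's terms occur only in that topic's documents.
topic1 = np.ones((4, 3))    # 4 terms x 3 docs
topic2 = np.ones((3, 2))    # 3 terms x 2 docs
topic3 = np.ones((2, 2))    # 2 terms x 2 docs
A = block_diag(topic1, topic2, topic3)

print(np.linalg.matrix_rank(A))                          # 3 (= k, one per topic)
print(np.round(np.linalg.svd(A, compute_uv=False), 3))   # only 3 non-zero values
```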
Intuition from block matrices
 [Figure: the same block-diagonal m-terms × n-documents matrix as before.]
 What’s the best rank-k approximation to this matrix?
Intuition from block matrices
 Likely there’s a good rank-k approximation to this matrix.
 [Figure: the same matrix, but now only approximately block-diagonal – a few
  non-zero entries appear outside the blocks. Block 1 contains terms such as
  wiper, tire and V6; the rows for car and automobile read (1, 0) and (0, 1).]
Simplistic picture
 [Figure: a simplistic picture of the latent space with three directions
  labelled Topic 1, Topic 2 and Topic 3.]
Some wild extrapolation
 The “dimensionality” of a corpus is the number of
distinct topics represented in it.
 More mathematical wild extrapolation:
 if A has a rank k approximation of low Frobenius
error, then there are no more than k distinct topics
in the corpus.
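As a rough numerical rendering of this extrapolation (a sketch on assumed synthetic data, not from the slides), one could take the smallest k for which the relative Frobenius error of Ak falls below a threshold as a crude estimate of the number of topics:

```python
import numpy as np

def estimated_topics(A, tol=0.10):
    """Smallest k whose best rank-k approximation has relative Frobenius
    error below tol -- a crude stand-in for 'number of distinct topics'."""
    s = np.linalg.svd(A, compute_uv=False)
    total = np.sum(s ** 2)
    # Relative error after keeping the k largest singular values.
    rel_err = np.sqrt(np.maximum(0.0, 1.0 - np.cumsum(s ** 2) / total))
    return int(np.argmax(rel_err <= tol)) + 1

# Example: a noisy matrix built from 3 random rank-1 "topics".
rng = np.random.default_rng(0)
A = sum(np.outer(rng.standard_normal(50), rng.standard_normal(30)) for _ in range(3))
A = A + 0.01 * rng.standard_normal((50, 30))
print(estimated_topics(A))    # expected: 3, since the noise is small
```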
LSI has many other applications
 In many settings in pattern recognition and
retrieval, we have a feature-object matrix.
 For text, the terms are features and the docs are
objects.
 Could be opinions and users … more in 276B.
 This matrix may be redundant in dimensionality.
 Can work with low-rank approximation.
 If entries are missing (e.g., users’ opinions), can
recover if dimensionality is low.
 Powerful general analytical technique
 Close, principled analog to clustering methods.
Resources
 http://www.cs.utk.edu/~berry/lsi++/
 http://lsi.argreenhouse.com/lsi/LSIpapers.html
 Dumais (1993) LSI meets TREC: A status report.
 Dumais (1994) Latent Semantic Indexing (LSI) and
TREC-2.
 Dumais (1995) Using LSI for information filtering:
TREC-3 experiments.
 M. Berry, S. Dumais and G. O'Brien. Using linear algebra for intelligent
  information retrieval. SIAM Review, 37(4):573-595, 1995.
