26 Matrix Factorization
Dong-Kyu Chae
[Figure] The rating matrix $R$ is approximated by two low-rank matrices: $U$, a low-rank matrix of user latent factors, and $V$, a low-rank matrix of item latent factors. Each rating is predicted as the dot product of a user latent vector and an item latent vector (e.g., $[a, b, c] \cdot x = 5$, $[a, b, c] \cdot x' = 3$).
❑ Perspective of the latent space: each user and each item is represented as a vector of latent factors
❑ How can we know such latent factor values?
❑ “Learn” the latent factors, $U$ and $V$, that could have generated the observed ratings
❑ All the latent factors in $U$ and $V$ are model parameters
❑ They are randomly initialized, and learned through an optimization technique.
Objective: $\min_{U, V} \sum_{(u,i) \in \text{observed}} \big(r_{ui} - U_u V_i^T\big)^2$
❑ Rating prediction with the fully learned $U$ and $V$
❑ We can reconstruct a “dense” rating matrix through $U \cdot V^T$
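The reconstruction step above can be sketched in a few lines of NumPy (a minimal illustrative example; the shapes and values below are made up, not from the slides):

```python
import numpy as np

# A minimal sketch: once U and V are learned, the dense rating
# matrix is reconstructed as U @ V.T. Shapes/values are illustrative.
m, n, k = 4, 5, 2                 # users, items, latent dimensions
rng = np.random.default_rng(0)
U = rng.normal(size=(m, k))       # user latent factors (m x k)
V = rng.normal(size=(n, k))       # item latent factors (n x k)

R_hat = U @ V.T                   # "dense" predicted rating matrix (m x n)

# A single prediction for user u and item i is the dot product U_u . V_i
u, i = 1, 3
print(R_hat.shape, np.isclose(R_hat[u, i], U[u] @ V[i]))
```

Reconstructing the full matrix and taking a single dot product give the same prediction; in practice one usually computes only the needed entries.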
❑ Objective function: minimizing the sum of squared errors
❑ Objective function with the regularization term:
$$L = \frac{1}{2}\sum_{(u,i)} e_{ui}^2 + \frac{\lambda}{2}\big(\|U_u\|^2 + \|V_i\|^2\big), \qquad e_{ui} = r_{ui} - U_u V_i^T$$
❑ Process: compute the partial derivatives of the loss (here $e_{ui}'$ denotes the derivative of $e_{ui}$ with respect to the parameter):
$$\frac{\partial L}{\partial U_u} = \frac{1}{2}\big(2\, e_{ui} \cdot e_{ui}' + 2\lambda U_u\big) = e_{ui} \cdot e_{ui}' + \lambda U_u$$
$$\frac{\partial L}{\partial V_i} = \frac{1}{2}\big(2\, e_{ui} \cdot e_{ui}' + 2\lambda V_i\big) = e_{ui} \cdot e_{ui}' + \lambda V_i$$
with $e_{ui}' = -V_i$ for $U_u$ and $e_{ui}' = -U_u$ for $V_i$; each parameter is then updated in the direction that drives $\frac{\partial L}{\partial U_u} = 0$.
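The gradient steps above can be sketched as a plain SGD loop (a minimal illustrative example, not the lecture's reference code; the toy ratings, learning rate, and regularization strength are made up):

```python
import numpy as np

# Minimal SGD for regularized MF: for each observed rating, compute
# the error e_ui and move U_u, V_i along the negative gradient.
rng = np.random.default_rng(42)
m, n, k = 5, 6, 2                      # users, items, latent dimensions
obs = [(0, 1, 4.0), (0, 3, 2.0), (1, 0, 5.0), (1, 3, 3.0),
       (2, 2, 3.0), (2, 5, 2.0), (3, 4, 1.0), (4, 5, 4.0)]

U = 0.1 * rng.normal(size=(m, k))      # user latent factors
V = 0.1 * rng.normal(size=(n, k))      # item latent factors
lr, lam = 0.05, 0.01                   # step size and lambda

for epoch in range(200):
    for u, i, r in obs:
        e = r - U[u] @ V[i]            # e_ui = r_ui - U_u . V_i
        u_old = U[u].copy()            # use the old U_u in V_i's update
        U[u] += lr * (e * V[i] - lam * U[u])   # step along -dL/dU_u
        V[i] += lr * (e * u_old - lam * V[i])  # step along -dL/dV_i

sse = sum((r - U[u] @ V[i]) ** 2 for u, i, r in obs)
print(round(sse, 4))                   # squared error on observed ratings
```

Note that only observed ratings contribute updates; the missing entries of the matrix are never touched, which is what lets MF "complete" them afterwards.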
Alternating Least Squares
❑ Objective: the same regularized sum of squared errors as before
❑ Procedure
1. Initialize $U$ and $V$ randomly.
2. For each item $i$, let $r_i$ be the vector of ratings of that item. Compute (here, $U$ is constant):
$$V_i^T = (U^T U + \lambda I)^{-1} U^T r_i \qquad (\text{repeat } i = 1 \text{ to } n)$$
3. For each user $u$, let $r_u$ be the vector of ratings of that user, and compute the analogous update with $V$ held constant.
❑ Derivation of step 2: when $U$ is constant,
$$L = (r_i - U V_i^T)^T (r_i - U V_i^T) + \lambda V_i V_i^T$$
Substituting $\beta = V_i^T$,
$$L = (r_i - U\beta)^T (r_i - U\beta) + \lambda \beta^T \beta$$
Let $\frac{\partial L}{\partial \beta} = 0$. Then,
$$\beta = V_i^T = (U^T U + \lambda I)^{-1} U^T r_i$$
❑ Partial derivative of the loss for the user step
❑ When $V$ is constant, with $\beta = U_u^T$,
$$L = (r_u - V\beta)^T (r_u - V\beta) + \lambda \beta^T \beta$$
Let $\frac{\partial L}{\partial \beta} = 0$. Then,
$$\beta = U_u^T = (V^T V + \lambda I)^{-1} V^T r_u \qquad (\text{repeat } u = 1 \text{ to } m)$$
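The two alternating closed-form solves can be sketched as follows (my own minimal implementation; for simplicity it assumes every entry of $R$ is observed, whereas real ALS restricts each solve to the observed entries of $r_i$ / $r_u$):

```python
import numpy as np

# Minimal ALS sketch: alternate the item step (U constant) and the
# user step (V constant), each a regularized least-squares solve.
rng = np.random.default_rng(0)
m, n, k, lam = 6, 5, 2, 0.1
R = rng.integers(1, 6, size=(m, n)).astype(float)  # toy dense ratings

U = rng.normal(size=(m, k))
V = rng.normal(size=(n, k))

for _ in range(20):
    # Item step: V_i^T = (U^T U + lam*I)^-1 U^T r_i,
    # solved here for all items i = 1..n at once.
    V = np.linalg.solve(U.T @ U + lam * np.eye(k), U.T @ R).T
    # User step: U_u^T = (V^T V + lam*I)^-1 V^T r_u,
    # solved for all users u = 1..m at once.
    U = np.linalg.solve(V.T @ V + lam * np.eye(k), V.T @ R.T).T

err = np.linalg.norm(R - U @ V.T)   # reconstruction error shrinks
print(round(float(err), 3))
```

Because the normal-equation matrix is only $k \times k$, each solve is cheap even when $m$ and $n$ are large, which is why ALS parallelizes well across users and items.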
Autoencoder for MF
❑ Same objective function, different model
❑ Matrix factorization => Autoencoder
❑ Two types of Autoencoders for recommendation
❑ user-based and item-based
❑ Users and items are represented as vectors
[Figure] A rating vector with missing entries (e.g., [x, 3, …, x, 5], where x marks a missing rating) is encoded into a learned embedding and decoded into a reconstructed rating vector.
❑ Reconstruction loss:
$$\mathcal{L}_{mse} = \sum_i (\mathbf{r}_i - \hat{\mathbf{r}}_i)^2 \cdot \mathbf{y}_i$$
where $\mathbf{r}_i$ are the actual ratings, $\hat{\mathbf{r}}_i$ the predicted ratings, and $\mathbf{y}_i \in \{0, 1\}$ an indicator of observed entries.
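The masked reconstruction loss above can be computed as follows (a minimal sketch; the variable names and toy ratings are mine):

```python
import numpy as np

# Masked MSE: the indicator y zeroes out missing entries so only
# observed ratings contribute, as in autoencoder-based recommenders.
r     = np.array([0.0, 3.0, 0.0, 5.0])   # actual ratings (0 = missing)
r_hat = np.array([2.5, 2.8, 4.1, 4.6])   # reconstructed ratings
y     = (r > 0).astype(float)            # indicator {0, 1} for observed

loss = np.sum(((r - r_hat) ** 2) * y)    # only observed entries count
print(round(float(loss), 2))
```

Without the mask, the model would be penalized for its predictions on missing entries, which are exactly the values it is supposed to fill in.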
Thank You