Homework 3: Least-Squares Khatri-Rao Factorization

This document is a homework assignment on least-squares Khatri-Rao factorization (LSKRF) for estimating matrices A and B from their Khatri-Rao product X. It is divided into two parts: 1) applying LSKRF to sample data and verifying that the estimates match the original data; 2) performing Monte Carlo experiments with noisy data, comparing estimation accuracy for different signal-to-noise ratios and matrix dimensions. The results show that the estimation error decreases with higher SNR and that larger matrix dimensions improve accuracy.

Universidade Federal do Ceará

Graduate Program of the Department of Teleinformatics Engineering

Homework 3:
Least-Squares Khatri-Rao Factorization

Student: Michel Gonzaga dos Santos - 504504


Subject: Tensor Algebra
Professors: André Lima Ferrer de Almeida
Maryam Dehgan
Date: 21/12/2020

Fortaleza - 2020
Contents

1 First part

2 Second part
In this homework we implement the Least-Squares Khatri-Rao Factorization (LSKRF)
algorithm, which solves the following problem:

(Â, B̂) = arg min_{A,B} ‖X − A ♦ B‖²_F,    (1)

where A ∈ C^{I×R}, B ∈ C^{J×R}, and X = A ♦ B ∈ C^{IJ×R}.


This homework is divided into two parts.
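For reference, the Khatri-Rao (column-wise Kronecker) product used throughout can be sketched in NumPy; the function name `khatri_rao` and the use of Python rather than the assignment's MATLAB are illustrative choices:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: column k of the result is kron(A[:, k], B[:, k])."""
    I, K = A.shape
    J, K2 = B.shape
    assert K == K2, "A and B must have the same number of columns"
    # Broadcasting builds the (I, J, K) array A[i, k] * B[j, k];
    # reshaping stacks each I x J slice into a column of length I*J.
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, K)
```

Each column of the result has length I·J, matching X = A ♦ B ∈ C^{IJ×R} above.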

1 First part
In the first part, we randomly generate A ∈ C^{4×2} and B ∈ C^{6×2}, such that X = A ♦ B ∈ C^{24×2}.
We then estimate Â and B̂ using the LSKRF algorithm, implemented in MATLAB as follows:
% Least-Squares Khatri-Rao Factorization function
% Author: Michel
% Course: Tensor Algebra
% Homework 3
function [As, Bs] = khatrisvd(X, M, N, K)
% Inputs:
%  - X: The initial matrix
%  - M: Number of rows of A
%  - N: Number of rows of B
%  - K: Number of columns of A and B
%
% Outputs:
%  - As: Estimated matrix A
%  - Bs: Estimated matrix B
%-------------------------------------------------------------------------
As = zeros(M, K);
Bs = zeros(N, K);

for k = 1:K
    % Unvectorize the k-th column of X into the rank-1 matrix B(:,k)*A(:,k).'
    Xk = reshape(X(:, k), [N, M]);
    % Best rank-1 approximation via the dominant singular triplet
    [U, S, V] = svd(Xk);
    As(:, k) = sqrt(S(1, 1)) * conj(V(:, 1));
    Bs(:, k) = sqrt(S(1, 1)) * U(:, 1);
end
end
This function receives X and the dimensions of A and B, and returns the estimates
Â and B̂. We have also tested khatrisvd with the file krf_matrix.mat. In both cases, we
have observed that

Â_{:,r} = α_r A_{:,r},    (2)

B̂_{:,r} = α_r^{-1} B_{:,r},    (3)

such that

(Â ♦ B̂)_{:,r} = Â_{:,r} ⊗ B̂_{:,r}    (5)
              = α_r A_{:,r} ⊗ α_r^{-1} B_{:,r}    (6)
              = A_{:,r} ⊗ B_{:,r}    (7)
              = (A ♦ B)_{:,r},    (8)

for r = 1, . . . , R, where α_r stands for a scaling factor.
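This scaling ambiguity can be checked numerically. Below is a hedged NumPy sketch of the same per-column SVD procedure (an illustrative translation of khatrisvd above; since NumPy's reshape is row-major, each column is unvectorized here into outer(A[:, k], B[:, k]) rather than its transpose):

```python
import numpy as np

def lskrf(X, M, N):
    """Least-squares Khatri-Rao factorization of an (M*N, K) matrix X."""
    K = X.shape[1]
    A = np.zeros((M, K), dtype=complex)
    B = np.zeros((N, K), dtype=complex)
    for k in range(K):
        # Row-major unvec: X[:, k] = kron(A[:, k], B[:, k]) becomes outer(A[:, k], B[:, k]).
        Xk = X[:, k].reshape(M, N)
        U, s, Vh = np.linalg.svd(Xk)
        # Best rank-1 approximation: split sqrt(s[0]) between the two factors.
        A[:, k] = np.sqrt(s[0]) * U[:, 0]
        B[:, k] = np.sqrt(s[0]) * Vh[0, :]
    return A, B
```

Each estimated column equals the true one up to a scalar α_r, and the two scalings cancel in the Khatri-Rao product, as in equations (2)-(8).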

2 Second part
In the second part of this homework, we perform 1000 Monte Carlo
experiments, generating X0 = A ♦ B ∈ C^{IJ×R} from randomly chosen A ∈ C^{I×R} and
B ∈ C^{J×R}, with R = 4. Let X = X0 + αV be a noisy version of X0, where V is the
additive noise term. The parameter α controls the power (variance) of the noise term and
is defined as a function of the signal-to-noise ratio (SNR), in dB, as follows:

SNR_dB = 10 log₁₀( ‖X0‖²_F / ‖αV‖²_F ).    (9)
Since the matrices are randomly generated at each execution, we cannot guarantee that
‖X0‖²_F and ‖V‖²_F will be the same in every run. To overcome this limitation,
we normalize X0 and V such that ‖X0‖²_F = 1 and ‖V‖²_F = 1. It then follows
that
SNR_dB = 10 log₁₀( (1/α²) ‖X0‖²_F / ‖V‖²_F )    (10)
       = 10 log₁₀( 1/α² ),    (11)

1/α² = 10^{SNR_dB/10},    (12)

α = sqrt( 10^{−SNR_dB/10} ) = 10^{−SNR_dB/20}.    (13)
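The noise-scaling step of equations (9)-(13) can be sketched as follows (the dimensions, RNG seed, and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, R = 10, 10, 4

# Stand-ins for X0 = khatri_rao(A, B) and the noise term V
X0 = rng.standard_normal((I * J, R)) + 1j * rng.standard_normal((I * J, R))
V = rng.standard_normal((I * J, R)) + 1j * rng.standard_normal((I * J, R))
X0 /= np.linalg.norm(X0, 'fro')  # enforce ||X0||_F = 1
V /= np.linalg.norm(V, 'fro')    # enforce ||V||_F = 1

snr_db = 20
alpha = 10 ** (-snr_db / 20)     # alpha = sqrt(10^(-SNR_dB / 10)), eq. (13)
X = X0 + alpha * V               # noisy observation
```

With the unit-norm normalization, the empirical SNR of X matches the target exactly.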

Assuming the SNR_dB range [0, 5, 10, 15, 20, 25, 30] dB, we find the estimates Â and
B̂ obtained with the LSKRF algorithm for the configurations (I, J) = (10, 10) and (I, J) =
(30, 10). These comparisons are made through the Normalized Mean Squared Error
(NMSE) of X̂ = Â ♦ B̂ with respect to X. The resulting NMSE curves are shown below.
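The NMSE metric used for this comparison can be sketched as (the function name `nmse` is an illustrative choice):

```python
import numpy as np

def nmse(X_hat, X):
    """Normalized mean squared error of X_hat with respect to X (Frobenius norm)."""
    return np.linalg.norm(X_hat - X, 'fro') ** 2 / np.linalg.norm(X, 'fro') ** 2
```

Over the 1000 Monte Carlo runs, the per-run NMSE values are averaged for each SNR point.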

As can be seen, the NMSE decreases for both curves as the SNR increases; we also
observe that the configuration (I = 30, J = 10) presents the best results.
From this behavior we may conclude that increasing the dimension I improves the
estimation performance. This difference can be explained by the fact
that in the first configuration we compute the SVD of square matrices, while in the
second one we compute the SVD of rectangular matrices.
