Homework 3: Least-Squares Khatri-Rao Factorization
Fortaleza - 2020
Contents
1 First part
2 Second part
In this homework we implement the Least-Squares Khatri-Rao Factorization (LSKRF)
algorithm, which solves the following problem:

(Â, B̂) = arg min_{A,B} ‖X − A ♦ B‖²_F
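To fix notation, the Khatri-Rao product A ♦ B is the column-wise Kronecker product. A minimal numeric illustration, here sketched in Python/NumPy rather than the report's MATLAB (the matrices below are arbitrary examples):

```python
import numpy as np
from scipy.linalg import khatri_rao

A = np.array([[1, 2],
              [3, 4]])          # 2 x 2
B = np.array([[0, 1],
              [1, 0],
              [2, 2]])          # 3 x 2

# Column r of A ♦ B is kron(A[:, r], B[:, r]), so the result is 6 x 2
X = khatri_rao(A, B)
print(X.shape)  # → (6, 2)
print(X[:, 0])  # → [0 1 2 0 3 6]
```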
1 First part
In the first part, we randomly choose A ∈ ℂ^{4×2} and B ∈ ℂ^{6×2}, such that X = A ♦ B ∈ ℂ^{24×2}.
We are supposed to estimate Â and B̂ using the LSKRF algorithm, which was implemented
in MATLAB as follows:
% Least-Squares Khatri-Rao Factorization function
% Author: Michel
% Course: Tensor Algebra
% Homework 3
function [As, Bs] = khatrisvd(X, M, N, K)
% Inputs:
%   - X: the input matrix
%   - M: number of rows of A
%   - N: number of rows of B
%   - K: number of columns of A and B
%
% Outputs:
%   - As: estimated matrix A
%   - Bs: estimated matrix B
%-------------------------------------------------------------------------
As = zeros(M, K);
Bs = zeros(N, K);

for k = 1:K
    % Each column of X is vec(b_k * a_k.'), so the dominant singular
    % triplet of the reshaped column yields the k-th columns of A and B.
    Xk = reshape(X(:, k), [N, M]);
    [U, S, V] = svd(Xk);
    As(:, k) = sqrt(S(1, 1)) * conj(V(:, 1));
    Bs(:, k) = sqrt(S(1, 1)) * U(:, 1);
end
end
This function receives X and the dimensions of A and B, and returns the estimated versions
Â and B̂.
We have also tested khatrisvd with the file krf_matrix.mat. In both cases, we have
observed that the reconstructed product X̂ = Â ♦ B̂ closely matches the original matrix X.
2 Second part
In the second part of this homework, we are supposed to perform 1000 Monte Carlo
experiments, generating X0 = A ♦ B ∈ ℂ^{IJ×R} from randomly chosen A ∈ ℂ^{I×R} and
B ∈ ℂ^{J×R}, with R = 4. Let X = X0 + αV be a noisy version of X0, where V is the
additive noise term. The parameter α controls the power (variance) of the noise term, and
is defined as a function of the signal-to-noise ratio (SNR), in dB, as follows
SNR_dB = 10 log10( ‖X0‖²_F / ‖αV‖²_F )    (9)
Since the matrices are randomly generated at each execution, we cannot guarantee that
‖X0‖²_F and ‖V‖²_F will be the same from run to run. To overcome this limitation,
we decided to normalize X0 and V such that ‖X0‖²_F = 1 and ‖V‖²_F = 1. It then follows
that
SNR_dB = 10 log10( ‖X0‖²_F / (α² ‖V‖²_F) )    (10)
       = 10 log10( 1 / α² )                   (11)
1 / α² = 10^(SNR_dB / 10)                     (12)
     α = √( 10^(−SNR_dB / 10) )               (13)
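The derivation above can be verified numerically. A small Python sketch (the helper name `alpha_from_snr` is illustrative) plugs eq. (13) back into eq. (11) and checks that the target SNR is recovered:

```python
import numpy as np

def alpha_from_snr(snr_db):
    """Noise scaling from eq. (13), assuming ||X0||_F = ||V||_F = 1."""
    return np.sqrt(10.0 ** (-snr_db / 10.0))

# Sanity check: substituting alpha into eq. (11) recovers the target SNR
for snr_db in [0, 5, 10, 15, 20, 25, 30]:
    a = alpha_from_snr(snr_db)
    recovered = 10 * np.log10(1.0 / a**2)
    assert abs(recovered - snr_db) < 1e-12
```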
Assuming the SNR range {0, 5, 10, 15, 20, 25, 30} dB, we find the estimates Â and
B̂ obtained with the LSKRF algorithm for the configurations (I, J) = (10, 10) and (I, J) =
(30, 10). These comparisons are made through the Normalized Mean Squared Error
(NMSE) of X̂ = Â ♦ B̂ with respect to X. The resulting NMSE curves are shown below.
As can be seen, the NMSE decreases for both curves as the SNR increases. We also
observe that the configuration (I = 30, J = 10) presents the best results.
From this behavior we may conclude that increasing the parameter I
improves the estimation performance. This difference can be explained by the fact
that in the first configuration we compute the SVD of square matrices, while in the second
one we compute the SVD of rectangular matrices.