Keywords: Measurement matrix optimization; Incoherent frame; Unit norm tight frame; Partial Fourier matrix; Compressed sensing (CS)

Abstract

This paper considers the problem of measurement matrix optimization for compressed sensing (CS), in which the dictionary is assumed to be given, such that the optimization leads to an effective sensing matrix. Because of the important property of equiangular tight frames (ETFs) of achieving the Welch bound with equality, measurement matrix optimization based on ETFs has received considerable attention and many algorithms have been proposed for this purpose. These methods produce a sensing matrix with low mutual coherence by initializing the measurement matrix with random Gaussian ensembles. This paper instead uses the incoherent unit norm tight frame (UNTF), an important class of frames with low mutual coherence, and proposes a new method to construct a measurement matrix of any dimension, with the measurement matrix initialized by a partial Fourier matrix. Simulation results show that the obtained measurement matrix effectively reduces the mutual coherence of the sensing matrix and converges to the Welch bound faster than other methods.
1. Introduction

Compressed sensing (CS) is a technique in signal processing for sampling sparse or compressible signals at a sub-Nyquist rate [1,2]. In recent years, CS has been widely studied in areas such as radar imaging [3], image and video processing [4], wireless communication [5] and coding theory [6]. Let a signal x of length n have a sparse representation in a known domain Ψ_{n×n}: x_{n×1} = Ψ_{n×n}·s_{n×1}, where Ψ is called the basis (dictionary) matrix. In this case, s is a k-sparse signal; i.e., it has at most k nonzero elements. By making m reduced-dimensional measurements (k < m < n), we have y_{m×1} = Φ_{m×n}·x_{n×1}, where Φ_{m×n} is referred to as the measurement matrix. Finally, the CS equation can be formed as

y_{m×1} = Φ_{m×n}·Ψ_{n×n}·s_{n×1}.   (1)

The product F ≜ Φ·Ψ is referred to as the sensing matrix. Eq. (1) is under-determined and generally has an infinite set of solutions. However, under certain conditions on the sensing matrix, it is possible to find a unique solution. The best-known condition on the sensing matrix for stable recovery of k-sparse signals in the presence of noise is the restricted isometry property (RIP) [7], which is a sufficient condition for a recovery guarantee. However, it is difficult to verify that a sensing matrix satisfies the RIP, because this requires a combinatorial search over all sub-matrices; certifying the RIP for an arbitrary matrix is therefore an NP-hard problem. Nevertheless, it has been shown that many random measurement matrices, such as Gaussian, Bernoulli and partial Fourier matrices, have a universal property for any basis matrix: the resulting random sensing matrices satisfy the RIP with overwhelming probability using m = O(k log(n/k)) measurements [2]. It is then possible to find a unique solution via the following optimization problem:

ŝ = arg min_s ‖s‖_0, s.t. y = Fs,   (2)

where ‖·‖_0 is the ℓ0 norm. The ℓ0 norm is non-convex and finding a solution of (2) is NP-hard [8]. However, there are a variety of reconstruction algorithms to solve this problem, such as greedy, optimization-based and statistical methods.

The construction of the measurement matrix with respect to the dictionary matrix is one of the main challenges in compressed sensing, and much research has shown the effectiveness of non-random measurement matrices compared with random ones [9–12]. Random matrices generated by various distributions have some drawbacks [13]:

• They require high storage space, since each realization of the random matrix must be stored and processed.
• They satisfy the RIP only with overwhelming probability, which is not a guarantee of recovering all sparse signals.
• They lead to high computational complexity in reconstruction when compared to non-random sensing matrices.
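As a concrete illustration of the model in Eq. (1) and the recovery problem in Eq. (2), the following minimal sketch (not taken from the paper; Python/NumPy, with an orthonormal DCT dictionary, a random Gaussian measurement matrix and arbitrary dimensions chosen purely for the example) builds the measurements and recovers the sparse vector with a basic orthogonal matching pursuit, one of the greedy methods mentioned above.

```python
import numpy as np
from scipy.fft import dct

def omp(F, y, k):
    """Basic orthogonal matching pursuit: greedily select k atoms of F."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(F.T @ residual))))
        coef, *_ = np.linalg.lstsq(F[:, support], y, rcond=None)
        residual = y - F[:, support] @ coef
    s_hat = np.zeros(F.shape[1])
    s_hat[support] = coef
    return s_hat

rng = np.random.default_rng(0)
n, m, k = 120, 25, 4                             # example dimensions (not the paper's defaults)
Psi = dct(np.eye(n), norm='ortho')               # dictionary: orthonormal DCT basis
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
F = Phi @ Psi                                    # sensing matrix F = Phi * Psi, Eq. (1)

s = np.zeros(n)                                  # k-sparse coefficient vector
s[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = F @ s                                        # compressed measurements

s_hat = omp(F, y, k)
print("relative error:", np.linalg.norm(s - s_hat) / np.linalg.norm(s))
```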
A way to start designing a good measurement matrix is frame theory. Frame theory has many different applications, including compressed sensing [14], sparse approximation [15], quantum information theory [16], and coding and communications [17]. Therefore, in this paper, we focus on measurement matrix optimization based on frame theory, while the reconstruction process is performed by the SL0 algorithm [18], because it is fast and robust to noise.

This paper is organized as follows: Section 2 describes some basic definitions related to frames along with a summary of previous work on incoherent frame design. Section 3 describes the proposed method for designing the measurement matrix based on an incoherent UNTF. Simulation and numerical results are presented in Section 4 and, finally, the conclusion is given in Section 5.

2. Frame theory background

In this section, the concept of frames, the key ingredient of measurement matrix optimization, is introduced. A set of n vectors {f_i}_{i=1}^{n} in the Hilbert space ℝ^m is a frame if, for every f ∈ ℝ^m,

α‖f‖_2^2 ⩽ Σ_{i=1}^{n} |⟨f, f_i⟩|^2 ⩽ β‖f‖_2^2,   (3)

where α and β are the lower and upper frame bounds, respectively, with 0 < α ⩽ β < ∞. The frame synthesis operator is defined as the matrix whose columns are the frame vectors {f_i}_{i=1}^{n}, that is, F = [f_1, …, f_n]. Usually, a frame is identified with its synthesis operator. When α = β, the frame is called an α-tight frame. If ‖f_i‖_2 = 1 for all i, we have a unit norm tight frame (UNTF), which exists only for α = n/m [19].

Definition ([14]). A matrix F_{m×n} = [f_1, f_2, …, f_n] with rows f̂_1, f̂_2, …, f̂_m is said to be an equiangular tight frame (ETF) in a Hilbert space ℋ if the following conditions are satisfied:

• The columns have unit norm: ‖f_i‖_2 = 1, i = 1, 2, …, n.
• The rows are orthogonal and have equal norm: ‖f̂_k‖_2 = √(n/m), k = 1, 2, …, m.
• The inner products between distinct columns are equal in modulus: |⟨f_i, f_j⟩| = c for all i ≠ j and some constant c ⩾ 0.

The mutual coherence μ(F) of a matrix F is the largest absolute normalized inner product between distinct columns,

μ(F) = max_{1⩽i<j⩽n} |⟨f_i, f_j⟩| / (‖f_i‖_2 ‖f_j‖_2),   (4)

and sensing matrices with small mutual coherence satisfy the RIP condition. The lower bound of μ(F) is known as the Welch bound (WB):

μ_WB = √((n − m)/(m(n − 1))).   (5)

This bound is achievable for ETFs [17]. For this reason, ETFs are sometimes called maximum Welch-bound-equality (WBE) sequences. In frame design theory, an important factor is the frame dimension, the so-called redundancy. Unfortunately, ETFs do not exist for all frame dimensions; they exist only for n ⩽ m(m + 1)/2 in the real case and n ⩽ m² in the complex case [22]. There are only a few general construction methods for ETFs, and ETF design is very difficult [23]. Therefore, frame design with low mutual coherence has been considered instead; such frames are referred to as incoherent frames. The idea of optimizing the measurement matrix with respect to the dictionary matrix, such that it leads to an effective sensing matrix, was first introduced by Elad in [12]. That algorithm tries to minimize the t-averaged mutual coherence between Φ and Ψ; however, it needs many iterations to achieve good performance. From the formulation of the Gram matrix and the mutual coherence, it follows that zero off-diagonal elements of the Gram matrix are desired. Based on this idea, Duarte-Carvajalino and Sapiro [24] and Abolghasemi et al. [25] make the Gram matrix as close as possible to the identity matrix through the following minimization problem:

min_F ‖G − I‖_∞²,   (6)

where ‖·‖_∞ is the infinity norm. However, G = I can only occur for m = n, which does not happen in the CS framework. In [26], another minimization problem has been proposed for compressed sensing radar (CSR) as follows:

min_F ‖G − G̃‖_F²,   (7)

where ‖·‖_F is the Frobenius norm and G̃ is a diagonal matrix containing the auto-correlation of each column of F. However, this method too is essentially based on the identity matrix, and the diagonal elements of the Gram matrix do not have a significant impact on the mutual coherence.
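As a numerical illustration of the quantities introduced above (a minimal sketch, not from the paper; the test matrix is an arbitrary random Gaussian frame), the mutual coherence of Eq. (4), the Welch bound of Eq. (5) and the UNTF conditions can be evaluated as follows.

```python
import numpy as np

def mutual_coherence(F):
    """Largest absolute normalized inner product between distinct columns, Eq. (4)."""
    Fn = F / np.linalg.norm(F, axis=0)        # normalize columns
    G = Fn.T @ Fn                             # Gram matrix of the normalized frame
    return np.max(np.abs(G - np.diag(np.diag(G))))

def welch_bound(m, n):
    """Lower bound on the mutual coherence of an m x n frame, Eq. (5)."""
    return np.sqrt((n - m) / (m * (n - 1)))

def is_untf(F, tol=1e-8):
    """Unit-norm columns and F F^T = (n/m) I, i.e. a unit norm tight frame."""
    m, n = F.shape
    unit_cols = np.allclose(np.linalg.norm(F, axis=0), 1.0, atol=tol)
    tight = np.allclose(F @ F.T, (n / m) * np.eye(m), atol=tol)
    return unit_cols and tight

rng = np.random.default_rng(0)
F = rng.standard_normal((15, 80))             # arbitrary example frame
print("mutual coherence:", mutual_coherence(F))
print("Welch bound     :", welch_bound(*F.shape))
print("UNTF?           :", is_untf(F))        # a random Gaussian matrix is not a UNTF
```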
[Figures: mutual coherence versus iteration and versus the number of measurements for the Elad, Xu, LZYCB and Tsiligianni methods, the proposed method and the Welch bound (WB).]
[Figure: histogram panels labelled Random, Proposed and Elad, each marked with an 'Optimal' value.]

Table 2
Mutual coherence values obtained by Elad [12], LZYCB [10], Tsiligianni [11] and the proposed method for a sensing matrix F ∈ ℝ^{15×n} with different values of n and 100 iterations.

n     Elad     LZYCB    Tsiligianni   Proposed   Welch bound
80    0.4412   0.7909   0.3110        0.2669     0.2342
85    0.4814   0.7519   0.3235        0.2757     0.2357
90    0.4920   0.8217   0.3336        0.2830     0.2370
95    0.5194   0.7594   0.3462        0.2920     0.2382
100   0.5324   0.7599   0.3483        0.3006     0.2392
105   0.5516   0.7906   0.3454        0.3066     0.2402
110   0.5682   0.7884   0.3620        0.3150     0.2410
115   0.5718   0.8157   0.3554        0.3223     0.2418
120   0.5718   0.7932   0.3579        0.3294     0.2425
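The Welch-bound column of Table 2 follows directly from Eq. (5) with m = 15; a quick check (illustrative only):

```python
import math

m = 15
for n in range(80, 125, 5):
    # Welch bound of Eq. (5): sqrt((n - m) / (m * (n - 1)))
    print(n, round(math.sqrt((n - m) / (m * (n - 1))), 4))
# e.g. n = 80 gives 0.2342 and n = 120 gives 0.2425, matching Table 2.
```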
[Fig. 4. Relative errors as a function of the sparsity order, where n = 120, p = 80, m = 25 and 150 iterations. Curves: Elad, Xu, Tsiligianni, Proposed, Random.]

[Fig. 7. The reconstruction SNR for different sparsity orders 1 < k < 20 using OMP for noisy measurements with an SNR of 15 dB, where the dimension of the random Gaussian matrix and the proposed matrix is 25 × 120. Curves: Proposed, Random.]
[Fig. 5. Relative errors as a function of the number of measurements using SL0, where n = 120, p = 80, m = 16:2:30 and 150 iterations for fixed sparsity order k = 4. Curves: Elad, Xu, Tsiligianni, Proposed, Random.]

[Fig. 8. The reconstruction SNR of a 5-sparse signal from its noisy measurements using OMP for various input SNRs, where the dimension of the random Gaussian matrix and the proposed matrix is 25 × 120. Curves: Proposed, Random.]
Each test signal s has n − k of its elements set to zero, and the location of the k non-zero elements (the support) is chosen uniformly at random. The sensing matrices F_{m×n} are generated according to their structures and the measurements are obtained by y_{m×1} = F_{m×n}·s_{n×1}. For each sensing matrix, the sampling and reconstruction process is averaged over 1000 different runs. The perfect recovery percentage is the ratio between the number of perfectly recovered signals and the total number of trials.
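A sketch of this kind of Monte Carlo evaluation is given below; it is illustrative only and not the paper's code: it uses scikit-learn's OMP solver rather than SL0, a random Gaussian stand-in for the sensing matrix, and arbitrary dimensions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
m, n, k, runs = 25, 120, 4, 1000           # example dimensions, not the paper's full setup

F = rng.standard_normal((m, n))            # stand-in sensing matrix
F /= np.linalg.norm(F, axis=0)             # unit-norm columns

recovered = 0
for _ in range(runs):
    s = np.zeros(n)
    support = rng.choice(n, k, replace=False)      # k-sparse signal with random support
    s[support] = rng.standard_normal(k)
    y = F @ s                                      # noiseless measurements

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    s_hat = omp.fit(F, y).coef_
    if np.linalg.norm(s - s_hat) / np.linalg.norm(s) < 1e-4:   # count "perfect" recoveries
        recovered += 1

print("perfect recovery percentage:", 100.0 * recovered / runs)
```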