
2018 26th European Signal Processing Conference (EUSIPCO)

Incoherent Projection Matrix Design for Compressed Sensing Using Alternating Optimization
Meenakshi∗ and Seshan Srirangarajan∗†
∗ Department of Electrical Engineering
† Bharti School of Telecommunication Technology and Management
Indian Institute of Technology Delhi, New Delhi, India

Abstract—In this paper we address the design of the projection matrix for compressed sensing. In most compressed sensing applications, random projection matrices have been used, but it has been shown that optimizing these projections can greatly improve the sparse signal reconstruction performance. An incoherent projection matrix can greatly reduce the recovery error in sparse signal reconstruction. With this motivation, we propose an algorithm for the construction of an incoherent projection matrix with respect to a designed equiangular tight frame (ETF) for reducing pairwise mutual coherence. The designed frame consists of a set of column vectors in a finite-dimensional Hilbert space with the desired norm and reduced pairwise mutual coherence. The proposed method is based on updating the ETF with an inertial force and constructing the incoherent frame and projection matrix using alternating minimization. We compare the performance of the proposed algorithm with state-of-the-art projection matrix design algorithms via numerical experiments, and the results show that the proposed algorithm outperforms the other algorithms.

Index Terms—Compressed sensing, projection matrix, mutual coherence, equiangular tight frame.

I. INTRODUCTION

Compressed sensing (CS) has generated a lot of research interest in the signal and image processing communities since its introduction [1]–[3]. CS provides an alternative to the Shannon-Nyquist sampling theorem via a single-step compression and sampling scheme. It has gained popularity due to its ability to recover a high-dimensional signal from significantly fewer measurements than the number of ambient signal measurements required in conventional schemes. Compressed sensing allows us to exploit the sparse structure of the signal or underlying phenomena by capturing incoherent measurements using a projection (or sensing) matrix [4]. CS provides the mathematical framework for reconstructing a signal x ∈ ℝ^{N×1} from linear measurements y ∈ ℝ^{M×1} acquired through a projection matrix Φ ∈ ℝ^{M×N} [5]:

    y = Φx    (1)

where M ≪ N. We would like to reconstruct the signal x from the projections y; however, since (1) is an underdetermined system, there are infinitely many signals x which satisfy (1). We need to introduce an additional constraint on x to be able to solve (1) for a unique x. The CS system exploits the sparse structure of the underlying phenomena:

    x = Σ_{i=1}^{L} θ_i ψ_i = Ψθ    (2)

where Ψ ∈ ℝ^{N×L} is the (sparsifying) transform basis and θ is the vector of sparse coefficients. Using (1) and (2), the measurement vector can be expressed as:

    y = Φx = ΦΨθ = Dθ    (3)

where D ≜ [d_1, d_2, ..., d_L] ∈ ℝ^{M×L} is an overcomplete frame or dictionary with L ≫ M. With an overcomplete frame D, the vector θ is typically not unique for a given measurement vector y [6]. The additional sparsity constraint on x thus plays an important role. The signal x is said to be sparse if most of the coefficients of θ are zero, and the sparse signal is said to be K-sparse if the number of non-zero coefficients is K, also known as the sparsity level of the signal. With the sparsity assumption, the reconstruction problem can be formulated as:

    min_θ ‖θ‖_0  s.t.  y = Dθ = ΦΨθ    (4)

which is NP-hard. Greedy algorithms, such as orthogonal matching pursuit (OMP) [7], matching pursuit (MP) [8], generalized OMP [9], and others [10], can under certain conditions solve for θ (with theoretical guarantees) and recover x. The early work in CS relied on random projection matrices, but it has been shown that an appropriately designed projection matrix offers better signal recovery from the under-sampled measurements [4]. The spark of a matrix is defined as the smallest number of linearly dependent columns, and it yields a guarantee of uniqueness of the sparse solution provided ‖θ‖_0 < spark(D)/2 [11]. In other words, a larger value of spark results in a larger signal space for exact sparse recovery, which in turn implies the need to design a projection matrix with maximized spark. However, this is a computationally intensive task, and hence CS systems rely on projection matrix design with reduced pairwise mutual coherence µ(D), which will be introduced in the next section.
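To make the measurement model (1)–(3) and the recovery problem (4) concrete, a minimal Python/NumPy sketch is given below: it draws a sparsifying basis and a random Gaussian projection, forms the measurements, and recovers the signal with a small OMP routine. The dimensions, basis choice, and sparsity level here are illustrative choices and not the experimental setup of Section IV.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, M, K = 80, 100, 25, 5   # K kept small so this toy recovery succeeds reliably

# Sparsifying basis Psi with unit-norm columns and a random Gaussian projection Phi
Psi = rng.standard_normal((N, L))
Psi /= np.linalg.norm(Psi, axis=0)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
D = Phi @ Psi                                   # effective frame of eq. (3)

# K-sparse coefficient vector theta, signal x = Psi theta, measurements y = Phi x
theta = np.zeros(L)
support = rng.choice(L, size=K, replace=False)
theta[support] = rng.standard_normal(K)
x = Psi @ theta
y = Phi @ x

def omp(D, y, K):
    """Minimal orthogonal matching pursuit for the problem in eq. (4)."""
    L = D.shape[1]
    col_norms = np.linalg.norm(D, axis=0)
    support, residual = [], y.copy()
    for _ in range(K):
        corr = np.abs(D.T @ residual) / col_norms   # normalized correlation with residual
        idx = int(np.argmax(corr))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)   # least-squares refit
        residual = y - D[:, support] @ coef
    theta_hat = np.zeros(L)
    theta_hat[support] = coef
    return theta_hat

x_hat = Psi @ omp(D, y, K)
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```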


In this paper, we focus on designing an incoherent projection matrix for improved recovery performance in CS systems. This is achieved by designing an equiangular tight frame (ETF) for the corresponding Gram matrix and updating the frame so as to reduce the mutual coherence. The matrices are updated iteratively through the target ETF, using the weighted distance between previous iterations as an inertial force. Numerical experiments were performed to evaluate the recovery performance of the proposed algorithm, and the results demonstrate that our method has a better overall performance compared to the state-of-the-art projection matrix design algorithms.

The rest of the paper is organized as follows. In Section II, we present some preliminaries, including the metrics used for evaluating the CS performance, followed by a discussion of related work. Section III presents the proposed incoherent frame and projection matrix design method. Simulation results are presented in Section IV demonstrating the performance of the proposed algorithm. Finally, concluding remarks are presented in Section V.

II. BACKGROUND

A. Preliminaries

The mutual coherence of a frame D, represented by µ(D), is defined as the largest absolute normalized inner product between distinct columns of the frame [11], [12]:

    µ(D) = max_{1≤i,j≤L, i≠j}  |d_i^T d_j| / (‖d_i‖_2 ‖d_j‖_2)    (5)

Given a frame D, a K-sparse signal x can be recovered from the measurements y using (4) provided the following is satisfied [11]:

    K < (1/2) (1 + 1/µ(D))    (6)

Mutual coherence as a metric considers sparse representation and recovery performance from a worst-case perspective, as it reflects the extreme pairwise correlation in the frame, which can be misleading. However, it has the capability to capture the behavior of uniform dictionaries and is easy to compute [11]. In the Gram matrix G = D^T D = Ψ^T Φ^T ΦΨ, the (i, j)th element is represented as g_ij = d_i^T d_j, where we assume D to be the normalized effective frame. In addition to the mutual coherence defined in (5), an alternative coherence metric that can be used to evaluate the recovery efficiency of the measurement matrix is the t-averaged mutual coherence µ_t for a given coherence threshold t, defined in (7). µ and µ_t represent the maximum and averaged values of the off-diagonal elements of the Gram matrix G, respectively. These two coherence parameters will be used as performance measures for the projection matrix design in this paper.

    µ_t(D) = [ Σ_{1≤i,j≤L, i≠j} 1(|g_ij| ≥ t) · |g_ij| ] / [ Σ_{1≤i,j≤L, i≠j} 1(|g_ij| ≥ t) ]    (7)

We employ the concept of tight frames in order to design the projection matrix while minimizing the mutual coherence. A finite frame for the Hilbert space ℝ^{M×1} is defined as a set of atoms (or columns d_k of a matrix D ∈ ℝ^{M×L}) that satisfies the frame (Parseval-type) condition α‖v‖_2^2 ≤ ‖D^T v‖_2^2 ≤ β‖v‖_2^2, ∀ v ∈ ℝ^M, where α and β are positive constants. If these constants are equal, i.e., α = β, then D is known as an α-tight frame, and if α = β = 1 then D is known as a unit norm tight frame (UNTF). The Welch bound on mutual coherence µ_W, given in (8), is a lower bound on the maximum pairwise correlation between the frame atoms and can be achieved with an ETF, as it has the minimum mutual coherence. The UNTF with the minimum mutual coherence among all frames of the same dimension is called a Grassmannian frame [13].

    µ(D) ≥ µ_W ≜ √[ (L − M) / (M(L − 1)) ]    (8)

B. Related Work

Next, we briefly discuss some of the key frameworks in the literature for the design and optimization of the projection matrix. The first work on optimization of the projection matrix was the shrinkage scheme proposed by Elad [4]. Since CS recovery based on the mutual coherence (5) does not reflect the average signal reconstruction performance, Elad proposed optimizing the projection matrix based on the t-averaged mutual coherence (7). However, the method in [4] is computationally intensive, and the shrinkage function introduces some large values as off-diagonal elements of the Gram matrix that were not present earlier. Due to these large-magnitude off-diagonal elements in the Gram matrix, the worst-case guarantees of the recovery algorithms no longer hold. A different shrinkage function was introduced in [14] for projecting the Gram matrix onto a non-empty convex set by reducing the off-diagonal elements towards the Welch bound on mutual coherence. The authors in [15] propose a method for constructing D by making any subset of its columns as orthogonal as possible, or equivalently minimizing the difference between G and the identity matrix (the simplest ETF). The sensing matrix Φ for a fixed Ψ is computed by reducing the largest M components of the error matrix. This method is non-iterative, with significant computational improvements over Elad's method but only a slight reduction in the reconstruction error. Zelnik-Manor et al. [16] introduced an optimized measurement matrix design based on block-sparse representations and its application to block-sparse decoding. They obtained the weighted surrogate function given in (9):

    ‖D^T D − I‖_F^2 = Σ_{j=1}^{B} Σ_{i≠j} ‖D[i]^T D[j]‖_F^2 + Σ_{j=1}^{B} ‖D[j]^T D[j] − I‖_F^2    (9)

where D is represented as a concatenation of B column-blocks D[j]. The first term on the right-hand side (RHS) of (9) is the total interblock coherence and the second term is the diagonal penalty [16]. In [17], a randomly initialized sensing matrix is optimized using gradient-descent-based alternating minimization, resulting in a matrix with lower coherence than the initial one.
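Before moving to the proposed design, the coherence metrics of Section II-A are straightforward to compute. The following Python/NumPy sketch evaluates the mutual coherence (5), the t-averaged mutual coherence (7), and the Welch bound (8) for an effective frame D = ΦΨ; the function name and the example dimensions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def coherence_metrics(D, t=0.2):
    """Mutual coherence (5), t-averaged mutual coherence (7), and Welch bound (8)."""
    M, L = D.shape
    Dn = D / np.linalg.norm(D, axis=0)           # normalize columns of the effective frame
    G = Dn.T @ Dn                                # Gram matrix of the normalized frame
    off = np.abs(G[~np.eye(L, dtype=bool)])      # absolute off-diagonal entries g_ij, i != j
    mu = off.max()                               # eq. (5)
    mask = off >= t
    mu_t = off[mask].mean() if mask.any() else 0.0   # eq. (7)
    welch = np.sqrt((L - M) / (M * (L - 1)))     # eq. (8)
    return mu, mu_t, welch

# Example with a random Gaussian projection and a random (column-normalized) basis
rng = np.random.default_rng(1)
N, L, M = 80, 100, 25
Psi = rng.standard_normal((N, L))
Phi = rng.standard_normal((M, N))
mu, mu_t, welch = coherence_metrics(Phi @ Psi, t=0.4)
print(f"mu = {mu:.3f}, mu_t = {mu_t:.3f}, Welch bound = {welch:.3f}")
```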


III. PROBLEM FORMULATION

An incoherent frame D is designed with respect to the updated ETF in each iteration for the given or learned sparsifying basis Ψ, which is similar to minimizing the off-diagonal elements of the Gram matrix G. Hence, the cost function for projection matrix design in (10) aims to reduce the mutual coherence by designing an ETF and then updating D and Φ at each iteration based on the designed ETF:

    ‖D^T D − I‖_F^2 = ‖Ψ^T Φ^T ΦΨ − I‖_F^2    (10)

In this paper, the Gram matrix G = D^T D is not optimized directly with respect to the identity matrix but with respect to a designed ETF at each iteration. Here, the updated G will be close to the corresponding ETF designed by the algorithm and contained in the convex set H_µ:

    H_µ ≜ {H ∈ ℝ^{L×L} : H = H^T, H_ii = 1, max_{i≠j} |H_ij| ≤ µ_W}

For designing an incoherent frame D, the cost function in (10) is reformulated and posed as the following minimization:

    min_D ‖E‖_F^2 + P_Π(D)    s.t.  E = D^T D − H    (11)

where H ∈ H_µ, and P_Π(D) denotes the projection of D onto a convex set Π which regulates its column norms, given by:

    P_Π(D) : {d_i}_{i=1}^{L} = { d_i,             if ‖d_i‖_2 < 1
                                 d_i / ‖d_i‖_2,   otherwise.

For updating the ETF H, we reduce the larger off-diagonal elements by projecting G onto H_µ at each iteration and then updating D. We use µ_W as the threshold for updating H. Let E_p = D_k^T D_k − I at the pth outer iteration; then, after projecting E onto H_µ, we obtain the ETF. This is achieved by constraining the off-diagonal elements of E using the shrinkage function S_{Ω_µ}:

    S_{Ω_µ}(E) : E_ij (i≠j) = { E_ij,               if |E_ij| ≤ µ_W
                                sign(E_ij) · µ_W,   otherwise    (12)

At each iteration, this ensures that the large off-diagonal elements will be reduced in magnitude. Unlike other algorithms which project E onto the set of ETFs via a shrinkage function, here we apply the shrinkage on the weighted difference of the off-diagonal elements, which is an estimate of the distance between the updated E and the corresponding ETF. The update scheme can be accelerated by incorporating an inertial force, computed as the weighted difference of the estimates of E from iterations p and (p − 1):

    E_p = D_k^T D_k − I
    H_p = E_p + w_1 (E_p − E_{p−1})    (13)

where w_1 ≥ 0 is a weighting parameter. At each iteration, E_p is computed using the current distance between the Gram matrix and the ETF along with the inertial force:

    H_{p+1} = S_{Ω_µ}(H_p) + I    (14)
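A minimal Python/NumPy sketch of this ETF update step, eqs. (12)–(14), is shown below. Treating S_{Ω_µ} as a clipping of the off-diagonal magnitudes and initializing E_{p−1} to zero at the first iteration are assumptions made for illustration; the paper does not fix these details.

```python
import numpy as np

def shrink_offdiag(E, mu_w):
    """Shrinkage S_Omega_mu of eq. (12): clip entries to [-mu_w, mu_w] and zero the
    diagonal (the unit diagonal is restored by the +I term in eq. (14))."""
    H = np.clip(E, -mu_w, mu_w)
    np.fill_diagonal(H, 0.0)
    return H

def etf_update(D_k, E_prev, mu_w, w1):
    """Inertial ETF update of eqs. (13)-(14) for the current frame estimate D_k."""
    L = D_k.shape[1]
    E_p = D_k.T @ D_k - np.eye(L)                    # eq. (13): deviation of the Gram matrix from I
    H_p = E_p + w1 * (E_p - E_prev)                  # inertial force from iterations p and p-1
    H_next = shrink_offdiag(H_p, mu_w) + np.eye(L)   # eq. (14): unit diagonal, shrunk off-diagonal
    return H_next, E_p
```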
A. Incoherent Frame Design

For updating D and Φ, we apply the alternating minimization method to the updated H. In [18], the authors have shown different methods for designing an incoherent projection matrix corresponding to an ETF using alternating minimization. At each iteration, D is updated corresponding to the ETF H via the gradient method with an inertial force. Let us denote the objective function for D by J(D) = ‖D^T D − H‖_F^2, whose derivative with respect to D is ∇_D J = 4D(D^T D − H). The gradient update for D is then:

    D_{k+1} = D_k − η D_k (D_k^T D_k − H_{p+1})    (15)

Similar to (13), the update step for D in (15) can be modified to use an inertial force by taking a weighted difference of the estimates of D from iterations k and (k − 1):

    D_{k+1/2} = D_k − η D_k (D_k^T D_k − H_{p+1})
    D_{k+1} = P_Π( D_{k+1/2} + w_2 (D_k − D_{k−1}) )    (16)

where η is the step size and w_2 ≥ 0 is a weighting parameter. Using the updated D, we update Φ by solving:

    Φ_{k+1} = arg min_Φ ‖D_{k+1} − ΦΨ‖_F^2    (17)

In each iteration, the algorithm alternately updates H_{p+1} using (14), followed by updating D_{k+1} and Φ_{k+1} using (15) and (17), respectively. The alternating minimization is repeated for a few iterations until convergence is achieved. The projection matrix design algorithm is summarized in Algorithm 1 below.

Algorithm 1: Incoherent projection matrix design (IPMD) using alternating optimization
1  Objective: Design an incoherent projection matrix.
2  Input: Sparsifying basis Ψ, weighting parameters w_1, w_2, step size η, number of iterations N_total, N_outer, and N_inner.
3  Initialization: Initialize Φ to a random matrix.
4  while i < N_total do
5      for p = 1 : N_outer do
6          Update ETF H_{p+1} using (14)
7          for k = 1 : N_inner do
8              Update D_{k+1} using (15)
9              Update Φ_{k+1} using (17)
10         end
11         Φ_{k+1} = Φ_{N_inner}
12     end
13     i = i + 1
14 end
15 Output: Projection matrix Φ_output

In the proposed incoherent projection matrix design (IPMD) algorithm, the shrinkage function S_{Ω_µ} is applied only to the off-diagonal elements and is updated with an inertial force. The diagonal elements of H remain unity during the update step (14), due to which a normalization step is not required.
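The sketch below gathers the frame and projection updates (15)–(17) into the alternating loop of Algorithm 1 (Python/NumPy). It is one plausible reading rather than the authors' reference implementation: the order of updates, the initialization of D and E_{p−1}, and the least-squares solve for Φ are assumptions, and the inertial ETF step is inlined from the sketch given earlier in this section.

```python
import numpy as np

def project_columns(D):
    """P_Pi of eq. (11): rescale any column with norm of at least 1 back to unit norm."""
    norms = np.linalg.norm(D, axis=0)
    return D / np.where(norms < 1.0, 1.0, norms)

def ipmd(Psi, M, mu_w, w1=0.8, w2=0.8, eta=0.09,
         n_total=100, n_outer=15, n_inner=3, seed=0):
    """Rough sketch of Algorithm 1 (IPMD): alternate ETF, frame, and projection updates."""
    rng = np.random.default_rng(seed)
    N, L = Psi.shape
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random initialization of Phi
    D = project_columns(Phi @ Psi)
    D_prev = D.copy()
    E_prev = np.zeros((L, L))
    for _ in range(n_total):
        for _ in range(n_outer):
            # eqs. (13)-(14): inertial ETF update with off-diagonal shrinkage
            E = D.T @ D - np.eye(L)
            H = np.clip(E + w1 * (E - E_prev), -mu_w, mu_w)
            np.fill_diagonal(H, 1.0)
            E_prev = E
            for _ in range(n_inner):
                # eqs. (15)-(16): gradient step towards H, inertial term, column projection
                D_half = D - eta * D @ (D.T @ D - H)
                D_new = project_columns(D_half + w2 * (D - D_prev))
                D_prev, D = D, D_new
                # eq. (17): least-squares fit of Phi to the updated frame, D ~ Phi Psi
                Phi = np.linalg.lstsq(Psi.T, D.T, rcond=None)[0].T
    return Phi
```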


The step size η and weighting parameters w_1, w_2 were determined empirically. With w_1 = w_2 = 0.80, incoherent frames were obtained, and a step size η = 0.09 is used to avoid divergence. For updating Φ, N_inner = 3 gives good results. We set the number of iterations N_total = 100 for stopping the update process. However, other stopping criteria can also be used; for example, the algorithm may iterate until the change in the cost function value is less than a certain threshold or until D achieves a predefined coherence threshold.

IV. SIMULATION RESULTS

In this section, we illustrate the performance of the proposed IPMD algorithm and compare it with some of the key projection matrix design methods from the literature, including the random Gaussian matrix, Elad [4], the method in [16] referred to as "Zelnik-Manor-Rosenblum-Eldar" (ZMRE), the gradient-descent-based algorithm in [17] referred to as "Abolghasemi-Ferdowsi-Sanei" (AFS), and [15] referred to as "Duarte-Carvajalino-Sapiro" (DCS). We compare these methods via an extensive set of numerical experiments for 100 iterations each and, for IPMD, N_outer = 15. The algorithm parameters are kept fixed across the entire set of experiments. The initial random matrix is generated for N = 80, L = 100, M = 25 with sparsity level K = 15. For Elad's algorithm, the other parameters are t = 0.4 and γ = 0.95. In Fig. 1, the histogram of the absolute correlation between atoms of the frame is presented. It illustrates the distribution of the absolute off-diagonal values of the normalized Gram matrix for each algorithm. It is noted that the histogram for the IPMD algorithm displays a shift towards the left (or origin), which indicates a reduction in the pairwise correlation.

Fig. 1: Histogram of the absolute off-diagonal values of the optimized Gram matrix (N = 80, L = 100, and M = 25) for the random Gaussian, Elad [4], DCS [15], ZMRE [16], AFS [17], and proposed IPMD matrices.

Next, we consider the t-averaged mutual coherence µ_t and the cumulative coherence as performance metrics, since these are better measures of average recovery performance than the mutual coherence µ. The cumulative coherence measures the maximum total mutual coherence between a fixed set of atoms and a collection of other atoms in the dictionary, and thus provides better insight. The evolution of these two performance parameters is shown in Fig. 2. It is seen that the proposed IPMD algorithm results in significantly smaller values of µ_t and cumulative coherence.

Fig. 2: Evolution of coherence parameters for N = 80, L = 100, M = 25, and t = µ_W = 0.4. (a) Evolution of the t-averaged mutual coherence µ_t. (b) Evolution of the cumulative coherence.

As seen from Table I, CS_IPMD has the lowest averaged mutual coherence, while its ‖I − G‖_F is comparable to that of CS_ZMRE. However, IPMD yields an improved performance in terms of the signal reconstruction accuracy. For evaluating the recovery performance, we performed two sets of experiments. Using a learned or given transform basis Ψ, we generate a set of N_s = 1500 signals with coefficient vectors θ_j (j = 1, ..., N_s) which are K-sparse. Measurement vectors y_j are computed for these signals using the Φ designed by each of the above-mentioned algorithms with a fixed sparsifying basis Ψ. The OMP algorithm is used for recovering the sparse vectors from the measurements using (4). The average reconstruction error is computed as:

    e_r = (1/N_s) Σ_{j=1}^{N_s} ‖x_j − x̂_j‖_2^2 / ‖x_j‖_2^2    (18)

where x̂_j is the recovered sparse signal.
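This evaluation protocol can be sketched as follows (Python/NumPy). The signal generation and the generic solver interface are illustrative assumptions; for example, the minimal OMP routine sketched in Section I could be passed in as the `recover` argument.

```python
import numpy as np

def average_reconstruction_error(Phi, Psi, recover, K, n_signals=1500, seed=2):
    """Average relative reconstruction error of eq. (18) over K-sparse test signals.
    `recover(D, y, K)` is any sparse solver returning a coefficient vector theta_hat."""
    rng = np.random.default_rng(seed)
    N, L = Psi.shape
    D = Phi @ Psi                                   # effective frame of eq. (3)
    err = 0.0
    for _ in range(n_signals):
        theta = np.zeros(L)
        support = rng.choice(L, size=K, replace=False)
        theta[support] = rng.standard_normal(K)     # random K-sparse coefficients
        x = Psi @ theta
        y = Phi @ x                                 # measurements through the designed Phi
        x_hat = Psi @ recover(D, y, K)              # recovery via eq. (4)
        err += np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
    return err / n_signals                          # eq. (18)
```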


We first study the reconstruction performance as a function of the sparsity level. Fig. 3 shows that as the signal becomes less sparse (higher K), the reconstruction error increases gradually. Next, we study the reconstruction performance as a function of the number of measurements. In Fig. 4, we see that the recovery performance improves as the number of measurements M increases. Fig. 3 and Fig. 4 illustrate the improvement in recovery performance when using a projection matrix Φ optimized with IPMD. In addition, the performance of the proposed IPMD algorithm is consistently better than that of the other methods when using the OMP algorithm for reconstruction. Algorithms such as OMP used for sparse reconstruction rely on the orthogonality of the columns of D, and the IPMD algorithm achieves this to a greater extent than the other methods.

Fig. 3: Recovery performance (error e_r) for varying sparsity level, with N = 80, L = 100, M = 25 and the optimized projection matrix.

Fig. 4: Recovery performance (error e_r) for varying number of measurements, with N = 80, L = 100, K = 15 and the optimized projection matrix.

TABLE I: Performance evaluation of various CS systems (M = 25, N = 80, L = 100, and SNR = 20 dB)

              ‖I − G‖_F^2 × 10^5    µ_avg
  CS_randn    4.465                 0.3038
  CS_Elad     0.00473               0.2875
  CS_DCS      0.00492               0.2961
  CS_ZMRE     0.00301               0.2647
  CS_AFS      0.00341               0.2339
  CS_IPMD     0.00307               0.2126

V. CONCLUSION

We presented a framework for the design of an incoherent projection matrix for CS applications. The proposed IPMD algorithm was shown to design an optimized projection matrix whose columns have reduced mutual coherence in order to achieve improved recovery performance. We design an ETF using an inertial force update, and the corresponding frame and projection matrix are updated using the alternating minimization method. The designed projection matrix was also shown to have reduced cumulative coherence. The proposed method achieves improved recovery performance (or achieves the same recovery performance with fewer measurements) compared to state-of-the-art algorithms in the literature. The experiments illustrated that the proposed method outperforms the other methods in recovery performance.

REFERENCES

[1] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, pp. 1289–1306, Apr. 2006.
[2] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?," IEEE Trans. Inf. Theory, vol. 52, pp. 5406–5425, Dec. 2006.
[3] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, pp. 489–509, Feb. 2006.
[4] M. Elad, "Optimized projections for compressed sensing," IEEE Trans. Signal Process., vol. 55, pp. 5695–5702, Dec. 2007.
[5] R. G. Baraniuk, "Compressive sensing [lecture notes]," IEEE Signal Process. Mag., vol. 24, pp. 118–121, Jul. 2007.
[6] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Trans. Signal Process., vol. 54, pp. 4311–4322, Nov. 2006.
[7] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, pp. 4655–4666, Dec. 2007.
[8] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, pp. 3397–3415, Dec. 1993.
[9] J. Wang, S. Kwon, and B. Shim, "Generalized orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, pp. 6202–6216, Dec. 2012.
[10] G. Chen, D. Li, and J. Zhang, "Iterative gradient projection algorithm for two-dimensional compressive sensing sparse image reconstruction," Signal Process., vol. 104, pp. 15–26, Nov. 2014.
[11] D. L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization," Proc. National Academy of Sciences, vol. 100, no. 5, pp. 2197–2202, 2003.
[12] J. A. Tropp, "Greed is good: Algorithmic results for sparse approximation," IEEE Trans. Inf. Theory, vol. 50, pp. 2231–2242, Oct. 2004.
[13] T. Strohmer and R. W. Heath, "Grassmannian frames with applications to coding and communication," Applied and Computational Harmonic Analysis, vol. 14, no. 3, pp. 257–275, 2003.
[14] J. Xu, Y. Pi, and Z. Cao, "Optimized projection matrix for compressive sensing," EURASIP J. Adv. Signal Process., vol. 10, pp. 1–8, Jun. 2010.
[15] J. M. Duarte-Carvajalino and G. Sapiro, "Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization," IEEE Trans. Image Process., vol. 18, pp. 1395–1408, Jul. 2009.
[16] L. Zelnik-Manor, K. Rosenblum, and Y. C. Eldar, "Sensing matrix optimization for block-sparse decoding," IEEE Trans. Signal Process., vol. 59, pp. 4300–4312, Sep. 2011.
[17] V. Abolghasemi, S. Ferdowsi, and S. Sanei, "A gradient-based alternating minimization approach for optimization of the measurement matrix in compressive sensing," Signal Process., vol. 92, pp. 999–1009, Apr. 2012.
[18] J. A. Tropp, I. S. Dhillon, R. W. Heath, and T. Strohmer, "Designing structured tight frames via an alternating projection method," IEEE Trans. Inf. Theory, vol. 51, pp. 188–209, Jan. 2005.
