
Hindawi Publishing Corporation

Journal of Applied Mathematics


Volume 2013, Article ID 864132, 8 pages
http://dx.doi.org/10.1155/2013/864132

Research Article
Tree-Based Backtracking Orthogonal Matching Pursuit for Sparse Signal Reconstruction

Yigang Cen,¹ Fangfei Wang,¹ Ruizhen Zhao,¹ Lihong Cui,² Lihui Cen,³ Zhenjiang Miao,¹ and Yanming Cen⁴

¹ School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China
² Department of Mathematics, Beijing University of Chemical Technology, Beijing 100029, China
³ School of Information Science and Engineering, Central South University, Changsha, Hunan 410083, China
⁴ Polytechnic College, Guizhou Minzu University, Guiyang, Guizhou 550025, China

Correspondence should be addressed to Lihui Cen; [email protected]

Received 17 July 2013; Accepted 5 September 2013

Academic Editor: Dewei Li

Copyright © 2013 Yigang Cen et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Compressed sensing (CS) is a theory that exploits the sparsity of the original signal in signal sampling and coding: by solving an optimization problem, the original sparse signal can be reconstructed accurately. In this paper, a new Tree-based Backtracking Orthogonal Matching Pursuit (TBOMP) algorithm is presented, built on the tree model of the wavelet domain. The algorithm converts the wavelet tree structure into corresponding relations among candidate atoms without any prior information about the signal sparsity, so the atom selection process becomes more structured and the search space is narrowed. Moreover, through a backtracking process, the reliability of previously chosen atoms is checked at each iteration and unreliable atoms are deleted, which ultimately leads to an accurate reconstruction of the signal. Simulation results show that the proposed algorithm outperforms several other OMP-type compressed sensing algorithms.

1. Introduction

Compressive sensing (CS) [1, 2] aims to recover a sparse or compressible signal from a small amount of information, with high probability. It breaks the traditional rule of the Nyquist sampling theorem, which states that a signal's information is preserved only if it is uniformly sampled at a rate at least twice its Fourier bandwidth. With this state-of-the-art signal compression and processing theory, the signal sampling frequency, processing time, data storage, and transmission costs can all be greatly reduced.

For a given orthogonal basis Ψ = {ψ_1, ..., ψ_N}, the signal x ∈ R^{N×1} can be represented in terms of the coefficient vector α as

x = ∑_{k=1}^{N} ψ_k α_k = Ψα. (1)

The corresponding inverse transformation is α = Ψ^H x, with ΨΨ^H = Ψ^H Ψ = I and Ψ ∈ C^{N×N}, where I is the identity matrix. We say that x is K-sparse under the orthogonal basis Ψ if only K ≪ N coefficients α_k of x are nonzero. Usually the signal itself is not sparse, but its coefficients can be considered sparse or compressible after some transformation, such as the wavelet transform.

Suppose that Φ is an M × N measurement matrix. Sampling is accomplished by collecting a measurement vector y of dimension M, with M ≪ N, expressed as y = Φα; then (1) becomes

y = Φα = ΦΨ^H x. (2)

Φ is called the CS measurement matrix and its columns are called atoms. The matrix Φ is rank deficient and hence loses information in general. The CS reconstruction problem is to recover the coefficient vector α from the set of M linear measurements y. Since M < N, the reconstruction of α from y is generally ill-posed.
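To make the measurement model (1)-(2) concrete, the following small sketch (our illustrative addition, not part of the original paper; the sizes N = 256, M = 64, K = 8 are arbitrary choices) builds a K-sparse coefficient vector α, a Gaussian measurement matrix Φ, and the measurements y = Φα:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8            # ambient dimension, measurements, sparsity (all arbitrary)

# K-sparse coefficient vector alpha; x = Psi @ alpha for an orthonormal Psi.
alpha = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
alpha[support] = rng.standard_normal(K)

# Gaussian measurement matrix Phi; its columns are the "atoms".
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ alpha                 # M linear measurements, M << N
```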

The two major algorithmic approaches to sparse recovery are methods based on (l_1) minimization and iterative methods (matching pursuits). We now briefly describe these methods.

1.1. (l_1) Minimization. In this approach, sparse recovery can be stated as the problem of finding the sparsest signal α = Ψ^H x consistent with the given measurements y:

(l_0): min ‖Ψ^H x‖_{l_0}  subject to  y = ΦΨ^H x. (3)

Donoho and his associates advocated the principle that, for some measurement matrices Φ, the highly nonconvex combinatorial optimization problem (l_0) should be equivalent to its convex relaxation:

(l_1): min ‖Ψ^H x‖_{l_1}  subject to  y = ΦΨ^H x. (4)

Reference [3] showed that if the measurement matrix satisfies the restricted isometry property (RIP), that is,

(1 − δ_K)‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ_K)‖x‖_2^2, (5)

then a K-sparse signal can be recovered exactly. Here δ_K is called the restricted isometry constant of Φ. It has been shown that (l_1) minimization can recover a sparse signal exactly under various conditions on the restricted isometry constants; see [4, 5]. The convex problem (l_1) can then be solved by methods of convex, and even linear, programming.
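As an illustration of the reduction of (l_1) to a linear program (our sketch, not one of the solvers discussed below): with the substitution α = u − v, u, v ≥ 0, the problem min ‖α‖_1 subject to Φα = y becomes a standard LP, which SciPy's linprog can solve.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||alpha||_1  s.t.  Phi @ alpha = y  as a linear program.

    Split alpha = u - v with u, v >= 0; then ||alpha||_1 = sum(u) + sum(v)
    at the optimum, and the constraint becomes Phi @ (u - v) = y.
    """
    m, n = Phi.shape
    c = np.ones(2 * n)                    # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])         # Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v
```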
1.2. Orthogonal Matching Pursuit (OMP). An alternative approach to sparse recovery is via iterative algorithms, which find the support of the K-sparse signal α progressively. Once S = supp(α) is found correctly, it is easy to compute the signal α from its measurements y as α = (Φ_S)^{−1} y, where Φ_S denotes the measurement matrix Φ restricted to the columns indexed by S.

A basic iterative algorithm is Orthogonal Matching Pursuit (OMP) [6]. OMP recovers the support of α, one index at a time, in n steps. Under the hypothetical assumption that Φ is an isometry, that is, that the columns of Φ are orthonormal, the signal α could be exactly recovered from its measurements y as α = Φ*y.

The problem is that the M × N matrix Φ is never an isometry in the interesting range where the number of measurements M is smaller than the ambient dimension N. Even though the matrix is not an isometry, one can still use the notion of coherence to recover sparse signals; in that setting, greedy algorithms are used with incoherent dictionaries, see [7, 8]. In our setting, for the commonly used random matrices, one expects the columns to be approximately orthogonal and the observation vector Φ*y to be a good approximation to the original signal α.

Tropp and Gilbert [6] analyzed the performance of OMP for Gaussian measurement matrices Φ; a similar result holds for general sub-Gaussian matrices. They proved that, for every fixed K-sparse N-dimensional signal α and a random Gaussian measurement matrix Φ, OMP recovers (the support of) α from the measurements y correctly with high probability, provided the number of measurements is M ∼ K log N.

The (l_1)-minimization method has the strongest known guarantees of sparse recovery: once the measurement matrix Φ satisfies the restricted isometry condition, this method works correctly for all sparse signals α. (l_1)-minimization is based on linear programming, which has its advantages and disadvantages. One can think of linear programming as a black box, so any development of faster solvers will reduce the running time of the sparse recovery method. On the other hand, it is not very clear what this running time is, as there is no strongly polynomial-time algorithm for linear programming yet. All known solvers take time polynomial not only in the dimension N of the program but also in certain condition numbers of the program. While for some classes of random matrices the expected running time of linear programming solvers can be bounded, estimating condition numbers is hard for specific matrices. For example, there is no result yet showing that the restricted isometry condition implies that the condition numbers of the corresponding linear program are polynomial in N.

OMP is quite fast, both theoretically and experimentally. It makes n iterations, where each iteration amounts to a multiplication by the N × M matrix Φ* (computing the observation vector) and solving a least squares problem of size at most M × n. This yields a strongly polynomial running time. In practice, OMP is observed to be faster and easier to implement than (l_1)-minimization; for more details, see [6]. OMP is also quite transparent: at each iteration, it selects a new coordinate from the support of the signal α in a very specific and natural way. In contrast, the known (l_1)-minimization solvers, such as the simplex method and interior point methods, compute a path toward the solution. However, the geometry of (l_1) is clear, whereas the analysis of greedy algorithms can be difficult simply because they are iterative.

On the other hand, OMP has weaker guarantees of exact recovery. Unlike (l_1)-minimization, the guarantees of OMP are nonuniform: for each fixed sparse signal α, and not for all signals simultaneously, the algorithm performs correctly with high probability. Rauhut has shown that uniform guarantees for OMP are impossible for natural random measurement matrices [9]. Moreover, OMP's condition on measurement matrices given in [6] is more restrictive than the restricted isometry condition; in particular, it is not known whether OMP succeeds for the important class of partial Fourier measurement matrices.

These open problems about OMP, first stated in [6] and often reverberated in the compressed sensing community, motivated recent work on modified OMP algorithms, such as model-based compressive sensing [10], Tree-Based Orthogonal Matching Pursuit [11], Compressive Sampling Matching Pursuit (CoSaMP) [12], Regularized Orthogonal Matching Pursuit (ROMP) [13], and Backtracking-Based Matching Pursuit (BAOMP) [14]. ROMP and CoSaMP require the sparsity level as an input parameter; however, in most practical applications this information may not be known before reconstruction.
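For reference, a minimal OMP loop consistent with the description above (our sketch; the residual tolerance and the cap on the number of steps are our choices):

```python
import numpy as np

def omp(Phi, y, n_steps, tol=1e-6):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then refit all selected columns
    by least squares and update the residual."""
    _, N = Phi.shape
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(n_steps):
        k = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    alpha = np.zeros(N)
    alpha[support] = coef
    return alpha
```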

Although the sparsity level is not required for OMP and BAOMP, these algorithms do not use the characteristics of the sparse representation, such as the tree structure of the wavelet transform. In this paper, a new Tree-based Backtracking Orthogonal Matching Pursuit (TBOMP) algorithm is presented based on the tree model in the wavelet domain. Our algorithm converts the wavelet tree structure into corresponding relations among candidate atoms without prior information about the signal sparsity level. Combined with the backtracking step, unreliable atoms can be deleted. Compared with the OMP, ROMP, and BAOMP algorithms, the atom selection process becomes more traceable, standardized, and structured.

2. Tree-Based Backtracking Orthogonal Matching Pursuit (TBOMP) Algorithm

In this section, we first review the wavelet tree structure; then the proposed TBOMP algorithm is presented in detail.
2.1. Wavelet Tree Structure. Consider a signal x of length N = 2^L. After an L-level wavelet transformation, the set of K-tree sparse signals is defined as

Γ_K = { x = υ_L ν + ∑_{i=1}^{L} ∑_{j=1}^{2^{L−i}} ω_{i,j} ψ_{i,j} : ω|_{Ω^C} = 0, |Ω| = K }, (6)

where ν is the scaling function and ψ_{i,j} is the wavelet function at scale i and offset j. The wavelet transform consists of the scaling coefficient υ_L and the wavelet coefficients ω_{i,j} at scale i, 1 ≤ i ≤ L, and position j, with 1 ≤ j ≤ 2^{L−i}.

Suppose that α = [υ_L, ω_{L,0}, ω_{L−1,0}, ω_{L−1,1}, ω_{L−2,0}, ...]^T is the vector of the scaling and wavelet coefficients of x with maximum decomposition level L, and that the set of wavelet coefficients Ω forms a connected subtree [10]. The set Ω defines a subspace of signals whose support is contained in Ω, meaning that all wavelet coefficients outside Ω are approximately zero. The nested structure of wavelet coefficients creates a parent/child relationship between wavelet coefficients at different scales: ω_{i+1,⌊j/2⌋} (⌊·⌋ denotes rounding down) is the parent of ω_{i,j}, and ω_{i−1,2j} and ω_{i−1,2j+1} are the children of ω_{i,j}. These relations can be expressed graphically by the wavelet coefficient tree in Figure 2(a). In other words, the index of a parent node at one level is 1/2 times the index of its child node.
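Written out as code, this parent/child index arithmetic looks as follows (our sketch; scales are numbered from 1 at the finest level up to L at the root, with 0-based positions, matching the children ω_{i−1,2j} and ω_{i−1,2j+1} above):

```python
def parent(i, j, L):
    """Parent of wavelet coefficient w[i, j]; None at the root scale L."""
    return None if i >= L else (i + 1, j // 2)

def children(i, j):
    """Children of w[i, j] at the next finer scale i - 1 (none at scale 1)."""
    return [] if i <= 1 else [(i - 1, 2 * j), (i - 1, 2 * j + 1)]

# Equivalently, numbering the detail coefficients heap-style with the root
# w[L, 0] at index 1, node k has parent k // 2 and children 2k and 2k + 1;
# this is the "2-times relationship" used throughout the paper.
```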
A kind of tree structure (the greedy tree) was proposed in [15]. For the greedy tree, if a coefficient is significant, then its child and all of its grandchildren are likely significant [11]. Figure 1 depicts two cases of greedy tree approximation. The number at each node is the wavelet coefficient modulus; unlabeled nodes are zeros. In the first case, the wavelet coefficients decay monotonically along the tree branches toward the leaves. Suppose that the wavelet tree Ω contains P wavelet coefficients, that is, |Ω| = P. The P-term greedy tree approximation (here we assume P = 4) proceeds in three steps: (1) find the p (p ≤ 4) largest wavelet coefficient terms; (2) form the smallest connected rooted subtree that contains all of these p coefficients; (3) increase p until |Ω| = 4. Initializing p = 2, the two coefficients 10 and 8 are found and form a minimal connected subtree Ω. Gradually increasing p until p = 4, the greedy tree approximation forms the connected rooted subtree Ω, 10-8-4-3, with 4 nodes, which maximizes the sum of the wavelet coefficients in the subtree. This process is shown in Figure 1(a); the error is small.

Another case is shown in Figure 1(b): when the wavelet coefficients do not decay monotonically along the tree branches toward the leaves, an isolated significant coefficient away from the root will be selected, together with all of its ancestor coefficients. These ancestor coefficients may be very small, which increases the approximation error. For example, initializing p = 2, the two coefficients 10 and 8 are found and the resulting subtree is 10-0-0-8 with p = |Ω| = 4. Obviously, the error is large.

We can see that the process of greedy tree approximation is simple, but when the tree includes isolated large coefficients far from the tree root, the approximation error increases. Thus, backtracking is imposed to delete the wrong nodes selected by the greedy tree. This will be illustrated in Section 2.2. (A small code sketch of this greedy search is given after Figure 2 below.)

[Figure 1: Greedy tree search. (a) Coefficients decaying monotonically toward the leaves; the selected subtree is 10-8-4-3. (b) An isolated large coefficient far from the root; the selected subtree is 10-0-0-8.]

[Figure 2: Wavelet tree structure. (a) Wavelet tree structure; (b) process of tree node selection in the TBOMP.]
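To make the greedy tree procedure of Section 2.1 concrete, here is a small sketch using the heap-style indexing described above (our illustration; w is a 1-based array of coefficient magnitudes with w[1] the root, and the subtree may slightly overshoot P when ancestors are added):

```python
import numpy as np

def greedy_tree(w, P):
    """P-term greedy tree approximation on heap-indexed coefficients.

    Grows p until the rooted connected subtree covering the p largest
    coefficients has at least P nodes, following the three steps above."""
    w = np.asarray(w, dtype=float)        # w[0] is unused padding
    order = np.argsort(-w[1:]) + 1        # node indices, largest magnitude first
    subtree = set()
    for p in range(1, len(order) + 1):
        subtree = set()
        for k in order[:p]:               # add each node and all its ancestors
            k = int(k)
            while k >= 1 and k not in subtree:
                subtree.add(k)
                k //= 2                   # move to the parent index
        if len(subtree) >= P:
            break
    return subtree
```

For a tree shaped like Figure 1(b), with 10 at the root and 8 on a deep branch of zeros, this sketch returns the root-to-leaf path through the zero ancestors, that is, the 10-0-0-8 subtree described above.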

2.2. Tree-Based Backtracking Orthogonal Matching Pursuit (TBOMP) Algorithm. Our proposed Tree-based Backtracking Orthogonal Matching Pursuit (TBOMP) is as follows.

Algorithm 1 (TBOMP).

Symbol Description

ω—wavelet high-frequency coefficient vector;
ω̂—reconstructed wavelet high-frequency coefficient vector;
A—measurement matrix, y = Aω;
a_i—the ith column vector of A, 1 ≤ i ≤ N;
μ_1, μ_2—threshold parameters, μ_1, μ_2 ∈ [0, 1];
Λ_n—index set of the atoms selected through the nth iteration; Λ denotes the index set of all columns {a_i} of A, and Λ̄_n = Λ \ Λ_{n−1} denotes the not-yet-selected indices;
ε—residual tolerance;
n_max—maximum number of iterations allowed;
Γ_n—atom-deleting set in the nth iteration;
C_n—candidate set of root atoms in the nth iteration;
F_n—family set consisting of the subtrees corresponding to the root nodes in C_n.

Initialization. r_0 = y (initial residual), Λ_0 = ∅, Γ_0 = ∅, and C_0 = ∅.

Loop

(1) Initial selection: select the candidate set C_n whose absolute correlations satisfy

|⟨r_{n−1}, a_i⟩| ≥ μ_1 · max_{i∈Λ̄_n} |⟨r_{n−1}, a_i⟩| for i ∈ C_n, where Λ̄_n = Λ \ Λ_{n−1}. (7)

(2) According to the 2-times relationship of the wavelet tree node indices, find the wavelet tree rooted at each node in C_n; the family set F_n then consists of the atoms indexed by C_n together with all of their families. For example, if C_n = {c_n^1, c_n^2, ..., c_n^Q}, the wavelet subtrees rooted at c_n^1, c_n^2, ..., c_n^Q are found, respectively, in this step, and the index sets of these Q trees are denoted F_n^1, F_n^2, ..., F_n^Q.

(3) Compute ω̂_{F_n^q} = (A_{F_n^q}^H A_{F_n^q})^{−1} A_{F_n^q}^H y, 1 ≤ q ≤ Q.

(4) Find the subtree F_n^{q̃} whose coefficients ω̂_{F_n^{q̃}} minimize the residual:

q̃ = arg min_{1≤q≤Q} ‖y − A_{F_n^q} ω̂_{F_n^q}‖_2. (8)

(5) Select the atom-deleting index set Γ_n satisfying

|ω̂_{Λ_{n−1} ∪ F_n^{q̃}}| ≤ μ_2 · max |ω̂_{F_n^{q̃}}|. (9)

(6) Set Λ_n = {Λ_{n−1} ∪ F_n^{q̃}} \ Γ_n, a_{i: i∈Λ̄_n} = 0, and update the residual:

r_n = y − A_{Λ_n} ω̂_{Λ_n}. (10)

(7) If ‖r_n‖_2 < ε or n = n_max, quit the iteration; otherwise, set n = n + 1 and go to step (1).

End Loop.

Output. The estimated support set Λ_n and the nonzero values ω̂_{Λ_n} = (A_{Λ_n}^H A_{Λ_n})^{−1} A_{Λ_n}^H y.
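The following compact sketch follows the spirit of Algorithm 1 (our illustration, with simplifications we flag as assumptions: the detail coefficients are assumed heap-ordered with a single root, i.e., a full-depth decomposition; step (2) takes the full subtree below each candidate root; the maximum in (9) is taken over the merged refit; and μ_1 = 0.4, μ_2 = 0.3 are arbitrary example values):

```python
import numpy as np

def subtree(root, N):
    """0-based column indices of the full subtree under heap index `root`."""
    nodes, stack = [], [root]
    while stack:
        k = stack.pop()
        if k <= N:
            nodes.append(k)
            stack.extend([2 * k, 2 * k + 1])
    return np.array(nodes) - 1

def refit(A, y, idx):
    """Least squares coefficients on the columns indexed by idx."""
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    return coef

def tbomp(A, y, mu1=0.4, mu2=0.3, n_max=30, eps=1e-6):
    _, N = A.shape
    Lam = np.array([], dtype=int)                 # current support
    r, w = y.copy(), np.zeros(0)
    for _ in range(n_max):
        # (1) candidate roots among the not-yet-selected columns
        bar = np.setdiff1d(np.arange(N), Lam)
        corr = np.abs(A[:, bar].T @ r)
        C = bar[corr >= mu1 * corr.max()]
        # (2)-(4) fit each candidate subtree, keep the one with least residual
        best_F, best_res = None, np.inf
        for c in C:
            F = subtree(int(c) + 1, N)
            wF = refit(A, y, F)
            res = np.linalg.norm(y - A[:, F] @ wF)
            if res < best_res:
                best_F, best_res = F, res
        # (5)-(6) merge with the old support, refit, backtrack-delete small atoms
        merged = np.union1d(Lam, best_F)
        w = refit(A, y, merged)
        Lam = merged[np.abs(w) > mu2 * np.abs(w).max()]
        w = refit(A, y, Lam)
        r = y - A[:, Lam] @ w
        if np.linalg.norm(r) < eps:               # (7) stopping rule
            break
    omega = np.zeros(N)
    omega[Lam] = w
    return omega
```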
As seen in the above algorithm, we combine the characteristics of the tree structure with the BAOMP algorithm. In the first step, TBOMP selects the candidate set C_n whose correlations between the columns of A indexed by Λ̄_n and the residual r_{n−1} are not smaller than μ_1 · max_{i∈Λ̄_n}|⟨r_{n−1}, a_i⟩|, where Λ̄_n = Λ \ Λ_{n−1}. Here, the constant μ_1 adaptively decides how many atoms are chosen each time. The atoms corresponding to the elements of C_n are then set as the root nodes of subtrees. As mentioned in Section 2.1, due to the 2-times relationship between the indices of parent and child nodes, the subtree of each atom corresponding to an index in C_n can be found, forming the family set F_n^q, which consists of the indices of the family atoms in the qth subtree. In the third step, the least squares method is applied to obtain the reconstructed wavelet high-frequency coefficients ω̂_{F_n^q} corresponding to the atoms indexed by F_n^q. The optimal subtree, indexed by F_n^{q̃}, is then selected according to step (4). At this point, there may still be insignificant atoms in a_{F_n^{q̃}}, because only the simple 2-times relationship rule was applied in the subtree search. Thus, the backtracking deleting method is introduced into the algorithm to identify the true support set within F_n^{q̃}. The backtracking deleting set Γ_n consists of the indices of all reconstructed coefficients satisfying (9). The index set is then updated by Λ_n = {Λ_{n−1} ∪ F_n^{q̃}} \ Γ_n at this iteration. From the atoms corresponding to the indices in Λ_n, the reconstruction coefficients ω̂_{Λ_n} can be computed. Finally, the residual is updated by (10) and the next iteration begins. If ‖r_n‖_2 < ε or n = n_max, the iteration stops.

In TBOMP, the process of tree node selection is illustrated in Figure 2. The first step of the algorithm is to select the candidate set C_n by (7). For example, suppose that C_1 = {ω_{L,0}, ω_{L−2,3}} is chosen at the first iteration. The nodes of subtree B rooted at ω_{L,0} and the family nodes rooted at ω_{L−2,3} are the significant coefficients that need to be found. According to the 2-times relationship of wavelet tree node indices and Figure 2(b), ω_{L−1,0} and ω_{L−2,0} are the child and grandchild nodes of ω_{L,0}. Thus, subtree A rooted at the node ω_{L,0} will be found in the first iteration.

[Figure 3: Reconstruction of a signal by the TBOMP algorithm. (a) Original signal; (b) wavelet coefficients of the 4-level decomposition, arrayed as {a4, d4, d3, d2, d1}; (c) wavelet coefficients recovered by the TBOMP algorithm.]

[Figure 4: Reconstruction process of the TBOMP algorithm. (a) Reconstructed wavelet coefficients after the first selection of wavelet nodes; (b) backtracking deletion of the wrong nodes after the first iteration of TBOMP; (c) reconstructed wavelet coefficients after the third selection of wavelet nodes; (d) backtracking deletion of the wrong nodes after the third iteration of TBOMP.]

Now assume that subtree A is the optimal tree corresponding to ω_{L,0}. At the end of this iteration, the backtracking step will remove the node ω_{L−2,0} according to step (5) of the TBOMP algorithm described above. In a later iteration, node ω_{L−2,1} will be chosen as the child node of ω_{L−1,0}; ultimately, subtree B will be found accurately. The search for the subtree rooted at the node ω_{L−2,3} proceeds analogously, and it can be carried out simultaneously.

These characteristics of the tree structure provide a new way to study reconstruction algorithms. When the signal is sparsely represented by the wavelet transform, the tree structure of the wavelet coefficients provides a clue for the selection of atoms in the reconstruction algorithm, which greatly improves the reliability of atom selection.

The coefficients of a wavelet decomposition include low-frequency coefficients and high-frequency coefficients (the scaling coefficients and wavelet coefficients in α). The more levels of wavelet decomposition, the fewer low-frequency coefficients there are, and the more important information is retained in the high-frequency coefficients. Compared with the high-frequency coefficients, the number of low-frequency coefficients is much smaller if the decomposition level is large enough. Since the low-frequency coefficients play an important role in wavelet reconstruction, in our proposed algorithm only the high-frequency coefficients are measured by the measurement matrix. For the reconstruction, we combine the reconstructed high-frequency coefficients ω̂ with the unprocessed low-frequency coefficients; then the inverse wavelet transform is applied to obtain a reconstruction x̂ of the original signal x.
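A sketch of this measurement scheme using the PyWavelets package (our illustration, mirroring the first experiment below with Db1 and 4 levels; the recovery step is left as a placeholder where TBOMP would be applied):

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
x = rng.standard_normal(256)               # stand-in for the Blocks test signal

# Split x into low-frequency (kept) and high-frequency (measured) coefficients.
coeffs = pywt.wavedec(x, 'db1', level=4)   # [a4, d4, d3, d2, d1]
low, highs = coeffs[0], coeffs[1:]
w = np.concatenate(highs)                  # all detail coefficients

Phi = rng.standard_normal((64, w.size)) / np.sqrt(64)
y = Phi @ w                                # measure only the details

w_hat = w                                  # placeholder: recover w_hat from y (e.g., with TBOMP)

# Reassemble the coefficient list and invert the wavelet transform.
splits = np.cumsum([h.size for h in highs])[:-1]
x_hat = pywt.waverec([low] + np.split(w_hat, splits), 'db1')
```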
3. Simulation Results

In this section, several experiments are presented for the TBOMP algorithm. In the first experiment, the original signal x is a one-dimensional Blocks signal of length N = 256. It was recovered from M = 64 measurements using a Gaussian random measurement matrix. The wavelet decomposition level is 4 and the wavelet function is Db1. Figure 3 shows the reconstruction result after 7 iterations of the TBOMP algorithm.

In the first iteration of the TBOMP algorithm, according to the parent-child relations of the wavelet tree, some unreliable atoms are chosen, which leads to a wrong intermediate reconstruction, as marked by the circles in Figure 4(a). According to the backtracking deleting method, these wrongly selected atoms can then be deleted. After the second and third iterations, some atoms are still missing. After the 7th iteration, the reconstruction result (Figure 3(c)) of the TBOMP algorithm is exactly the same as the original wavelet coefficients shown in Figure 3(b).

[Figure 5: Reconstruction results of the Doppler and Heavysine signals by the TBOMP and OMP algorithms. (a) Original and reconstructed Doppler signal by TBOMP, SNR = 34.2446 dB; (b) a zoom-in view of (a); (c) original and reconstructed Doppler signal by OMP, SNR = 28.9947 dB; (d) a zoom-in view of (c); (e) original and reconstructed Heavysine signal by TBOMP, SNR = 37.4449 dB; (f) a zoom-in view of (e); (g) original and reconstructed Heavysine signal by OMP, SNR = 34.8215 dB; (h) a zoom-in view of (g).]

Similar results can be obtained for other signals. Reconstruction results of the Doppler and Heavysine signals using our TBOMP algorithm are shown in Figure 5, where we compare our reconstructions with the classical OMP algorithm at M/N = 1/4.

In the next experiment, we compare TBOMP with the popular OMP, ROMP, and BAOMP algorithms. Here, only the high-frequency coefficients are measured; the low-frequency coefficients are not processed [16]. The wavelet function is chosen as "sym8" in MATLAB, and the decomposition level is 5 for all four algorithms. Define SNR = 20 log10(std(x)/std(x̂ − x)), where std denotes the standard deviation. Because of the randomness of the sensing matrix, the numerical result differs from run to run; hereafter, we use the same sensing matrix within one experiment for all four algorithms.

We use the Bumps signal of length N = 2048 and vary the value of M while keeping the experimental conditions otherwise identical. After 5 levels of wavelet decomposition, there are 64 low-pass coefficients in the 5th decomposition level and 1984 high-pass coefficients in total across the 5 levels. To obtain a fair comparison, in Figure 6 the number of measurements used by the four algorithms is 500 − 64 = 436.
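The SNR measure defined above, written as a small helper (our addition):

```python
import numpy as np

def snr_db(x, x_hat):
    """SNR = 20*log10(std(x) / std(x_hat - x)), as defined in the text."""
    return 20 * np.log10(np.std(x) / np.std(x_hat - x))
```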

[Figure 6: Comparison of TBOMP and BAOMP in the time domain for the Bumps signal. (a) TBOMP algorithm, M = 500, SNR = 35.3231 dB; (b) BAOMP algorithm, M = 500, SNR = 33.5404 dB.]

For simplicity, when we speak of M measurements for TBOMP, we mean that M is the sum of the number of low-pass coefficients and the number of measurements of the high-pass coefficients.

When M = 500, the compression ratio is about 1/4. The reconstruction results for the Bumps signal with TBOMP and BAOMP are shown in Figure 6; the SNR of TBOMP is about 1.8 dB higher than that of BAOMP.

Since ROMP requires the sparsity level K to be known for exact recovery, in the experiments the best sparsity value K of the wavelet coefficients was estimated through repeated experiments and then used in the simulations. Figure 7 shows the SNR comparison for different values of M, selected as 200, 500, 800, 1100, and 1400, respectively. For each M, we conducted 10 independent trials and calculated the average SNR. Evidently, the reconstruction result of the TBOMP algorithm is superior to the others.

[Figure 7: SNR comparison of ROMP, OMP, BAOMP, and TBOMP for different values of M.]

4. Conclusion

The sparse reconstruction algorithm is one of the three core problems of CS (signal sparse representation, measurement matrix design, and reconstruction algorithm design). Existing sparse reconstruction algorithms such as ROMP and CoSaMP employ the sparsity K as prior knowledge for exact recovery, which has many limitations in realistic applications. Although the sparsity level is not required for the OMP and BAOMP algorithms, they do not use the characteristics of the special sparse basis to improve performance. In this paper, a new Tree-based Backtracking Orthogonal Matching Pursuit (TBOMP) algorithm was proposed based on the tree model in the wavelet domain. Our algorithm converts the wavelet tree structure into corresponding relations among candidate atoms without any prior information about the signal sparsity level. Moreover, unreliable atoms can be deleted through the backtracking step. Compared with the other compressive sensing algorithms considered (OMP, ROMP, and BAOMP), the signal reconstruction results of TBOMP outperform the above-mentioned CS algorithms.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (Grants nos. 61272028, 61104078, 61073079, and 61273274); the Fundamental Research Funds for the Central Universities of China (Grant no. 2013JBZ003); the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grants nos. 20110162120045 and 20120009110008); the Program for New Century Excellent Talents in University (Grant no. NCET-12-0768); the National Key Technology R&D Program of China (Grant no. 2012BAH01F03); the National Basic Research (973) Program of China (Grant no. 2011CB302203); and the Foundation of Key Laboratory of System Control and Information Processing (Grant no. SCIP2011009).

References

[1] E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians, pp. 1433–1452, Madrid, Spain, 2006.
[2] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[3] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[4] T. T. Cai and A. Zhang, "Sharp RIP bound for sparse signal and low-rank matrix recovery," Applied and Computational Harmonic Analysis, vol. 35, pp. 74–93, 2012.

[5] Y. G. Cen, R. Z. Zhao, Z. J. Miao, and L. H. Cen, "A new approach of conditions on δ_{2s}(φ) for s-sparse recovery," Science in China Series F, 2013.
[6] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
[7] D. L. Donoho, M. Elad, and V. N. Temlyakov, "Stable recovery of sparse overcomplete representations in the presence of noise," IEEE Transactions on Information Theory, vol. 52, no. 1, pp. 6–18, 2006.
[8] D. L. Donoho, M. Elad, and V. N. Temlyakov, "On Lebesgue-type inequalities for greedy approximation," Journal of Approximation Theory, vol. 147, no. 2, pp. 185–195, 2007.
[9] H. Rauhut, "On the impossibility of uniform sparse reconstruction using greedy methods," Sampling Theory in Signal and Image Processing, vol. 7, no. 2, pp. 197–215, 2008.
[10] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, "Model-based compressive sensing," IEEE Transactions on Information Theory, vol. 56, no. 4, pp. 1982–2001, 2010.
[11] C. La and M. N. Do, "Tree-based orthogonal matching pursuit algorithm for signal reconstruction," in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1277–1280, October 2006.
[12] D. Needell and J. Tropp, "CoSaMP: iterative signal recovery from incomplete and inaccurate samples," Tech. Rep., California Institute of Technology, Pasadena, Calif, USA, 2008.
[13] D. Needell and R. Vershynin, "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit," Foundations of Computational Mathematics, vol. 9, no. 3, pp. 317–334, 2009.
[14] H. Huang and A. Makur, "Backtracking-based matching pursuit method for sparse signal reconstruction," IEEE Signal Processing Letters, vol. 18, no. 7, pp. 391–394, 2011.
[15] R. Baraniuk, "Optimal tree approximation with wavelets," in Proceedings of Wavelet Applications in Signal and Image Processing VII, pp. 196–207, July 1999.
[16] Y.-G. Cen, X.-F. Chen, L.-H. Cen, and S.-M. Chen, "Compressed sensing based on the single layer wavelet transform for image processing," Journal on Communications, vol. 31, no. 8, pp. 53–55, 2010 (Chinese).