
International Conference on Communication and Signal Processing, April 6-8, 2017, India

On application of OMP and CoSaMP algorithms for DOA estimation problem

Abhishek Aich and P. Palanisamy

Abstract—Remarkable properties of compressed sensing (CS) have led researchers to utilize it in various other fields where a solution to an underdetermined system of linear equations is needed. One such application is in the area of array signal processing, e.g. in signal denoising and direction of arrival (DOA) estimation. From the two prominent categories of CS recovery algorithms, namely convex optimization algorithms and greedy sparse approximation algorithms, we investigate the application of greedy sparse approximation algorithms to estimate the DOA in the uniform linear array (ULA) environment. We conduct an empirical investigation into the behavior of two state-of-the-art greedy algorithms: OMP and CoSaMP. This investigation takes into account various scenarios, such as varying degrees of noise level and coherency between the sources. We perform simulations to demonstrate the performance of these algorithms and give a brief analysis of the results.

Index Terms—Compressive Sensing, Greedy algorithms, Array signal processing, Direction of Arrival

I. INTRODUCTION

Compressed sensing (CS) has been shown to be a robust paradigm to sample, process, and recover signals which are sparse in some domain [1]. Developed over the last decade, it has been found to be a suitable alternative to classical signal processing operations such as sampling, compression, estimation, and detection. The Nyquist-Shannon sampling theorem shows the optimal way to acquire and reconstruct analog signals from their sampled version: to restore an analog signal from its discrete sampled version accurately, the sampling rate should be at least twice its bandwidth. The fundamental theorem of linear algebra, for the case of discrete signals, states that the number of samples in a linear system should be greater than or equal to the length of the input signal to ensure its accurate recovery. The samples collected from such a process are costly, both computationally and logistically. Moreover, these bounds are often found to be too stringent in situations where the signals of interest are sparse, i.e. when these signals can be represented using a relatively small number of nonzero coefficients. Hence, to deal with such a scenario, we take the help of compressive techniques like transform coding. This process finds a basis that provides a sparse representation of the signal of interest, thus aiming to find the most concise representation of the signal in exchange for an acceptable level of distortion. By sparse representation, we mean that a signal of length N can be represented with M << N nonzero coefficients.

Signal sources have been found to be sparse in the spatial domain, making it convenient to exploit CS to solve the DOA estimation problem [2]. Direction of arrival estimation is a major field of study in the area of array signal processing, with continuous research aimed at eliminating the drawbacks of existing algorithms and techniques. The existing state-of-the-art algorithms, such as the class of subspace-based algorithms like multiple signal classification (MUSIC), estimation of parameters by rotational invariant techniques (ESPRIT), and the nonlinear least squares (NLS) method, better known as the maximum likelihood estimation method, unfortunately come with certain limitations [3]. For example, they need a priori knowledge of the number of sources, require the computation of the sample data covariance matrix, and consequently require a sufficiently large number of snapshots. Furthermore, source coherency gives inaccurate results, as it affects the properties of the covariance matrix, and their time complexity is high since they involve a multidimensional search. This is where CS comes into the picture to tackle these problems, prompting further studies into the connection between array signal processing and CS theory [4], [5].

In this paper, we concentrate on the application of CS-recovery-based greedy algorithms, one of the two categories mentioned earlier. In this context, the word "greedy" implies recovery strategies in which, at each step, we have to take a "hard" decision, generally based on some locally optimal optimization condition. The typical greedy sparse recovery approaches are basis pursuit (BP), orthogonal matching pursuit (OMP) [6], compressive sampling matching pursuit (CoSaMP) and forward backward pursuit (FBP) [7]. All these approaches make a very compelling and favorable case for solving the DOA estimation problem. The major advantages of these algorithms are low computational complexity and low time complexity for the desired recovery. The subspace-based algorithms have a huge computational cost (estimation of the covariance matrix and eigendecomposition) and memory cost (a large number of snapshots), which makes them inconvenient for real-time applications. Our previous work involved using a CS beamformer for improving the MUSIC algorithm by adapting the measurement matrix using an optimal bound for its dimension [8]. This motivates the current research to apply CS-based algorithms to the DOA estimation problem, thus making them a suitable alternative for engineering practice. The remaining sections of the paper are organized as follows. The system model for the DOA estimation problem is presented in Section II. The two greedy algorithms are explained in Section III. Section IV demonstrates the performance of these algorithms and Section V concludes the work.

Abhishek Aich is with the Department of Electronics and Communication Engineering, National Institute of Technology, Tiruchirappalli, 620 015, India (e-mail: [email protected]).
P. Palanisamy is with the Department of Electronics and Communication Engineering, National Institute of Technology, Tiruchirappalli, 620 015, India (e-mail: [email protected]).

978-1-5090-3800-8/17/$31.00 ©2017 IEEE
II. PRELIMINARIES

A. Data Model

Suppose M narrowband source signals impinge on a uniform linear array (ULA) of N omnidirectional sensors from directions θ_1, θ_2, …, θ_M. The output of these sensors is represented by the following model:

x(k) = A(θ)s(k) + n(k),  k = 1, 2, …, K

where k denotes the snapshot index and K the total number of snapshots. x(k) ∈ ℂ^N, s(k) ∈ ℂ^M and n(k) ∈ ℂ^N denote the received data, the source signal vector and the noise vector at snapshot k, respectively. a(θ_i), i = 1, 2, …, M denotes the steering vector of the ith source. These vectors form the array manifold matrix A(θ) consisting of all the steering vectors a(θ_i). In matrix form, the model is written as

X = A(θ)S + N      (1)

where X = [x(1), x(2), …, x(K)], S = [s(1), s(2), …, s(K)] and N = [n(1), n(2), …, n(K)]. The goal is to estimate θ_i, i = 1, 2, …, M, given X and a(θ_i). Note that we will be considering only the single-snapshot case; hence the data model representation will be

x = A(θ)s + n      (2)

B. Introduction to Compressive Sensing

An overview of CS theory can be found in [1], [9]. Let x ∈ ℂ^N be the input signal. We need to obtain m > M ln(N) linear measurements from x. For this, we multiply x by a measurement matrix Φ ∈ ℂ^(m×N). This process is represented as

y = Φx      (3)

with y ∈ ℂ^m. According to CS theory, for accurate reconstruction from fewer measurements, x has to be sparse in some transform domain. Let us now consider the DOA data model (2) in the CS setting, taking the noiseless case for simplicity. Here s̄ will be the sparse representation of the non-sparse signal x in the transform domain A(θ̄), where θ̄ denotes the grid of candidate directions; the overall sampling process then becomes

y = ΦA(θ̄)s̄      (4)

y = Θs̄      (5)

Here Θ = ΦA(θ̄) is called the sparsifying or transformation matrix. Hence, s̄ is said to be M-sparse, and to reconstruct x accurately, ideally M measurements are required. Our aim, therefore, is to recover s̄ given y. This is done by finding the sparsest solution of the following objective problem

min ||s̄||_0  s.t.  y = Θs̄      (6)

where ||s̄||_0 corresponds to the number of non-zero entries in s̄. Mathematically, (6) has been shown to be a non-deterministic polynomial-time hard (NP-hard) problem. However, a unique sparse solution can be found using l1-minimization by converting (6) into (7). This form is a convex problem and is solved using linear programming (LP):

min ||s̄||_1  s.t.  y = Θs̄      (7)

where ||s̄||_1 = Σ_i |s̄_i|. In this paper, we obtain these solutions using the greedy algorithms.

C. Basics of Greedy algorithms

The LP technique to solve the l1-norm minimization problem is very effective in reconstructing the desired signal, but the trade-off is that it is computationally costly. For major engineering applications like wireless communications, even the time complexity of an l1-norm minimization solver is prohibitive. In such cases, greedy algorithms provide a suitable alternative. By "greedy algorithm", we mean an algorithm that makes a locally optimal selection at each step so as to arrive at a globally optimal solution at the end of the process. These algorithms can be broadly categorized into two strategies: "greedy" pursuits and "thresholding" algorithms.

Greedy pursuits are a set of algorithms that iteratively build up an estimate of s̄. Beginning with a zero signal vector, these algorithms estimate the set of non-zero entries of s̄ by iteratively adding new non-zero entries. This selection step is alternated with an estimation step in which the values of the non-zero entries are optimized. These algorithms have very low time complexity and are useful for very large data sets. The orthogonal matching pursuit (OMP) and the forward backward pursuit (FBP) fall in this category.

Due to their ability to remove non-zero entries, the second category is called "thresholding" algorithms. The main examples are the compressive sampling matching pursuit (CoSaMP) and the subspace pursuit (SP). Both CoSaMP and SP keep track of the non-zero elements while both adding and removing entries in each iteration. At the beginning of each iteration, a sparse estimate of s̄ is used to calculate a residual error, and the support set of the required indices is updated. Then the algorithms either take a new estimate of this intermediate estimate of s̄, keeping it restricted to the current support set, or solve a second least-squares problem restricted to this same support. We now analyse the OMP and CoSaMP algorithms for the DOA environment. Fig. 1 shows a pictorial representation of the concept behind greedy algorithms [10].

Fig. 1. If the correct columns are chosen, the underdetermined system is converted into an overdetermined system.
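To make the data model of Section II-A and the on-grid dictionary A(θ̄) of Section II-B concrete, here is a minimal sketch in Python/NumPy (the paper's own simulations are in MATLAB; this is an independent illustration). The half-wavelength element spacing, the unit-modulus source samples and the random seed are assumptions made for the example, not values taken from the paper.

```python
import numpy as np

def steering_vector(theta_deg, n_sensors, spacing=0.5):
    """ULA steering vector a(theta) for element spacing given in wavelengths."""
    n = np.arange(n_sensors)
    return np.exp(2j * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

def array_manifold(angles_deg, n_sensors, spacing=0.5):
    """A(theta): N x (number of angles) matrix whose columns are steering vectors."""
    return np.column_stack([steering_vector(a, n_sensors, spacing) for a in angles_deg])

def single_snapshot(doas_deg, n_sensors, snr_db, rng):
    """One snapshot x = A(theta) s + n, as in (2), with unit-power sources."""
    A = array_manifold(doas_deg, n_sensors)
    s = np.exp(2j * np.pi * rng.random(len(doas_deg)))  # unit-modulus source samples (assumption)
    noise_var = 10.0 ** (-snr_db / 10.0)
    n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_sensors)
                                  + 1j * rng.standard_normal(n_sensors))
    return A @ s + n

# Example: N = 15 sensors, scan grid theta_bar of Ns = 181 angles from -90 to 90 degrees
rng = np.random.default_rng(0)
grid = np.arange(-90.0, 91.0, 1.0)
A_grid = array_manifold(grid, 15)                       # dictionary A(theta_bar)
x = single_snapshot([-60.0, 0.0, 40.0], 15, 0.0, rng)   # noisy single snapshot
```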

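Continuing the same sketch, the measurement step of (3)-(5) can be written as a small helper that forms y = Φx and Θ = ΦA(θ̄). The Gaussian choice of Φ, and the simplest special case Φ = I (using the raw array output directly), are illustrative assumptions; the helper reuses x and A_grid from the sketch above.

```python
import numpy as np

def form_cs_problem(x, A_grid, m=None, rng=None):
    """Form y = Phi x and Theta = Phi A(theta_bar), as in (3)-(5).

    With m=None the measurement matrix is the identity, i.e. y = x and Theta = A_grid.
    """
    if m is None:
        return x, A_grid
    rng = rng if rng is not None else np.random.default_rng()
    Phi = rng.standard_normal((m, x.shape[0])) / np.sqrt(m)  # random Gaussian Phi (assumption)
    return Phi @ x, Phi @ A_grid

# The pair (y, Theta) defines the sparse problem min ||s||_1 s.t. y = Theta s of (7)/(9).
y, Theta = form_cs_problem(x, A_grid)   # Phi = I here, reusing x and A_grid from above
```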
III. OMP AND COSAMP ALGORITHMS FOR DOA ESTIMATION PROBLEM

This section explains the basic idea behind the greedy algorithms OMP and CoSaMP. A very important part of adapting these algorithms to finally obtain the set of DOAs is the plotting of the angle spectrum, which is common to both algorithms once the approximation ŝ of s̄ has been estimated. We do this using (8):

P_s(θ) = ||ŝ_θ||_2 ,  θ = θ_1, θ_2, …, θ_Ns      (8)

where N_s is the total number of angles to be scanned and ŝ_θ denotes the entry of ŝ associated with the grid angle θ. The peaks of the plot of P_s(θ) versus θ correspond to the respective DOAs. For the convenience of the reader, we restate the objective problem in (9) to estimate s̄:

min ||s̄||_1  s.t.  y = Θs̄      (9)

where Θ = ΦA(θ̄). In what follows, c is the current iteration number and Λ_c is the support set at the cth iteration. We solve (9) with the following algorithms; an illustrative sketch of each procedure is given at the end of this section.

A. Orthogonal Matching Pursuit

In this algorithm, the approximation of s̄ is updated in each iteration by projecting y orthogonally onto the columns of Θ associated with the current support set Λ_c, with c denoting the current iteration. Hence it minimizes ||y − Θs||_2 over all s with support Λ_c and never re-selects an entry. Also, the residual at any iteration is always orthogonal to all currently selected entries. The algorithm is as follows.

Algorithm III.1: OMP [6]
Input: y, Θ, M
Output: an estimate ŝ
Procedure:
1) Set r_0 = y, ŝ_0 = 0, Λ_0 = ∅, and an iteration counter c = 1.
2) Find the index λ_c that solves the optimization problem
   λ_c = arg max_j |⟨r_(c-1), θ_j⟩|
   where θ_j denotes the jth column of Θ.
3) Augment the index set Λ_c = Λ_(c-1) ∪ {λ_c} and the matrix of chosen atoms Θ_c = [Θ_(c-1)  θ_(λ_c)].
4) Solve the following optimization problem to obtain the signal vector estimate for iteration c:
   ŝ_c = arg min_s ||Θ_c s − y||_2
5) Calculate the new approximation β_c of y and the new residual:
   β_c = Θ_c ŝ_c
   r_c = y − β_c
6) Increase c by 1, and return to Step 2 if c < M.
7) The value of the estimate ŝ at index λ_j equals the jth component of ŝ_c; all other entries of ŝ are zero.

B. Compressive Sampling Matching Pursuit

The CoSaMP algorithm applies hard thresholding by selecting the M largest entries of a vector obtained by applying a pseudoinverse to y. The columns of Θ selected for the pseudoinverse are obtained by hard thresholding, of magnitude 2M, of Θ* applied to the residual from the previous iteration, and adding these indices to the support set Λ_(c-1) from the previous iteration. Here Θ* denotes the conjugate transpose of Θ. A major feature of CoSaMP is that it uses a pseudoinverse in every iteration. On the other hand, when computing the output vector ŝ, CoSaMP does not need to apply another pseudoinverse as in the case of OMP. Algorithm III.2 gives the CoSaMP algorithm.

Algorithm III.2: CoSaMP [7]
Input: y, Θ, M
Output: an estimate ŝ
Procedure (loop until convergence):
1) Set r_0 = y, ŝ_0 = 0, Λ_0 = ∅, and an iteration counter c = 1.
2) Compute the current error (signal proxy)
   e = Θ* r_(c-1)
3) Find the index set Λ̃_c of the 2M largest-magnitude entries of e.
4) Update the current support set as
   Λ_c = Λ_(c-1) ∪ Λ̃_c
5) Estimate the new approximation β_c of s̄ and the new residual:
   β_c = Θ_(Λ_c)^† y, with the entries of β_c outside Λ_c set to zero, where Θ_(Λ_c) denotes the columns of Θ indexed by Λ_c and † the pseudoinverse
   r_c = y − Θβ_c
6) Increase c by 1, and set ŝ = β_c.
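For reference, the following is a minimal NumPy sketch of Algorithm III.1 together with the angle-spectrum computation of (8). It follows the steps listed above, with the least-squares step solved by numpy.linalg.lstsq; it is an illustration, not the authors' implementation.

```python
import numpy as np

def omp(y, Theta, M):
    """Orthogonal matching pursuit (Algorithm III.1): return an M-sparse estimate of s."""
    s_hat = np.zeros(Theta.shape[1], dtype=complex)
    residual = y.astype(complex)
    support = []
    for _ in range(M):
        # Step 2: index of the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Theta.conj().T @ residual)))
        if idx not in support:                       # Step 3: augment the support set
            support.append(idx)
        # Step 4: least squares over the chosen atoms
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        # Step 5: new approximation of y and new residual
        residual = y - Theta[:, support] @ coef
    s_hat[support] = coef                            # Step 7: coefficients on the support
    return s_hat

def angle_spectrum(s_hat):
    """P_s(theta) of (8): magnitude of the estimate at each scanned grid angle."""
    return np.abs(s_hat)

# Peaks of angle_spectrum(omp(y, Theta, M)) over the grid indicate the estimated DOAs.
```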

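A matching sketch of Algorithm III.2 follows. The merge of the 2M new indices with the previous support and the pruning of the least-squares solution to its M largest entries follow the textual description above; where the step list leaves a formula implicit, the usual CoSaMP convention is assumed.

```python
import numpy as np

def cosamp(y, Theta, M, max_iter=30, tol=1e-6):
    """Compressive sampling matching pursuit (Algorithm III.2)."""
    s_hat = np.zeros(Theta.shape[1], dtype=complex)
    residual = y.astype(complex)
    for _ in range(max_iter):
        # Step 2: signal proxy e = Theta^* r_(c-1)
        e = Theta.conj().T @ residual
        # Step 3: indices of the 2M largest-magnitude entries of e
        new_idx = np.argsort(np.abs(e))[-2 * M:]
        # Step 4: merge with the support of the current estimate
        support = np.union1d(new_idx, np.flatnonzero(s_hat))
        # Step 5: least squares on the merged support, then keep the M largest entries
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        order = np.argsort(np.abs(coef))[-M:]
        s_hat[:] = 0
        s_hat[support[order]] = coef[order]
        residual = y - Theta @ s_hat
        if np.linalg.norm(residual) <= tol * np.linalg.norm(y):
            break
    return s_hat
```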
IV. NUMERICAL ANALYSIS

In this section, we present MATLAB simulations to study the performance of the OMP and CoSaMP algorithms in various DOA environment scenarios. The results are discussed with the corresponding plots and their analysis. For all the simulations, the ULA has N = 15 sensors. The scanning direction grid contains Ns = 181 points sampled from −90° to 90° at 1° intervals. Throughout this section, the noise is generated from a zero-mean complex Gaussian distribution. The number of snapshots is K = 1; hence the problem is treated as a single measurement vector (SMV) problem.
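A driver along the following lines reproduces the structure of the scenarios described below; it reuses the helper functions and the omp/cosamp sketches given earlier. The seed and plotting details are illustrative assumptions, and the resulting curves are not the paper's figures.

```python
import numpy as np
import matplotlib.pyplot as plt

# Scenario of Simulation 1: N = 15 sensors, three non-coherent sources, SNR = 0 dB, K = 1
rng = np.random.default_rng(1)
N, M, snr_db = 15, 3, 0.0
grid = np.arange(-90.0, 91.0, 1.0)                  # Ns = 181 scan angles
doas = [-60.0, 0.0, 40.0]

A_grid = array_manifold(grid, N)                    # dictionary A(theta_bar), defined earlier
x = single_snapshot(doas, N, snr_db, rng)           # single snapshot (SMV problem)
y, Theta = x, A_grid                                # Phi = I for simplicity (assumption)

for name, algo in [("OMP", omp), ("CoSaMP", cosamp)]:
    spectrum = np.abs(algo(y, Theta, M))
    plt.plot(grid, spectrum / spectrum.max(), label=name)

plt.xlabel("Angle (degrees)")
plt.ylabel("Normalized angle spectrum")
plt.legend()
plt.show()
```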
Simulation 1: In this example, there are three non-coherent sources with respective directions θ_1 = −60°, θ_2 = 0° and θ_3 = 40°. The SNR is set to 0 dB to create a noisy environment. The performances of the algorithms are shown in Fig. 2. We can observe from the plot that the OMP algorithm correctly resolves all the sources under the given scenario with better resolution. The CoSaMP spectrum, however, gives poorer results and is not able to detect all the sources properly. This is mainly because it needs more measurements than OMP requires to correctly detect the DOA support set; this was confirmed by taking a larger number of measurements.

Fig. 2. Plot for Simulation 1 (N = 15, M = 3, SNR = 0 dB)

Simulation 2: Keeping the same DOA environment as the first example, we have one source from θ_1 = −60° and two coherent sources with directions θ_2 = 0° and θ_3 = 40°. The SNR remains unchanged. The performances of the algorithms in this partially coherent source environment are shown in Fig. 3. We can observe from the plot that the OMP algorithm accurately resolves all the sources under the partially coherent source scenario without false peaks. CoSaMP, however, fails to resolve all the signals and generates false peaks. Thus, the OMP algorithm works well even here, given the coherency of the sources. These two examples show an empirical advantage of OMP over CoSaMP in terms of performance.

Fig. 3. Plot for Simulation 2 (N = 15, M = 3, SNR = 0 dB)

We now perform an RMSE versus SNR analysis to observe the performance over 1000 trials.

Simulation 3: This simulation considers two sources with DOAs θ_1 = −60° and θ_2 = 60°. We compare the algorithms with respect to root mean square error (RMSE) versus SNR (dB). 1000 independent Monte Carlo experiments are performed. It is observed from Fig. 4 that the OMP algorithm achieves a much better estimation performance.

Fig. 4. Plot for Simulation 3 (N = 15, M = 2, Trials = 1000)
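The RMSE-versus-SNR comparison can be organized as a Monte Carlo loop such as the one below, again reusing the earlier helpers. The simple grid-peak-picking DOA estimator used here is an assumption about how estimates are read off the spectrum, not necessarily the paper's exact procedure.

```python
import numpy as np

def estimate_doas(s_hat, grid, M):
    """Read off the M grid angles with the largest spectrum values as DOA estimates."""
    idx = np.argsort(np.abs(s_hat))[-M:]
    return np.sort(grid[idx])

def rmse_vs_snr(algo, doas, grid, n_sensors, snr_list_db, n_trials=1000, seed=0):
    """Monte Carlo RMSE (in degrees) of the grid-based DOA estimates at each SNR."""
    rng = np.random.default_rng(seed)
    true_doas = np.sort(np.asarray(doas, dtype=float))
    A_grid = array_manifold(grid, n_sensors)
    rmse = []
    for snr_db in snr_list_db:
        sq_err = 0.0
        for _ in range(n_trials):
            x = single_snapshot(doas, n_sensors, snr_db, rng)
            est = estimate_doas(algo(x, A_grid, len(doas)), grid, len(doas))
            sq_err += np.mean((est - true_doas) ** 2)
        rmse.append(np.sqrt(sq_err / n_trials))
    return np.array(rmse)

# Example (Simulation 3 scenario): two sources at -60 and 60 degrees
# rmse_omp = rmse_vs_snr(omp, [-60.0, 60.0], np.arange(-90.0, 91.0, 1.0), 15, range(-10, 21, 5))
```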

V. CONCLUSION

In this paper, we provided an overview of the application of the OMP and CoSaMP algorithms to the DOA estimation problem by modelling it as a standard CS recovery problem. The major advantage of these algorithms is that they do not require any eigenvalue decomposition and work well with a single snapshot. This is highly desirable for practical engineering applications such as dynamic tracking of a vehicle. It can fairly be concluded that greedy algorithms are a favorable candidate for DOA estimation problems, as they are fast and have high resolution. These algorithms remain at the forefront of active CS research and thus provide a strong alternative tool for a wide range of array signal processing applications. We will further work on improving the performance of these algorithms, specifically by designing an adaptive dictionary suited to the DOA estimation problem.

REFERENCES

[1] D. L. Donoho, "Compressed sensing", IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.

[2] Y. Wang, G. Leus and A. Pandharipande, "Direction estimation using compressive sampling array processing", IEEE/SP 15th Workshop on Statistical Signal Processing (SSP), 2009, pp. 626-629.
[3] P. S. Naidu, Sensor Array Signal Processing, 1st ed., CRC Press, Boca Raton, FL, USA, 2001.
[4] J. M. Kim, O. K. Lee and J. C. Ye, "Compressive MUSIC: Revisiting the link between compressive sensing and array signal processing", IEEE Trans. Inf. Theory, vol. 58, no. 1, pp. 278-301, 2012.
[5] P. P. Vaidyanathan and P. Pal, "Why does direct-MUSIC on sparse-arrays work?", 2013 Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, 2013, pp. 2007-2011.
[6] J. A. Tropp and A. Gilbert, "Signal recovery from partial information by orthogonal matching pursuit", IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.
[7] C. Karakuş and A. C. Gürbüz, "Comparison of iterative sparse recovery algorithms", 2011 IEEE 19th Signal Processing and Communications Applications Conference (SIU), 2011, pp. 857-860.
[8] A. Aich and P. Palanisamy, "A strict bound for dimension of measurement matrix for CS beamformer MUSIC algorithm", 2016 IEEE Region 10 Conference (TENCON), Singapore, 2016, pp. 2602-2605.
[9] E. J. Candès and M. B. Wakin, "An introduction to compressive sampling", IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, March 2008.
[10] J. W. Choi et al., "Compressed sensing for wireless communications: Useful tips and tricks", arXiv preprint arXiv:1511.08746, 2015.

