Zubair Thesis Subspace Methods
OF SINUSOIDAL SIGNALS
by
MUHAMMAD ZUBAIR
Acknowledgments
First and foremost I would like to thank the Almighty for granting me perseverance and un-
derstanding to carry out this work.
I owe my gratitude to my advisor Associate Prof. Dr. Sajid Ahmed of the Electrical En-
gineering Department at Information Technology University. He always kept his doors open
whenever I ran into a trouble spot or had a question about my research or writing. He con-
sistently steered me in the right direction whenever I needed it. Thanks a lot sir.
I must express my very profound gratitude to my parents and to my sister for providing
me with unfailing support and continuous encouragement throughout my years of study and
through this research and thesis write-up. I could not have possibly accomplished this with-
out them. Thank you.
Last but definitely not the least, this work has been shaped by many interactions with my
numerous colleagues. Thank you all.
Author
[Muhammad Zubair]
Abstract
Detection and estimation of sinusoidal signals buried in noise is a well-posed and extensively
investigated problem. For frequency estimation, parametric subspace-based algorithms are
quite proficient compared to non-parametric methods. However, the performance of
subspace-based algorithms depends on the selection of dominant eigenvalues, which is chal-
lenging. This thesis presents a novel subspace-based algorithm for frequency estimation
that exploits a circular transformation of the data to formulate the input matrix for
subspace-based algorithms. For complex and real sinusoidal signals, two theorems are pro-
vided that set the criteria for forming the signal and noise subspaces from the circulant
matrix. Expressions for the reconstruction errors of both the proposed method and the
Karhunen-Loève Transform (KLT) are derived, and it is verified that the proposed algorithm
loses no information to the noise subspace, whereas the KLT struggles to preserve all the
signal information. Additionally, expressions for the output Signal-to-Noise Ratios (SNRs)
are given, and it is shown that the signal estimate is improved in comparison to the KLT. The
performance of the proposed method is also compared with other subspace-based algorithms,
i.e., Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT),
Multiple Signal Classification (MUSIC), Minimum-Norm, and the Eigenvector Method.
Comparisons with non-parametric methods such as the fast Fourier transform (FFT) and
Capon are also carried out. The Cramér-Rao Lower Bound and extraction of a clean spectrum
are set as the benchmarks for testing the proposed method against existing methods.
Simulation results show that the proposed method outperforms other parametric and
non-parametric methods.
Table of Contents
Title Page
Acknowledgments
Abstract
Table of Contents
List of Figures
1 Introduction
1.1 Overview
1.2 Applications
1.3 Attempts for Frequency Estimation
1.4 Problem Statement
1.5 Objectives
1.6 Limitations and Scope
1.7 Thesis Outline
2 Literature Review
2.1 Overview
2.2 Correlation and Covariance Matrices
2.3 Noise Subspace Methods
2.4 Signal Subspace Methods
3 Methodology
3.1 Signal Model
3.2 Proposed Algorithm
3.3 Reconstruction Error of Proposed Algorithm and KLT
3.4 Input and Output SNR of Proposed Algorithm
4 Experimental Results
4.1 Spectrum Comparison
4.2 Mean Square Error (MSE)
4.3 Analysis of Signal Content Distribution in Matrix Space
4.4 Input and Output SNR Comparison of Proposed Method with KLT
5 Conclusion and Future Work
5.1 Conclusion
5.2 Future Work
6 References
7 Appendices
7.1 Proof of Theorem-1
7.2 Proof of Theorem-2
7.3 CRLB for Two Complex Sinusoidal Signals
List of Figures
4.14 Power Spectrum estimation of two sources with unequal power using NSS in dB scale
4.15 Power Spectrum estimation of two sources with unequal power using SSS in normal scale
4.16 Power Spectrum estimation of two sources with unequal power using SSS in dB scale
4.17 Power Spectrum estimation of two sources with the same power using NPM in normal scale
4.18 Power Spectrum estimation of two sources with the same power using NPM in dB scale
4.19 Power Spectrum estimation of two sources with unequal power using NPM in normal scale
4.20 Power Spectrum estimation of two sources with unequal power using NPM in dB scale
4.21 Normal scale MSE comparison of proposed method with NSS methods
4.22 dB scale MSE comparison of proposed method with NSS methods
4.23 Normal scale MSE comparison of proposed method with SSS methods
4.24 dB scale MSE comparison of proposed method with SSS methods
4.25 Normal scale MSE comparison of proposed method with NSS methods
4.26 dB scale MSE comparison of proposed method with NSS methods
4.27 Normal scale MSE comparison of proposed method with SSS methods
4.28 dB scale MSE comparison of proposed method with SSS methods
4.29 Normal scale MSE comparison of proposed method with NSS methods
4.30 dB scale MSE comparison of proposed method with NSS methods
4.31 Normal scale MSE comparison of proposed method with SSS methods
4.32 dB scale MSE comparison of proposed method with SSS methods
4.33 Normal scale MSE comparison of proposed method with NSS methods
4.34 dB scale MSE comparison of proposed method with NSS methods
4.35 Normal scale MSE comparison of proposed method with SSS methods
4.36 dB scale MSE comparison of proposed method with SSS methods
4.37 Normal scale MSE comparison of proposed method with NPM methods
4.38 dB scale MSE comparison of proposed method with NPM methods
4.39 Comparison of eigenvalue distributions of proposed method with covariance and autocorrelation matrices
4.40 Output SNR comparison of reconstructed signal via proposed and KLT methods
Chapter 1
Introduction
1.1 Overview
For decades, the problem of detecting sinusoidal signals buried in noise, and of measuring their
constituent parameters such as frequency, amplitude, and phase, which are essential for the
reconstruction or spectral estimation of a signal, has been a focus of researchers' attention.
Before diving into the analysis of signals and the various techniques proposed for frequency
estimation, we first discuss potential applications that will later serve to illustrate the concepts developed here.
1.2 Applications
Frequency estimation has been an evergreen topic of interest for researchers over the last few decades,
owing to its numerous applications in various fields [1–12]. In most parts of the world, power systems
operate at 50/60 Hz. Due to varying load demands, the frequency deviates from its nominal value. To keep
the system stable and to avoid any kind of damage, frequency control is required, and that is only
possible when the frequency is estimated accurately. As another example, in Frequency-Modulated
Continuous-Wave (FMCW) radar, the frequency of the received signal is estimated to find the velocity and
range of the target. Similarly, in mobile and personal communication systems, frequency estimation is
used to detect Dual-Tone Multi-Frequency (DTMF) signals [13].
In biomedical engineering, frequency estimation is used to classify the spectrum of Heart Rate
Variability (HRV) into low-frequency (LF), very-low-frequency (VLF), and high-frequency (HF) bands.
In any well-engineered system, Prognostic Health Management (PHM) is used to study and diagnose
the health of electromechanical systems. In PHM it is very common to extract features from temporal
data by analyzing its frequency-domain counterpart. For example, [14–16] used the fast
Fourier transform (FFT) and short-time Fourier transform (STFT) to study features of ball-bearing
data, because exploiting the data in the frequency domain made it easy to learn about the
fault features.
One interesting new application of frequency estimation is the Search for Extraterrestrial Intelli-
gence (SETI) [11, 17].
There is thus a vast variety of applications where frequency estimation is required, and it is not
possible to list all of them here. Because of the great importance of this problem in the many fields
listed above, it has been an area of great interest for many researchers, and their attempts at this
problem have resulted in many techniques beyond the conventional methods.
1.3 Attempts for Frequency Estimation
To keep this work from becoming exhaustive, we discuss only the most important techniques. The very
first methods developed for frequency estimation in the time domain were the Zero-Crossing Rate, Peak
Rate, and Slope-Event Rate. These methods work fine if the waveform is periodic and has a single peak
per cycle, but a waveform may have many peaks per cycle. The autocorrelation method has also remained
in practice for periodic waveforms. It is difficult to distinguish the target signal from noise in the time
domain; frequency-spectrum analysis is usually more helpful for analyzing the characteristics of signal
and noise. Later on, more advanced frequency-domain methods were proposed that serve for spectral
analysis and frequency estimation of signals. Remember that spectral analysis of signals and frequency
estimation are two completely different things [18].
For spectral estimation and frequency estimation there are two approaches. The first, termed non-
parametric, includes the periodogram, the correlogram, and other methods based directly on the FFT,
as well as beamforming. The second approach postulates a model for the data with assumed parameters
and then reduces the problem from spectral estimation to parameter estimation; these methods are
termed parametric methods [19].
In general, the FFT is used to study the spectral behavior of signals. In the field of digital signal
processing, the FFT is one of the most useful algorithms ever developed and is the cornerstone of many
other algorithms [20]. In essence, the FFT is a well-known and well-investigated approach for single- or
multiple-tone detection. Without noise, the FFT shows the clean spectrum of the corresponding signals; but
as we increase the noise power, or decrease the Signal-to-Noise Ratio (SNR), the noise frequency com-
ponents or side-lobe levels start to dominate over the target frequency components. Consequently it becomes
difficult to differentiate between target frequencies and noise frequencies, and impossible to remove the
noise from the observed signal as well. When the SNR is very low, the FFT fails to separate the target signal
spectrum (main lobe) from the noise spectrum (side-lobes), and as a consequence the output SNR becomes
very low. In short, the FFT and other FFT-based non-parametric techniques are unable to resolve closely
spaced targets at low SNR. The beamforming approach has also been applied to this
problem, but it has limitations due to the structure of the sensors. The resolving power of non-parametric
methods is at most equal to the Rayleigh resolution.
Parametric methods include Maximum Likelihood Estimation and subspace-based algorithms such as
Multiple Signal Classification (MUSIC), ESPRIT, the Yule-Walker equations, and Total Least Squares [8].
The maximum likelihood technique is claimed to achieve the Cramér-Rao Lower Bound (CRLB), but it
does not guarantee global convergence for multidimensional non-linear problems, and additionally it
requires too much computational power. Other parametric subspace methods,
such as PHD, MUSIC, and ESPRIT, require a priori information about the number of sources present in
the data; if no such information is available, a number of solutions exist in the
literature to detect the number of sources from the observed data. Throughout this thesis, for paramet-
ric methods it is assumed that the number of sources is known. All parametric methods can also be
termed super-resolution algorithms, as they overcome the Rayleigh-resolution limitation of non-
parametric methods. Subspace-based methods have attracted much attention because they overcome
the Rayleigh bound of non-parametric methods and are computationally simple. In
practice, however, a common problem with subspace-based algorithms is that their performance depends
on the separation of the signal and noise subspaces [21].
Many proposed adaptive algorithms, such as Capon and MUSIC, are based on asymptotic assumptions of
a large number of observations and snapshots and a very high Signal-to-Noise Ratio [22]. However, there are
many real-time scenarios where these assumptions are unrealistic and the above-mentioned algorithms
do not work. For example, in sonar, due to the physical constraint of the speed of sound, only a few
observations and snapshots are available, or in the worst case a single one [22–25]. Another application
where only a single observation or snapshot is available is the automotive radar system [26].
To address the above issues of asymptotic assumptions of a large number of observations and high
SNR, the authors in [11] proposed a parametric approach, a Karhunen-Loève Transform (KLT) based
method, to remove noise from noisy sinusoidal signals. After thorough investigation of the KLT, a
number of problems were observed, which are listed as follows: (i) it spreads the signal power along all the basis
vectors (dimensions) onto which the signal is mapped; (ii) no optimal criterion is defined for choosing the
basis vectors for reconstruction of the target signal; (iii) the spectrum of the recovered signal is much better
than the FFT's, but some side lobes remain in the spectrum of the signal; when the signal is reconstructed
using only the top eigenvectors, most of the signal is recovered along with some noise.
So, to address these issues with the subspace and KLT algorithms, there is at this stage a quint-
essential need for a new algorithm.
In this thesis, an algorithm is proposed that exploits the benefits of both parametric and non-parametric
approaches; additionally, the problems of the above-mentioned algorithms, namely the requirement of a large num-
ber of observations, high SNR, and a selection criterion for the eigenfunctions used to recover the signal with the KLT,
are addressed. The major contributions of this thesis are:
1. A novel approach is provided to obtain a clean spectrum for frequency estimation.
2. The proposed algorithm maps a sinusoidal signal from the N-dimensional space to only a finite-dimensional
space (the signal space) and makes all the other dimensions a null space.
3. A clean spectrum of the signal can be obtained; an exact criterion is defined for the number of
basis vectors to use to recover the signal of interest.
To the best of the authors' knowledge, no other method exists to date that can achieve as high an
output SNR as the proposed method. The survey of frequency estimation algorithms and the performance
evaluation of the proposed algorithm are conducted according to the taxonomy described in Fig.(1.1).
Figure 1.1: Taxonomy of frequency estimation algorithms
1.4 Problem Statement
The problem is described in Fig.(1.2): P sources emit sinusoidal signals which, on reaching the
receiver, are jumbled together with noise, so that the base-band noisy observation at hand is
Y = S1 + S2 + · · · + SP + AWGN. An optimal algorithm is required to reconstruct the
signals buried in white Gaussian noise (WGN).
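As a concrete illustration, the observation model above can be simulated as follows. This is a minimal sketch, not the thesis's code: the function name, the source frequencies, amplitudes, and SNR are hypothetical example values, and complex exponentials stand in for the sinusoids.

```python
import numpy as np

def noisy_observation(freqs, amps, N, snr_db, seed=0):
    """Generate Y = S1 + ... + SP + AWGN for P complex sinusoids.

    freqs/amps are illustrative normalized frequencies and amplitudes;
    the noise power is set from the requested SNR in dB.
    """
    rng = np.random.default_rng(seed)
    n = np.arange(N)
    s = sum(a * np.exp(2j * np.pi * f * n) for f, a in zip(freqs, amps))
    p_sig = np.mean(np.abs(s) ** 2)
    sigma2 = p_sig / 10 ** (snr_db / 10)          # noise power for the target SNR
    w = np.sqrt(sigma2 / 2) * (rng.standard_normal(N)
                               + 1j * rng.standard_normal(N))
    return s + w

y = noisy_observation([0.1, 0.23], [1.0, 0.5], N=128, snr_db=10)
```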
Figure 1.2: Problem Statement: circles of different colors represent different sources
1.5 Objectives
The above problem statement will be solved by accomplishing the following steps:
1. Identify the loopholes in as many frequency estimation methods as possible within the time
frame of a master's thesis.
2. Propose an algorithm that overcomes the limitations found in the above step.
The scope of this work extends to many real-world applications, since sinusoidal signals are
encountered throughout them.
Chapter 2
Literature Review
An overview of non-parametric and parametric methods is given first. The existing algorithms in
the scientific literature are then explained according to the taxonomy given in Fig.(1.1).
2.1 Overview
Non-parametric methods do not make any assumptions about the signal under analysis, while paramet-
ric methods assume an a priori model of the signal. Among non-parametric methods, the Fast Fourier
Transform (FFT) is the most popular way to find the frequency components of a signal. It is simply a
fast algorithm for computing the Discrete Fourier Transform (DFT):

Y_K = \sum_{n=0}^{N-1} y(n)\, e^{-j2\pi K n / N}, \quad K = 0, 1, \ldots, N-1  (Equation 2.1)

FFT limitations:
1. Resolution bound of 1/N, where N is the total number of samples.
2. Unable to distinguish between actual peaks and noise peaks at low Signal-to-Noise Ratio
(SNR).
The FFT is also exploited in many other algorithms to further improve the frequency estimation
and spectral analysis of signals [19]. Keep in mind that frequency estimation and spectral analysis of
signals are two different things [18], although spectral analysis of signals can also be used to estimate their
frequencies. The periodogram is one of the most widely used tools for spectral analysis of signals.
The periodogram is based on the FFT, and an estimate of the signal frequency can be found by locating the
bin with the highest peak among the N FFT bins. The periodogram of a statistical random signal displays many
noise peaks that tend to obscure the desired peaks; to avoid these, Bartlett's method
(the averaged periodogram) is used in the literature, at the cost of lower resolution than the simple periodogram.
Manolakis in [27] compares Minimum-Variance (Capon) with the FFT-based Bartlett method and the all-pole method,
and calls the former a high-resolution method, as it resolves closely spaced targets while the latter two
methods are unable to do so for a low order (16) of the autocorrelation matrix. However, if we compare Minimum-
Variance with the simple FFT method, the FFT outperforms Minimum-Variance. To some extent, all FFT-
based methods and their improved versions are limited by the 1/N resolution bound.
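The periodogram-based estimate described above (locate the highest of the N FFT bins) can be sketched as follows. Note that the estimate is quantized to the 1/N bin grid, which is exactly the resolution bound discussed above; the function name and the test tone are illustrative only.

```python
import numpy as np

def fft_peak_freqs(y, num_peaks=1):
    """Estimate normalized frequencies by locating the highest periodogram bins."""
    N = len(y)
    P = np.abs(np.fft.fft(y)) ** 2 / N               # periodogram over the N bins
    bins = np.argsort(P)[::-1][:num_peaks]           # bins with the largest peaks
    return np.sort(np.fft.fftfreq(N)[bins])          # bin index -> frequency

n = np.arange(256)
y = np.exp(2j * np.pi * 0.25 * n)                    # a single complex tone
print(fft_peak_freqs(y))                             # -> [0.25]
```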
On the other side, among parametric methods, Non-linear Least Squares (NLS) methods remain the most ac-
curate if the data length is quite large, and they perform equally well for Gaussian and non-Gaussian
noise. Unfortunately, the good statistical performance of NLS methods depends on the complexity
of the function under consideration and on accurate initialization of the parameter of interest in the search pro-
cess [19]. These kinds of limitations produced new interest in another family of parametric
methods, known as subspace-based methods or super-resolution methods.
Research in subspace-based techniques for signal processing was initiated more than three decades
ago, and there has been considerable progress in the area. Thorough studies have shown that the
estimation and detection tasks in many signal processing and communications applications can be
significantly improved by using the subspace-based methodology. The interest of the signal pro-
cessing community in subspace-based schemes remains strong, as is evident from the numerous
articles and reports published in this area each year, as well as from the attention attracted by the spe-
cial issue "Advances in Subspace-Based Techniques for Signal Processing and Communications".
Subspace-based methods not only provide new insight into many such problems, but they also offer
a good tradeoff between achieved performance and computational complexity. In most cases they
can be considered low-cost alternatives to computationally intensive maximum-likelihood and
Non-linear Least Squares (NLS) approaches.
Subspace-based methods are built on second-order statistics (SOS), namely what we call the autocorrelation or
covariance matrix, so it is essential to explore these in detail before explaining the subspace methods.
In the literature there are different methods to compute the autocorrelation and covariance matrices, each
with its own significance depending on the application. In what follows they are explained in detail.
2.2 Correlation and Covariance Matrices
Suppose we have recorded data from a single sensor over the time interval [0, N − 1].
Now, if we pass this data through a filter of length M = N − 1 (the filter length may fall anywhere in the range
P < M < N − 1, where P is the total number of sources (frequencies)), it favors the
estimation to select the filter length at its maximum value; otherwise the efficiency of the estimation
is sacrificed. So, to filter the data, we need the filter coefficients to remain constant during the time
interval 0 ≤ n ≤ N − 1, and for that purpose we apply a windowing operation on the data in
different ways depending on the application, as explained below in detail.
Windowing the data matrix gives the autocorrelation matrix, while the covariance matrix is obtained from a
data matrix built using only the available data.
2.2.1 Auto-Correlation Matrix
The autocorrelation matrix is approximated in four different ways depending on its application [21, 27, 30].

2.2.1.1 Pre-Window Method

In this method, for the data matrix Y_pre it is assumed that y(n) = 0 for n < 0:

Y_{pre} = \frac{1}{\sqrt{N}} \begin{bmatrix} y(0) & 0 & \cdots & 0 \\ y(1) & y(0) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ y(N-1) & y(N-2) & \cdots & y(0) \end{bmatrix}

R_{pre} = Y_{pre}^H Y_{pre}

Here Y_pre ∈ C^{N×N} and R_pre ∈ C^{N×N}. R_pre is not a fully Toeplitz matrix, but we can say it is
approximately Toeplitz. It is used in least-squares-based applications.
2.2.1.2 Post-Window Method

In the post-window method, the data samples y(n) are considered zero for n > (N − 1). The formulated data
matrix Y_post can be written as

Y_{post} = \frac{1}{\sqrt{N}} \begin{bmatrix} y(N-1) & y(N-2) & \cdots & y(0) \\ 0 & y(N-1) & \cdots & y(1) \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & y(N-1) \end{bmatrix}

R_{post} = Y_{post}^H Y_{post}

Here Y_post ∈ C^{N×N} and R_post ∈ C^{N×N}. R_post has no practical applications on its own, but it may be found in
some applications in combination with the pre-window method.
2.2.1.3 Full Window Method
By applying the combination of the pre-window and post-window methods to the data vector in (Equation
2.3), the data matrix Y_full for the autocorrelation matrix R_ac can be written as

Y_{full} = \frac{1}{\sqrt{N}} \begin{bmatrix} y(0) & 0 & \cdots & 0 \\ y(1) & y(0) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ y(N-1) & y(N-2) & \cdots & y(0) \\ 0 & y(N-1) & \cdots & y(1) \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & y(N-1) \end{bmatrix}

R_{ac} = Y_{full}^H Y_{full}

Here R_ac is a Toeplitz matrix (the entries along each diagonal are the same) and is a biased estimate of the true
autocorrelation matrix. Y_full ∈ C^{(N+M−1)×N} and R_ac ∈ C^{N×N}. The autocorrelation matrix has
its applications in spectral estimation and least-squares solutions.
Note: The autocorrelation matrices approximated by the autocorrelation function and by the full-window method
are the same.
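The full-window construction can be sketched as follows for a general order m (the thesis's display uses m = N; the function name and the double loop are our illustrative choices). Because the snapshot sum runs over the fully zero-padded range, the resulting estimate is exactly Toeplitz, as stated above.

```python
import numpy as np

def full_window_autocorr(y, m):
    """Biased full-window estimate R_ac = Y^H Y / N, where the rows of Y are
    the length-m snapshots [y(n), y(n-1), ..., y(n-m+1)] with y(n) = 0
    outside 0 <= n <= N-1 (pre- and post-windowing combined)."""
    N = len(y)
    Y = np.zeros((N + m - 1, m), dtype=complex)
    for n in range(N + m - 1):
        for k in range(m):
            if 0 <= n - k < N:
                Y[n, k] = y[n - k]
    return Y.conj().T @ Y / N

y = np.exp(2j * np.pi * 0.1 * np.arange(16))
R = full_window_autocorr(y, 4)      # 4x4, Hermitian and exactly Toeplitz
```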
2.2.2 Sample Covariance Matrix
In this method no windowing operation is applied; the data matrix is formed using only the avail-
able data, and the covariance matrix is computed from the data matrix in the same way as in the previous
methods. The sample covariance matrix has more resolution than the autocorrelation matrix and is an unbi-
ased estimate of the true covariance matrix. For higher SNR it has a lower mean square error (MSE) than the
autocorrelation matrix.
Since the observation vector (Equation 2.3) uses the time range [0, N − 1] instead of [1, N],
for ease of understanding we use q = m − 1 instead of m in the data matrix, where m is the
order of the covariance matrix.

Y_{cov} = \frac{1}{\sqrt{N}} \begin{bmatrix} y(q) & y(q-1) & \cdots & y(0) \\ y(q+1) & y(q) & \cdots & y(1) \\ \vdots & \vdots & \ddots & \vdots \\ y(N-1) & y(N-2) & \cdots & y(N-1-q) \end{bmatrix}

R_{cov} = Y_{cov}^H Y_{cov}  (Equation 2.6)

Here Y_cov ∈ C^{(N−m+1)×m} and R_cov ∈ C^{m×m}. Equivalently, (Equation 2.6) can be written as

R_{cov} = \sum_{t=q}^{N-1} y(t : t-q)\, y(t : t-q)^H  (Equation 2.7)

A frequency computed this way in subspace algebraic methods must be multiplied by negative one to
get the actual outcome [19]. To avoid this, we can formulate R_cov in a simple way, as defined
below, so that the estimated frequency need not be multiplied by a negative sign:

R_{cov} = \sum_{t=q}^{N-1} y(t-q : t)\, y(t-q : t)^H  (Equation 2.8)
As can be seen in Fig.(2.1), the performance of the covariance matrix depends on its order. A loose criterion,
P (no. of sources) < order (m) < N (data length), is available in the literature. No optimal way to
choose the order of the covariance matrix exists in the literature; however, in our observation, a quarter of the data
length can be used as an optimal value in the sense of computational cost and estimation accuracy.
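A sketch of (Equation 2.8) with the quarter-of-data-length order rule applied. The 1/(N−q) normalization is one common choice we assume here; the thesis's equations leave the scaling implicit.

```python
import numpy as np

def sample_cov(y, m):
    """Order-m sample covariance (Equation 2.8): average of outer products
    of the ascending-time snapshots y(t-q : t), q = m - 1, no windowing."""
    N, q = len(y), m - 1
    R = np.zeros((m, m), dtype=complex)
    for t in range(q, N):
        v = y[t - q:t + 1]              # length-m snapshot in ascending time
        R += np.outer(v, v.conj())
    return R / (N - q)                  # assumed normalization over snapshots

y = np.exp(2j * np.pi * 0.1 * np.arange(64))
m = len(y) // 4                         # order ~ a quarter of the data length
R = sample_cov(y, m)
```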
Figure 2.1: Eigenvalue distributions of the covariance matrix and the autocorrelation matrix (full-window method); the simulation results shown are for two complex sinusoidal signals.
Both the autocorrelation and covariance matrices have their own significance depending on the application. In the
covariance matrix, all the signal content is confined to n (the total number of sources) eigenvalues for
complex sinusoidal sources, expanding to 2n eigenvalues for real sinusoidal sources. In the
autocorrelation matrix (full window), the signal content is distributed across all N eigenvalues.
This behavior can be seen in Fig.(2.1).
In the Results chapter, we analyze the autocorrelation and covariance matrices described above in a
more rigorous way, by exploiting them in various subspace-based algorithms. In the next section on
subspace-based methods, however, we treat R in a general sense, covering both the autocorrelation (R_ac)
and covariance (R_cov) matrices; otherwise it would create confusion, and it is unnecessary here to
explain the subspace methods separately for subspaces constructed from the autocorrelation and covariance methods.
So, let us start the treatment of R by decomposing it into its eigenvalues and eigenvectors:
So, lets start treatment of R by decomposing it into its eigen values and eigen vectors,
R = UΛU^H  (Equation 2.9)

Here

Λ = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_N \end{bmatrix}
If our observation vector y contains the superposition of realizations from Q sources, then the
Q largest eigenvalues correspond to the signal subspace, and the rest are considered
noise-subspace eigenvalues. In mathematical form:
Noise Subspace Credentials:
Noise subspace eigenvalues,

Λ_{noise} = \begin{bmatrix} \lambda_{Q+1} & 0 & \cdots & 0 \\ 0 & \lambda_{Q+2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_N \end{bmatrix}

and the signal-subspace eigenvectors,

U_s = [u_1 \; u_2 \; \ldots \; u_Q]

Since the eigenvectors of R are orthonormal, u_i^H u_j = 0 for i ≠ j, where i and j run from 1, · · · , N.
Hence the basis vectors of the signal and noise subspaces are also orthogonal, which is the foundation of
the development of subspace-based methods. If we
denote the signal-subspace vectors by a and the noise-subspace vectors by n, then a^H n = 0.
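The eigen-split above can be sketched with numpy (`eigh` returns eigenvalues in ascending order, so we reorder to descending; the function name and test matrix are ours):

```python
import numpy as np

def split_subspaces(R, Q):
    """Eigendecompose R = U Λ U^H (Equation 2.9) and split the eigenvectors:
    the Q largest eigenvalues span the signal subspace, the rest the noise
    subspace."""
    lam, U = np.linalg.eigh(R)              # ascending eigenvalues, orthonormal U
    lam, U = lam[::-1], U[:, ::-1]          # reorder to descending
    return lam, U[:, :Q], U[:, Q:]          # eigenvalues, U_signal, U_noise

# The two subspaces are orthogonal: Us^H Un = 0 for every pair of columns.
a = np.exp(2j * np.pi * 0.1 * np.arange(8))
R = np.outer(a, a.conj()) + 0.1 * np.eye(8)     # rank-1 signal plus noise floor
lam, Us, Un = split_subspaces(R, 1)
```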
In light of the above description of the R^N space of the correlation/covariance matrix, the subspace-based meth-
ods can be divided into "Noise Subspace Methods" and "Signal Subspace Methods". In what
follows, both of these are explained in detail.

2.3 Noise Subspace Methods
Noise subspace methods are further categorized into "Search-Based Methods" and "Algebraic Meth-
ods".
In search-based methods, a range of possible values for the parameter being estimated is assumed; for
example, for frequency estimation the normalized range is defined as f = [−0.5, 0.5], and similarly
for angle of arrival (AOA) we can define θ = [−90°, 90°]. These parameter values are plugged into a
function f(θ) (for the time being f(θ) is considered a dummy function; later on the notation will be
changed accordingly), and we then search for the value at which the function is maximized.
These methods are computationally expensive.
Pisarenko was the first to observe and propose frequency estimation using the noise-sub-
space eigenvectors. Although his proposed method is limited in its usefulness due to its sensitivity
to noise, it was the stepping stone for today's other, highly efficient, algorithms.
3. Form the noise subspace by choosing the single eigenvector u_min corresponding to the smallest
eigenvalue.
5. Substitute the values of the estimation parameter and search for the value at which the function in the
equation below becomes infinite. The values at which it becomes infinite are the actual values
of our parameter of interest:

P_{PHD}(f) = \frac{1}{a^H(f)\, N\, a(f)}  (Equation 2.10)

Note: Due to numerical inaccuracies, the above equation will not produce infinite results, only maxi-
mum values.
The MUSIC algorithm is an extended version of the PHD algorithm in which the dimension of the
noise subspace is extended.
The MUSIC algorithm can be implemented with the following steps [19, 32]:
3. Form the signal subspace by selecting the eigenvectors corresponding to the top Q eigenvalues. Con-
struct the noise subspace from the remaining eigenvectors, belonging to the (N − Q) smallest eigen-
values.
5. Search for the peak values in the spectrum given by the equation below to estimate the required
parameter:

P_{MUSIC}(f) = \frac{1}{a^H(f)\, N\, a(f)}  (Equation 2.11)

There is no difference between the PHD and MUSIC algorithms when the size of R is (Q + 1) ×
(Q + 1).
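A minimal MUSIC sketch following the steps above, assuming a steering vector a(f) = [1, e^{j2πf}, …]^T; the small floor in the division only guards against numerically exact zeros, and the names and test values are ours:

```python
import numpy as np

def music_spectrum(R, Q, grid):
    """MUSIC pseudospectrum P(f) = 1 / (a^H(f) N a(f)) with N = Un Un^H."""
    M = R.shape[0]
    _, U = np.linalg.eigh(R)                 # eigenvalues in ascending order
    Un = U[:, :M - Q]                        # eigenvectors of the M-Q smallest
    n = np.arange(M)
    P = np.empty(len(grid))
    for i, f in enumerate(grid):
        a = np.exp(2j * np.pi * f * n)       # steering vector a(f)
        v = Un.conj().T @ a
        P[i] = 1.0 / max(np.real(np.vdot(v, v)), 1e-12)
    return P

a0 = np.exp(2j * np.pi * 0.1 * np.arange(8))
R = np.outer(a0, a0.conj()) + 0.1 * np.eye(8)
grid = np.linspace(-0.5, 0.5, 1001)
f_hat = grid[np.argmax(music_spectrum(R, 1, grid))]   # peak near 0.1
```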
Now compute the corresponding noise-subspace matrix as N = [1; γ][1; γ]^H.
3. Finally, search for the peaks in the spectrum (Equation 2.12), as described in the previous methods:

P_{Min-Norm}(f) = \frac{1}{a^H(f)\, N\, a(f)}  (Equation 2.12)
The eigenvector (EV) method is the same as MUSIC except for the approximation of the noise-subspace matrix [33].
The EV method scales the noise-subspace basis vectors by multiplying them with the inverse of the corresponding
noise-subspace eigenvalues while computing the noise matrix N.
3. Search for the peaks in the spectrum given below to find the required parameters:

P_{EV}(f) = \frac{1}{a^H(f)\, N\, a(f)}  (Equation 2.14)
In algebraic methods we do not search for peaks as we do in the search-based algorithms; consequently, these
methods require less computation [33]. Here we select (Equation 2.10) as the general setting for explaining
all subspace algebraic methods, i.e., this explanation is not limited to the PHD algo-
rithm. To keep the explanation of the algebraic methods uncomplicated, we solve the denominator
of the pseudo-spectrum (Equation 2.10) for the space R^3:

P(f) = \frac{1}{a^H(f)\, N\, a(f)}  (Equation 2.15)
Consider the denominator of (Equation 2.15) for R^3:

D(f) = a^H(f)\, N\, a(f) = \begin{bmatrix} 1 & e^{-j2\pi f} & e^{-j2(2\pi f)} \end{bmatrix} \begin{bmatrix} n_{11} & n_{12} & n_{13} \\ n_{21} & n_{22} & n_{23} \\ n_{31} & n_{32} & n_{33} \end{bmatrix} \begin{bmatrix} 1 \\ e^{j2\pi f} \\ e^{j2(2\pi f)} \end{bmatrix}  (Equation 2.16)
For more explanation of the coefficients of (Equation 2.16), Fig.(2.2) is provided. By
observing the structure of (Equation 2.16), it can also be readily verified that the roots of the
equation appear in reciprocal pairs. So, first we choose the (N − 1) roots on or inside the unit circle, and
finally we take the Q of them that lie on the unit circle or closest to it from inside.
Having these Q roots, we find their angles in the case of direction of arrival (DOA).
In the case of frequency estimation, we divide the angles of the roots by 2π and finally multiply the
outcome by the sampling rate F_s to get the actual estimates of the required frequencies in Hertz.
Note: The above description of the algebraic methods applies to all of Root-PHD, Root-MUSIC, Root-
Minimum-Norm, and the Root-Eigenvector method, except that the noise matrix N is computed accord-
ingly for each method, as it was approximated in its search-based variant.
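The rooting procedure above can be sketched as follows for the Root-MUSIC case. Given the structure of (Equation 2.16), the coefficient of z^k in D(z) is the sum of the k-th diagonal of N; the function name and test values are our assumptions.

```python
import numpy as np

def root_music(R, Q, fs=1.0):
    """Root the polynomial D(z) = a^H N a with N = Un Un^H, keep the Q roots
    on/inside the unit circle lying closest to it, and map their angles to
    frequencies in Hz."""
    M = R.shape[0]
    _, U = np.linalg.eigh(R)
    Un = U[:, :M - Q]
    Nmat = Un @ Un.conj().T
    # coefficient of z^k is the k-th diagonal sum of N, k = M-1 down to -(M-1)
    c = np.array([np.trace(Nmat, offset=k) for k in range(M - 1, -M, -1)])
    r = np.roots(c)
    r = r[np.abs(r) <= 1.0]                          # one of each reciprocal pair
    r = r[np.argsort(1.0 - np.abs(r))][:Q]           # Q closest to the unit circle
    return np.sort(np.angle(r) / (2 * np.pi) * fs)

a0 = np.exp(2j * np.pi * 0.2 * np.arange(6))
R = np.outer(a0, a0.conj()) + 0.1 * np.eye(6)
print(root_music(R, 1))                              # -> approx [0.2]
```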
2.4 Signal Subspace Methods
In signal subspace methods we omit the noise subspace in the process of frequency estimation, thus
increasing the output SNR of the reconstructed signal that is based on the estimated frequency.
This method can be found in the literature under other names, such as the Capon method, Minimum-Variance
Spectral Estimation (MVSE), and the Maximum Likelihood Method. It is implemented by designing the
filter weights w_i at the receiver in such a way that the filter rejects noise from all possible directions and maxi-
mizes the power in the desired direction. Mathematically, it can be expressed as

P_{mv}(f) = E\{|w^H y|^2\} = w^H R w  (Equation 2.17)

To find the optimal weights, we start from the equation below:

w^H y = w^H x + w^H v  (Equation 2.18)
Using the Lagrange method to solve the constrained optimization problem,

J = E\{|w^H y|^2\} - \lambda (w^H x - 1) = w^H R w - \lambda (w^H x - 1)  (Equation 2.19)

\frac{\partial J}{\partial w^T} = 0^T
w^T R^T - \lambda x^T = 0^T
w^T R^T = \lambda x^T
R w = \lambda x  (Equation 2.20)

Substituting (Equation 2.20) into the constraint term of (Equation 2.19) and solving for λ
yields

\lambda = \frac{1}{x^H R^{-1} x}  (Equation 2.21)

Substituting (Equation 2.21) into (Equation 2.20), we get the optimal weights of the beamformer (filter)
as

w = \frac{R^{-1} x}{x^H R^{-1} x}  (Equation 2.22)

Substituting (Equation 2.22) into (Equation 2.17), we get the maximum power, and finally the beam-
pattern P_mv(f) becomes

P_{mv}(f) = \frac{1}{x^H R^{-1} x}  (Equation 2.23)
For frequency estimation Minimum-Variance (MV) performance degrades significantly if the sinu-
soidal emitting sources are very colse to each other with different amplitudes. The evidence is shown
in Fig.(2.3) when the signal has no noise and for noisy case look at Fig.(2.4).
Figure 2.3: Noiseless spectrum of Minimum-Variance (MV) with Autocorrelation and Covariance matrices.
Figure 2.4: Noisy spectrum of Minimum-Variance (MV) with Autocorrelation and Covariance matrices.
2.4.2 Algebraic Methods
This method uses the special structure of the antenna arrays or sensors. The name "Rotational Invariance" gives an intuitive idea of how the algorithm works: we place identical sub-structures (sub-arrays) in space with a small constant separation and then process the received signals to estimate their parameters.
For the explanation of this method we set up the simple scenario shown in Fig.(2.5), where we receive the signal of a single source without noise. The temporal samples are stacked in a vector x ∈ C^{N×1}. Selector-1 and Selector-2 are equivalent to two identical sub-arrays separated by a small distance. Interestingly, by passing x through the selectors, the two resulting vectors satisfy the relation x1 e^{j2πf} = x2, which can easily be solved with a least-squares solution to estimate the desired frequency.
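The single-source relation above can be sketched in a few lines; here the two sub-vectors x[:-1] and x[1:] stand in for the selector outputs (an assumption of this sketch):

```python
import numpy as np

def estimate_tone(x, fs):
    """Least-squares solution of x1 * e^{j2pi f/fs} = x2 for one noiseless tone,
    where x1 = x[:-1] and x2 = x[1:] play the role of the two selector outputs."""
    x1, x2 = x[:-1], x[1:]
    phi = (x1.conj() @ x2) / (x1.conj() @ x1)   # LS estimate of e^{j2pi f/fs}
    return np.angle(phi) * fs / (2 * np.pi)
```

With a clean 10 Hz complex tone sampled at 200 Hz this returns 10 Hz to machine precision.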
Now, for multiple sources and sub-arrays, the ESPRIT method can be summarized as follows;
6. Extending the relation x1 e^{j2πf} = x2 that was established in Fig.(2.5) by applying the selector matrices to Us,

Us1 = S1 Us,   Us1 ∈ C^{(N−1)×Q}
Us2 = S2 Us,   Us2 ∈ C^{(N−1)×Q}

Hence,

S2 Us V = S1 Us V E
S2 Us (V V^H) = S1 Us V E V^H,  with V V^H = I and Λ = V E V^H,
Us2 = Us1 Λ    (Equation 2.25)
By applying the least-squares solution to (Equation 2.25),

Λ = (Us1^H Us1)^{-1} Us1^H Us2    (Equation 2.26)

8. Finally, applying the eigenvalue decomposition to (Equation 2.26), the frequency estimates are obtained from the eigenvalues {λq}_{q=1}^{Q} of Λ as fq = ∠λq / 2π.
Compute the basis from the autocorrelation matrix of (Equation 2.5) or (Equation 2.4), and project the noisy signal y onto the newly computed basis by using (Equation 2.27)

w = U^H y,   w ∈ C^{N×1}    (Equation 2.27)

Reconstruct the noiseless signal ŷ by using (Equation 2.28), where K is the number of basis functions used in the reconstruction.

ŷ = Σ_{i=1}^{K} w_i u_i    (Equation 2.28)

Exactly how many basis vectors are required depends on the type of signal; for example, for a complex sinusoidal signal a single basis vector is enough for a clean reconstruction.
Chapter 3
Methodology
In this chapter we first define the signal model that will later serve to explain the proposed method. Mathematical expressions for the reconstruction errors of the proposed and KLT methods are also derived, and finally the input and output SNR expressions for both complex and real signals are defined.
Consider a received signal of P complex exponential sources in the presence of noise; the nth sample of the received signal can be written as

y(n) = Σ_{p=1}^{P} s_p(n) + v(n),  for n = 0, 1, . . . , N − 1,

where s_p(n) = A_p e^{j2πf_p n}, v(n) ∼ N(0, σ²) is the white Gaussian noise sample of zero mean and σ² variance, and A_p is the amplitude of the pth signal. The N samples of the received signal in vector form can be written as

y = Σ_{p=1}^{P} s_p + v,    (Equation 3.1)
where

y = [y(0) y(1) · · · y(N − 1)]^T,
s_p = A_p [1 e^{j2πf_p} · · · e^{j2π(N−1)f_p}]^T,
v = [v(0) v(1) · · · v(N − 1)]^T.
To find the frequency components of the source signals, the FFT is applied to the received signal vector y. The FFT does not require any a priori information about the total number of sources present in the received signal. The problem with the FFT is that at low SNR its estimation performance, as well as the SNR of the received signal, degrades. To improve the SNR of the received signal, subspace based algorithms such as Multiple Signal Classification (MUSIC) and the Karhunen-Loeve Transform (KLT) are used. In MUSIC and KLT, the auto-correlation function of the received sample vector is found, which is then used to form an N × N Toeplitz matrix, Ry. The eigenvalues and corresponding eigenvectors of the matrix Ry are found. In the absence of noise, if the eigenvalues are sorted in ascending or descending order, the number of dominant eigenvalues is limited by the number of source signals. If p is the number of dominant eigenvalues, then in the presence of noise, although they retain some contribution from the source signals, the eigenvalues p + 1 to N can be considered as the eigenvalues corresponding to the noise subspace. The MUSIC algorithm exploits this fact and
calculates the following spectrum [32]

P_MUSIC(f) = 1 / ( Σ_{i=p+1}^{N} |a(f)^H u_i|² ),

where u_i is the ith noise eigenvector of the matrix Ry and a(f) = [1 e^{j2πf} · · · e^{j2π(N−1)f}]^T is the search steering vector. At low SNR, the performance of MUSIC is degraded.
To improve the estimation performance at low SNR, in contrast to MUSIC, the KLT exploits the subspace corresponding to the dominant eigenvalues of Ry. The KLT uses the eigenvectors corresponding to the dominant eigenvalues to change the basis of the signal vector y. It projects the vector y onto the dominant eigenvectors corresponding to the dominant eigenvalues of the matrix Ry as

z_n = u_n^H y,  for n = 1, 2, . . . , p,

where u_n is the eigenvector corresponding to the nth dominant eigenvalue of the matrix Ry. Using the projection of y onto the eigenvectors corresponding to only the dominant eigenvalues yields an approximation of y, which can be written as
ŷ = Σ_{n=1}^{p} z_n u_n,    (Equation 3.2)
Since ŷ is formed using only the dominant eigenvalues of Ry, which correspond to the signal part, its SNR will be much higher than the SNR of the original vector y. This approach can significantly remove the noise contribution from the received signal. In KLT, the FFT is performed on ŷ, which significantly suppresses the noise frequency components compared to directly applying the FFT to y. The challenge with the KLT technique is that it does not tell how the number of dominant eigenvalues should be selected. The performance of KLT depends on the number of chosen dominant eigenvalues at a particular SNR; choosing the dominant eigenvalues is an open challenge for KLT.
If, corresponding to a signal vector y, a correlation-type matrix can be found whose only q eigenvalues are greater than zero while all the other eigenvalues are equal to zero, the dominant eigenvalues to form ŷ can be chosen optimally. In the next section, a technique to optimally choose the dominant eigenvalues to form ŷ is proposed.
y = s + v,

where s = Σ_{p=1}^{P} s_p. An N × N matrix can be formed by circulating its elements as shown below
     | y(0)      y(1)  · · ·  y(N − 1) |
Yc = | y(1)      y(2)  · · ·  y(0)     |    (Equation 3.3)
     | ...       ...   ...    ...      |
     | y(N − 1)  y(0)  · · ·  y(N − 2) |
Using (Equation 3.3), a covariance matrix can be formed as

Ryc = (1/N) Yc Yc^H.
If Rsc and Rvc are respectively the circulant matrices formed using s and v (in the same way as Ryc is formed using y), it can be observed that for a single snapshot

Ryc = Rsc + Rvc.    (Equation 3.4)

If the noise is uncorrelated and Λs is the diagonal matrix of the eigenvalues of the deterministic matrix Rsc, then using the eigenvalue decomposition of the individual matrices in (Equation 3.4) it can be written as

Λ = Λs + σ² I,    (Equation 3.5)

while

    | λ1 + σ²   0         · · ·  0        |
Λ = | 0         λ2 + σ²   · · ·  0        |
    | ...       ...       ...    0        |
    | 0         · · ·     0      λN + σ²  |
and λp is the pth eigenvalue of the matrix Rsc. In the following, two theorems are proposed that exploit the relation in (Equation 3.5).
Theorem 1: Let s = Σ_{p=1}^{P} s_p, where s_p = [1 e^{j2πf_p} · · · e^{j2π(N−1)f_p}]^T. If a circulant matrix is formed using the sample vector s as

     | s(0)      s(1)  · · ·  s(N − 1) |
Sc = | s(1)      s(2)  · · ·  s(0)     |    (Equation 3.6)
     | ...       ...   ...    ...      |
     | s(N − 1)  s(0)  · · ·  s(N − 2) |

and using this circulant matrix another matrix is formed as

Rsc = (1/N) Sc Sc^H,

then only P eigenvalues of this matrix will be non-zero.
Theorem 2: Let s = Σ_{p=1}^{P} s_p, where s_p = [1 cos 2πf_p · · · cos 2π(N − 1)f_p]^T. If a circulant matrix is formed using the sample vector s as

     | s(0)      s(1)  · · ·  s(N − 1) |
Sc = | s(1)      s(2)  · · ·  s(0)     |    (Equation 3.7)
     | ...       ...   ...    ...      |
     | s(N − 1)  s(0)  · · ·  s(N − 2) |

and using this circulant matrix another matrix is formed as

Rsc = (1/N) Sc Sc^H,

then only 2P eigenvalues of the matrix Rsc will be non-zero (see Appendix for the proof).
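The two theorems can be checked numerically in a few lines (a sketch assuming, as the construction does, that the frequencies fall on the DFT grid): the circulant matrix of two complex tones has exactly 2 non-zero eigenvalues, and that of two real tones has 4.

```python
import numpy as np

def circulant(x):
    """Matrix whose kth row is x circularly advanced by k samples,
    matching (Equation 3.6) and (Equation 3.7)."""
    return np.array([np.roll(x, -k) for k in range(len(x))])

N = 10
n = np.arange(N)
s_cplx = np.exp(2j*np.pi*0.1*n) + np.exp(2j*np.pi*0.3*n)   # P = 2 complex tones
s_real = np.cos(2*np.pi*0.1*n) + np.cos(2*np.pi*0.3*n)     # P = 2 real tones

for s in (s_cplx, s_real):
    Sc = circulant(s)
    Rsc = Sc @ Sc.conj().T / N
    lam = np.linalg.eigvalsh(Rsc)
    print(int((lam > 1e-9).sum()))   # → 2 for the complex case, 4 for the real case
```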
Observing Theorems 1 and 2, it can be concluded that if the signal s has q sources then, depending on the type of source signal, it will contribute to only q or 2q terms of the eigenvalue matrix Λ in (Equation 3.5). All the remaining eigenvalues of Λ in (Equation 3.5), from q + 1 (or 2q + 1) to N, will be due to the noise component of the received signal. Therefore, in contrast to MUSIC and KLT, the noise subspace will be a pure noise subspace without any contribution from the signal. Using these results, in contrast to KLT, for complex sinusoidal sources y needs to be projected onto only q basis vectors of Ryc as

x_n = w_n^H y,  for n = 1, 2, . . . , q,

where w_n is the eigenvector corresponding to the nth eigenvalue of the matrix Ryc. Using these
projections, an approximation of the vector y can be written as

ỹ = Σ_{n=1}^{q} x_n w_n.    (Equation 3.8)
Since none of the signal components is lost in the non-dominant eigenvalues, as happens in KLT, it is expected that the SNR of ỹ will be higher than that of ŷ. Moreover, applying the FFT to ỹ will further minimise the contribution of the noise frequencies in the spectrum, and a cleaner signal reconstruction is expected. In the following section, the performance of the proposed algorithm will be analytically compared with the other algorithms.
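A minimal sketch of the proposed reconstruction (Equation 3.8), with a bin-aligned tone as the construction assumes; the names are illustrative:

```python
import numpy as np

def circulant_reconstruct(y, q):
    """Form Yc and Ryc as in (Equation 3.3), then keep only the q dominant
    eigenvectors to reconstruct y-tilde as in (Equation 3.8)."""
    N = len(y)
    Yc = np.array([np.roll(y, -k) for k in range(N)])
    Ryc = Yc @ Yc.conj().T / N
    vals, vecs = np.linalg.eigh(Ryc)
    W = vecs[:, np.argsort(vals)[::-1][:q]]   # q dominant eigenvectors
    return W @ (W.conj().T @ y)               # y-tilde = sum_n x_n w_n

# One complex tone on bin 8 of N = 64 plus noise: q = 1 suffices.
rng = np.random.default_rng(0)
n = np.arange(64)
s = np.exp(2j * np.pi * (8 / 64) * n)
y = s + 0.5 * (rng.standard_normal(64) + 1j * rng.standard_normal(64)) / np.sqrt(2)
y_tilde = circulant_reconstruct(y, 1)
```

The reconstruction error ||ỹ − s|| stays far below ||y − s||, since the discarded eigenvectors carry noise only.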
In this section, the error between the reconstructed signal vector and the desired signal of interest, s, is evaluated. In KLT,

ŷ = Σ_{n=1}^{p} z_n u_n,    (Equation 3.9)
e = ŷ − s
  = Û Û^H y − U U^H s
  = Û Û^H (s + v) − U U^H s
  = Û Û^H s + Û Û^H v − U U^H s
  = Σ_{r=1}^{p} ⟨u_r, s⟩ u_r + Σ_{r=1}^{p} ⟨u_r, v⟩ u_r − Σ_{r=1}^{N} ⟨u_r, s⟩ u_r
  = Σ_{r=1}^{p} ⟨u_r, v⟩ u_r − Σ_{r=p+1}^{N} ⟨u_r, s⟩ u_r
        (noise error)            (truncation error)
Since the u_r are mutually orthogonal, the summation terms of the truncation error and the noise error run over disjoint indices, and the two errors are orthogonal to each other. Hence the square of the error norm
can be written as

||e||² = Σ_{r=1}^{p} |⟨u_r, v⟩|² + Σ_{r=p+1}^{N} |⟨u_r, s⟩|².
The above equation tells us that the reconstruction error depends on s, specifically on how much of it is concentrated in the subspace spanned by u_{p+1}, · · · , u_N; everything in this subspace is lost.
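The orthogonal split above is exact linear algebra and easy to check numerically for an arbitrary orthonormal basis (a synthetic check with random vectors, not the thesis data):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 16, 3
# Random unitary basis U, random "signal" s and "noise" v.
U, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)

Up = U[:, :p]                                  # the p kept basis vectors
e = Up @ Up.conj().T @ (s + v) - s             # e = y-hat - s
noise_err = sum(abs(U[:, r].conj() @ v) ** 2 for r in range(p))
trunc_err = sum(abs(U[:, r].conj() @ s) ** 2 for r in range(p, N))
assert np.isclose(np.linalg.norm(e) ** 2, noise_err + trunc_err)
```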
Similarly, in the case of the proposed algorithm, the error between ỹ and s can be evaluated using (Equation 3.8) as

e_c = ỹ − s
    = W̃ W̃^H y − W W^H s
    = W̃ W̃^H (s + v) − W W^H s
    = W̃ W̃^H s + W̃ W̃^H v − W W^H s
    = Σ_{r=1}^{q} ⟨w_r, s⟩ w_r + Σ_{r=1}^{q} ⟨w_r, v⟩ w_r − Σ_{r=1}^{N} ⟨w_r, s⟩ w_r
    = Σ_{r=1}^{q} ⟨w_r, s⟩ w_r + Σ_{r=1}^{q} ⟨w_r, v⟩ w_r − Σ_{r=1}^{q} ⟨w_r, s⟩ w_r − Σ_{r=q+1}^{N} ⟨w_r, s⟩ w_r

By Theorems 1 and 2, the target signal s lies entirely in the span of w_1, · · · , w_q, so the truncation term over r = q + 1, · · · , N vanishes; the signal terms cancel, and the difference reduces to the noise term alone:

e_c = Σ_{r=1}^{q} ⟨w_r, v⟩ w_r    (noise error)
Hence the square of the norm can be written as

||e_c||² = Σ_{r=1}^{q} |⟨w_r, v⟩|².

So in the proposed method the reconstruction error depends only on the noise and is independent of the target signal; consequently, the reconstruction error of the proposed technique is much smaller than that of KLT.
3.4.1 Complex Signals: Input SNR

x(n) = A e^{jωn} = A cos(ωn) + jA sin(ωn),
|x(n)| = √( A² (cos²(ωn) + sin²(ωn)) ) = A,

so the signal power is Pis = A². For a single snapshot the noise vector is v = [v_0 · · · v_{N−1}] and the corresponding noise power is Pin = (v_0² + · · · + v_{N−1}²)/N ≈ σ², so the input SNR can be written as

SNRi = Pis / Pin = A² / σ²    (Equation 3.12)
3.4.2 Real Signals: Input SNR

SNRi = Pis / Pin = A² / (2σ²)    (Equation 3.13)
3.4.3 Output SNR

ŷ = Ũ Ũ^H y

ŷ^H ŷ = (Ũ Ũ^H y)^H (Ũ Ũ^H y) = y^H Ũ (Ũ^H Ũ) Ũ^H y = y^H Ũ Ũ^H y,  since Ũ^H Ũ = I,
      = Pos + Pon

Pon = ŷ^H ŷ − Pos

SNRo = Pos / Pon = Pos / (ŷ^H ŷ − Pos) = 1 + (2 Pos − y^H y) / (y^H y − Pos)
If y = s, i.e. v = 0, and y contains only one real sinusoidal signal, then

Pos = λ1 + λ2,
Pon = ŷ^H ŷ − (λ1 + λ2),
SNRo = 1 + ( 2(λ1 + λ2) − y^H y ) / ( y^H y − (λ1 + λ2) ).
Chapter 4
Experimental Results
This chapter consists of two major sections: Spectrum Comparison and Mean Square Error (MSE) Comparison. Each section further contains experimental results based on the Autocorrelation and Covariance matrices. To assess the performance of the proposed algorithm against its contender methods, two scenarios have been examined. The details of these scenarios are as follows;
Scenario-1: Scenario-1 is only for spectrum comparison. It has two sources with two frequencies f1 = 10 Hz and f2 = 20 Hz. The Signal to Noise Ratio (SNR) is −10 dB, the sampling rate is Fs = 200 Hz, and the total number of samples taken is N = 400. Scenario-1 is tested when the two sources have equal power (P1 = P2) and unequal power (P1 ≠ P2).
Scenario-2: This scenario is for Mean Square Error (MSE) evaluation. It contains two sources with frequencies 10.5 Hz and 20 Hz. The SNR ranges from −30 dB to −5 dB, the sampling rate is 200 Hz, and the total number of samples taken is N = 400. Scenario-2 is also tested when the two sources have equal power (P1 = P2) and unequal power (P1 ≠ P2).
Section (4.3) contains the analysis of the eigenvalues and section (4.4) provides the analysis of the output SNR of the reconstructed signals.
Performance in a nutshell: For equal-power sources, the improvement in performance of the proposed algorithm over the contender methods based on the autocorrelation matrix is shown in Fig.(4.1)-to-Fig.(4.4); for the MSE see Fig.(4.21)-to-Fig.(4.24). For unequal-power sources, when the proposed algorithm is compared with all its contender methods based on the autocorrelation matrix, the proposed algorithm resolves the sources while all the other methods (MUSIC, Min-Norm, Eigenvector method, ESPRIT, KLT, etc.) fail to resolve the two independent sources. For a visual inspection of the spectral behavior see Fig.(4.5)-to-Fig.(4.8), and for the MSE see Fig.(4.25)-to-Fig.(4.28).
When the two sources have equal powers and all contender methods are based on the covariance matrix, the proposed algorithm shows a significant gain over all its competitor algorithms. The evidence in spectral form is depicted in Fig.(4.9)-to-Fig.(4.12), and the MSE for equal amplitudes and the covariance matrix is shown in Fig.(4.29)-to-Fig.(4.32). For unequal powers of the two sources, the gain in performance of the proposed algorithm in spectral form is shown in Fig.(4.13)-to-Fig.(4.16), and Fig.(4.33)-to-Fig.(4.36) are dedicated to the MSE analysis.
The spectral behavior of the proposed method against the non-parametric methods, i.e. the fast Fourier transform (FFT) and Minimum Variance (MV), is shown in Fig.(4.17)-to-Fig.(4.18) for equal power; for unequal power inspect Fig.(4.19)-to-Fig.(4.20). The proposed algorithm shows a clean spectrum with zero side-lobe levels (SLL), while in the non-parametric methods, specifically the FFT, it becomes difficult to identify the required peaks amid such high SLL. The improvement in the MSE of the proposed algorithm is shown in Fig.(4.37)-to-Fig.(4.38) for two unequal-power sources, while the MSE of the proposed algorithm for equal-power sources is equal to that of the non-parametric methods.
4.1 Spectrum Comparison
Any kind of information in this world can be encoded in time-domain signals. Mostly the information in a signal is associated with its frequency, amplitude, and phase. It is very difficult to fetch the required information by processing signals in the time domain; hence, frequency-domain analysis of the signals is highly recommended to get the parameters of interest. The most conventional technique for the frequency-domain analysis of signals is the Discrete Fourier Transform (DFT). In this section we therefore analyze signals using the DFT and other advanced spectral analysis techniques. For this purpose a scheme is devised in two steps: (i) spectrum analysis when the two sources have the same power, and (ii) spectrum analysis when the two sources have different powers. Each step is carried out with the following procedure: comparison of the proposed method's spectrum with Noise Subspace (NSS) methods, and then spectrum comparison of the proposed method with Signal Subspace methods.
Section (4.1.1) presents the results of scenario-1: for two sources having the same power in section (4.1.1.1) and for unequal power of the sources in section (4.1.1.2).
4.1.1.1 Same Power of two Sources
Proposed method comparison with the Noise Subspace (NSS) methods is shown in normal scale in Fig.(4.1) and in dB scale in Fig.(4.2). Comparison with the Signal Subspace (SSS) methods is in Figures (4.3)-(4.4).
Figure 4.1: Power Spectrum estimation of two sources with same power using NSS in Normal Scale.
Figure 4.2: Power Spectrum estimation of two sources with same power using NSS in dB Scale.
Figure 4.3: Power Spectrum estimation of two sources with same power using SSS in Normal Scale.
Figure 4.4: Power Spectrum estimation of two sources with same power using SSS in dB Scale.
4.1.1.2 Different Power of two Sources
Proposed method comparison with the Noise Subspace (NSS) methods is shown in normal scale in Fig.(4.5) and in dB scale in Fig.(4.6). Comparison with the Signal Subspace (SSS) methods is in Figures (4.7)-(4.8).
Figure 4.5: Power Spectrum estimation of two sources with unequal power using NSS in Normal Scale.
Figure 4.6: Power Spectrum estimation of two sources with unequal power using NSS in dB Scale.
Figure 4.7: Power Spectrum estimation of two sources with unequal power using SSS in Normal Scale.
Figure 4.8: Power Spectrum estimation of two sources with unequal power using SSS in dB Scale.
Section (4.1.2) presents the results of scenario-1: for two sources having equal power in section (4.1.2.1) and for unequal power of the sources in section (4.1.2.2).
4.1.2.1 Same Power of two Sources
Proposed method comparison with the Noise Subspace (NSS) methods is shown in normal scale in Fig.(4.9) and in dB scale in Fig.(4.10). Comparison with the Signal Subspace (SSS) methods is in Figures (4.11)-(4.12).
Figure 4.9: Power Spectrum estimation of two sources with same power using NSS in Normal Scale.
Figure 4.10: Power Spectrum estimation of two sources with same power using NSS in dB Scale.
Figure 4.11: Power Spectrum estimation of two sources with same power using SSS in Normal Scale.
Figure 4.12: Power Spectrum estimation of two sources with same power using SSS in dB Scale.
4.1.2.2 Different Power of two Sources
Proposed method comparison with the Noise Subspace (NSS) methods is shown in normal scale in Fig.(4.13) and in dB scale in Fig.(4.14). Comparison with the Signal Subspace (SSS) methods is in Figures (4.15)-(4.16).
Figure 4.13: Power Spectrum estimation of two sources with unequal power using NSS in Normal Scale.
Figure 4.14: Power Spectrum estimation of two sources with unequal power using NSS in dB Scale.
Figure 4.15: Power Spectrum estimation of two sources with unequal power using SSS in Normal Scale.
Figure 4.16: Power Spectrum estimation of two sources with unequal power using SSS in dB Scale.
Comparison of the proposed method with the non-parametric methods for two sources of equal power is shown in Figures (4.17)-(4.18), and for unequal power of the sources in Figures (4.19)-(4.20).
Figure 4.17: Power Spectrum estimation of two sources with same power using NPM in Normal Scale.
Figure 4.18: Power Spectrum estimation of two sources with same power using NPM in dB Scale.
Figure 4.19: Power Spectrum estimation of two sources with unequal power using NPM in Normal Scale.
Figure 4.20: Power Spectrum estimation of two sources with unequal power using NPM in dB Scale.
4.2 Mean Square Error (MSE)
A standard metric for the evaluation of any estimation algorithm is the Mean Square Error (MSE). In this section we evaluate the MSE in a categorical manner. Section (4.2) consists of two subsections: MSE for the Autocorrelation matrix and MSE for the Covariance matrix.
The MSE analysis is carried out in two different scenarios: (i) when the amplitudes of the two sources are the same, and (ii) when the amplitudes of the two sources are different. In each case the following procedure is used: first the MSE of the proposed method is compared with the Noise Subspace methods, and then with the Signal Subspace methods.
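The MSE-versus-SNR evaluation used in this section can be sketched as a small Monte-Carlo loop (the names and the plug-in estimator below are illustrative, not the thesis code; the noise level follows SNRi = A²/σ² of (Equation 3.12) with A = 1):

```python
import numpy as np

def mse_curve(estimator, f_true, fs, N, snr_db_list, trials=50, seed=0):
    """Monte-Carlo MSE of a single-tone frequency estimator over a range of
    input SNRs; `estimator(y, fs)` must return one frequency in Hz."""
    rng = np.random.default_rng(seed)
    n = np.arange(N)
    s = np.exp(2j * np.pi * f_true / fs * n)          # unit-amplitude tone
    out = []
    for snr_db in snr_db_list:
        sigma = 10.0 ** (-snr_db / 20.0)              # A = 1, sigma from SNR
        sq_err = []
        for _ in range(trials):
            v = sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
            sq_err.append((estimator(s + v, fs) - f_true) ** 2)
        out.append(np.mean(sq_err))
    return np.array(out)

# Example contender: the plain FFT peak picker.
fft_peak = lambda y, fs: np.argmax(np.abs(np.fft.fft(y))) * fs / len(y)
```

At moderate SNR and with a bin-aligned tone, the FFT peak picker achieves zero MSE, so the interesting comparisons happen in the low-SNR region plotted below.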
The MSE of the proposed and Noise Subspace (NSS) based methods is shown in normal scale in Fig.(4.21) and in dB scale in Fig.(4.22). The MSE comparison of the proposed algorithm and the Signal Subspace (SSS) algorithms is presented in normal and decibel scale in Fig.(4.23) and Fig.(4.24) respectively.
Figure 4.21: Normal Scale MSE comparison of proposed method with NSS methods.
Figure 4.22: dB scale MSE comparison of proposed method with NSS methods.
The MSE of the Signal Subspace (SSS) based methods and the proposed method is shown in normal scale in Fig.(4.23) and in dB scale in Fig.(4.24).
Figure 4.23: Normal scale MSE comparison of proposed method with SSS methods.
Figure 4.24: dB Scale MSE comparison of proposed method with SSS methods.
For unequal power of the two sources, the MSE of the proposed and Noise Subspace (NSS) based methods is shown in normal scale in Fig.(4.25) and in dB scale in Fig.(4.26). The MSE comparison of the proposed algorithm and the Signal Subspace (SSS) algorithms is presented in normal and decibel scale in Fig.(4.27) and Fig.(4.28) respectively.
Figure 4.25: Normal Scale MSE comparison of proposed method with NSS methods.
Figure 4.26: dB scale MSE comparison of proposed method with NSS methods.
The MSE of the Signal Subspace (SSS) based methods and the proposed method is shown in normal scale in Fig.(4.27) and in dB scale in Fig.(4.28).
Figure 4.27: Normal scale MSE comparison of proposed method with SSS methods.
Figure 4.28: dB Scale MSE comparison of proposed method with SSS methods.
A covariance matrix of order 100 is used for the results in this section. The proposed method is compared with the other subspace based methods in two scenarios: (i) when the amplitudes of both sources are the same, and (ii) when the powers of the two sources are different. In each case the following procedure is carried out: first the MSE of the proposed method is compared with the NSS methods and then with the SSS methods.
Fig.(4.29)-(4.32) show the same-power results. The MSE of the proposed and Noise Subspace (NSS) based methods is shown in normal scale in Fig.(4.29) and in dB scale in Fig.(4.30). The MSE comparison of the proposed algorithm and the Signal Subspace (SSS) algorithms is presented in normal and decibel scale in Fig.(4.31) and Fig.(4.32) respectively.
Figure 4.29: Normal Scale MSE comparison of proposed method with NSS methods.
Figure 4.30: dB scale MSE comparison of proposed method with NSS methods.
The MSE of the Signal Subspace (SSS) based methods and the proposed method is shown in normal scale in Fig.(4.31) and in dB scale in Fig.(4.32).
Figure 4.31: Normal scale MSE comparison of proposed method with SSS methods.
Figure 4.32: dB Scale MSE comparison of proposed method with SSS methods.
For unequal power of the two sources, the MSE of the Noise Subspace (NSS) based methods and the proposed method is shown in normal scale in Fig.(4.33) and in dB scale in Fig.(4.34).
Figure 4.33: Normal Scale MSE comparison of proposed method with NSS methods.
Figure 4.34: dB scale MSE comparison of proposed method with NSS methods.
The MSE of the Signal Subspace (SSS) based methods and the proposed method is shown in normal scale in Fig.(4.35) and in dB scale in Fig.(4.36).
Figure 4.35: Normal scale MSE comparison of proposed method with SSS methods.
Figure 4.36: dB Scale MSE comparison of proposed method with SSS methods.
For unequal power of the two sources, the MSE of the Non-Parametric Methods (NPM) and the proposed method is shown in normal scale in Fig.(4.37) and in dB scale in Fig.(4.38). For equal power of the two sources, the MSE of the proposed and non-parametric methods is the same.
Figure 4.37: Normal scale MSE comparison of proposed method with NPM methods.
Figure 4.38: dB scale MSE comparison of proposed method with NPM methods.
4.3 Eigenvalue Analysis
To test the eigenvalue distribution of the proposed method against the autocorrelation and covariance methods, a scenario is built for two complex exponential signals with the following parameters: f1 = 0.1, f2 = 0.3, total number of data samples N = 10, and noise variance σ² = 10.
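The experiment can be sketched as follows (noiseless here, to expose the structural difference; the autocorrelation matrix uses the biased Toeplitz estimate, which is an assumption of this sketch):

```python
import numpy as np

N = 10
n = np.arange(N)
s = np.exp(2j * np.pi * 0.1 * n) + np.exp(2j * np.pi * 0.3 * n)

# Proposed circulant matrix: the signal occupies exactly two eigenvalues.
Sc = np.array([np.roll(s, -k) for k in range(N)])
lam_c = np.sort(np.linalg.eigvalsh(Sc @ Sc.conj().T / N))[::-1]

# Biased-estimate autocorrelation Toeplitz matrix: the lag taper leaks the
# signal content across many eigenvalues.
r = np.array([s[:N - k].conj() @ s[k:] for k in range(N)]) / N
R = np.array([[r[j - i] if j >= i else np.conj(r[i - j])
               for j in range(N)] for i in range(N)])
lam_a = np.sort(np.linalg.eigvalsh(R))[::-1]

print(int((lam_c > 1e-9).sum()), int((lam_a > 1e-9).sum()))
```

The circulant construction reports exactly two non-zero eigenvalues, while the tapered autocorrelation spreads the same signal over more of the spectrum.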
As depicted in Fig.(4.39), the proposed method restricts all the signal content to just the two top eigenvalues. The covariance method does the same, but its two signal-space eigenvalues take different values. This unequal distribution of signal content across the eigenvalues of the covariance matrix becomes a disadvantage when two sources are closely spaced, and the covariance based method then fails to resolve them. The autocorrelation matrix, in contrast, spreads the signal content across the whole matrix space.
Figure 4.39: Eigenvalue distributions (|λi|²) of the proposed, autocorrelation, and covariance matrices.
4.4 Input and Output SNR comparison of Proposed method with KLT
Fig. 4.40 compares the SNR of the reconstructed signal using the proposed and KLT methods. In the KLT method, twenty basis vectors corresponding to the twenty dominant eigenvalues are used to recover the maximum content of the signal, while in the proposed method the signal content is fully recovered using only two basis vectors. It can be seen in the figure that the output SNR of the proposed algorithm is approximately 2.5 times higher than that of KLT.
Figure 4.40: Output SNR comparison of reconstructed signal via proposed and KLT method.
Chapter 5
5.1 Conclusion
This thesis presents a study of different parametric and non-parametric frequency estimation algorithms. In the parametric techniques, the unknown frequencies are estimated by using subspace based algorithms. The accuracy of the estimated frequencies depends mainly on the separation of the signal and noise subspaces of the input matrix of the subspace algorithms. This separation, in turn, depends on the structure of the input matrix. In the literature, the input matrix is formulated in multiple ways, as discussed in chapter (2). We have analyzed two important formulations of the input matrix, namely the Autocorrelation and Covariance matrices, which are closely related to the problem at hand. Their performance in estimating the frequency spectrum is assessed as follows: the autocorrelation matrix resolves all the sources if they have equal transmission powers but fails when the sources have different transmission powers. On the other hand, the performance of the covariance matrix depends on its order; it resolves multiple sources if the order of the matrix is a quarter of the data length, and its performance starts degrading when the order is smaller or larger than that. We have therefore proposed a novel technique that can be used to completely separate the signal and noise subspaces. As shown in the results, the proposed technique achieves zero spectral leakage in frequency estimation. To the author's best knowledge, there is no other technique discussed in the literature which can ensure zero spectral leakage. Additionally, the circulant matrix of the proposed technique can be used as the input matrix of subspace methods instead of the autocorrelation or covariance matrices. By doing so, a highly accurate estimation of the unknown parameters can be achieved, which is difficult otherwise.
The Cramer-Rao lower bound (CRLB) is set as a benchmark for the performance comparison of the proposed method with other existing methods. The robustness of the proposed method is further verified through the derived expressions of the output SNR and the reconstruction errors. Evaluating the proposed method according to the above metrics, the following conclusions can be drawn:
(i) The proposed method reconstructs signals with very high output SNR.
(ii) Zero spectral leakage is achieved.
(iii) Clean spectral estimation is carried out by the proposed method.
(iv) The proposed method works well for single as well as multiple sinusoids.
(v) There are no side lobe levels in the spectrum recovered by the proposed method.
(vi) The proposed method works equally well for complex and real signals.
In this thesis multiple parametric and non-parametric methods have been analyzed. Specifically, we made an assumption about the signal model for the parametric methods. Parametric methods perform well if the true parameters satisfy the assumed data model; otherwise their performance decreases significantly. For example, if we assume that the signal received at the sensor comes from three sources while there are actually fewer or more, the outcomes of all the parametric methods will be useless.
Hence, for a more realistic scenario it would be appropriate to first detect the actual number of sources emitting signals and then apply the parametric methods for frequency estimation. Due to shortage of time, this task is left for future work.
Additionally, an open challenge emerging from the proposed method is to choose N (the total number of samples) such that the actual frequency falls on a range bin.
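This challenge can be made concrete with a short sketch (the frequency and lengths below are arbitrary illustrative values, not from the thesis): the same tone yields a single nonzero DFT coefficient when f·N is an integer, and leaks into every bin otherwise.

```python
import numpy as np

f_true = 0.1          # cycles per sample
N_on = 50             # f_true * N_on = 5   -> frequency falls on a bin
N_off = 64            # f_true * N_off = 6.4 -> off-bin, leakage appears

def dft_mag(f, N):
    n = np.arange(N)
    return np.abs(np.fft.fft(np.exp(2j * np.pi * f * n)))

on = dft_mag(f_true, N_on)
off = dft_mag(f_true, N_off)

# on-bin: a single nonzero coefficient, i.e. zero spectral leakage
assert np.isclose(on[5], N_on)
assert np.count_nonzero(~np.isclose(on, 0, atol=1e-9)) == 1
# off-bin: energy leaks into every one of the N_off bins
assert np.count_nonzero(off > 1e-3) == N_off
```

Since f is unknown a priori, selecting N so that f·N is an integer is precisely the open problem noted above.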
Chapter 7
Appendices
Appendix A
To prove the first theorem, the $k$th eigenvalue of the circulant matrix $\mathbf{R}_{sc}$ can be written as
$$\lambda_k = \mathbf{f}_k^H \mathbf{S}_c \mathbf{S}_c^H \mathbf{f}_k = \left\lvert \mathbf{S}_c^H \mathbf{f}_k \right\rvert^2. \quad \text{(Equation 7.2)}$$
It is recalled that the columns of the circulant matrix $\mathbf{S}_c$ are circularly shifted versions of $\mathbf{s}$. Finding $\lambda_k$ requires the inner products of these shifted versions of $\mathbf{s}$ with the $k$th column of the DFT matrix, which involves modulo-$N$ shift operations on $\mathbf{s}$. Applying the modulo-$N$ shift to the DFT vectors instead is very simple. Transferring the modulo-$N$ shifts from $\mathbf{S}_c$ to $\mathbf{f}_k$, keeping $\mathbf{s}$ a constant vector, and rearranging the terms in (Equation 7.2) accordingly, the $k$th eigenvalue can be written as
$$\lambda_k = \mathbf{s}^H \bar{\mathbf{F}}_k \bar{\mathbf{F}}_k^H \mathbf{s} = \left\lvert \bar{\mathbf{F}}_k^H \mathbf{s} \right\rvert^2, \quad \text{(Equation 7.3)}$$
where $\bar{\mathbf{F}}_k$ is the rearranged Fourier matrix. The $l$th column vector of $\bar{\mathbf{F}}_k$ can be written as
$$\mathbf{g}_l = \frac{1}{\sqrt{N}} \begin{bmatrix} e^{-j2\pi(0+N-l)k/N} & \cdots & e^{-j2\pi(N-1+N-l)k/N} \end{bmatrix}^T,$$
where $l = 0, 1, \ldots, N-1$. By exploiting these rearrangements, the $i$th inner product in $\mathbf{s}^H \bar{\mathbf{F}}_k$ can be written as
$$I_i = \frac{1}{\sqrt{N}} \sum_{m=0}^{N-1} e^{-j\frac{2\pi}{N}(m+N-i)k}\, s^*(m), \quad i = 0, \ldots, N-1,$$
and
$$\lambda_k = \begin{bmatrix} I_0 & I_1 & \cdots & I_{N-1} \end{bmatrix} \begin{bmatrix} I_0^* \\ I_1^* \\ \vdots \\ I_{N-1}^* \end{bmatrix} = \frac{1}{N} \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} N\, e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P} \sum_{q=1}^{P} s_p^*(m)\, s_q(n) = \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P} \sum_{q=1}^{P} s_p^*(m)\, s_q(n). \quad \text{(Equation 7.4)}$$
It can be noted that (Equation 7.4) is a general expression for the eigenvalues of circulant matrices of the form $\mathbf{R}_{sc} = \mathbf{S}_c \mathbf{S}_c^H$.
In Theorem-1, $s_p(m) = e^{j2\pi f_p m}$ and $s_q(n) = e^{j2\pi f_q n}$; substituting these values in (Equation 7.4), it can be written as
$$\lambda_k = \underbrace{\sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P} e^{j2\pi(n-m)f_p}}_{p=q} + \underbrace{\sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P}\sum_{\substack{q=1\\ q\neq p}}^{P} e^{-j2\pi f_p m}\, e^{j2\pi f_q n}}_{p\neq q} = \lambda_{1k} + \lambda_{2k}.$$
Here,
$$\lambda_{1k} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P} e^{j2\pi(n-m)f_p},$$
$$\lambda_{2k} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P}\sum_{\substack{q=1\\ q\neq p}}^{P} e^{-j2\pi f_p m}\, e^{j2\pi f_q n}.$$
The term inside the summation is the DFT of the complex sinusoidal source, therefore we can write
$$\lambda_{1k} = \sum_{p=1}^{P} \left\lVert N\, \delta(k + f_p) \right\rVert^{2}.$$
It can be seen that the above expression is nonzero only if some $f_p = -k = N - k$. Now we find the value of $\lambda_{2k}$:
$$\lambda_{2k} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P}\sum_{\substack{q=1\\ q\neq p}}^{P} e^{-j2\pi f_p m}\, e^{j2\pi f_q n} = \sum_{p=1}^{P} \sum_{m=0}^{N-1} e^{-j2\pi f_p m}\, e^{-j\frac{2\pi}{N}km} \sum_{\substack{q=1\\ q\neq p}}^{P} \sum_{n=0}^{N-1} e^{j2\pi f_q n}\, e^{j\frac{2\pi}{N}kn}. \quad \text{(Equation 7.6)}$$
In (Equation 7.6), it can be noted that the first inner summation is the DFT of the complex sinusoidal signal $e^{-j2\pi f_p m}$ at frequency $k$, while the second inner summation is the DFT of the complex sinusoidal signal $e^{j2\pi f_q n}$ at frequency $-k$. These signals have impulses at frequencies $-f_p$ and $f_q$, respectively.
$$\lambda_{2k} = \sum_{p=1}^{P} \big(N\,\delta(k + f_p)\big) \sum_{\substack{q=1\\ q\neq p}}^{P} \big(N\,\delta(-k + f_q)\big) = 0, \quad p \neq q. \quad \text{(Equation 7.7)}$$
It can be noted that the first impulse in (Equation 7.7) requires $k = -f_p$ while the second requires $k = f_q$; for distinct source frequencies with $p \neq q$, the two impulses never coincide, so $\lambda_{2k} = 0$. Hence $\lambda_k = \lambda_{1k}$, which is nonzero at only the $P$ values of $k$ determined by the sources. This concludes the proof.
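The eigenvalue structure of Theorem-1 can be checked numerically. A minimal sketch (N and the bin index below are arbitrary choices): for a complex sinusoid whose frequency lies on a DFT bin, the circulant matrix $\mathbf{S}_c \mathbf{S}_c^H$ should have a single nonzero eigenvalue equal to $N^2$.

```python
import numpy as np

N = 64
p = 5                                  # on-bin frequency (f_p = p/N cycles/sample)
n = np.arange(N)
s = np.exp(2j * np.pi * p * n / N)     # complex sinusoid on a bin

# circulant matrix whose columns are circular shifts of s
Sc = np.column_stack([np.roll(s, l) for l in range(N)])
R = Sc @ Sc.conj().T

lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # R is Hermitian; sort descending

assert np.isclose(lam[0], N**2)              # single eigenvalue equal to N^2
assert np.allclose(lam[1:], 0, atol=1e-8)    # all remaining eigenvalues vanish
```

This is consistent with the theorem: the signal occupies exactly one eigenvalue (one DFT bin), and the remaining N-1 eigenvalues form a noise subspace carrying no signal energy.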
For Theorem-2, substituting $s_p(m) = \cos(2\pi f_p m)$ and $s_q(n) = \cos(2\pi f_q n)$ in (Equation 7.4), we can write
$$\lambda_k = \underbrace{\sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P} \cos(2\pi f_p m)\cos(2\pi f_p n)}_{p=q} + \underbrace{\sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P}\sum_{\substack{q=1\\ q\neq p}}^{P} \cos(2\pi f_p m)\cos(2\pi f_q n)}_{p\neq q} = \lambda_{3k} + \lambda_{4k}.$$
Here,
$$\lambda_{3k} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P} \cos(2\pi f_p m)\cos(2\pi f_p n),$$
$$\lambda_{4k} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P}\sum_{\substack{q=1\\ q\neq p}}^{P} \cos(2\pi f_p m)\cos(2\pi f_q n).$$
First, we evaluate $\lambda_{3k}$:
$$\lambda_{3k} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P} \cos(2\pi f_p m)\cos(2\pi f_p n) = \sum_{p=1}^{P} \left( \sum_{m=0}^{N-1} \cos(2\pi f_p m)\, e^{-j\frac{2\pi}{N}mk} \right) \left( \sum_{n=0}^{N-1} \cos(2\pi f_p n)\, e^{j\frac{2\pi}{N}nk} \right).$$
It can be noted that the two summation terms inside the brackets are conjugates of each other; therefore $\lambda_{3k}$ can be written as
$$\lambda_{3k} = \sum_{p=1}^{P} \left\lvert \sum_{m=0}^{N-1} \cos(2\pi f_p m)\, e^{-j\frac{2\pi}{N}mk} \right\rvert^{2}. \quad \text{(Equation 7.9)}$$
Now it can easily be seen that (Equation 7.9) contains the discrete Fourier transform (DFT) of $\cos(2\pi f_p m)$, and it can be written as
$$\lambda_{3k} = \sum_{p=1}^{P} \left( \frac{N}{2}\,\delta(k - f_p) + \frac{N}{2}\,\delta(k + f_p) \right)^{2} = \frac{N^2}{4} \sum_{p=1}^{P} \big( \delta(k - f_p) + \delta(k + f_p) \big)^{2}. \quad \text{(Equation 7.10)}$$
It can be noted that (Equation 7.10) is nonzero whenever $k = f_p$ or $k = -f_p = N - f_p$. Therefore, if we have $P$ sources it is nonzero for only $2P$ values of $k$, and for each such $k$, $\lambda_{3k} = \frac{N^2}{4}$. Now we evaluate the value of $\lambda_{4k}$:
$$\lambda_{4k} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}(n-m)k} \sum_{p=1}^{P}\sum_{\substack{q=1\\ q\neq p}}^{P} \cos(2\pi f_p m)\cos(2\pi f_q n) = \sum_{p=1}^{P} \sum_{m=0}^{N-1} \cos(2\pi f_p m)\, e^{-j\frac{2\pi}{N}mk} \sum_{\substack{q=1\\ q\neq p}}^{P} \sum_{n=0}^{N-1} \cos(2\pi f_q n)\, e^{j\frac{2\pi}{N}nk}. \quad \text{(Equation 7.11)}$$
In (Equation 7.11), the first term is the DFT of $\cos(2\pi f_p m)$ at frequency $k$ while the second term is the DFT of $\cos(2\pi f_q n)$ at frequency $-k$, or equivalently $N-k$. Therefore, we can write
$$\lambda_{4k} = \frac{N}{2} \sum_{p=1}^{P} \big( \delta(k + f_p) + \delta(k - f_p) \big) \times \frac{N}{2} \sum_{\substack{q=1\\ q\neq p}}^{P} \big( \delta(-k + f_q) + \delta(-k - f_q) \big), \quad p \neq q. \quad \text{(Equation 7.12)}$$
In (Equation 7.12), the first term is nonzero whenever $k = f_p$ or $k = -f_p = N - f_p$, while the second term is nonzero whenever $k = f_q$ or $k = -f_q = N - f_q$. Since $p \neq q$, the two factors are never nonzero simultaneously, so $\lambda_{4k} = 0$ and
$$\lambda_k = \lambda_{3k} + 0 = \frac{N^2}{4} \sum_{p=1}^{P} \big( \delta(k - f_p) + \delta(k + f_p) \big)^{2}.$$
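Theorem-2's result can likewise be verified numerically. A minimal sketch (N and the bin index below are arbitrary): for a real on-bin cosine, the circulant matrix should have exactly two nonzero eigenvalues, each equal to $N^2/4$, corresponding to the bins $f_p$ and $N - f_p$.

```python
import numpy as np

N = 64
p = 5
n = np.arange(N)
s = np.cos(2 * np.pi * p * n / N)      # real sinusoid on a bin

# circulant matrix whose columns are circular shifts of s
Sc = np.column_stack([np.roll(s, l) for l in range(N)])
lam = np.sort(np.linalg.eigvalsh(Sc @ Sc.T))[::-1]   # symmetric matrix

assert np.allclose(lam[:2], N**2 / 4)  # two eigenvalues N^2/4 (bins p and N-p)
assert np.allclose(lam[2:], 0, atol=1e-8)
```

This matches the closing equation above: a real sinusoid contributes a conjugate pair of spectral lines, so it occupies two eigenvalues instead of one.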
Appendix B
Consider the signal model
$$y(n) = \sum_{p=1}^{P} s_p(n) + v(n), \quad n = 0, 1, \ldots, N-1,$$
where $s_p(n) = A_p e^{j2\pi f_p n}$, $v(n) \sim \mathcal{N}(0, \sigma^2)$ is a white Gaussian noise sample of zero mean and variance $\sigma^2$, and $A_p$ is the amplitude of the $p$th signal. Given the samples $y(n)$, the task is to estimate the parameter vector $\boldsymbol{\theta} = \begin{bmatrix} f_1 & f_2 & \cdots & f_P \end{bmatrix}^T$.
For notational simplicity we write $s_p$ instead of $s_p(n)$. For $P = 2$ sources, the probability density function of the Gaussian distributed samples $y(n)$ can be written as
$$p(\mathbf{y};\boldsymbol{\theta}) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\!\left( -\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \big(y(n) - A_1 e^{j2\pi f_1 n} - A_2 e^{j2\pi f_2 n}\big)\big(y(n)^* - A_1 e^{-j2\pi f_1 n} - A_2 e^{-j2\pi f_2 n}\big) \right)$$
$$= \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\!\left( -\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} \big(y(n) - s_1 - s_2\big)\big(y(n)^* - s_1^* - s_2^*\big) \right). \quad \text{(Equation 7.13)}$$
The resulting Fisher information matrix is
$$\mathbf{I}(\boldsymbol{\theta}) = \begin{bmatrix} \dfrac{4\pi^2 A_1^2}{\sigma^2} \sum_{n=0}^{N-1} n^2 & \dfrac{4\pi^2 A_1 A_2}{\sigma^2} \sum_{n=0}^{N-1} n^2 \cos\big(2\pi(f_1 - f_2)n\big) \\[1ex] \dfrac{4\pi^2 A_1 A_2}{\sigma^2} \sum_{n=0}^{N-1} n^2 \cos\big(2\pi(f_1 - f_2)n\big) & \dfrac{4\pi^2 A_2^2}{\sigma^2} \sum_{n=0}^{N-1} n^2 \end{bmatrix}. \quad \text{(Equation 7.16)}$$
The inverse of the Fisher information matrix can easily be calculated to find the Cramer-Rao lower bound (CRLB) of $f_1$ and $f_2$; it is
$$\mathbf{I}^{-1}(\boldsymbol{\theta}) = \begin{bmatrix} \dfrac{4\pi^2 A_2^2\, N(N-1)(2N-1)}{6\sigma^2\, \lvert\mathbf{I}(\boldsymbol{\theta})\rvert} & -\dfrac{\Xi}{\lvert\mathbf{I}(\boldsymbol{\theta})\rvert} \\[1ex] -\dfrac{\Xi}{\lvert\mathbf{I}(\boldsymbol{\theta})\rvert} & \dfrac{4\pi^2 A_1^2\, N(N-1)(2N-1)}{6\sigma^2\, \lvert\mathbf{I}(\boldsymbol{\theta})\rvert} \end{bmatrix}. \quad \text{(Equation 7.17)}$$
Here,
$$\Xi = \frac{\pi^2 A_1 A_2}{\sigma^2} \left[ \frac{2N^2 \sin\big(\frac{2N-1}{2}\Delta w\big)}{\sin\big(\frac{\Delta w}{2}\big)} + \frac{\sin\big(\frac{3}{2}\Delta w\big) - \sin\big(\frac{2N+1}{2}\Delta w\big)}{\sin\big(\frac{\Delta w}{2}\big)^2} + \frac{-1 - 2\cos(\Delta w) + (2N+1)\cos(N\Delta w)}{\sin\big(\frac{\Delta w}{2}\big)^3} \right],$$
where $\Delta w$ denotes the angular frequency separation $2\pi(f_1 - f_2)$.
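The closed forms in (Equation 7.16) and (Equation 7.17) can be cross-checked numerically (the amplitudes, frequencies, and N below are arbitrary test values): build the Fisher matrix by direct summation, invert it, and compare the diagonal with the closed-form entries using $\sum_{n=0}^{N-1} n^2 = N(N-1)(2N-1)/6$.

```python
import numpy as np

N, sigma2 = 32, 1.0
A1, A2 = 1.0, 0.7
f1, f2 = 0.1, 0.23
n = np.arange(N)

# Fisher information matrix of (Equation 7.16), built by direct summation
c = np.sum(n**2 * np.cos(2 * np.pi * (f1 - f2) * n))
I = (4 * np.pi**2 / sigma2) * np.array([
    [A1**2 * np.sum(n**2), A1 * A2 * c],
    [A1 * A2 * c,          A2**2 * np.sum(n**2)],
])

crlb = np.linalg.inv(I)                # CRLBs of f1, f2 on the diagonal

# closed-form diagonal of (Equation 7.17), using sum n^2 = N(N-1)(2N-1)/6
detI = np.linalg.det(I)
S2 = N * (N - 1) * (2 * N - 1) / 6
assert np.isclose(np.sum(n**2), S2)
assert np.isclose(crlb[0, 0], 4 * np.pi**2 * A2**2 * S2 / (sigma2 * detI))
assert np.isclose(crlb[1, 1], 4 * np.pi**2 * A1**2 * S2 / (sigma2 * detI))
```

Note the swap of amplitudes in the inverse: the CRLB of $f_1$ scales with $A_2^2$ and vice versa, as (Equation 7.17) states, because inverting a 2x2 matrix exchanges its diagonal entries.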