Estimation Theory
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on
measured empirical data that has a random component. The parameters describe an underlying physical
setting in such a way that their value affects the distribution of the measured data. An estimator attempts to
approximate the unknown parameters using the measurements. In estimation theory, two approaches are
generally considered:[1]
The probabilistic approach (described in this article) assumes that the measured data is
random, with a probability distribution that depends on the parameters of interest
The set-membership approach assumes that the measured data vector belongs to a set
which depends on the parameter vector.
Examples
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular
candidate. That proportion is the parameter sought; the estimate is based on a small random sample of
voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate,
based on some demographic features, such as age.
Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-
way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably
embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be
estimated.
As another example, in electrical communication theory, the measurements which contain information
regarding the parameters of interest are often associated with a noisy signal.
Basics
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The
first is a statistical sample – a set of data points taken from a random vector (RV) of size N. Put into a
vector,

\mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{bmatrix}

Secondly, there are M parameters

\boldsymbol{\theta} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_M \end{bmatrix}

whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete
counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must
be stated conditional on the values of the parameters:

p(\mathbf{x} \mid \boldsymbol{\theta})
It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics).
It is then necessary to define the Bayesian probability, i.e. the prior distribution of the parameters,

\pi(\boldsymbol{\theta})
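The prior combines with the conditional density of the data through Bayes' rule to give the posterior distribution on which Bayesian estimators are based:

p(\boldsymbol{\theta} \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \boldsymbol{\theta})\, \pi(\boldsymbol{\theta})}{\int p(\mathbf{x} \mid \boldsymbol{\theta}')\, \pi(\boldsymbol{\theta}')\, d\boldsymbol{\theta}'}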
After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted
\hat{\boldsymbol{\theta}}, where the "hat" indicates the estimate.
One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error
between the estimated parameters and the actual value of the parameters,

\mathbf{e} = \hat{\boldsymbol{\theta}} - \boldsymbol{\theta}

as the basis for optimality. This error term is then squared, and the expected value of this squared value is
minimized for the MMSE estimator.
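Written out, the quantity that the MMSE estimator minimizes is the mean squared error of the estimate:

\mathrm{MSE}(\hat{\boldsymbol{\theta}}) = \mathrm{E}\left[ (\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})^{T} (\hat{\boldsymbol{\theta}} - \boldsymbol{\theta}) \right] = \mathrm{E}\left[ \mathbf{e}^{T} \mathbf{e} \right]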
Estimators
Commonly used estimators (estimation methods) and topics related to them include:
Maximum likelihood estimators
Bayes estimators
Method of moments estimators
Cramér–Rao bound
Least squares
Minimum mean squared error (MMSE)
Maximum a posteriori (MAP)
Minimum variance unbiased estimator (MVUE)
Best linear unbiased estimator (BLUE)
Unbiased estimators – see estimator bias
Particle filter
Markov chain Monte Carlo (MCMC)
Kalman filter, and its various derivatives
Wiener filter
Examples
Consider a received discrete signal, x[n], of N independent samples that consists of an unknown constant A
with additive white Gaussian noise (AWGN) w[n] with zero mean and known variance \sigma^2, i.e.

x[n] = A + w[n], \quad n = 0, 1, \dots, N-1

Since the variance is known, the only unknown parameter is A. Two possible (of many) estimators for the
parameter A are:

\hat{A}_1 = x[0]

\hat{A}_2 = \frac{1}{N} \sum_{n=0}^{N-1} x[n]

Both of these estimators have a mean of A, which can be shown through taking the expected value of each
estimator

\mathrm{E}\left[\hat{A}_1\right] = \mathrm{E}\left[x[0]\right] = A

and

\mathrm{E}\left[\hat{A}_2\right] = \mathrm{E}\left[\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N} \sum_{n=0}^{N-1} \mathrm{E}\left[x[n]\right] = \frac{1}{N} (N A) = A

At this point, these two estimators would appear to perform the same. However, the difference between
them becomes apparent when comparing the variances:

\mathrm{var}\left(\hat{A}_1\right) = \mathrm{var}\left(x[0]\right) = \sigma^2

and

\mathrm{var}\left(\hat{A}_2\right) = \mathrm{var}\left(\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right) = \frac{1}{N^2} \sum_{n=0}^{N-1} \mathrm{var}\left(x[n]\right) = \frac{1}{N^2} (N \sigma^2) = \frac{\sigma^2}{N}

It would seem that the sample mean is a better estimator, since its variance is lower for every N > 1.
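A quick Monte Carlo check of these two expressions can be sketched in Python with NumPy; the values A = 5, sigma = 2, and N = 50 below are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)

A, sigma, N = 5.0, 2.0, 50       # true constant, noise standard deviation, samples per trial
trials = 100_000                 # Monte Carlo repetitions

# Each row is one realization of x[n] = A + w[n], n = 0..N-1, with w[n] ~ N(0, sigma^2)
x = A + sigma * rng.standard_normal((trials, N))

A1 = x[:, 0]                     # estimator 1: the first sample only
A2 = x.mean(axis=1)              # estimator 2: the sample mean

print("A1: mean %.3f  variance %.3f  (theory %.3f, %.3f)" % (A1.mean(), A1.var(), A, sigma**2))
print("A2: mean %.3f  variance %.3f  (theory %.3f, %.3f)" % (A2.mean(), A2.var(), A, sigma**2 / N))
```

Both estimators come out unbiased, while the variance of the sample mean is smaller by roughly the factor N, in line with the expressions above.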
Maximum likelihood
Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of
the noise for one sample w[n] is

p(w[n]) = \frac{1}{\sigma \sqrt{2 \pi}} \exp\left( -\frac{w[n]^2}{2 \sigma^2} \right)

and the probability of x[n] becomes (x[n] can be thought of as being distributed \mathcal{N}(A, \sigma^2))

p(x[n]; A) = \frac{1}{\sigma \sqrt{2 \pi}} \exp\left( -\frac{(x[n] - A)^2}{2 \sigma^2} \right)

By independence, the probability of the whole data vector \mathbf{x} is the product

p(\mathbf{x}; A) = \left( \frac{1}{\sigma \sqrt{2 \pi}} \right)^{N} \exp\left( -\frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} (x[n] - A)^2 \right)

Taking the natural logarithm, differentiating with respect to A, and setting the derivative to zero gives

\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} (x[n] - A) = 0

which is solved by

\hat{A} = \frac{1}{N} \sum_{n=0}^{N-1} x[n]

which is simply the sample mean. From this example, it was found that the sample mean is the maximum
likelihood estimator for N samples of a fixed, unknown parameter corrupted by AWGN.
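The closed-form result can be sanity-checked numerically; the sketch below (again with arbitrary values of A, sigma, and N) evaluates the log-likelihood on a grid of candidate values and confirms that the maximizer essentially coincides with the sample mean:

```python
import numpy as np

rng = np.random.default_rng(1)
A, sigma, N = 5.0, 2.0, 200
x = A + sigma * rng.standard_normal(N)          # one realization of the data

# Log-likelihood of a candidate value a, dropping terms that do not depend on a:
#   ln p(x; a) = const - sum_n (x[n] - a)^2 / (2 sigma^2)
candidates = np.linspace(0.0, 10.0, 10_001)
loglik = np.array([-np.sum((x - a) ** 2) for a in candidates]) / (2 * sigma**2)

print("grid maximizer of the log-likelihood:", candidates[np.argmax(loglik)])
print("sample mean                         :", x.mean())
```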
To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the
Fisher information number

\mathcal{I}(A) = \mathrm{E}\left[ \left( \frac{\partial}{\partial A} \ln p(\mathbf{x}; A) \right)^2 \right] = -\mathrm{E}\left[ \frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A) \right]

Taking a second derivative of the log-likelihood found above,

\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A) = -\frac{N}{\sigma^2}

and finding the negative expected value is trivial since it is now a deterministic constant:

-\mathrm{E}\left[ \frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A) \right] = \frac{N}{\sigma^2}

Finally, putting the Fisher information into the bound

\mathrm{var}\left( \hat{A} \right) \geq \frac{1}{\mathcal{I}(A)}

results in

\mathrm{var}\left( \hat{A} \right) \geq \frac{\sigma^2}{N}
Comparing this to the variance of the sample mean (determined previously) shows that the variance of the
sample mean is equal to the Cramér–Rao lower bound for all values of N and \sigma^2. In other words, the
sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased
estimator (MVUE), in addition to being the maximum likelihood estimator.
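As a small numerical cross-check (again with arbitrary sigma and N), the curvature of the log-likelihood at the MLE, approximated by a central finite difference, reproduces the Fisher information N / sigma^2:

```python
import numpy as np

rng = np.random.default_rng(2)
A, sigma, N = 5.0, 2.0, 200
x = A + sigma * rng.standard_normal(N)

def loglik(a):
    # Log-likelihood up to an additive constant that does not depend on a
    return -np.sum((x - a) ** 2) / (2 * sigma**2)

# Central finite-difference approximation of the second derivative at the MLE;
# its negative should match the Fisher information N / sigma^2.
a_hat, h = x.mean(), 1e-3
curvature = (loglik(a_hat + h) - 2 * loglik(a_hat) + loglik(a_hat - h)) / h**2

print("observed information (-curvature):", -curvature)
print("Fisher information N / sigma^2   :", N / sigma**2)
```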
Maximum of a uniform distribution
One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform
distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation
theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and
possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.
Given a discrete uniform distribution 1, 2, \dots, N with unknown maximum N, the UMVU estimator for the
maximum is given by

\hat{N} = \frac{k+1}{k} m - 1 = m + \frac{m}{k} - 1
where m is the sample maximum and k is the sample size, sampling without replacement.[2][3] This problem
is commonly known as the German tank problem, due to application of maximum estimation to estimates of
German tank production during World War II.
"The sample maximum plus the average gap between observations in the sample",
the gap being added to compensate for the negative bias of the sample maximum as an estimator for the
population maximum.[note 1]
The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed
above, it is biased.
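A minimal Python sketch of this calculation (the population maximum 250 and sample size 4 below are made-up illustration values):

```python
import numpy as np

rng = np.random.default_rng(3)

N_true = 250                         # unknown population maximum (serial numbers 1..N_true)
k = 4                                # sample size, drawn without replacement
sample = rng.choice(np.arange(1, N_true + 1), size=k, replace=False)

m = sample.max()                     # sample maximum: the MLE, but negatively biased
umvu = (k + 1) / k * m - 1           # UMVU estimate: "sample maximum plus average gap"

print("sample        :", sorted(sample))
print("MLE (maximum) :", m)
print("UMVU estimate :", umvu)
```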
Applications
Numerous fields require the use of estimation theory. Some of these fields include:
Interpretation of scientific experiments
Signal processing
Clinical trials
Opinion polls
Quality control
Telecommunications
Control theory (in particular adaptive control)
Network intrusion detection systems
Orbit determination
See also
Best linear unbiased estimator (BLUE)
Completeness (statistics)
Detection theory
Efficiency (statistics)
Expectation-maximization algorithm (EM algorithm)
Fermi problem
Grey box model
Information theory
Least-squares spectral analysis
Matched filter
Maximum entropy spectral estimation
Nuisance parameter
Parametric equation
Pareto principle
Rule of three (statistics)
State estimator
Statistical signal processing
Sufficiency (statistics)
Notes
1. The sample maximum is never more than the population maximum, but can be less, hence it
is a biased estimator: it will tend to underestimate the population maximum.
References
Citations
1. Walter, E.; Pronzato, L. (1997). Identification of Parametric Models from Experimental Data.
London, England: Springer-Verlag.
2. Johnson, Roger (1994), "Estimating the Size of a Population", Teaching Statistics, 16 (2 (Summer)): 50–52, doi:10.1111/j.1467-9639.1994.tb00688.x (https://doi.org/10.1111%2Fj.1467-9639.1994.tb00688.x)
3. Johnson, Roger (2006), "Estimating the Size of a Population" (https://web.archive.org/web/20081120085633/http://www.rsscse.org.uk/ts/gtb/contents.html), Getting the Best from Teaching Statistics (http://www.rsscse.org.uk/ts/gtb/contents.html), archived from the original (http://www.rsscse.org.uk/ts/gtb/johnson.pdf) (PDF) on November 20, 2008
Sources
Theory of Point Estimation by E.L. Lehmann and G. Casella. (ISBN 0-387-98502-6)
Systems Cost Engineering by Dale Shermon. (ISBN 978-0-566-08861-2)
Mathematical Statistics and Data Analysis by John Rice. (ISBN 0-534-20934-3)
Fundamentals of Statistical Signal Processing: Estimation Theory by Steven M. Kay
(ISBN 0-13-345711-7)
An Introduction to Signal Detection and Estimation by H. Vincent Poor (ISBN 0-387-94173-8)
Detection, Estimation, and Modulation Theory, Part 1 by Harry L. Van Trees (ISBN 0-471-09517-6; website (https://web.archive.org/web/20050428233957/http://gunston.gmu.edu/demt/demtp1/))
Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches by Dan Simon
website (http://academic.csuohio.edu/simond/estimation/)
Ali H. Sayed, Adaptive Filters, Wiley, NJ, 2008, ISBN 978-0-470-25388-5.
Ali H. Sayed, Fundamentals of Adaptive Filtering, Wiley, NJ, 2003, ISBN 0-471-46126-1.
Thomas Kailath, Ali H. Sayed, and Babak Hassibi, Linear Estimation, Prentice-Hall, NJ,
2000, ISBN 978-0-13-022464-4.
Babak Hassibi, Ali H. Sayed, and Thomas Kailath, Indefinite Quadratic Estimation and Control: A Unified Approach to H2 and H∞ Theories, Society for Industrial & Applied Mathematics (SIAM), PA, 1999, ISBN 978-0-89871-411-1.
V.G.Voinov, M.S.Nikulin, "Unbiased estimators and their applications. Vol.1: Univariate
case", Kluwer Academic Publishers, 1993, ISBN 0-7923-2382-3.
V.G.Voinov, M.S.Nikulin, "Unbiased estimators and their applications. Vol.2: Multivariate
case", Kluwer Academic Publishers, 1996, ISBN 0-7923-3939-8.
External links
Media related to Estimation theory at Wikimedia Commons