Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered:[1]
- The probabilistic approach (described in this article) assumes that the measured data is random with a probability distribution dependent on the parameters of interest.
- The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.
Examples
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.
Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.
As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal.
Basics
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size N. Put into a vector,

$$\mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{bmatrix}.$$

Secondly, there are M parameters

$$\boldsymbol{\theta} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_M \end{bmatrix},$$

whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:

$$p(\mathbf{x} \mid \boldsymbol{\theta}).$$

It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability

$$\pi(\boldsymbol{\theta}).$$

After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted $\hat{\boldsymbol{\theta}}$, where the "hat" indicates the estimate.
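As a purely illustrative instance of these ingredients, the following minimal sketch (assuming Python with NumPy and SciPy; the Gaussian model, parameter values, and the helper name log_pdf are assumptions made only for this illustration) builds a sample vector of size N and evaluates the model pdf for candidate parameter values.

```python
# A minimal sketch (assuming Python with NumPy/SciPy) of the basic ingredients:
# a data vector x of size N, a parameter theta, and the model pdf p(x; theta).
# The model here is illustrative only: x[n] ~ Normal(theta, 1), independent.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, theta_true = 10, 2.0

x = rng.normal(loc=theta_true, scale=1.0, size=N)   # observed sample vector

def log_pdf(x, theta):
    """log p(x; theta) for independent Normal(theta, 1) samples."""
    return norm.logpdf(x, loc=theta, scale=1.0).sum()

# The pdf evaluated at the data is larger near the true parameter value.
print(log_pdf(x, theta=2.0), log_pdf(x, theta=0.0))
```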
One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameters as the basis for optimality. This error term is then squared, and the expected value of this squared value is minimized for the MMSE estimator.
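To make the squared-error criterion concrete, the sketch below (assuming Python with NumPy; the helper name empirical_mse and all parameter values are hypothetical) approximates the expected squared error of an estimator by Monte Carlo simulation over repeated noisy data sets, rather than deriving the MMSE estimator itself.

```python
# A minimal sketch (assuming Python with NumPy) of the mean squared error
# criterion: approximate E[(theta_hat - theta)^2] by averaging the squared
# estimation error over many simulated data sets.
import numpy as np

rng = np.random.default_rng(0)

def empirical_mse(estimator, theta, sigma, n_samples, n_trials=10_000):
    """Monte Carlo approximation of E[(estimator(x) - theta)^2]."""
    errors = np.empty(n_trials)
    for t in range(n_trials):
        x = theta + sigma * rng.standard_normal(n_samples)  # noisy data set
        errors[t] = (estimator(x) - theta) ** 2
    return errors.mean()

# Example: the sample mean's MSE comes out close to sigma^2 / n_samples here.
print(empirical_mse(np.mean, theta=3.0, sigma=1.0, n_samples=25))
```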
Estimators
Commonly used estimators (estimation methods) and topics related to them include:
- Maximum likelihood estimators
- Bayes estimators
- Method of moments estimators
- Cramér–Rao bound
- Least squares
- Minimum mean squared error (MMSE), also known as Bayes least squared error (BLSE)
- Maximum a posteriori (MAP)
- Minimum variance unbiased estimator (MVUE)
- Nonlinear system identification
- Best linear unbiased estimator (BLUE)
- Unbiased estimators — see estimator bias.
- Particle filter
- Markov chain Monte Carlo (MCMC)
- Kalman filter, and its various derivatives
- Wiener filter
Examples
Unknown constant in additive white Gaussian noise
Consider a received discrete signal, $x[n]$, of $N$ independent samples that consists of an unknown constant $A$ with additive white Gaussian noise (AWGN) $w[n]$ with zero mean and known variance $\sigma^2$ (i.e., $w[n] \sim \mathcal{N}(0, \sigma^2)$). Since the variance is known, the only unknown parameter is $A$.
The model for the signal is then

$$x[n] = A + w[n], \quad n = 0, 1, \ldots, N-1.$$
Two possible (of many) estimators for the parameter $A$ are:
- $\hat{A}_1 = x[0]$
- $\hat{A}_2 = \frac{1}{N} \sum_{n=0}^{N-1} x[n]$, which is the sample mean
Both of these estimators have a mean of $A$, which can be shown through taking the expected value of each estimator:

$$\mathrm{E}\left[\hat{A}_1\right] = \mathrm{E}\left[x[0]\right] = A$$

and

$$\mathrm{E}\left[\hat{A}_2\right] = \mathrm{E}\left[\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N}\sum_{n=0}^{N-1}\mathrm{E}\left[x[n]\right] = \frac{1}{N}\left(NA\right) = A.$$

At this point, these two estimators would appear to perform the same. However, the difference between them becomes apparent when comparing the variances:

$$\mathrm{var}\left(\hat{A}_1\right) = \mathrm{var}\left(x[0]\right) = \sigma^2$$

and

$$\mathrm{var}\left(\hat{A}_2\right) = \mathrm{var}\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right) \overset{\text{independence}}{=} \frac{1}{N^2}\sum_{n=0}^{N-1}\mathrm{var}\left(x[n]\right) = \frac{1}{N^2}\left(N\sigma^2\right) = \frac{\sigma^2}{N}.$$
It would seem that the sample mean is a better estimator since its variance is lower for every N > 1.
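This comparison can be checked by simulation. The sketch below (assuming Python with NumPy; the particular values of A, sigma, and N are arbitrary illustrative choices) generates many independent records of the model above and estimates the mean and variance of both estimators.

```python
# A minimal sketch (assuming Python with NumPy) comparing the two estimators
# from this example by simulation: A_hat_1 = x[0] and A_hat_2 = sample mean.
import numpy as np

rng = np.random.default_rng(1)
A, sigma, N, trials = 5.0, 2.0, 50, 20_000

x = A + sigma * rng.standard_normal((trials, N))  # `trials` independent records
A_hat_1 = x[:, 0]          # first-sample estimator
A_hat_2 = x.mean(axis=1)   # sample-mean estimator

print("A_hat_1: mean", A_hat_1.mean(), "variance", A_hat_1.var())  # ~A, ~sigma^2
print("A_hat_2: mean", A_hat_2.mean(), "variance", A_hat_2.var())  # ~A, ~sigma^2/N
```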
Maximum likelihood
Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of the noise for one sample $w[n]$ is

$$p(w[n]) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma^2} w[n]^2\right)$$

and the probability of $x[n]$ becomes ($x[n]$ can be thought of as $\mathcal{N}(A, \sigma^2)$)

$$p(x[n]; A) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma^2}\left(x[n] - A\right)^2\right).$$

By independence, the probability of $\mathbf{x}$ becomes

$$p(\mathbf{x}; A) = \prod_{n=0}^{N-1} p(x[n]; A) = \frac{1}{\left(\sigma\sqrt{2\pi}\right)^N} \exp\left(-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n] - A\right)^2\right).$$

Taking the natural logarithm of the pdf

$$\ln p(\mathbf{x}; A) = -N\ln\left(\sigma\sqrt{2\pi}\right) - \frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n] - A\right)^2$$

and the maximum likelihood estimator is

$$\hat{A} = \arg\max_{A} \ln p(\mathbf{x}; A).$$
Taking the first derivative of the log-likelihood function

$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}\left(x[n] - A\right) = \frac{1}{\sigma^2}\left(\sum_{n=0}^{N-1} x[n] - NA\right)$$

and setting it to zero

$$0 = \frac{1}{\sigma^2}\left(\sum_{n=0}^{N-1} x[n] - NA\right).$$

This results in the maximum likelihood estimator

$$\hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n],$$

which is simply the sample mean. From this example, it was found that the sample mean is the maximum likelihood estimator for $N$ samples of a fixed, unknown parameter corrupted by AWGN.
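As a numerical check of this derivation, the sketch below (assuming Python with NumPy and SciPy; the simulated data, parameter values, and the helper name neg_log_likelihood are assumptions for illustration) maximizes the Gaussian log-likelihood over A and compares the result with the sample mean.

```python
# A minimal sketch (assuming Python with NumPy/SciPy) verifying numerically
# that the A maximizing the Gaussian log-likelihood equals the sample mean.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
A_true, sigma, N = 5.0, 2.0, 100
x = A_true + sigma * rng.standard_normal(N)   # simulated observations

def neg_log_likelihood(A):
    # Negative log-likelihood of x under Normal(A, sigma^2), dropping constants
    return np.sum((x - A) ** 2) / (2 * sigma ** 2)

A_ml = minimize_scalar(neg_log_likelihood).x
print(A_ml, x.mean())  # the two values agree to numerical precision
```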
Cramér–Rao lower bound
To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number

$$\mathcal{I}(A) = \mathrm{E}\left[\left(\frac{\partial}{\partial A} \ln p(\mathbf{x}; A)\right)^2\right] = -\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right]$$

and copying from above

$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2}\left(\sum_{n=0}^{N-1} x[n] - NA\right).$$

Taking the second derivative

$$\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A) = -\frac{N}{\sigma^2},$$

finding the negative expected value is trivial since it is now a deterministic constant:

$$-\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right] = \frac{N}{\sigma^2}.$$

Finally, putting the Fisher information into

$$\mathrm{var}\left(\hat{A}\right) \geq \frac{1}{\mathcal{I}(A)}$$

results in

$$\mathrm{var}\left(\hat{A}\right) \geq \frac{\sigma^2}{N}.$$
Comparing this to the variance of the sample mean (determined previously) shows that the variance of the sample mean is equal to the Cramér–Rao lower bound for all values of $N$ and $\sigma^2$. In other words, the sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator.
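The bound can also be checked empirically. The sketch below (assuming Python with NumPy; the chosen parameter values are arbitrary) compares the simulated variance of the sample mean with sigma^2 / N.

```python
# A minimal sketch (assuming Python with NumPy) comparing the simulated
# variance of the sample mean against the Cramér–Rao lower bound sigma^2 / N.
import numpy as np

rng = np.random.default_rng(3)
A, sigma, N, trials = 1.0, 3.0, 40, 50_000

x = A + sigma * rng.standard_normal((trials, N))
sample_means = x.mean(axis=1)

print("empirical variance of the sample mean:", sample_means.var())
print("Cramer-Rao lower bound sigma^2 / N:   ", sigma ** 2 / N)
```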
Maximum of a uniform distribution
One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.
Given a discrete uniform distribution $1, 2, \ldots, N$ with unknown maximum, the UMVU estimator for the maximum is given by

$$\frac{k+1}{k} m - 1 = m + \frac{m}{k} - 1,$$

where $m$ is the sample maximum and $k$ is the sample size, sampling without replacement.[2][3] This problem is commonly known as the German tank problem, due to application of maximum estimation to estimates of German tank production during World War II.
The formula may be understood intuitively as "the sample maximum plus the average gap between observations in the sample",
the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.[note 1]
This has a variance of[2]

$$\frac{1}{k}\frac{(N-k)(N+1)}{k+2} \approx \frac{N^2}{k^2} \text{ for small samples } k \ll N,$$

so a standard deviation of approximately $\frac{N}{k}$, the (population) average size of a gap between samples; compare $\frac{m}{k}$ above. This can be seen as a very simple case of maximum spacing estimation.
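The estimator is easy to try numerically. The sketch below (assuming Python with NumPy; the particular values of N and k are arbitrary illustrations) draws k serial numbers without replacement from 1..N and applies the formula m + m/k - 1 above.

```python
# A minimal sketch (assuming Python with NumPy) of the UMVU estimator
# m + m/k - 1 for the maximum of a discrete uniform distribution
# (the "German tank problem"), sampling without replacement.
import numpy as np

rng = np.random.default_rng(4)
N_true, k = 250, 6   # unknown population maximum and sample size (illustrative)

sample = rng.choice(np.arange(1, N_true + 1), size=k, replace=False)
m = sample.max()               # sample maximum (biased low)
N_hat = m + m / k - 1          # UMVU estimate of the population maximum

print("sample:", sorted(sample.tolist()))
print("sample maximum m:", m, " UMVU estimate:", N_hat)
```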
The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased.
Applications
Numerous fields require the use of estimation theory. Some of these fields include:
- Interpretation of scientific experiments
- Signal processing
- Clinical trials
- Opinion polls
- Quality control
- Telecommunications
- Project management
- Software engineering
- Control theory (in particular adaptive control)
- Network intrusion detection system
- Orbit determination
Measured data are likely to be subject to noise or uncertainty and it is through statistical probability that optimal solutions are sought to extract as much information from the data as possible.
See also
- Best linear unbiased estimator (BLUE)
- Completeness (statistics)
- Detection theory
- Efficiency (statistics)
- Expectation-maximization algorithm (EM algorithm)
- Fermi problem
- Grey box model
- Information theory
- Least-squares spectral analysis
- Matched filter
- Maximum entropy spectral estimation
- Nuisance parameter
- Parametric equation
- Pareto principle
- Rule of three (statistics)
- State estimator
- Statistical signal processing
- Sufficiency (statistics)
Notes
- ^ The sample maximum is never more than the population maximum, but can be less, hence it is a biased estimator: it will tend to underestimate the population maximum.
References
Citations
- ^ Walter, E.; Pronzato, L. (1997). Identification of Parametric Models from Experimental Data. London, England: Springer-Verlag.
- ^ a b Johnson, Roger (1994), "Estimating the Size of a Population", Teaching Statistics, 16 (2 (Summer)): 50–52, doi:10.1111/j.1467-9639.1994.tb00688.x
- ^ Johnson, Roger (2006), "Estimating the Size of a Population", Getting the Best from Teaching Statistics, archived from the original (PDF) on November 20, 2008
Sources
- E.L. Lehmann & G. Casella. Theory of Point Estimation. ISBN 0387985026.
- Dale Shermon (2009). Systems Cost Engineering. Gower Publishing. ISBN 978-0-566-08861-2.
- John Rice (1995). Mathematical Statistics and Data Analysis. Duxbury Press. ISBN 0-534-209343.
- Steven M. Kay (1993). Fundamentals of Statistical Signal Processing: Estimation Theory. PTR Prentice-Hall. ISBN 0-13-345711-7.
- H. Vincent Poor (16 March 1998). An Introduction to Signal Detection and Estimation. Springer. ISBN 0-387-94173-8.
- Harry L. Van Trees (2001). Detection, Estimation, and Modulation Theory, Part 1. Wiley. ISBN 0-471-09517-6. Archived from the original on 2005-04-28.
- Dan Simon. Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches. Archived from the original on 2010-12-30.
- Adaptive Filters. NJ: Wiley. 2008. ISBN 978-0-470-25388-5.
- Fundamentals of Adaptive Filtering. NJ: Wiley. 2003. ISBN 0-471-46126-1.
- Linear Estimation. NJ: Prentice-Hall. 2000. ISBN 978-0-13-022464-4.
- Indefinite Quadratic Estimation and Control: A Unified Approach to H2 and H∞ Theories. PA: Society for Industrial & Applied Mathematics (SIAM). 1999. ISBN 978-0-89871-411-1.
- V.G. Voinov & M.S. Nikulin (1993). Unbiased estimators and their applications. Vol. 1: Univariate case. Kluwer Academic Publishers. ISBN 0-7923-2382-3.
- V.G. Voinov & M.S. Nikulin (1996). Unbiased estimators and their applications. Vol. 2: Multivariate case. Kluwer Academic Publishers. ISBN 0-7923-3939-8.
External links
- Media related to Estimation theory at Wikimedia Commons