Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered:[1]

- The probabilistic approach (described in this article) assumes that the measured data is random, with a probability distribution dependent on the parameters of interest.
- The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.
For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.
Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.
As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal.
For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size N. Put into a vector,

$$\mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{bmatrix}.$$

Secondly, there are $M$ parameters

$$\boldsymbol{\theta} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_M \end{bmatrix},$$

whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:

$$p(\mathbf{x} \mid \boldsymbol{\theta}).$$

It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability

$$\pi(\boldsymbol{\theta}).$$

After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted $\hat{\boldsymbol{\theta}}$, where the "hat" indicates the estimate.
One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameters

$$\mathbf{e} = \hat{\boldsymbol{\theta}} - \boldsymbol{\theta}$$

as the basis for optimality. This error term is then squared, and the expected value of this squared value is minimized for the MMSE estimator.
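As a minimal sketch of these ingredients in code (assuming NumPy; the scalar parameter `A_true` and the other variable names are illustrative, not from the original), the following draws a sample whose distribution depends on the parameter and evaluates one realization of the squared error that the MMSE criterion minimizes in expectation:

```python
import numpy as np

rng = np.random.default_rng(0)

A_true = 3.0   # the unknown parameter (known here only to the simulation)
N = 100        # sample size
sigma = 1.0    # known noise standard deviation

# The statistical sample: N data points whose distribution depends on A_true
x = A_true + sigma * rng.standard_normal(N)

A_hat = x.mean()       # one candidate estimator
e = A_hat - A_true     # the error term e = A_hat - A
print(f"estimate: {A_hat:.3f}, squared error: {e**2:.6f}")
```

The MMSE estimator is the one that minimizes the expectation of this squared error over all realizations of the data, not the error of any single draw.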
Commonly used estimators (estimation methods) and topics related to them include:

- Maximum likelihood estimators
- Bayes estimators
- Method of moments estimators
- Cramér–Rao bound
- Least squares
- Minimum mean squared error (MMSE)
- Maximum a posteriori (MAP)
- Minimum variance unbiased estimator (MVUE)
- Best linear unbiased estimator (BLUE)
- Particle filter
- Markov chain Monte Carlo (MCMC)
- Kalman filter
- Wiener filter
Consider a received discrete signal, $x[n]$, of $N$ independent samples that consists of an unknown constant $A$ with additive white Gaussian noise (AWGN) $w[n]$ with zero mean and known variance $\sigma^2$ (i.e., $\mathcal{N}(0, \sigma^2)$). Since the variance is known, the only unknown parameter is $A$.
The model for the signal is then

$$x[n] = A + w[n], \quad n = 0, 1, \ldots, N-1.$$
Two possible (of many) estimators for the parameter $A$ are:

- $\hat{A}_1 = x[0]$
- $\hat{A}_2 = \frac{1}{N} \sum_{n=0}^{N-1} x[n]$, which is the sample mean
Both of these estimators have a mean of $A$, which can be shown through taking the expected value of each estimator:

$$\mathrm{E}\left[\hat{A}_1\right] = \mathrm{E}\left[x[0]\right] = A$$

and

$$\mathrm{E}\left[\hat{A}_2\right] = \mathrm{E}\left[\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N} \sum_{n=0}^{N-1} \mathrm{E}\left[x[n]\right] = \frac{1}{N}\left(N A\right) = A.$$
At this point, these two estimators would appear to perform the same. However, the difference between them becomes apparent when comparing the variances:

$$\mathrm{var}\left(\hat{A}_1\right) = \mathrm{var}\left(x[0]\right) = \sigma^2$$

and

$$\mathrm{var}\left(\hat{A}_2\right) = \mathrm{var}\left(\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right) \overset{\text{independence}}{=} \frac{1}{N^2} \sum_{n=0}^{N-1} \mathrm{var}\left(x[n]\right) = \frac{1}{N^2}\left(N \sigma^2\right) = \frac{\sigma^2}{N}.$$
It would seem that the sample mean is a better estimator, since its variance is lower for every $N > 1$.
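A short Monte Carlo experiment (a sketch, not part of the original derivation; assuming NumPy) makes the comparison concrete: both estimators average to $A$, but the sample mean's spread shrinks as $\sigma^2/N$ while that of $x[0]$ stays at $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
A, sigma, N, trials = 2.0, 1.0, 25, 100_000

# Each row is one realization of x[n] = A + w[n], n = 0, ..., N-1
x = A + sigma * rng.standard_normal((trials, N))

A1 = x[:, 0]          # first-sample estimator
A2 = x.mean(axis=1)   # sample-mean estimator

# Both are unbiased (means ~ A), but the variances differ by a factor of N
print("A1: mean =", A1.mean(), " var =", A1.var())   # var ~ sigma^2
print("A2: mean =", A2.mean(), " var =", A2.var())   # var ~ sigma^2 / N
```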
Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of the noise for one sample $w[n]$ is

$$p(w[n]) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{1}{2 \sigma^2} w[n]^2\right)$$

and the probability of $x[n]$ becomes ($x[n]$ can be thought of as $\mathcal{N}(A, \sigma^2)$)

$$p(x[n]; A) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{1}{2 \sigma^2} \left(x[n] - A\right)^2\right).$$

By independence, the probability of $\mathbf{x}$ becomes

$$p(\mathbf{x}; A) = \prod_{n=0}^{N-1} p(x[n]; A) = \left(\frac{1}{\sigma \sqrt{2\pi}}\right)^N \exp\left(-\frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} \left(x[n] - A\right)^2\right).$$

Taking the natural logarithm of the pdf

$$\ln p(\mathbf{x}; A) = -N \ln\left(\sigma \sqrt{2\pi}\right) - \frac{1}{2 \sigma^2} \sum_{n=0}^{N-1} \left(x[n] - A\right)^2$$

and the maximum likelihood estimator is

$$\hat{A} = \arg\max_A \ln p(\mathbf{x}; A).$$
Taking the first derivative of the log-likelihood function

$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} \left(x[n] - A\right) = \frac{1}{\sigma^2} \left(\sum_{n=0}^{N-1} x[n] - N A\right)$$

and setting it to zero

$$0 = \frac{1}{\sigma^2} \left(\sum_{n=0}^{N-1} x[n] - N A\right) \implies \sum_{n=0}^{N-1} x[n] = N A.$$
This results in the maximum likelihood estimator

$$\hat{A} = \frac{1}{N} \sum_{n=0}^{N-1} x[n],$$

which is simply the sample mean. From this example, it was found that the sample mean is the maximum likelihood estimator for $N$ samples of a fixed, unknown parameter corrupted by AWGN.
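The closed-form result can be checked numerically: maximizing the Gaussian log-likelihood over $A$ (here by minimizing its negative with SciPy; a sketch, with illustrative data and variable names) recovers the sample mean to solver tolerance:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
A_true, sigma, N = 2.0, 1.0, 50
x = A_true + sigma * rng.standard_normal(N)

def neg_log_likelihood(A):
    # -ln p(x; A), dropping the constant N * ln(sigma * sqrt(2*pi))
    return np.sum((x - A) ** 2) / (2 * sigma**2)

res = minimize_scalar(neg_log_likelihood)
print("numerical MLE:", res.x)      # agrees with the closed form
print("sample mean:  ", x.mean())
```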
To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number

$$\mathcal{I}(A) = \mathrm{E}\left[\left(\frac{\partial}{\partial A} \ln p(\mathbf{x}; A)\right)^2\right] = -\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right]$$

and copying from above

$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \left(\sum_{n=0}^{N-1} x[n] - N A\right).$$
Taking the second derivative

$$\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} (-N) = \frac{-N}{\sigma^2}$$

and finding the negative expected value is trivial, since it is now a deterministic constant:

$$-\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right] = \frac{N}{\sigma^2}.$$
Finally, putting the Fisher information into

$$\mathrm{var}\left(\hat{A}\right) \geq \frac{1}{\mathcal{I}}$$

results in

$$\mathrm{var}\left(\hat{A}\right) \geq \frac{\sigma^2}{N}.$$
Comparing this to the variance of the sample mean (determined previously) shows that the variance of the sample mean is equal to the Cramér–Rao lower bound for all values of $N$ and $\sigma^2$. In other words, the sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator.
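A simulation (again a sketch, assuming NumPy) confirms that the sample mean's empirical variance tracks the bound $\sigma^2/N$ across sample sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
A, sigma, trials = 2.0, 1.0, 100_000

for N in (5, 20, 80):
    # trials independent realizations of an N-sample data record
    x = A + sigma * rng.standard_normal((trials, N))
    print(f"N={N:2d}  var(sample mean)={x.mean(axis=1).var():.5f}"
          f"  CRLB={sigma**2 / N:.5f}")
```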
One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.
Given a discrete uniform distribution $1, 2, \ldots, N$ with unknown maximum, the UMVU estimator for the maximum is given by

$$\hat{N} = \frac{k+1}{k} m - 1 = m + \frac{m}{k} - 1,$$

where $m$ is the sample maximum and $k$ is the sample size, sampling without replacement.[2][3] This problem is commonly known as the German tank problem, due to the application of maximum estimation to estimates of German tank production during World War II.
The formula may be understood intuitively as "the sample maximum plus the average gap between observations in the sample", the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.[note 1]
This has a variance of[2]

$$\frac{1}{k} \frac{(N-k)(N+1)}{k+2} \approx \frac{N^2}{k^2} \text{ for small samples } k \ll N,$$

so a standard deviation of approximately $N/k$, the (population) average size of a gap between samples; compare $\frac{m}{k}$ above. This can be seen as a very simple case of maximum spacing estimation.
The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased.
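The bias of the sample maximum, and its removal by the UMVU formula, can be seen in a small simulation (a sketch, assuming NumPy; the population size and sample size below are arbitrary, and sampling is without replacement as the problem requires):

```python
import numpy as np

rng = np.random.default_rng(4)
N_true, k, trials = 1000, 10, 20_000   # population maximum, sample size

# Sample k serial numbers 1..N_true without replacement; record the maximum
m = np.array([rng.choice(N_true, size=k, replace=False).max() + 1
              for _ in range(trials)])

umvu = (1 + 1 / k) * m - 1   # the (k+1)/k * m - 1 estimator

print("mean of sample maximum:", m.mean())     # biased low, ~ k(N+1)/(k+1)
print("mean of UMVU estimate: ", umvu.mean())  # ~ N_true
print("std of UMVU estimate:  ", umvu.std())   # ~ N_true/k for k << N_true
```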
Numerous fields require the use of estimation theory. Some of these fields include:

- Interpretation of scientific experiments
- Signal processing
- Clinical trials
- Opinion polls
- Quality control
- Telecommunications
- Project management
- Software engineering
- Control theory
- Orbit determination
Measured data are likely to be subject to noise or uncertainty, and it is through statistical probability that optimal solutions are sought to extract as much information from the data as possible.