Statistical inference is the process of using data analysis to infer properties of an underlying probability distribution.[1] Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.
Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. In machine learning, the term inference is sometimes used instead to mean "make a prediction, by evaluating an already trained model";[2] in this context inferring properties of the model is referred to as training or learning (rather than inference), and using a model for prediction is referred to as inference (instead of prediction); see also predictive inference.
Statistical inference makes propositions about a population, using data drawn from the population with some form of sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model.[3]
Konishi and Kitagawa state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling".[4] Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".[5]
The conclusion of a statistical inference is a statistical proposition.[6] Some common forms of statistical proposition are a point estimate, an interval estimate (for example, a confidence interval), a credible interval, rejection of a hypothesis, and the clustering or classification of data points into groups.
Any statistical inference requires some assumptions. A statistical model is a set of assumptions concerning the generation of the observed data and similar data. Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference.[7] Descriptive statistics are typically used as a preliminary step before more formal inferences are drawn.[8]
Statisticians distinguish between three levels of modeling assumptions: fully parametric, semi-parametric, and non-parametric.
Whatever level of assumption is made, correctly calibrated inference in general requires these assumptions to be correct, i.e., that the data-generating mechanisms really have been correctly specified.
Incorrect assumptions of 'simple' random sampling can invalidate statistical inference.[10] More complex semi- and fully parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases lead to faulty conclusions.[11] Incorrect assumptions of normality in the population also invalidate some forms of regression-based inference.[12] The use of any parametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal."[13] In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population."[13] Here, the central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally distributed, if the distribution is not heavy-tailed.
Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for approximating these.
With finite samples, approximation results measure how close a limiting distribution approaches the statistic's sample distribution: for example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem.[14] Yet for many practical purposes, the normal approximation provides a good approximation to the sample mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience.[14] Following Kolmogorov's work in the 1950s, advanced statistics uses approximation theory and functional analysis to quantify the error of approximation. In this approach, the metric geometry of probability distributions is studied; this approach quantifies approximation error with, for example, the Kullback–Leibler divergence, the Bregman divergence, and the Hellinger distance.[15][16][17]
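As an illustration of how such approximation error can be assessed in practice, the following sketch (not from the source; the exponential population, sample size, and tail cutoff are arbitrary choices for the example) compares the simulated distribution of the sample mean against its normal approximation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 10, 100_000                 # sample size and number of simulated samples

# Skewed (exponential) population with mean 1 and variance 1.
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

# Normal approximation suggested by the central limit theorem: N(1, 1/n).
approx = stats.norm(loc=1.0, scale=np.sqrt(1.0 / n))

# Compare a tail probability under the simulated and the limiting distribution.
print("simulated P(mean > 1.5):", (means > 1.5).mean())
print("normal approximation   :", 1 - approx.cdf(1.5))
```

The discrepancy between the two printed probabilities is a direct, if crude, measure of the approximation error at this sample size.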
With indefinitely large samples, limiting results like the central limit theorem describe the sample statistic's limiting distribution, if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples.[18][19][20] However, the asymptotic theory of limiting distributions is often invoked for work with finite samples. For example, limiting results are often invoked to justify the generalized method of moments and the use of generalized estimating equations, which are popular in econometrics and biostatistics. The magnitude of the difference between the limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using simulation.[21] The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families).
For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design. In frequentist inference, the randomization allows inferences to be based on the randomization distribution rather than a subjective model, and this is important especially in survey sampling and design of experiments.[22][23] Statistical inference from randomized studies is also more straightforward than many other situations.[24][25][26] In Bayesian inference, randomization is also of importance: in survey sampling, use of sampling without replacement ensures the exchangeability of the sample with the population; in randomized experiments, randomization warrants a missing at random assumption for covariate information.[27]
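To make the randomization distribution concrete, here is a minimal sketch (the two tiny samples are invented for illustration, not taken from the source) of a two-sample randomization test that evaluates the test statistic under every reassignment of units to treatment:

```python
import itertools
import numpy as np

treated = np.array([7.1, 8.4, 6.9])   # invented outcomes under treatment
control = np.array([5.2, 6.0, 5.8])   # invented outcomes under control
pooled = np.concatenate([treated, control])
observed = treated.mean() - control.mean()

# Randomization distribution: the statistic under every possible
# reassignment of 3 of the 6 units to treatment.
diffs = []
for idx in itertools.combinations(range(6), 3):
    mask = np.zeros(6, dtype=bool)
    mask[list(idx)] = True
    diffs.append(pooled[mask].mean() - pooled[~mask].mean())

# Two-sided p-value: share of reassignments at least as extreme as observed.
p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed difference = {observed:.2f}, randomization p = {p_value:.3f}")
```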
Objective randomization allows properly inductive procedures.[28][29][30][31][32] Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures.[33] (However, it is true that in fields of science with developed theoretical knowledge and experimental control, randomized experiments may increase the costs of experimentation without improving the quality of inferences.[34][35]) Similarly, results from randomized experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than do observational studies of the same phenomena.[36] However, a good observational study may be better than a bad randomized experiment.
The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental protocol and does not need a subjective model.[37][38]
However, some hypotheses cannot be tested using objective statistical models that accurately describe randomized experiments or random samples; in some cases, such randomized studies are uneconomical or unethical.
It is standard practice to refer to a statistical model, e.g., a linear or logistic model, when analyzing data from randomized experiments.[39] However, the randomization scheme guides the choice of a statistical model, and it is not possible to choose an appropriate model without knowing the randomization scheme.[23] Seriously misleading results can be obtained by analyzing data from randomized experiments while ignoring the experimental protocol; common mistakes include forgetting the blocking used in an experiment and confusing repeated measurements on the same experimental unit with independent replicates of the treatment applied to different experimental units.[40]
Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. Model-free techniques instead combine, evolve, ensemble, and train algorithms that adapt dynamically to the contextual affinities of a process and learn the intrinsic characteristics of the observations.[41][42]
For example, model-free simple linear regression is based either on a completely randomized design, where the data pairs are independent and identically distributed, or on a deterministic design, where the explanatory variables are fixed and the corresponding responses are random and independent with a common conditional distribution.
In either case, the model-free randomization inference for features of the common conditional distribution relies on some regularity conditions, e.g. functional smoothness. For instance, the population feature conditional mean, $\mu(x) = E(Y \mid X = x)$, can be consistently estimated via local averaging or local polynomial fitting, under the assumption that $\mu(x)$ is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case, the conditional mean $\mu(x)$.[43]
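A minimal sketch of local averaging for the conditional mean $\mu(x)$ (not from the source; the Gaussian kernel, bandwidth, and simulated data are arbitrary illustrative choices):

```python
import numpy as np

def local_average(x0, x, y, h=0.5):
    """Kernel-weighted (Nadaraya-Watson) estimate of E(Y | X = x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(1)
x = rng.uniform(0, 3, size=200)
y = np.sin(x) + rng.normal(scale=0.3, size=200)   # true mu(x) = sin(x), smooth

print(local_average(1.5, x, y))   # should be close to sin(1.5) ~ 0.997
```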
Different schools of statistical inference have become established. These schools—or "paradigms"—are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms.
Bandyopadhyay and Forster describe four paradigms: the classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the Akaikean information criterion-based paradigm.[44]
This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging.
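One way to quantify a frequentist property under (simulated) repeated sampling, offered here as a hedged sketch rather than anything prescribed by the source: check that a nominal 95% confidence interval for a normal mean covers the true mean in roughly 95% of repeated samples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_mu, n, reps = 10.0, 25, 10_000        # invented population and design
t_crit = stats.t.ppf(0.975, df=n - 1)      # two-sided 95% critical value

covered = 0
for _ in range(reps):
    sample = rng.normal(loc=true_mu, scale=3.0, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - t_crit * se, sample.mean() + t_crit * se
    covered += (lo <= true_mu <= hi)

print("empirical coverage:", covered / reps)   # should be close to 0.95
```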
One interpretation of frequentist inference (or classical inference) is that it is applicable only in terms of frequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman[45] develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated-sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach.
The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility functions. However, some elements of frequentist statistics, such as statistical decision theory, do incorporate utility functions.[citation needed] In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators, or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property.[46] However, loss functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under absolute-value loss functions, in that they minimize expected loss, and least squares estimators are optimal under squared-error loss functions, in that they minimize expected loss.
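A small numerical check of the closing statement (invented toy data; not an example from the source): over a grid of candidate estimates, average squared error is minimized near the sample mean and average absolute error near the sample median.

```python
import numpy as np

data = np.array([1.0, 2.0, 2.5, 4.0, 10.0])   # skewed toy data
grid = np.linspace(0, 12, 1201)                # candidate estimates, step 0.01

sq_loss = [np.mean((data - c) ** 2) for c in grid]    # squared-error loss
abs_loss = [np.mean(np.abs(data - c)) for c in grid]  # absolute-value loss

print("argmin squared loss :", grid[np.argmin(sq_loss)], "vs mean  ", data.mean())
print("argmin absolute loss:", grid[np.argmin(abs_loss)], "vs median", np.median(data))
```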
While statisticians using frequentist inference must choose for themselves the parameters of interest and the estimators/test statistic to be used, the absence of obviously explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'.[47]
The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions.[48] There are several different justifications for using the Bayesian approach.
Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes factors can all be motivated in this way. While a user's utility function need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been proposed but not yet fully developed.)
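As a hedged illustration of such posterior summaries (the Beta-Binomial model and the counts are arbitrary; this conjugate example is not drawn from the source):

```python
from scipy import stats

# Beta(1, 1) prior on a proportion; observe 7 successes in 10 trials.
a, b = 1 + 7, 1 + 3                    # conjugate update: Beta(8, 4) posterior
posterior = stats.beta(a, b)

print("posterior mean  :", posterior.mean())            # a / (a + b) = 2/3
print("posterior median:", posterior.median())
print("posterior mode  :", (a - 1) / (a + b - 2))       # 7/10
# Equal-tailed 95% credible interval (a highest-posterior-density interval,
# being the shortest at this level, would be slightly narrower).
print("95% credible interval:", posterior.interval(0.95))
```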
Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision-theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically) incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent. Some advocates of Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs.
Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data. Likelihoodism approaches statistics by using the likelihood function, denoted as $L(x \mid \theta)$, which quantifies the probability of observing the given data, assuming a specific set of parameter values. In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data.
The process of likelihood-based inference usually involves specifying a statistical model for the data, writing the likelihood of the observed data as a function of the model's parameters, maximizing that function to obtain parameter estimates, and assessing the precision of those estimates, for example via the curvature of the log-likelihood.
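A minimal sketch of these steps (not from the source; a normal model and invented observations are assumed), using numerical maximization of the log-likelihood:

```python
import numpy as np
from scipy import optimize, stats

data = np.array([4.2, 5.1, 3.8, 4.9, 5.5, 4.4])   # invented observations

def neg_log_likelihood(params):
    # Normal model; sigma is log-parameterized to keep it positive.
    mu, log_sigma = params
    return -np.sum(stats.norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))

# Maximize the likelihood by minimizing its negative logarithm.
result = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)   # mu_hat should match data.mean()
```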
The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection.
AIC is founded on information theory: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the goodness of fit of the model and the simplicity of the model.)
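The defining formula (a standard result, stated here for completeness rather than drawn from this section) is

```latex
\mathrm{AIC} = 2k - 2\ln(\hat{L}),
```

where $k$ is the number of estimated parameters and $\hat{L}$ is the maximized value of the model's likelihood function; among candidate models, the one with the smallest AIC is preferred.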
The minimum description length (MDL) principle has been developed from ideas in information theory[49] and the theory of Kolmogorov complexity.[50] The MDL principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches.
However, if a "data generating mechanism" does exist in reality, then according to Shannon's source coding theorem it provides the MDL description of the data, on average and asymptotically.[51] In minimizing description length (or descriptive complexity), MDL estimation is similar to maximum likelihood estimation and maximum a posteriori estimation (using maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling.[51][52]
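In its common two-part form (a standard formulation, included here as background rather than quoted from this section), the MDL principle selects the model $M$ minimizing the total description length

```latex
L(M) + L(D \mid M),
```

where $L(M)$ is the number of bits needed to describe the model and $L(D \mid M)$ is the number of bits needed to encode the data $D$ with the help of the model.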
The MDL principle has been applied in communication-coding theory in information theory, in linear regression,[52] and in data mining.[50]
The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory.[53]
Fiducial inference was an approach to statistical inference based on fiducial probability, also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious.[54][55] However, this argument is the same as that which shows[56] that a so-called confidence distribution is not a valid probability distribution and, since this has not invalidated the application of confidence intervals, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's fiducial argument as a special case of an inference theory using upper and lower probabilities.[57]
Developing ideas of Fisher and of Pitman from 1938 to 1939,[58] George A. Barnard developed "structural inference" or "pivotal inference",[59] an approach using invariant probabilities on group families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful. Donald A. S. Fraser developed a general theory for structural inference[60] based on group theory and applied this to linear models.[61] The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist.[62]
The topics below are usually included in the area of statistical inference.
Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations.
Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability,[citation needed] but it fell out of favor in the 20th century due to a new parametric approach pioneered by Bruno de Finetti. The approach modeled phenomena as a physical system observed with error (e.g., celestial mechanics). De Finetti's idea of exchangeability—that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper,[63] and has since been propounded by such statisticians as Seymour Geisser.[64]
The term inference also refers, in TensorFlow Lite, to the process of executing a model on-device in order to make predictions based on input data.