Model selection

From Wikipedia, the free encyclopedia
Task of selecting a statistical model from a set of candidate models

Model selection is the task of selecting a model from among various candidates on the basis of some performance criterion.[1] In the context of machine learning and, more generally, statistical analysis, this may be the selection of a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection. Given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice (Occam's razor).

Konishi & Kitagawa (2008, p. 75) state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Cox (2006, p. 197) has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".

Model selection may also refer to the problem of selecting a few representative models from a large set of computational models for the purpose of decision making or optimization under uncertainty.[2]

In machine learning, algorithmic approaches to model selection include feature selection, hyperparameter optimization, and statistical learning theory.

Introduction

[Figure: The scientific observation cycle.]

In its most basic forms, model selection is one of the fundamental tasks of scientific inquiry. Determining the principle that explains a series of observations is often linked directly to a mathematical model predicting those observations. For example, when Galileo performed his inclined plane experiments, he demonstrated that the motion of the balls fitted the parabola predicted by his model.[citation needed]

Of the countless possible mechanisms and processes that could have produced the data, how can one even begin to choose the best model? The mathematical approach commonly taken decides among a set of candidate models; this set must be chosen by the researcher. Often simple models such as polynomials are used, at least initially.[citation needed] Burnham & Anderson (2002) emphasize throughout their book the importance of choosing models based on sound scientific principles, such as an understanding of the phenomenological processes or mechanisms (e.g., chemical reactions) underlying the data.

Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity. More complex models will be better able to adapt their shape to fit the data (for example, a fifth-order polynomial can exactly fit six points), but the additional parameters may not represent anything useful. (Perhaps those six points are really just randomly distributed about a straight line.) Goodness of fit is generally determined using a likelihood ratio approach, or an approximation of this, leading to a chi-squared test. The complexity is generally measured by counting the number of parameters in the model.
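The trade-off just described can be made concrete with a small sketch (NumPy, with synthetic data chosen purely for illustration; the specific numbers are not from the article): six points generated from a straight line plus noise are fitted exactly by a fifth-order polynomial, even though the extra parameters capture nothing but noise.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 5, 6)                    # six x-locations
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 6)   # noisy samples of a straight line

    line = np.polyfit(x, y, deg=1)       # 2 parameters
    quintic = np.polyfit(x, y, deg=5)    # 6 parameters: interpolates the points exactly

    print("residuals, degree 1:", np.round(y - np.polyval(line, x), 3))
    print("residuals, degree 5:", np.round(y - np.polyval(quintic, x), 3))   # ~0 everywhere

    # Between the data points the two fits can disagree substantially, which is
    # where the quintic's extra flexibility stops representing anything useful.
    x_dense = np.linspace(0, 5, 101)
    gap = np.max(np.abs(np.polyval(line, x_dense) - np.polyval(quintic, x_dense)))
    print("max disagreement between the two fits off the data points:", round(gap, 3))

Both models fit the observed points, but only the simpler one reflects the process that generated them; a selection criterion that penalizes the parameter count would prefer the straight line here.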

Model selection techniques can be considered as estimators of some physical quantity, such as the probability of the model producing the given data. The bias and variance are both important measures of the quality of this estimator; efficiency is also often considered.

A standard example of model selection is that of curve fitting, where, given a set of points and other background knowledge (e.g., the points are the result of i.i.d. samples), we must select a curve that describes the function that generated the points.
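A hedged sketch of that curve-fitting setting (synthetic data and a simple hold-out split, both illustrative assumptions): the candidate curves are polynomials of increasing degree, and the degree is selected by its error on points not used for fitting.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0, 3, 40))
    y = np.sin(2 * x) + rng.normal(0, 0.2, x.size)   # unknown "true" curve plus noise

    train, val = np.arange(0, 40, 2), np.arange(1, 40, 2)   # alternate points: fit / validate

    def val_error(deg):
        coef = np.polyfit(x[train], y[train], deg)   # fit candidate curve on the training half
        return np.mean((y[val] - np.polyval(coef, x[val])) ** 2)

    errors = {deg: val_error(deg) for deg in range(8)}
    print({d: round(e, 3) for d, e in errors.items()})
    print("selected degree:", min(errors, key=errors.get))

Low degrees underfit and very high degrees overfit the training half, so the validation error is typically smallest at an intermediate degree.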

Two directions of model selection


There are two main objectives in inference and learning from data. One is scientific discovery, also called statistical inference: understanding the underlying data-generating mechanism and interpreting the nature of the data. The other is predicting future or unseen observations, also called statistical prediction, in which the data scientist is not necessarily concerned with an accurate probabilistic description of the data. One may, of course, also be interested in both directions.

In line with the two different objectives, model selection can also have two directions: model selection for inference and model selection for prediction.[3] The first direction is to identify the best model for the data, which will preferably provide a reliable characterization of the sources of uncertainty for scientific interpretation. For this goal, it is important that the selected model is not too sensitive to the sample size. Accordingly, an appropriate notion for evaluating model selection is selection consistency, meaning that the most robust candidate will be consistently selected given sufficiently many data samples.

The second direction is to choose a model as machinery to offer excellent predictive performance. For the latter, however, the selected model may simply be the lucky winner among a few close competitors, yet the predictive performance can still be the best possible. If so, the model selection is fine for the second goal (prediction), but the use of the selected model for insight and interpretation may be severely unreliable and misleading.[3] Moreover, for very complex models selected this way, even predictions may be unreasonable for data only slightly different from those on which the selection was made.[4]

Methods to assist in choosing the set of candidate models


Criteria


Several criteria are used for model selection. The most commonly used information criteria are (i) the Akaike information criterion and (ii) the Bayes factor and/or the Bayesian information criterion (which to some extent approximates the Bayes factor); see Stoica & Selen (2004) for a review.
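As a minimal sketch of how (i) and (ii) are computed in practice (Gaussian linear models on synthetic data; the candidate set and constants below are illustrative assumptions, not part of the article): both criteria penalize the maximized log-likelihood by a term that grows with the number of parameters k, with AIC = 2k - 2 ln L and BIC = k ln(n) - 2 ln L, and for Gaussian errors -2 ln L = n ln(RSS/n) + n(1 + ln 2π) at the least-squares fit.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100
    x = np.linspace(-2, 2, n)
    y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 0.3, n)   # true model is quadratic

    def aic_bic(deg):
        coef = np.polyfit(x, y, deg)
        rss = np.sum((y - np.polyval(coef, x)) ** 2)
        k = deg + 2                              # polynomial coefficients plus the error variance
        neg2loglik = n * np.log(rss / n) + n * (1 + np.log(2 * np.pi))
        return 2 * k + neg2loglik, k * np.log(n) + neg2loglik

    for deg in range(6):
        aic, bic = aic_bic(deg)
        print(f"degree {deg}: AIC = {aic:7.1f}  BIC = {bic:7.1f}")
    # The candidate minimizing the criterion is selected; here both favor degree 2.

The model minimizing AIC or BIC is selected; BIC's heavier ln(n) penalty makes it favor smaller models than AIC as the sample size grows.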

Among these criteria, cross-validation is typically the most accurate, and computationally the most expensive, for supervised learning problems.[citation needed]
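A minimal sketch of model selection by k-fold cross-validation, assuming scikit-learn is available (the candidate models, data, and fold count are illustrative choices, not prescribed by the article): each candidate is scored by its average held-out error across the folds, and the lowest-error candidate is selected.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(3)
    X = rng.uniform(-2, 2, size=(120, 1))
    y = np.sin(1.5 * X[:, 0]) + rng.normal(0, 0.2, 120)

    candidates = {f"polynomial degree {d}": make_pipeline(PolynomialFeatures(d), LinearRegression())
                  for d in (1, 3, 5, 9)}

    scores = {}
    for name, model in candidates.items():
        # neg_mean_squared_error is negated so that larger is better; flip the sign back.
        mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
        scores[name] = mse.mean()
        print(f"{name}: mean 5-fold MSE = {scores[name]:.4f}")

    print("selected:", min(scores, key=scores.get))

Cross-validation estimates predictive error directly, which aligns it with the prediction objective discussed above; its computational cost comes from refitting each candidate once per fold.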

Burnham & Anderson (2002, §6.3) say the following:

There is a variety of model selection methods. However, from the point of view of statistical performance of a method, and intended context of its use, there are only two distinct classes of methods: These have been labeled efficient and consistent. (...) Under the frequentist paradigm for model selection one generally has three main approaches: (I) optimization of some selection criteria, (II) tests of hypotheses, and (III) ad hoc methods.


Notes

  1. Hastie, T.; Tibshirani, R.; Friedman, J. (2009). The Elements of Statistical Learning. Springer. p. 195.
  2. Shirangi, Mehrdad G.; Durlofsky, Louis J. (2016). "A general method to select representative models for decision making and optimization under uncertainty". Computers & Geosciences. 96: 109–123. Bibcode:2016CG.....96..109S. doi:10.1016/j.cageo.2016.08.002.
  3. Ding, Jie; Tarokh, Vahid; Yang, Yuhong (2018). "Model Selection Techniques: An Overview". IEEE Signal Processing Magazine. 35 (6): 16–34. arXiv:1810.09583. Bibcode:2018ISPM...35f..16D. doi:10.1109/MSP.2018.2867638. ISSN 1053-5888. S2CID 53035396.
  4. Su, J.; Vargas, D. V.; Sakurai, K. (2019). "One Pixel Attack for Fooling Deep Neural Networks". IEEE Transactions on Evolutionary Computation. 23 (5): 828–841. arXiv:1710.08864. Bibcode:2019ITEC...23..828S. doi:10.1109/TEVC.2019.2890858. S2CID 2698863.
  5. Ding, J.; Tarokh, V.; Yang, Y. (June 2018). "Bridging AIC and BIC: A New Criterion for Autoregression". IEEE Transactions on Information Theory. 64 (6): 4024–4043. arXiv:1508.02473. Bibcode:2018ITIT...64.4024D. doi:10.1109/TIT.2017.2717599. ISSN 1557-9654. S2CID 5189440.
  6. Tsao, Min (2023). "Regression model selection via log-likelihood ratio and constrained minimum criterion". Canadian Journal of Statistics. 52: 195–211. arXiv:2107.08529. doi:10.1002/cjs.11756. S2CID 236087375.

References
