Early stopping

From Wikipedia, the free encyclopedia
Method in machine learning

In machine learning, early stopping is a form of regularization used to avoid overfitting when training a model with an iterative method, such as gradient descent. Such methods update the model to make it better fit the training data with each iteration. Up to a point, this improves the model's performance on data outside of the training set (e.g., the validation set). Past that point, however, improving the model's fit to the training data comes at the expense of increased generalization error. Early stopping rules provide guidance as to how many iterations can be run before the learner begins to overfit. Early stopping rules have been employed in many different machine learning methods, with varying amounts of theoretical foundation.

Background


This section presents some of the basic machine-learning concepts required for a description of early stopping methods.

Overfitting

Main article: Overfitting
Figure 1.  The green line represents an overfitted model and the black line represents a regularized model. While the green line best follows the training data, it is too dependent on that data and is likely to have a higher error rate on new, unseen data (illustrated by the black-outlined dots) than the black line.

Machine learning algorithms train a model based on a finite set of training data. During this training, the model is evaluated based on how well it predicts the observations contained in the training set. In general, however, the goal of a machine learning scheme is to produce a model that generalizes, that is, that predicts previously unseen observations. Overfitting occurs when a model fits the data in the training set well, while incurring larger generalization error.
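
As a small synthetic illustration (not taken from the cited sources), fitting polynomials of increasing degree to a handful of noisy samples typically drives the training error toward zero while the error on fresh samples from the same process grows:

    import numpy as np

    rng = np.random.default_rng(0)

    def true_f(x):
        return np.sin(2 * np.pi * x)                      # the process to be learned

    x_train = rng.uniform(0, 1, 10)
    y_train = true_f(x_train) + rng.normal(0, 0.2, 10)    # finite, noisy training set
    x_new = rng.uniform(0, 1, 200)
    y_new = true_f(x_new) + rng.normal(0, 0.2, 200)       # previously unseen observations

    for degree in (3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)     # least-squares polynomial fit
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        new_mse = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
        # the higher-degree fit typically has lower training error but higher error on new data
        print(degree, train_mse, new_mse)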

Regularization

Main article: Regularization (mathematics)

Regularization, in the context of machine learning, refers to the process of modifying a learning algorithm so as to prevent overfitting. This generally involves imposing some sort of smoothness constraint on the learned model.[1] This smoothness may be enforced explicitly, by fixing the number of parameters in the model, or by augmenting the cost function as in Tikhonov regularization. Tikhonov regularization, along with principal component regression and many other regularization schemes, falls under the umbrella of spectral regularization: regularization characterized by the application of a filter. Early stopping also belongs to this class of methods.
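
For example, Tikhonov (ridge) regularization of a linear least-squares problem adds a squared-norm penalty to the cost. With design matrix $A$, targets $y$, and regularization parameter $\lambda > 0$ (notation introduced here only for illustration), the regularized problem and its closed-form solution are

$\min_{w}\ \|Aw - y\|^{2} + \lambda \|w\|^{2}, \qquad \hat{w} = \left(A^{\top}A + \lambda I\right)^{-1} A^{\top} y,$

where the penalty term shrinks the learned coefficients and thereby enforces a form of the smoothness constraint mentioned above.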

Gradient descent methods

Main article: Gradient descent

Gradient descent methods are first-order, iterative optimization methods. Each iteration updates an approximate solution to the optimization problem by taking a step in the direction of the negative of the gradient of the objective function. By choosing the step size appropriately, such a method can be made to converge to a local minimum of the objective function. Gradient descent is used in machine learning by defining a loss function that reflects the error of the learner on the training set and then minimizing that function.
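
Written out, for a differentiable loss $L(w)$ over model parameters $w$ and a step size $\gamma_{t}$ (matching the step-size notation used below), each iteration performs

$w_{t+1} = w_{t} - \gamma_{t}\, \nabla L(w_{t}),$

so that running more iterations fits the training data more closely; early stopping truncates this sequence at some iteration $t$.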

Early stopping based on analytical results


Early stopping in statistical learning theory


Early stopping can be used to regularize non-parametric regression problems encountered in machine learning. For a given input space $X$, output space $Y$, and samples drawn from an unknown probability measure $\rho$ on $Z = X \times Y$, the goal of such problems is to approximate a regression function $f_{\rho}$, given by

$f_{\rho}(x) = \int_{Y} y \, d\rho(y \mid x), \quad x \in X,$

where $\rho(y \mid x)$ is the conditional distribution at $x$ induced by $\rho$.[2] One common choice for approximating the regression function is to use functions from a reproducing kernel Hilbert space.[2] These spaces can be infinite-dimensional, in which case they can supply solutions that overfit training sets of arbitrary size. Regularization is, therefore, especially important for these methods. One way to regularize non-parametric regression problems is to apply an early stopping rule to an iterative procedure such as gradient descent.

The early stopping rules proposed for these problems are based on analysis of upper bounds on the generalization error as a function of the iteration number. They yield prescriptions for the number of iterations to run that can be computed prior to starting the solution process.[3][4]

Example: Least-squares loss


(Adapted from Yao, Rosasco and Caponnetto, 2007[3])

Let $X \subseteq \mathbb{R}^{n}$ and $Y = \mathbb{R}$. Given a set of samples

$\mathbf{z} = \left\{(x_{i}, y_{i}) \in X \times Y : i = 1, \dots, m\right\} \in Z^{m},$

drawn independently from $\rho$, minimize the functional

$\mathcal{E}(f) = \int_{X \times Y} (f(x) - y)^{2} \, d\rho$

where $f$ is a member of the reproducing kernel Hilbert space $\mathcal{H}$. That is, minimize the expected risk for a least-squares loss function. Since $\mathcal{E}$ depends on the unknown probability measure $\rho$, it cannot be used for computation. Instead, consider the following empirical risk

$\mathcal{E}_{\mathbf{z}}(f) = \frac{1}{m} \sum_{i=1}^{m} \left(f(x_{i}) - y_{i}\right)^{2}.$

Let $f_{t}$ and $f_{t}^{\mathbf{z}}$ be the $t$-th iterates of gradient descent applied to the expected and empirical risks, respectively, where both iterations are initialized at the origin and both use the step size $\gamma_{t}$. The $f_{t}$ form the population iteration, which converges to $f_{\rho}$ but cannot be used in computation, while the $f_{t}^{\mathbf{z}}$ form the sample iteration, which usually converges to an overfitting solution.
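
For concreteness, if $K$ is the reproducing kernel of $\mathcal{H}$ and $K_{x_{i}} = K(x_{i}, \cdot)$ (notation introduced here only for this sketch), the sample iteration for the least-squares loss can be written, absorbing constant factors into the step size, as

$f_{t+1}^{\mathbf{z}} = f_{t}^{\mathbf{z}} - \frac{\gamma_{t}}{m} \sum_{i=1}^{m} \left(f_{t}^{\mathbf{z}}(x_{i}) - y_{i}\right) K_{x_{i}}, \qquad f_{0}^{\mathbf{z}} = 0,$

with the population iteration defined analogously, the empirical average over the samples being replaced by integration against $\rho$.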

We want to control the difference between the expected risk of the sample iteration and the minimum expected risk, that is, the expected risk of the regression function:

$\mathcal{E}(f_{t}^{\mathbf{z}}) - \mathcal{E}(f_{\rho})$

This difference can be rewritten as the sum of two terms: the difference in expected risk between the sample and population iterations and that between the population iteration and the regression function:

$\mathcal{E}(f_{t}^{\mathbf{z}}) - \mathcal{E}(f_{\rho}) = \left[\mathcal{E}(f_{t}^{\mathbf{z}}) - \mathcal{E}(f_{t})\right] + \left[\mathcal{E}(f_{t}) - \mathcal{E}(f_{\rho})\right]$

This equation presents a bias-variance tradeoff, which is then solved to give an optimal stopping rule that may depend on the unknown probability distribution. That rule has associated probabilistic bounds on the generalization error. For the analysis leading to the early stopping rule and bounds, the reader is referred to the original article.[3] In practice, data-driven methods, e.g. cross-validation, can be used to obtain an adaptive stopping rule.

Early stopping in boosting


Boosting refers to a family of algorithms in which a set of weak learners (learners that are only slightly correlated with the true process) are combined to produce a strong learner. It has been shown, for several boosting algorithms (including AdaBoost), that regularization via early stopping can provide guarantees of consistency, that is, that the result of the algorithm approaches the true solution as the number of samples goes to infinity.[5][6][7][8]

L2-boosting


Boosting methods have close ties to the gradient descent methods described above; in particular, gradient descent with the $L_{2}$ loss can be regarded as a boosting method, known as L2Boost.[3]
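
As a sketch of the connection, in the usual description of L2-boosting each step fits the base learner $\hat{g}_{t}$ to the current residuals and adds it, possibly shrunk by a factor $\nu \in (0, 1]$, to the ensemble $\hat{F}_{t}$ (symbols introduced here only for illustration):

$u_{i} = y_{i} - \hat{F}_{t}(x_{i}), \qquad \hat{F}_{t+1} = \hat{F}_{t} + \nu\, \hat{g}_{t}.$

Because the residuals are proportional to the negative gradient of the squared loss, each boosting step is a gradient-descent step, and stopping the iteration early plays the same regularizing role as in the gradient descent setting described above.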

Validation-based early stopping


These early stopping rules work by splitting the original training set into a new training set and a validation set. The error on the validation set is used as a proxy for the generalization error in determining when overfitting has begun. These methods are employed in the training of many iterative machine learning algorithms, including neural networks. Prechelt gives the following summary of a naive implementation of holdout-based early stopping (a minimal code sketch of the same procedure follows the quotation):[9]

  1. Split the training data into a training set and a validation set, e.g. in a 2-to-1 proportion.
  2. Train only on the training set and evaluate the per-example error on the validation set once in a while, e.g. after every fifth epoch.
  3. Stop training as soon as the error on the validation set is higher than it was the last time it was checked.
  4. Use the weights the network had in that previous step as the result of the training run.

— Lutz Prechelt, Early Stopping – But When?
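
A minimal Python sketch of this naive procedure follows; the split of step 1 is assumed to have been done by the caller, and train_one_epoch, validation_error, get_weights, and set_weights are hypothetical placeholder callables supplied by the caller, not functions of any particular library:

    import copy

    def early_stopping_train(model, train_set, val_set,
                             train_one_epoch, validation_error,
                             get_weights, set_weights,
                             max_epochs=100, eval_every=5):
        """Naive holdout-based early stopping, following Prechelt's summary.

        The four callables are user-supplied placeholders (hypothetical).
        """
        best_weights = copy.deepcopy(get_weights(model))    # weights at the last check
        last_val_error = validation_error(model, val_set)   # error at the last check
        for epoch in range(1, max_epochs + 1):
            train_one_epoch(model, train_set)                # train only on the training set
            if epoch % eval_every == 0:                      # evaluate once in a while
                val_error = validation_error(model, val_set)
                if val_error > last_val_error:               # validation error rose: stop
                    set_weights(model, best_weights)         # restore weights from the previous check
                    return model
                best_weights = copy.deepcopy(get_weights(model))
                last_val_error = val_error
        return model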

Cross-validation is an alternative that is applicable to non-time-series scenarios. Cross-validation splits the data into multiple training/validation partitions, instead of using a single partition into a training set and a validation set. Even this simple procedure is complicated in practice by the fact that the validation error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad hoc rules for deciding when overfitting has truly begun.[9]
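
One widely used heuristic for coping with such fluctuations (a "patience" rule, offered here as an illustrative sketch rather than a rule from the cited source) stops only after the validation error has failed to improve for a fixed number of consecutive checks:

    def should_stop(val_errors, patience=3):
        # val_errors: validation error recorded at each check, oldest first
        if len(val_errors) <= patience:
            return False
        best_before = min(val_errors[:-patience])
        # stop if none of the last `patience` checks improved on the earlier best
        return min(val_errors[-patience:]) >= best_before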

References

  1. ^ Girosi, Federico; Michael Jones; Tomaso Poggio (1995-03-01). "Regularization Theory and Neural Networks Architectures". Neural Computation. 7 (2): 219–269. CiteSeerX 10.1.1.48.9258. doi:10.1162/neco.1995.7.2.219. ISSN 0899-7667. S2CID 49743910.
  2. ^ a b Smale, Steve; Ding-Xuan Zhou (2007-08-01). "Learning Theory Estimates via Integral Operators and Their Approximations". Constructive Approximation. 26 (2): 153–172. CiteSeerX 10.1.1.210.722. doi:10.1007/s00365-006-0659-y. ISSN 0176-4276. S2CID 5977083.
  3. ^ a b c d Yao, Yuan; Lorenzo Rosasco; Andrea Caponnetto (2007-08-01). "On Early Stopping in Gradient Descent Learning". Constructive Approximation. 26 (2): 289–315. CiteSeerX 10.1.1.329.2482. doi:10.1007/s00365-006-0663-2. ISSN 0176-4276. S2CID 8323954.
  4. ^ Raskutti, G.; M.J. Wainwright; Bin Yu (2011). "Early stopping for non-parametric regression: An optimal data-dependent stopping rule". 2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton). pp. 1318–1325. doi:10.1109/Allerton.2011.6120320.
  5. ^ Wenxin Jiang (February 2004). "Process consistency for AdaBoost". The Annals of Statistics. 32 (1): 13–29. doi:10.1214/aos/1079120128. ISSN 0090-5364.
  6. ^ Bühlmann, Peter; Bin Yu (2003-06-01). "Boosting with the L₂ Loss: Regression and Classification". Journal of the American Statistical Association. 98 (462): 324–339. doi:10.1198/016214503000125. ISSN 0162-1459. JSTOR 30045243. S2CID 123059267.
  7. ^ Tong Zhang; Bin Yu (2005-08-01). "Boosting with Early Stopping: Convergence and Consistency". The Annals of Statistics. 33 (4): 1538–1579. arXiv:math/0508276. Bibcode:2005math......8276Z. doi:10.1214/009053605000000255. ISSN 0090-5364. JSTOR 3448617. S2CID 13158356.
  8. ^ Stankewitz, Bernhard (2024-04-01). "Early stopping for L2-boosting in high-dimensional linear models". The Annals of Statistics. 52 (2): 491–518. arXiv:2210.07850. doi:10.1214/24-AOS2356.
  9. ^ a b Prechelt, Lutz; Geneviève B. Orr (2012-01-01). "Early Stopping — But When?". In Grégoire Montavon; Klaus-Robert Müller (eds.). Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 53–67. doi:10.1007/978-3-642-35289-8_5. ISBN 978-3-642-35289-8.