Platt scaling

From Wikipedia, the free encyclopedia
Machine learning calibration technique

In machine learning, Platt scaling or Platt calibration is a way of transforming the outputs of a classification model into a probability distribution over classes. The method was invented by John Platt in the context of support vector machines,[1] replacing an earlier method by Vapnik, but can be applied to other classification models.[2] Platt scaling works by fitting a logistic regression model to a classifier's scores.

Problem formalization


Consider the problem of binary classification: for inputs x, we want to determine whether they belong to one of two classes, arbitrarily labeled +1 and −1. We assume that the classification problem will be solved by a real-valued function f, by predicting a class label y = sign(f(x)).[a] For many problems, it is convenient to get a probability P(y = 1 | x), i.e. a classification that not only gives an answer, but also a degree of certainty about the answer. Some classification models do not provide such a probability, or give poor probability estimates.

[Figure: the standard logistic function, with L = 1, k = 1, x₀ = 0.]

Algorithm


Platt scaling is an algorithm to solve the aforementioned problem. It produces probability estimates

P(y = 1 | x) = 1 / (1 + exp(A·f(x) + B))

i.e., a logistic transformation of the classifier output f(x), where A and B are two scalar parameters that are learned by the algorithm. After scaling, classes can be predicted as y = 1 iff P(y = 1 | x) > 1/2; if B ≠ 0, these predictions can differ from those of the original decision function y = sign(f(x)).[3]
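As an illustration, A and B can be fit by gradient descent on the negative log-likelihood of the calibration labels. The sketch below is a minimal illustration under that assumption; the function names and the plain gradient-descent optimizer are illustrative choices, not Platt's original fitting procedure (see below for the optimizers actually proposed).

```python
import numpy as np

def fit_platt(scores, labels, lr=0.1, n_iter=2000):
    """Fit A, B in P(y=1|x) = 1 / (1 + exp(A*f(x) + B)) by gradient
    descent on the negative log-likelihood (illustrative sketch only).
    scores are the raw classifier outputs f(x); labels are in {-1, +1}."""
    t = (labels + 1) / 2.0               # map {-1, +1} to {0, 1} targets
    A, B = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        # dNLL/d(A*s + B) per sample is (t - p); the chain rule gives:
        A -= lr * np.mean((t - p) * scores)
        B -= lr * np.mean(t - p)
    return A, B

def platt_prob(score, A, B):
    """Calibrated probability P(y = 1 | x) for a classifier score f(x)."""
    return 1.0 / (1.0 + np.exp(A * score + B))
```

With well-separated scores the fitted A is negative, so large positive scores map to probabilities near 1 and large negative scores to probabilities near 0.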

The parameters A and B are estimated using a maximum likelihood method that optimizes on the same training set as that for the original classifier f. To avoid overfitting to this set, a held-out calibration set or cross-validation can be used, but Platt additionally suggests transforming the labels y to target probabilities

t₊ = (N₊ + 1) / (N₊ + 2) for positive samples (y = 1), and
t₋ = 1 / (N₋ + 2) for negative samples (y = −1).

Here, N₊ and N₋ are the numbers of positive and negative samples, respectively. This transformation follows by applying Bayes' rule to a model of out-of-sample data that has a uniform prior over the labels.[1] The constants 1 and 2, in the numerator and denominator respectively, are derived from the application of Laplace smoothing.
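The smoothed targets above are straightforward to compute directly; a small sketch (the function name is an illustrative assumption):

```python
import numpy as np

def platt_targets(labels):
    """Platt's smoothed target probabilities for labels in {-1, +1}:
    t+ = (N+ + 1)/(N+ + 2) for positives, t- = 1/(N- + 2) for negatives."""
    labels = np.asarray(labels)
    n_pos = np.sum(labels == 1)
    n_neg = np.sum(labels == -1)
    t_pos = (n_pos + 1.0) / (n_pos + 2.0)
    t_neg = 1.0 / (n_neg + 2.0)
    return np.where(labels == 1, t_pos, t_neg)
```

These targets replace the hard 0/1 labels in the maximum-likelihood fit, pulling the fitted probabilities slightly away from 0 and 1.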

Platt himself suggested using the Levenberg–Marquardt algorithm to optimize the parameters, but a Newton algorithm was later proposed that should be more numerically stable.[4]

Analysis


Platt scaling has been shown to be effective for SVMs as well as other types of classification models, including boosted models and even naive Bayes classifiers, which produce distorted probability distributions. It is particularly effective for max-margin methods such as SVMs and boosted trees, which show sigmoidal distortions in their predicted probabilities, but has less of an effect with well-calibrated models such as logistic regression, multilayer perceptrons, and random forests.[2]

An alternative approach to probability calibration is to fit an isotonic regression model to an ill-calibrated probability model. This has been shown to work better than Platt scaling, in particular when enough training data is available.[2]

Platt scaling can also be applied to deep neural network classifiers. On image classification tasks such as CIFAR-100, small networks like LeNet-5 have good calibration but low accuracy, while large networks like ResNet have high accuracy but are overconfident in their predictions. A 2017 paper proposed temperature scaling, which simply multiplies the output logits of a network by a constant 1/T before taking the softmax. During training, T is set to 1; after training, T is optimized on a held-out calibration set to minimize the calibration loss.[5]
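A minimal sketch of temperature scaling, assuming a simple grid search over T on held-out logits (the grid-search optimizer and the function names are illustrative assumptions; in practice T is typically found by gradient-based optimization of the negative log-likelihood):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)    # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of labels under temperature-scaled logits."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Choose T on a held-out calibration set by grid search over the NLL.
    T > 1 softens overconfident predictions; T = 1 leaves them unchanged."""
    return min(grid, key=lambda T: nll(logits, labels, T))
```

Because T rescales all logits by the same constant, the argmax class, and hence the network's accuracy, is unchanged; only the confidence of the predictions is adjusted.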


Notes

  1. ^ See sign function. The label for f(x) = 0 is arbitrarily chosen to be either zero or one.

References

  1. ^ a b Platt, John (1999). "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods". Advances in Large Margin Classifiers. 10 (3): 61–74.
  2. ^ a b c Niculescu-Mizil, Alexandru; Caruana, Rich (2005). Predicting good probabilities with supervised learning (PDF). ICML. doi:10.1145/1102351.1102430.
  3. ^ Chapelle, Olivier; Vapnik, Vladimir; Bousquet, Olivier; Mukherjee, Sayan (2002). "Choosing multiple parameters for support vector machines" (PDF). Machine Learning. 46: 131–159. doi:10.1023/a:1012450327387.
  4. ^ Lin, Hsuan-Tien; Lin, Chih-Jen; Weng, Ruby C. (2007). "A note on Platt's probabilistic outputs for support vector machines" (PDF). Machine Learning. 68 (3): 267–276. doi:10.1007/s10994-007-5018-6.
  5. ^ Guo, Chuan; Pleiss, Geoff; Sun, Yu; Weinberger, Kilian Q. (2017). "On Calibration of Modern Neural Networks". Proceedings of the 34th International Conference on Machine Learning. PMLR: 1321–1330.