In machine learning, a probabilistic classifier is a classifier that is able to predict, given an observation of an input, a probability distribution over a set of classes, rather than only outputting the most likely class that the observation should belong to. Probabilistic classifiers provide classification that can be useful in its own right[1] or when combining classifiers into ensembles.
Formally, an "ordinary" classifier is some rule, or function, that assigns to a sample x a class label ŷ:

ŷ = f(x)
The samples come from some set X (e.g., the set of all documents, or the set of all images), while the class labels form a finite set Y defined prior to training.
Probabilistic classifiers generalize this notion of classifiers: instead of functions, they are conditional distributions Pr(Y | X), meaning that for a given x ∈ X, they assign probabilities to all y ∈ Y (and these probabilities sum to one). "Hard" classification can then be done using the optimal decision rule[2]: 39–40

ŷ = arg max_y Pr(Y = y | X)
or, in English, the predicted class is that which has the highest probability.
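As a toy illustration of this decision rule, the following minimal NumPy sketch (with made-up class probabilities) selects, for each sample, the class with the highest predicted probability:

```python
import numpy as np

# Hypothetical predicted class probabilities for three samples over the
# label set Y = {"cat", "dog", "bird"}; each row sums to one.
classes = np.array(["cat", "dog", "bird"])
proba = np.array([
    [0.70, 0.20, 0.10],
    [0.10, 0.60, 0.30],
    [0.25, 0.25, 0.50],
])

# "Hard" classification: pick arg max_y Pr(Y = y | x) for each sample.
hard_labels = classes[np.argmax(proba, axis=1)]
print(hard_labels)  # ['cat' 'dog' 'bird']
```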
Binary probabilistic classifiers are also called binary regression models in statistics. In econometrics, probabilistic classification in general is called discrete choice.
Some classification models, such as naive Bayes, logistic regression and multilayer perceptrons (when trained under an appropriate loss function) are naturally probabilistic. Other models such as support vector machines are not, but methods exist to turn them into probabilistic classifiers.
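As an illustration of the difference (a sketch assuming scikit-learn; the synthetic dataset is arbitrary), a logistic regression exposes class probabilities directly, whereas a plain support vector machine only yields uncalibrated decision scores:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Logistic regression is naturally probabilistic: predict_proba returns
# one row of class probabilities per sample.
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:2]))

# A plain SVM only produces an uncalibrated score (the signed distance
# to the separating hyperplane), not a probability.
svm = SVC().fit(X, y)
print(svm.decision_function(X[:2]))
```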
Some models, such as logistic regression, are conditionally trained: they optimize the conditional probability Pr(Y | X) directly on a training set (see empirical risk minimization). Other classifiers, such as naive Bayes, are trained generatively: at training time, the class-conditional distribution Pr(X | Y) and the class prior Pr(Y) are found, and the conditional distribution Pr(Y | X) is derived using Bayes' rule.[2]: 43
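A minimal sketch of the generative recipe, assuming Gaussian class-conditional distributions (the function names are illustrative, not from any particular library): the class priors and class-conditionals are estimated from the training data, and Pr(y | x) is then obtained via Bayes' rule and normalization.

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Generative training: estimate class priors Pr(y) and per-class
    Gaussian class-conditionals Pr(x | y) from the training data."""
    classes = np.unique(y)
    priors = np.array([np.mean(y == c) for c in classes])
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    variances = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])
    return classes, priors, means, variances

def predict_proba(X, classes, priors, means, variances):
    """Derive Pr(y | x) with Bayes' rule: Pr(y | x) ∝ Pr(x | y) Pr(y)."""
    # Log-likelihood of each sample under each class's Gaussian model.
    log_lik = np.stack([
        -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        for mu, var in zip(means, variances)
    ], axis=1)
    log_joint = log_lik + np.log(priors)               # log Pr(x | y) + log Pr(y)
    log_joint -= log_joint.max(axis=1, keepdims=True)  # numerical stability
    posterior = np.exp(log_joint)
    return posterior / posterior.sum(axis=1, keepdims=True)  # rows sum to one
```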
Not all classification models are naturally probabilistic, and some that are, notably naive Bayes classifiers, decision trees and boosting methods, produce distorted class probability distributions.[3] In the case of decision trees, where Pr(y|x) is the proportion of training samples with label y in the leaf where x ends up, these distortions come about because learning algorithms such as C4.5 or CART explicitly aim to produce homogeneous leaves (giving probabilities close to zero or one, and thus high bias) while using few samples to estimate the relevant proportion (high variance).[4]
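A toy illustration of this (the leaf assignments and labels are made up): the tree's predicted probability is simply the label proportion in the leaf a sample falls into, and leaves containing few samples yield noisy, extreme estimates.

```python
import numpy as np

# Hypothetical leaf index each training sample falls into, and its label.
leaf_of_sample = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])
labels = np.array([1, 1, 0, 1, 1, 0, 0, 0, 1])

# Pr(y = 1 | x) predicted by the tree is the proportion of positive
# training samples in the leaf that x ends up in.
for leaf in np.unique(leaf_of_sample):
    in_leaf = labels[leaf_of_sample == leaf]
    # Few samples per leaf -> high-variance, often extreme estimates.
    print(leaf, in_leaf.mean())
```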

Calibration can be assessed using a calibration plot (also called a reliability diagram).[3][5] A calibration plot shows the proportion of items in each class for bands of predicted probability or score (such as a distorted probability distribution or the "signed distance to the hyperplane" in a support vector machine). Deviations from the identity function indicate a poorly calibrated classifier for which the predicted probabilities or scores cannot be used as probabilities. In this case one can use a method to turn these scores into properly calibrated class membership probabilities.
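A sketch of how such a reliability diagram can be computed in the binary case (plain NumPy; the function name and bin count are illustrative): predictions are grouped into probability bins, and the observed fraction of positives in each bin is compared with the mean predicted probability.

```python
import numpy as np

def reliability_curve(y_true, y_prob, n_bins=10):
    """Bin predicted probabilities and compare the mean prediction in each
    bin with the observed fraction of positives (reliability diagram)."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(y_prob, bins) - 1, 0, n_bins - 1)
    mean_pred, frac_pos = [], []
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            mean_pred.append(y_prob[mask].mean())  # average predicted probability
            frac_pos.append(y_true[mask].mean())   # observed proportion of positives
    return np.array(mean_pred), np.array(frac_pos)

# A well-calibrated classifier yields points close to the diagonal
# frac_pos == mean_pred; systematic deviations indicate miscalibration.
```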
For the binary case, a common approach is to apply Platt scaling, which learns a logistic regression model on the scores.[6] An alternative method using isotonic regression[7] is generally superior to Platt's method when sufficient training data is available.[3]
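For illustration, a sketch using scikit-learn's CalibratedClassifierCV, which wraps both approaches (its "sigmoid" method corresponds to Platt scaling, "isotonic" to isotonic regression); the dataset and parameters here are arbitrary:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)

# Platt scaling: fit a logistic (sigmoid) model on the SVM scores.
platt = CalibratedClassifierCV(SVC(), method="sigmoid", cv=3).fit(X, y)

# Isotonic regression: fit a non-decreasing step function on the scores;
# usually preferable when plenty of calibration data is available.
isotonic = CalibratedClassifierCV(SVC(), method="isotonic", cv=3).fit(X, y)

print(platt.predict_proba(X[:3]))
print(isotonic.predict_proba(X[:3]))
```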
In the multiclass case, one can use a reduction to binary tasks, followed by univariate calibration with an algorithm as described above and further application of the pairwise coupling algorithm by Hastie and Tibshirani.[8]

A method used to assign scores to pairs of predicted probabilities and actual discrete outcomes, so that different predictive methods can be compared, is called a scoring rule. Scoring rules compare the predicted probabilities to the observed outcomes; examples include the log loss, the Brier score, the continuous ranked probability score and others.
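A minimal sketch of two such scoring rules for the binary case (plain NumPy; the helper names and toy data are illustrative):

```python
import numpy as np

def log_loss(y_true, y_prob, eps=1e-15):
    """Negative average log-likelihood of the observed binary outcomes."""
    p = np.clip(y_prob, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probability and outcome."""
    return np.mean((y_prob - y_true) ** 2)

y_true = np.array([1, 0, 1, 1, 0])
y_prob = np.array([0.9, 0.2, 0.6, 0.8, 0.1])
print(log_loss(y_true, y_prob), brier_score(y_true, y_prob))
```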
Specific aspects such as accuracy, calibration, sharpness or dispersion may vary from one probabilistic classifier to another and can be investigated individually.
Calibration error metrics aim to quantify the extent to which a probabilistic classifier's outputs are well-calibrated. As Philip Dawid put it, "a forecaster is well-calibrated if, for example, of those events to which he assigns a probability 30 percent, the long-run proportion that actually occurs turns out to be 30 percent".[9] Foundational work in the domain of measuring calibration error is the Expected Calibration Error (ECE) metric.[10] More recent works propose variants of ECE that address limitations which may arise when classifier scores concentrate on a narrow subset of the [0, 1] interval, including the Adaptive Calibration Error (ACE)[11] and the Test-based Calibration Error (TCE).[12]
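A simplified sketch of ECE for the binary case (equal-width bins over the positive-class probability; this is one common formulation, not necessarily the exact definition used in [10]):

```python
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Binary-case ECE sketch: the weighted average, over equal-width
    probability bins, of |observed positive rate - mean predicted probability|."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(y_prob, bins) - 1, 0, n_bins - 1)
    ece, n = 0.0, len(y_prob)
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(y_true[mask].mean() - y_prob[mask].mean())
            ece += (mask.sum() / n) * gap  # weight each bin by its sample share
    return ece
```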
[I]n data mining applications the interest is often more in the class probabilities themselves, rather than in performing a class assignment.