3.4. Metrics and scoring: quantifying the quality of predictions#
3.4.1. Which scoring function should I use?#
Before we take a closer look into the details of the many scores and evaluation metrics, we want to give some guidance, inspired by statistical decision theory, on the choice of scoring functions for supervised learning, see [Gneiting2009]:
Which scoring function should I use?
Which scoring function is a good one for my task?
In a nutshell, if the scoring function is given, e.g. in a Kaggle competition or in a business context, use that one. If you are free to choose, start by considering the ultimate goal and application of the prediction. It is useful to distinguish two steps:
Predicting
Decision making
Predicting: Usually, the response variable \(Y\) is a random variable, in the sense that there is no deterministic function \(Y = g(X)\) of the features \(X\). Instead, there is a probability distribution \(F\) of \(Y\). One can aim to predict the whole distribution, known as probabilistic prediction, or, more the focus of scikit-learn, issue a point prediction (or point forecast) by choosing a property or functional of that distribution \(F\). Typical examples are the mean (expected value), the median or a quantile of the response variable \(Y\) (conditionally on \(X\)).

Once that is settled, use a strictly consistent scoring function for that (target) functional, see [Gneiting2009]. This means using a scoring function that is aligned with measuring the distance between predictions y_pred and the true target functional using observations of \(Y\), i.e. y_true. For classification, strictly proper scoring rules, see the Wikipedia entry for Scoring rule and [Gneiting2007], coincide with strictly consistent scoring functions. The table further below provides examples. One could say that consistent scoring functions act as truth serum in that they guarantee "that truth telling […] is an optimal strategy in expectation" [Gneiting2014].

Once a strictly consistent scoring function is chosen, it is best used for both: as loss function for model training and as metric/score in model evaluation and model comparison.

Note that for regressors, the prediction is done with predict while for classifiers it is usually predict_proba.

Decision Making: The most common decisions are made on binary classification tasks, where the result of predict_proba is turned into a single outcome, e.g., from the predicted probability of rain a decision is made on how to act (whether to take mitigating measures like an umbrella or not). For classifiers, this is what predict returns. See also Tuning the decision threshold for class prediction. There are many scoring functions which measure different aspects of such a decision; most of them are covered with or derived from the metrics.confusion_matrix.
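As a small illustrative sketch (the synthetic dataset and classifier here are arbitrary choices, not part of the guidance above), thresholding the output of predict_proba reproduces the decision that predict returns for a binary classifier:

>>> import numpy as np
>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(random_state=0)
>>> clf = LogisticRegression().fit(X, y)
>>> proba_positive = clf.predict_proba(X)[:, 1]  # predicted probability of the positive class
>>> decision = (proba_positive > 0.5).astype(int)  # turn probabilities into a yes/no decision
>>> np.array_equal(decision, clf.predict(X))
True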
List of strictly consistent scoring functions: Here, we list some of the most relevant statistical functionals and corresponding strictly consistent scoring functions for tasks in practice. Note that the list is not complete and that there are more of them. For further criteria on how to select a specific one, see [Fissler2022].
functional | scoring or loss function | response | prediction |
---|---|---|---|
Classification | | | |
mean | Brier score 1 | multi-class | predict_proba |
mean | log loss | multi-class | predict_proba |
mode | zero-one loss 2 | multi-class | predict, categorical |
Regression | | | |
mean | squared error 3 | all reals | predict, all reals |
mean | Poisson deviance | non-negative | predict, strictly positive |
mean | Gamma deviance | strictly positive | predict, strictly positive |
mean | Tweedie deviance | depends on power | predict, depends on power |
median | absolute error | all reals | predict, all reals |
quantile | pinball loss | all reals | predict, all reals |
mode | no consistent one exists | reals | |
1 The Brier score is just a different name for the squared error in case of classification.
2 The zero-one loss is only consistent but not strictly consistent for the mode. The zero-one loss is equivalent to one minus the accuracy score, meaning it gives different score values but the same ranking.
3 R² gives the same ranking as squared error.
Fictitious Example: Let's make the above arguments more tangible. Consider a setting in network reliability engineering, such as maintaining stable internet or Wi-Fi connections. As the provider of the network, you have access to the dataset of log entries of network connections containing network load over time and many interesting features. Your goal is to improve the reliability of the connections. In fact, you promise your customers that on at least 99% of all days there are no connection discontinuities larger than 1 minute. Therefore, you are interested in a prediction of the 99% quantile (of longest connection interruption duration per day) in order to know in advance when to add more bandwidth and thereby satisfy your customers. So the target functional is the 99% quantile. From the table above, you choose the pinball loss as scoring function (fair enough, not much choice given), both for model training (e.g. HistGradientBoostingRegressor(loss="quantile", quantile=0.99)) and for model evaluation (mean_pinball_loss(..., alpha=0.99); we apologize for the different argument names, quantile and alpha), be it in grid search for finding hyperparameters or in comparing to other models like QuantileRegressor(quantile=0.99).
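A minimal sketch of that workflow, with synthetic data standing in for the network logs, could look as follows:

>>> from sklearn.datasets import make_regression
>>> from sklearn.ensemble import HistGradientBoostingRegressor
>>> from sklearn.metrics import mean_pinball_loss
>>> X, y = make_regression(n_samples=200, noise=10.0, random_state=0)
>>> # train with the pinball loss at the 99% quantile ...
>>> model = HistGradientBoostingRegressor(loss="quantile", quantile=0.99).fit(X, y)
>>> # ... and evaluate with the matching strictly consistent scoring function
>>> evaluation = mean_pinball_loss(y, model.predict(X), alpha=0.99)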
References
T. Gneiting and A. E. Raftery. Strictly Proper Scoring Rules, Prediction, and Estimation. In: Journal of the American Statistical Association 102 (2007), pp. 359–378.
T. Gneiting. Making and Evaluating Point Forecasts. Journal of the American Statistical Association 106 (2009): 746–762.
T. Gneiting and M. Katzfuss. Probabilistic Forecasting. In: Annual Review of Statistics and Its Application 1.1 (2014), pp. 125–151.
T. Fissler, C. Lorentzen and M. Mayer. Model Comparison and Calibration Assessment: User Guide for Consistent Scoring Functions in Machine Learning and Actuarial Practice.
3.4.2. Scoring API overview#
There are 3 different APIs for evaluating the quality of a model’spredictions:
Estimator score method: Estimators have a
score
method providing adefault evaluation criterion for the problem they are designed to solve.Most commonly this isaccuracy for classifiers and thecoefficient of determination (\(R^2\)) for regressors.Details for each estimator can be found in its documentation.Scoring parameter: Model-evaluation tools that usecross-validation (such as
model_selection.GridSearchCV
,model_selection.validation_curve
andlinear_model.LogisticRegressionCV
) rely on an internalscoring strategy.This can be specified using thescoring
parameter of that tool and is discussedin the sectionThe scoring parameter: defining model evaluation rules.Metric functions: The
sklearn.metrics
module implements functionsassessing prediction error for specific purposes. These metrics are detailedin sections onClassification metrics,Multilabel ranking metrics,Regression metrics andClustering metrics.
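As a rough sketch (estimator and dataset chosen arbitrarily), the three APIs can be used side by side:

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import accuracy_score
>>> from sklearn.model_selection import cross_val_score
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(max_iter=1000).fit(X, y)
>>> acc_default = clf.score(X, y)                                  # 1) estimator score method
>>> acc_cv = cross_val_score(clf, X, y, scoring="accuracy", cv=5)  # 2) scoring parameter
>>> acc_metric = accuracy_score(y, clf.predict(X))                 # 3) metric function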
Finally, Dummy estimators are useful to get a baseline value of those metrics for random predictions.
See also
For “pairwise” metrics, between samples and not estimators or predictions, see the Pairwise metrics, Affinities and Kernels section.
3.4.3. The scoring parameter: defining model evaluation rules#

Model selection and evaluation tools that internally use cross-validation (such as model_selection.GridSearchCV, model_selection.validation_curve and linear_model.LogisticRegressionCV) take a scoring parameter that controls what metric they apply to the estimators evaluated.
They can be specified in several ways:
None: the estimator's default evaluation criterion (i.e., the metric used in the estimator's score method) is used.

String name: common metrics can be passed via a string name.

Callable: more complex metrics can be passed via a custom metric callable (e.g., function).

Some tools also accept multiple metric evaluation. See Using multiple metric evaluation for details.
3.4.3.1. String name scorers#

For the most common use cases, you can designate a scorer object with the scoring parameter via a string name; the table below shows all possible values. All scorer objects follow the convention that higher return values are better than lower return values. Thus metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as 'neg_mean_squared_error', which returns the negated value of the metric.
Scoring string name | Function | Comment |
---|---|---|
Classification | | |
'accuracy' | metrics.accuracy_score | |
'balanced_accuracy' | metrics.balanced_accuracy_score | |
'top_k_accuracy' | metrics.top_k_accuracy_score | |
'average_precision' | metrics.average_precision_score | |
'neg_brier_score' | metrics.brier_score_loss | |
'f1' | metrics.f1_score | for binary targets |
'f1_micro' | metrics.f1_score | micro-averaged |
'f1_macro' | metrics.f1_score | macro-averaged |
'f1_weighted' | metrics.f1_score | weighted average |
'f1_samples' | metrics.f1_score | by multilabel sample |
'neg_log_loss' | metrics.log_loss | requires predict_proba support |
'precision' etc. | metrics.precision_score | suffixes apply as with 'f1' |
'recall' etc. | metrics.recall_score | suffixes apply as with 'f1' |
'jaccard' etc. | metrics.jaccard_score | suffixes apply as with 'f1' |
'roc_auc' | metrics.roc_auc_score | |
'roc_auc_ovr' | metrics.roc_auc_score | |
'roc_auc_ovo' | metrics.roc_auc_score | |
'roc_auc_ovr_weighted' | metrics.roc_auc_score | |
'roc_auc_ovo_weighted' | metrics.roc_auc_score | |
'd2_log_loss_score' | metrics.d2_log_loss_score | |
Clustering | | |
'adjusted_mutual_info_score' | metrics.adjusted_mutual_info_score | |
'adjusted_rand_score' | metrics.adjusted_rand_score | |
'completeness_score' | metrics.completeness_score | |
'fowlkes_mallows_score' | metrics.fowlkes_mallows_score | |
'homogeneity_score' | metrics.homogeneity_score | |
'mutual_info_score' | metrics.mutual_info_score | |
'normalized_mutual_info_score' | metrics.normalized_mutual_info_score | |
'rand_score' | metrics.rand_score | |
'v_measure_score' | metrics.v_measure_score | |
Regression | | |
'explained_variance' | metrics.explained_variance_score | |
'neg_max_error' | metrics.max_error | |
'neg_mean_absolute_error' | metrics.mean_absolute_error | |
'neg_mean_squared_error' | metrics.mean_squared_error | |
'neg_root_mean_squared_error' | metrics.root_mean_squared_error | |
'neg_mean_squared_log_error' | metrics.mean_squared_log_error | |
'neg_root_mean_squared_log_error' | metrics.root_mean_squared_log_error | |
'neg_median_absolute_error' | metrics.median_absolute_error | |
'r2' | metrics.r2_score | |
'neg_mean_poisson_deviance' | metrics.mean_poisson_deviance | |
'neg_mean_gamma_deviance' | metrics.mean_gamma_deviance | |
'neg_mean_absolute_percentage_error' | metrics.mean_absolute_percentage_error | |
'd2_absolute_error_score' | metrics.d2_absolute_error_score | |
Usage examples:
>>> from sklearn import svm, datasets
>>> from sklearn.model_selection import cross_val_score
>>> X, y = datasets.load_iris(return_X_y=True)
>>> clf = svm.SVC(random_state=0)
>>> cross_val_score(clf, X, y, cv=5, scoring='recall_macro')
array([0.96, 0.96, 0.96, 0.93, 1. ])
Note
If a wrong scoring name is passed, an InvalidParameterError is raised. You can retrieve the names of all available scorers by calling get_scorer_names.
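For instance, get_scorer_names can be used to check that a string is a valid scorer name before passing it on:

>>> from sklearn.metrics import get_scorer_names
>>> "neg_mean_squared_error" in get_scorer_names()
True
>>> "mean_squared_error" in get_scorer_names()  # error metrics are only exposed in negated form
False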
3.4.3.2. Callable scorers#

For more complex use cases and more flexibility, you can pass a callable to the scoring parameter. This can be done by:

3.4.3.2.1. Adapting predefined metrics via make_scorer#

The following metric functions are not implemented as named scorers, sometimes because they require additional parameters, such as fbeta_score. They cannot be passed to the scoring parameter; instead their callable needs to be passed to make_scorer together with the value of the user-settable parameters.
Function | Parameter | Example usage |
---|---|---|
Classification | | |
metrics.fbeta_score | beta | make_scorer(fbeta_score, beta=2) |
Regression | | |
metrics.mean_tweedie_deviance | power | make_scorer(mean_tweedie_deviance, power=1.5) |
metrics.mean_pinball_loss | alpha | make_scorer(mean_pinball_loss, alpha=0.95) |
metrics.d2_tweedie_score | power | make_scorer(d2_tweedie_score, power=1.5) |
metrics.d2_pinball_score | alpha | make_scorer(d2_pinball_score, alpha=0.95) |
One typical use case is to wrap an existing metric function from the library with non-default values for its parameters, such as the beta parameter for the fbeta_score function:

>>> from sklearn.metrics import fbeta_score, make_scorer
>>> ftwo_scorer = make_scorer(fbeta_score, beta=2)
>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.svm import LinearSVC
>>> grid = GridSearchCV(LinearSVC(), param_grid={'C': [1, 10]},
...                     scoring=ftwo_scorer, cv=5)
The module sklearn.metrics also exposes a set of simple functions measuring a prediction error given ground truth and prediction:

functions ending with _score return a value to maximize, the higher the better.

functions ending with _error, _loss, or _deviance return a value to minimize, the lower the better. When converting into a scorer object using make_scorer, set the greater_is_better parameter to False (True by default; see the parameter description below).
3.4.3.2.2. Creating a custom scorer object#

You can create your own custom scorer object using make_scorer or, for the most flexibility, from scratch. See below for details.
Custom scorer objects using make_scorer#

You can build a completely custom scorer object from a simple python function using make_scorer, which can take several parameters:
the python function you want to use (my_custom_loss_func in the example below)

whether the python function returns a score (greater_is_better=True, the default) or a loss (greater_is_better=False). If a loss, the output of the python function is negated by the scorer object, conforming to the cross validation convention that scorers return higher values for better models.

for classification metrics only: whether the python function you provided requires continuous decision certainties. If the scoring function only accepts probability estimates (e.g. metrics.log_loss), then one needs to set the parameter response_method="predict_proba". Some scoring functions do not necessarily require probability estimates but rather non-thresholded decision values (e.g. metrics.roc_auc_score). In this case, one can provide a list (e.g., response_method=["decision_function", "predict_proba"]), and the scorer will use the first available method, in the order given in the list, to compute the scores.

any additional parameters of the scoring function, such as beta or labels.
Here is an example of building custom scorers, and of using the greater_is_better parameter:

>>> import numpy as np
>>> def my_custom_loss_func(y_true, y_pred):
...     diff = np.abs(y_true - y_pred).max()
...     return float(np.log1p(diff))
...
>>> # score will negate the return value of my_custom_loss_func,
>>> # which will be np.log(2), 0.693, given the values for X
>>> # and y defined below.
>>> score = make_scorer(my_custom_loss_func, greater_is_better=False)
>>> X = [[1], [1]]
>>> y = [0, 1]
>>> from sklearn.dummy import DummyClassifier
>>> clf = DummyClassifier(strategy='most_frequent', random_state=0)
>>> clf = clf.fit(X, y)
>>> my_custom_loss_func(y, clf.predict(X))
0.69
>>> score(clf, X, y)
-0.69
Custom scorer objects from scratch#
You can generate even more flexible model scorers by constructing your own scoring object from scratch, without using the make_scorer factory.
For a callable to be a scorer, it needs to meet the protocol specified bythe following two rules:
It can be called with parameters
(estimator,X,y)
, whereestimator
is the model that should be evaluated,X
is validation data, andy
isthe ground truth target forX
(in the supervised case) orNone
(in theunsupervised case).It returns a floating point number that quantifies the
estimator
prediction quality onX
, with reference toy
.Again, by convention higher numbers are better, so if your scorerreturns loss, that value should be negated.Advanced: If it requires extra metadata to be passed to it, it should exposea
get_metadata_routing
method returning the requested metadata. The usershould be able to set the requested metadata via aset_score_request
method. Please seeUser Guide andDeveloperGuide formore details.
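As a minimal sketch of this protocol (the negated mean absolute error is only an illustrative choice), a from-scratch scorer is just a function with that signature:

>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import Ridge
>>> from sklearn.model_selection import cross_val_score
>>> def neg_mae_scorer(estimator, X, y):
...     # scorer protocol: called with (estimator, X, y) and returns a float
...     y_pred = estimator.predict(X)
...     # negate the error so that greater values mean a better model
...     return -float(np.mean(np.abs(y - y_pred)))
...
>>> X, y = make_regression(random_state=0)
>>> scores = cross_val_score(Ridge(), X, y, scoring=neg_mae_scorer, cv=3)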
Using custom scorers in functions where n_jobs > 1#
While defining the custom scoring function alongside the calling functionshould work out of the box with the default joblib backend (loky),importing it from another module will be a more robust approach and workindependently of the joblib backend.
For example, to use n_jobs greater than 1 in the example below, the custom_scoring_function function is saved in a user-created module (custom_scorer_module.py) and imported:

>>> from custom_scorer_module import custom_scoring_function
>>> cross_val_score(model,
...                 X_train,
...                 y_train,
...                 scoring=make_scorer(custom_scoring_function, greater_is_better=False),
...                 cv=5,
...                 n_jobs=-1)
3.4.3.3. Using multiple metric evaluation#

Scikit-learn also permits evaluation of multiple metrics in GridSearchCV, RandomizedSearchCV and cross_validate.

There are three ways to specify multiple scoring metrics for the scoring parameter:
As an iterable of string metrics:

>>> scoring = ['accuracy', 'precision']
As a dict mapping the scorer name to the scoring function:

>>> from sklearn.metrics import accuracy_score
>>> from sklearn.metrics import make_scorer
>>> scoring = {'accuracy': make_scorer(accuracy_score),
...            'prec': 'precision'}
Note that the dict values can either be scorer functions or one of thepredefined metric strings.
As a callable that returns a dictionary of scores:
>>> from sklearn.model_selection import cross_validate
>>> from sklearn.metrics import confusion_matrix
>>> # A sample toy binary classification dataset
>>> X, y = datasets.make_classification(n_classes=2, random_state=0)
>>> svm = LinearSVC(random_state=0)
>>> def confusion_matrix_scorer(clf, X, y):
...     y_pred = clf.predict(X)
...     cm = confusion_matrix(y, y_pred)
...     return {'tn': cm[0, 0], 'fp': cm[0, 1],
...             'fn': cm[1, 0], 'tp': cm[1, 1]}
>>> cv_results = cross_validate(svm, X, y, cv=5,
...                             scoring=confusion_matrix_scorer)
>>> # Getting the test set true positive scores
>>> print(cv_results['test_tp'])
[10 9 8 7 8]
>>> # Getting the test set false negative scores
>>> print(cv_results['test_fn'])
[0 1 2 3 2]
3.4.4. Classification metrics#

The sklearn.metrics module implements several loss, score, and utility functions to measure classification performance. Some metrics might require probability estimates of the positive class, confidence values, or binary decision values. Most implementations allow each sample to provide a weighted contribution to the overall score, through the sample_weight parameter.
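For example (a small sketch with made-up labels, values shown rounded), sample_weight changes how much each sample counts towards the score:

>>> from sklearn.metrics import accuracy_score
>>> y_true = [0, 1, 1, 0]
>>> y_pred = [0, 1, 0, 0]
>>> accuracy_score(y_true, y_pred)
0.75
>>> # down-weighting the misclassified third sample increases the weighted accuracy
>>> accuracy_score(y_true, y_pred, sample_weight=[1, 1, 0.5, 1])
0.857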
Some of these are restricted to the binary classification case:
| precision_recall_curve | Compute precision-recall pairs for different probability thresholds. |
| roc_curve | Compute Receiver operating characteristic (ROC). |
| class_likelihood_ratios | Compute binary classification positive and negative likelihood ratios. |
| det_curve | Compute Detection Error Tradeoff (DET) for different probability thresholds. |
Others also work in the multiclass case:
| balanced_accuracy_score | Compute the balanced accuracy. |
| cohen_kappa_score | Compute Cohen's kappa: a statistic that measures inter-annotator agreement. |
| confusion_matrix | Compute confusion matrix to evaluate the accuracy of a classification. |
| hinge_loss | Average hinge loss (non-regularized). |
| matthews_corrcoef | Compute the Matthews correlation coefficient (MCC). |
| roc_auc_score | Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores. |
| top_k_accuracy_score | Top-k Accuracy classification score. |
Some also work in the multilabel case:
| accuracy_score | Accuracy classification score. |
| classification_report | Build a text report showing the main classification metrics. |
| f1_score | Compute the F1 score, also known as balanced F-score or F-measure. |
| fbeta_score | Compute the F-beta score. |
| hamming_loss | Compute the average Hamming loss. |
| jaccard_score | Jaccard similarity coefficient score. |
| log_loss | Log loss, aka logistic loss or cross-entropy loss. |
| multilabel_confusion_matrix | Compute a confusion matrix for each class or sample. |
| precision_recall_fscore_support | Compute precision, recall, F-measure and support for each class. |
| precision_score | Compute the precision. |
| recall_score | Compute the recall. |
| roc_auc_score | Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores. |
| zero_one_loss | Zero-one classification loss. |
| d2_log_loss_score | \(D^2\) score function, fraction of log loss explained. |
And some work with binary and multilabel (but not multiclass) problems:
| average_precision_score | Compute average precision (AP) from prediction scores. |

In the following sub-sections, we will describe each of those functions, preceded by some notes on common API and metric definition.
3.4.4.1. From binary to multiclass and multilabel#

Some metrics are essentially defined for binary classification tasks (e.g. f1_score, roc_auc_score). In these cases, by default only the positive label is evaluated, assuming by default that the positive class is labelled 1 (though this may be configurable through the pos_label parameter).

In extending a binary metric to multiclass or multilabel problems, the data is treated as a collection of binary problems, one for each class. There are then a number of ways to average binary metric calculations across the set of classes, each of which may be useful in some scenario. Where available, you should select among these using the average parameter.
"macro"
simply calculates the mean of the binary metrics,giving equal weight to each class. In problems where infrequent classesare nonetheless important, macro-averaging may be a means of highlightingtheir performance. On the other hand, the assumption that all classes areequally important is often untrue, such that macro-averaging willover-emphasize the typically low performance on an infrequent class."weighted"
accounts for class imbalance by computing the average ofbinary metrics in which each class’s score is weighted by its presence in thetrue data sample."micro"
gives each sample-class pair an equal contribution to the overallmetric (except as a result of sample-weight). Rather than summing themetric per class, this sums the dividends and divisors that make up theper-class metrics to calculate an overall quotient.Micro-averaging may be preferred in multilabel settings, includingmulticlass classification where a majority class is to be ignored."samples"
applies only to multilabel problems. It does not calculate aper-class measure, instead calculating the metric over the true and predictedclasses for each sample in the evaluation data, and returning their(sample_weight
-weighted) average.Selecting
average=None
will return an array with the score for eachclass.
While multiclass data is provided to the metric, like binary targets, as anarray of class labels, multilabel data is specified as an indicator matrix,in which cell[i,j]
has value 1 if samplei
has labelj
and value0 otherwise.
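As a small sketch of these averaging options on an arbitrary indicator matrix (values shown rounded):

>>> import numpy as np
>>> from sklearn.metrics import f1_score
>>> y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])  # 3 samples, 3 labels
>>> y_pred = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0]])
>>> f1_score(y_true, y_pred, average='macro')
0.556
>>> f1_score(y_true, y_pred, average='micro')
0.667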
3.4.4.2. Accuracy score#

The accuracy_score function computes the accuracy, either the fraction (default) or the count (normalize=False) of correct predictions.

In multilabel classification, the function returns the subset accuracy. If the entire set of predicted labels for a sample strictly match with the true set of labels, then the subset accuracy is 1.0; otherwise it is 0.0.

If \(\hat{y}_i\) is the predicted value of the \(i\)-th sample and \(y_i\) is the corresponding true value, then the fraction of correct predictions over \(n_\text{samples}\) is defined as

where \(1(x)\) is the indicator function.

>>> import numpy as np
>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2.0
In the multilabel case with binary label indicators:
>>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5
Examples
See Test with permutations the significance of a classification score for an example of accuracy score usage using permutations of the dataset.

3.4.4.3. Top-k accuracy score#

The top_k_accuracy_score function is a generalization of accuracy_score. The difference is that a prediction is considered correct as long as the true label is associated with one of the k highest predicted scores. accuracy_score is the special case of k=1.
The function covers the binary and multiclass classification cases but not themultilabel case.
If \(\hat{f}_{i,j}\) is the predicted class for the \(i\)-th sample corresponding to the \(j\)-th largest predicted score and \(y_i\) is the corresponding true value, then the fraction of correct predictions over \(n_\text{samples}\) is defined as

where \(k\) is the number of guesses allowed and \(1(x)\) is the indicator function.

>>> import numpy as np
>>> from sklearn.metrics import top_k_accuracy_score
>>> y_true = np.array([0, 1, 2, 2])
>>> y_score = np.array([[0.5, 0.2, 0.2],
...                     [0.3, 0.4, 0.2],
...                     [0.2, 0.4, 0.3],
...                     [0.7, 0.2, 0.1]])
>>> top_k_accuracy_score(y_true, y_score, k=2)
0.75
>>> # Not normalizing gives the number of "correctly" classified samples
>>> top_k_accuracy_score(y_true, y_score, k=2, normalize=False)
3.0
3.4.4.4. Balanced accuracy score#

The balanced_accuracy_score function computes the balanced accuracy, which avoids inflated performance estimates on imbalanced datasets. It is the macro-average of recall scores per class or, equivalently, raw accuracy where each sample is weighted according to the inverse prevalence of its true class. Thus for balanced datasets, the score is equal to accuracy.

In the binary case, balanced accuracy is equal to the arithmetic mean of sensitivity (true positive rate) and specificity (true negative rate), or the area under the ROC curve with binary predictions rather than scores:

If the classifier performs equally well on either class, this term reduces to the conventional accuracy (i.e., the number of correct predictions divided by the total number of predictions).

In contrast, if the conventional accuracy is above chance only because the classifier takes advantage of an imbalanced test set, then the balanced accuracy, as appropriate, will drop to \(\frac{1}{n\_classes}\).

The score ranges from 0 to 1, or when adjusted=True is used, it is rescaled to the range \(\frac{1}{1 - n\_classes}\) to 1, inclusive, with performance at random scoring 0.

If \(y_i\) is the true value of the \(i\)-th sample, and \(w_i\) is the corresponding sample weight, then we adjust the sample weight to:

where \(1(x)\) is the indicator function. Given predicted \(\hat{y}_i\) for sample \(i\), balanced accuracy is defined as:

With adjusted=True, balanced accuracy reports the relative increase from \(\texttt{balanced-accuracy}(y, \mathbf{0}, w) = \frac{1}{n\_classes}\). In the binary case, this is also known as Youden's J statistic, or informedness.
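A small sketch on an imbalanced toy problem:

>>> from sklearn.metrics import balanced_accuracy_score
>>> y_true = [0, 0, 0, 0, 1, 1]
>>> y_pred = [0, 0, 0, 0, 0, 1]
>>> balanced_accuracy_score(y_true, y_pred)
0.75
>>> balanced_accuracy_score(y_true, y_pred, adjusted=True)
0.5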
Note
The multiclass definition here seems the most reasonable extension of the metric used in binary classification, though there is no certain consensus in the literature:

Our definition: [Mosley2013], [Kelleher2015] and [Guyon2015], where [Guyon2015] adopt the adjusted version to ensure that random predictions have a score of \(0\) and perfect predictions have a score of \(1\).

Class balanced accuracy as described in [Mosley2013]: the minimum between the precision and the recall for each class is computed. Those values are then averaged over the total number of classes to get the balanced accuracy.

Balanced Accuracy as described in [Urbanowicz2015]: the average of sensitivity and specificity is computed for each class and then averaged over the total number of classes.
References
I. Guyon, K. Bennett, G. Cawley, H.J. Escalante, S. Escalera, T.K. Ho, N. Macià, B. Ray, M. Saeed, A.R. Statnikov, E. Viegas, Design of the 2015 ChaLearn AutoML Challenge, IJCNN 2015.
John. D. Kelleher, Brian Mac Namee, Aoife D'Arcy, Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies, 2015.
Urbanowicz R.J., Moore, J.H. ExSTraCS 2.0: description and evaluation of a scalable learning classifier system, Evol. Intel. (2015) 8: 89.
3.4.4.5. Cohen's kappa#

The function cohen_kappa_score computes Cohen's kappa statistic. This measure is intended to compare labelings by different human annotators, not a classifier versus a ground truth.

The kappa score is a number between -1 and 1. Scores above .8 are generally considered good agreement; zero or lower means no agreement (practically random labels).

Kappa scores can be computed for binary or multiclass problems, but not for multilabel problems (except by manually computing a per-label score) and not for more than two annotators.

>>> from sklearn.metrics import cohen_kappa_score
>>> labeling1 = [2, 0, 2, 2, 0, 1]
>>> labeling2 = [0, 0, 2, 2, 0, 2]
>>> cohen_kappa_score(labeling1, labeling2)
0.4285714285714286
3.4.4.6. Confusion matrix#

The confusion_matrix function evaluates classification accuracy by computing the confusion matrix with each row corresponding to the true class (Wikipedia and other references may use different convention for axes).

By definition, entry \(i, j\) in a confusion matrix is the number of observations actually in group \(i\), but predicted to be in group \(j\). Here is an example:

>>> from sklearn.metrics import confusion_matrix
>>> y_true = [2, 0, 2, 2, 0, 1]
>>> y_pred = [0, 0, 2, 2, 0, 2]
>>> confusion_matrix(y_true, y_pred)
array([[2, 0, 0],
       [0, 0, 1],
       [1, 0, 2]])

ConfusionMatrixDisplay can be used to visually represent a confusion matrix as shown in the Confusion matrix example, which creates the following figure:

The parameter normalize allows reporting ratios instead of counts. The confusion matrix can be normalized in 3 different ways: 'pred', 'true', and 'all', which will divide the counts by the sum of each column, row, or the entire matrix, respectively.

>>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
>>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
>>> confusion_matrix(y_true, y_pred, normalize='all')
array([[0.25 , 0.125],
       [0.25 , 0.375]])

For binary problems, we can get counts of true negatives, false positives, false negatives and true positives as follows:

>>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
>>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
>>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel().tolist()
>>> tn, fp, fn, tp
(2, 1, 2, 3)
Examples
See Confusion matrix for an example of using a confusion matrix to evaluate classifier output quality.

See Recognizing hand-written digits for an example of using a confusion matrix to classify hand-written digits.

See Classification of text documents using sparse features for an example of using a confusion matrix to classify text documents.
3.4.4.7. Classification report#

The classification_report function builds a text report showing the main classification metrics. Here is a small example with custom target_names and inferred labels:

>>> from sklearn.metrics import classification_report
>>> y_true = [0, 1, 2, 2, 0]
>>> y_pred = [0, 0, 2, 1, 0]
>>> target_names = ['class 0', 'class 1', 'class 2']
>>> print(classification_report(y_true, y_pred, target_names=target_names))
              precision    recall  f1-score   support

     class 0       0.67      1.00      0.80         2
     class 1       0.00      0.00      0.00         1
     class 2       1.00      0.50      0.67         2

    accuracy                           0.60         5
   macro avg       0.56      0.50      0.49         5
weighted avg       0.67      0.60      0.59         5
Examples
See Recognizing hand-written digits for an example of classification report usage for hand-written digits.

See Custom refit strategy of a grid search with cross-validation for an example of classification report usage for grid search with nested cross-validation.
3.4.4.8. Hamming loss#

The hamming_loss computes the average Hamming loss or Hamming distance between two sets of samples.

If \(\hat{y}_{i,j}\) is the predicted value for the \(j\)-th label of a given sample \(i\), \(y_{i,j}\) is the corresponding true value, \(n_\text{samples}\) is the number of samples and \(n_\text{labels}\) is the number of labels, then the Hamming loss \(L_{Hamming}\) is defined as:

where \(1(x)\) is the indicator function.

The equation above does not hold true in the case of multiclass classification. Please refer to the note below for more information.

>>> from sklearn.metrics import hamming_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> hamming_loss(y_true, y_pred)
0.25
In the multilabel case with binary label indicators:
>>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2)))
0.75
Note
In multiclass classification, the Hamming loss corresponds to the Hamming distance between y_true and y_pred which is similar to the Zero one loss function. However, while zero-one loss penalizes prediction sets that do not strictly match true sets, the Hamming loss penalizes individual labels. Thus the Hamming loss, upper bounded by the zero-one loss, is always between zero and one, inclusive; and predicting a proper subset or superset of the true labels will give a Hamming loss between zero and one, exclusive.
3.4.4.9. Precision, recall and F-measures#

Intuitively, precision is the ability of the classifier not to label as positive a sample that is negative, and recall is the ability of the classifier to find all the positive samples.

The F-measure (\(F_\beta\) and \(F_1\) measures) can be interpreted as a weighted harmonic mean of the precision and recall. An \(F_\beta\) measure reaches its best value at 1 and its worst score at 0. With \(\beta = 1\), \(F_\beta\) and \(F_1\) are equivalent, and the recall and the precision are equally important.

The precision_recall_curve computes a precision-recall curve from the ground truth label and a score given by the classifier by varying a decision threshold.

The average_precision_score function computes the average precision (AP) from prediction scores. The value is between 0 and 1 and higher is better. AP is defined as

where \(P_n\) and \(R_n\) are the precision and recall at the nth threshold. With random predictions, the AP is the fraction of positive samples.

References [Manning2008] and [Everingham2010] present alternative variants of AP that interpolate the precision-recall curve. Currently, average_precision_score does not implement any interpolated variant. References [Davis2006] and [Flach2015] describe why a linear interpolation of points on the precision-recall curve provides an overly-optimistic measure of classifier performance. This linear interpolation is used when computing area under the curve with the trapezoidal rule in auc.

Several functions allow you to analyze the precision, recall and F-measures score:
| average_precision_score | Compute average precision (AP) from prediction scores. |
| f1_score | Compute the F1 score, also known as balanced F-score or F-measure. |
| fbeta_score | Compute the F-beta score. |
| precision_recall_curve | Compute precision-recall pairs for different probability thresholds. |
| precision_recall_fscore_support | Compute precision, recall, F-measure and support for each class. |
| precision_score | Compute the precision. |
| recall_score | Compute the recall. |
Note that the precision_recall_curve function is restricted to the binary case. The average_precision_score function supports multiclass and multilabel formats by computing each class score in a One-vs-the-rest (OvR) fashion and averaging them or not depending on its average argument value.

The PrecisionRecallDisplay.from_estimator and PrecisionRecallDisplay.from_predictions functions will plot the precision-recall curve as follows.

Examples
See Custom refit strategy of a grid search with cross-validation for an example of precision_score and recall_score usage to estimate parameters using grid search with nested cross-validation.

See Precision-Recall for an example of precision_recall_curve usage to evaluate classifier output quality.
References
C.D. Manning, P. Raghavan, H. Schütze, Introduction to Information Retrieval, 2008.
M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn, A. Zisserman, The Pascal Visual Object Classes (VOC) Challenge, IJCV 2010.
J. Davis, M. Goadrich, The Relationship Between Precision-Recall and ROC Curves, ICML 2006.
P.A. Flach, M. Kull, Precision-Recall-Gain Curves: PR Analysis Done Right, NIPS 2015.
3.4.4.9.1. Binary classification#

In a binary classification task, the terms "positive" and "negative" refer to the classifier's prediction, and the terms "true" and "false" refer to whether that prediction corresponds to the external judgment (sometimes known as the "observation"). Given these definitions, we can formulate the following table:

 | Actual class (observation) | |
---|---|---|
Predicted class (expectation) | tp (true positive) Correct result | fp (false positive) Unexpected result |
 | fn (false negative) Missing result | tn (true negative) Correct absence of result |
In this context, we can define the notions of precision and recall:
(Sometimes recall is also called "sensitivity".)

F-measure is the weighted harmonic mean of precision and recall, with precision's contribution to the mean weighted by some parameter \(\beta\):

To avoid division by zero when precision and recall are zero, Scikit-Learn calculates F-measure with this otherwise-equivalent formula:

Note that this formula is still undefined when there are no true positives, false positives, or false negatives. By default, F-1 for a set of exclusively true negatives is calculated as 0; however, this behavior can be changed using the zero_division parameter. Here are some small examples in binary classification:

>>> from sklearn import metrics
>>> y_pred = [0, 1, 0, 0]
>>> y_true = [0, 1, 0, 1]
>>> metrics.precision_score(y_true, y_pred)
1.0
>>> metrics.recall_score(y_true, y_pred)
0.5
>>> metrics.f1_score(y_true, y_pred)
0.66
>>> metrics.fbeta_score(y_true, y_pred, beta=0.5)
0.83
>>> metrics.fbeta_score(y_true, y_pred, beta=1)
0.66
>>> metrics.fbeta_score(y_true, y_pred, beta=2)
0.55
>>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5)
(array([0.66, 1. ]), array([1. , 0.5]), array([0.71, 0.83]), array([2, 2]))
>>> import numpy as np
>>> from sklearn.metrics import precision_recall_curve
>>> from sklearn.metrics import average_precision_score
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> precision, recall, threshold = precision_recall_curve(y_true, y_scores)
>>> precision
array([0.5 , 0.66, 0.5 , 1. , 1. ])
>>> recall
array([1. , 1. , 0.5, 0.5, 0. ])
>>> threshold
array([0.1 , 0.35, 0.4 , 0.8 ])
>>> average_precision_score(y_true, y_scores)
0.83
3.4.4.9.2. Multiclass and multilabel classification#

In a multiclass and multilabel classification task, the notions of precision, recall, and F-measures can be applied to each label independently. There are a few ways to combine results across labels, specified by the average argument to the average_precision_score, f1_score, fbeta_score, precision_recall_fscore_support, precision_score and recall_score functions, as described above.
Note the following behaviors when averaging:
If all labels are included, "micro"-averaging in a multiclass setting will produce precision, recall and \(F\) that are all identical to accuracy.

"weighted" averaging may produce an F-score that is not between precision and recall.

"macro" averaging for F-measures is calculated as the arithmetic mean over per-label/class F-measures, not the harmonic mean over the arithmetic precision and recall means. Both calculations can be seen in the literature but are not equivalent; see [OB2019] for details.
To make this more explicit, consider the following notation:
\(y\) the set of true \((sample, label)\) pairs

\(\hat{y}\) the set of predicted \((sample, label)\) pairs

\(L\) the set of labels

\(S\) the set of samples

\(y_s\) the subset of \(y\) with sample \(s\), i.e. \(y_s := \left\{(s', l) \in y | s' = s\right\}\)

\(y_l\) the subset of \(y\) with label \(l\)

similarly, \(\hat{y}_s\) and \(\hat{y}_l\) are subsets of \(\hat{y}\)

\(P(A, B) := \frac{\left| A \cap B \right|}{\left|B\right|}\) for some sets \(A\) and \(B\)

\(R(A, B) := \frac{\left| A \cap B \right|}{\left|A\right|}\) (Conventions vary on handling \(A = \emptyset\); this implementation uses \(R(A, B):=0\), and similar for \(P\).)

\(F_\beta(A, B) := \left(1 + \beta^2\right) \frac{P(A, B) \times R(A, B)}{\beta^2 P(A, B) + R(A, B)}\)
Then the metrics are defined as:
average | Precision | Recall | F_beta |
---|---|---|---|
"micro" | \(P(y, \hat{y})\) | \(R(y, \hat{y})\) | \(F_\beta(y, \hat{y})\) |
"samples" | \(\frac{1}{\left|S\right|} \sum_{s \in S} P(y_s, \hat{y}_s)\) | \(\frac{1}{\left|S\right|} \sum_{s \in S} R(y_s, \hat{y}_s)\) | \(\frac{1}{\left|S\right|} \sum_{s \in S} F_\beta(y_s, \hat{y}_s)\) |
"macro" | \(\frac{1}{\left|L\right|} \sum_{l \in L} P(y_l, \hat{y}_l)\) | \(\frac{1}{\left|L\right|} \sum_{l \in L} R(y_l, \hat{y}_l)\) | \(\frac{1}{\left|L\right|} \sum_{l \in L} F_\beta(y_l, \hat{y}_l)\) |
"weighted" | \(\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| P(y_l, \hat{y}_l)\) | \(\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| R(y_l, \hat{y}_l)\) | \(\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| F_\beta(y_l, \hat{y}_l)\) |
None | \(\langle P(y_l, \hat{y}_l) | l \in L \rangle\) | \(\langle R(y_l, \hat{y}_l) | l \in L \rangle\) | \(\langle F_\beta(y_l, \hat{y}_l) | l \in L \rangle\) |

>>> from sklearn import metrics
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> metrics.precision_score(y_true, y_pred, average='macro')
0.22
>>> metrics.recall_score(y_true, y_pred, average='micro')
0.33
>>> metrics.f1_score(y_true, y_pred, average='weighted')
0.267
>>> metrics.fbeta_score(y_true, y_pred, average='macro', beta=0.5)
0.238
>>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5, average=None)
(array([0.667, 0., 0.]), array([1., 0., 0.]), array([0.714, 0., 0.]), array([2, 2, 2]))
For multiclass classification with a “negative class”, it is possible to exclude some labels:
>>> metrics.recall_score(y_true, y_pred, labels=[1, 2], average='micro')
... # excluding 0, no labels were correctly recalled
0.0
Similarly, labels not present in the data sample may be accounted for in macro-averaging.
>>> metrics.precision_score(y_true, y_pred, labels=[0, 1, 2, 3], average='macro')
0.166
References
3.4.4.10. Jaccard similarity coefficient score#

The jaccard_score function computes the average of Jaccard similarity coefficients, also called the Jaccard index, between pairs of label sets.

The Jaccard similarity coefficient with a ground truth label set \(y\) and predicted label set \(\hat{y}\), is defined as

The jaccard_score (like precision_recall_fscore_support) applies natively to binary targets. By computing it set-wise it can be extended to apply to multilabel and multiclass through the use of average (see above).
In the binary case:
>>> import numpy as np
>>> from sklearn.metrics import jaccard_score
>>> y_true = np.array([[0, 1, 1],
...                    [1, 1, 0]])
>>> y_pred = np.array([[1, 1, 1],
...                    [1, 0, 0]])
>>> jaccard_score(y_true[0], y_pred[0])
0.6666
In the 2D comparison case (e.g. image similarity):
>>> jaccard_score(y_true, y_pred, average="micro")
0.6
In the multilabel case with binary label indicators:
>>> jaccard_score(y_true, y_pred, average='samples')
0.5833
>>> jaccard_score(y_true, y_pred, average='macro')
0.6666
>>> jaccard_score(y_true, y_pred, average=None)
array([0.5, 0.5, 1. ])
Multiclass problems are binarized and treated like the corresponding multilabel problem:

>>> y_pred = [0, 2, 1, 2]
>>> y_true = [0, 1, 2, 2]
>>> jaccard_score(y_true, y_pred, average=None)
array([1. , 0. , 0.33])
>>> jaccard_score(y_true, y_pred, average='macro')
0.44
>>> jaccard_score(y_true, y_pred, average='micro')
0.33
3.4.4.11. Hinge loss#

The hinge_loss function computes the average distance between the model and the data using hinge loss, a one-sided metric that considers only prediction errors. (Hinge loss is used in maximal margin classifiers such as support vector machines.)

If the true label \(y_i\) of a binary classification task is encoded as \(y_i=\left\{-1, +1\right\}\) for every sample \(i\); and \(w_i\) is the corresponding predicted decision (an array of shape (n_samples,) as output by the decision_function method), then the hinge loss is defined as:

If there are more than two labels, hinge_loss uses a multiclass variant due to Crammer & Singer. Here is the paper describing it.

In this case the predicted decision is an array of shape (n_samples, n_labels). If \(w_{i, y_i}\) is the predicted decision for the true label \(y_i\) of the \(i\)-th sample; and \(\hat{w}_{i, y_i} = \max\left\{w_{i, y_j}~|~y_j \ne y_i \right\}\) is the maximum of the predicted decisions for all the other labels, then the multi-class hinge loss is defined by:
Here is a small example demonstrating the use of the hinge_loss function with an svm classifier in a binary class problem:

>>> from sklearn import svm
>>> from sklearn.metrics import hinge_loss
>>> X = [[0], [1]]
>>> y = [-1, 1]
>>> est = svm.LinearSVC(random_state=0)
>>> est.fit(X, y)
LinearSVC(random_state=0)
>>> pred_decision = est.decision_function([[-2], [3], [0.5]])
>>> pred_decision
array([-2.18, 2.36, 0.09])
>>> hinge_loss([-1, 1, 1], pred_decision)
0.3

Here is an example demonstrating the use of the hinge_loss function with an svm classifier in a multiclass problem:

>>> X = np.array([[0], [1], [2], [3]])
>>> Y = np.array([0, 1, 2, 3])
>>> labels = np.array([0, 1, 2, 3])
>>> est = svm.LinearSVC()
>>> est.fit(X, Y)
LinearSVC()
>>> pred_decision = est.decision_function([[-1], [2], [3]])
>>> y_true = [0, 2, 3]
>>> hinge_loss(y_true, pred_decision, labels=labels)
0.56
3.4.4.12. Log loss#

Log loss, also called logistic regression loss or cross-entropy loss, is defined on probability estimates. It is commonly used in (multinomial) logistic regression and neural networks, as well as in some variants of expectation-maximization, and can be used to evaluate the probability outputs (predict_proba) of a classifier instead of its discrete predictions.

For binary classification with a true label \(y \in \{0,1\}\) and a probability estimate \(\hat{p} \approx \operatorname{Pr}(y = 1)\), the log loss per sample is the negative log-likelihood of the classifier given the true label:

This extends to the multiclass case as follows. Let the true labels for a set of samples be encoded as a 1-of-K binary indicator matrix \(Y\), i.e., \(y_{i,k} = 1\) if sample \(i\) has label \(k\) taken from a set of \(K\) labels. Let \(\hat{P}\) be a matrix of probability estimates, with elements \(\hat{p}_{i,k} \approx \operatorname{Pr}(y_{i,k} = 1)\). Then the log loss of the whole set is

To see how this generalizes the binary log loss given above, note that in the binary case, \(\hat{p}_{i,0} = 1 - \hat{p}_{i,1}\) and \(y_{i,0} = 1 - y_{i,1}\), so expanding the inner sum over \(y_{i,k} \in \{0,1\}\) gives the binary log loss.

The log_loss function computes log loss given a list of ground-truth labels and a probability matrix, as returned by an estimator's predict_proba method.
>>> from sklearn.metrics import log_loss
>>> y_true = [0, 0, 1, 1]
>>> y_pred = [[.9, .1], [.8, .2], [.3, .7], [.01, .99]]
>>> log_loss(y_true, y_pred)
0.1738

The first [.9, .1] in y_pred denotes 90% probability that the first sample has label 0. The log loss is non-negative.
3.4.4.13. Matthews correlation coefficient#

The matthews_corrcoef function computes the Matthews correlation coefficient (MCC) for binary classes. Quoting Wikipedia:
“The Matthews correlation coefficient is used in machine learning as ameasure of the quality of binary (two-class) classifications. It takesinto account true and false positives and negatives and is generallyregarded as a balanced measure which can be used even if the classes areof very different sizes. The MCC is in essence a correlation coefficientvalue between -1 and +1. A coefficient of +1 represents a perfectprediction, 0 an average random prediction and -1 an inverse prediction.The statistic is also known as the phi coefficient.”
In the binary (two-class) case, \(tp\), \(tn\), \(fp\) and \(fn\) are respectively the number of true positives, true negatives, false positives and false negatives, and the MCC is defined as

In the multiclass case, the Matthews correlation coefficient can be defined in terms of a confusion_matrix \(C\) for \(K\) classes. To simplify the definition, consider the following intermediate variables:

\(t_k=\sum_{i}^{K} C_{ik}\) the number of times class \(k\) truly occurred,

\(p_k=\sum_{i}^{K} C_{ki}\) the number of times class \(k\) was predicted,

\(c=\sum_{k}^{K} C_{kk}\) the total number of samples correctly predicted,

\(s=\sum_{i}^{K} \sum_{j}^{K} C_{ij}\) the total number of samples.
Then the multiclass MCC is defined as:
When there are more than two labels, the value of the MCC will no longer range between -1 and +1. Instead the minimum value will be somewhere between -1 and 0 depending on the number and distribution of ground truth labels. The maximum value is always +1. For additional information, see [WikipediaMCC2021].

Here is a small example illustrating the usage of the matthews_corrcoef function:

>>> from sklearn.metrics import matthews_corrcoef
>>> y_true = [+1, +1, +1, -1]
>>> y_pred = [+1, -1, +1, +1]
>>> matthews_corrcoef(y_true, y_pred)
-0.33
References
Wikipedia contributors. Phi coefficient. Wikipedia, The Free Encyclopedia. April 21, 2021, 12:21 CEST. Available at: https://en.wikipedia.org/wiki/Phi_coefficient. Accessed April 21, 2021.
3.4.4.14. Multi-label confusion matrix#

The multilabel_confusion_matrix function computes class-wise (default) or sample-wise (samplewise=True) multilabel confusion matrix to evaluate the accuracy of a classification. multilabel_confusion_matrix also treats multiclass data as if it were multilabel, as this is a transformation commonly applied to evaluate multiclass problems with binary classification metrics (such as precision, recall, etc.).

When calculating class-wise multilabel confusion matrix \(C\), the count of true negatives for class \(i\) is \(C_{i,0,0}\), false negatives is \(C_{i,1,0}\), true positives is \(C_{i,1,1}\) and false positives is \(C_{i,0,1}\).

Here is an example demonstrating the use of the multilabel_confusion_matrix function with multilabel indicator matrix input:

>>> import numpy as np
>>> from sklearn.metrics import multilabel_confusion_matrix
>>> y_true = np.array([[1, 0, 1],
...                    [0, 1, 0]])
>>> y_pred = np.array([[1, 0, 0],
...                    [0, 1, 1]])
>>> multilabel_confusion_matrix(y_true, y_pred)
array([[[1, 0],
        [0, 1]],

       [[1, 0],
        [0, 1]],

       [[0, 1],
        [1, 0]]])
Or a confusion matrix can be constructed for each sample’s labels:
>>> multilabel_confusion_matrix(y_true, y_pred, samplewise=True)
array([[[1, 0],
        [1, 1]],

       [[1, 1],
        [0, 1]]])

Here is an example demonstrating the use of the multilabel_confusion_matrix function with multiclass input:

>>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
>>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
>>> multilabel_confusion_matrix(y_true, y_pred,
...                             labels=["ant", "bird", "cat"])
array([[[3, 1],
        [0, 2]],

       [[5, 0],
        [1, 0]],

       [[2, 1],
        [1, 2]]])
Here are some examples demonstrating the use of the multilabel_confusion_matrix function to calculate recall (or sensitivity), specificity, fall out and miss rate for each class in a problem with multilabel indicator matrix input.

Calculating recall (also called the true positive rate or the sensitivity) for each class:

>>> y_true = np.array([[0, 0, 1],
...                    [0, 1, 0],
...                    [1, 1, 0]])
>>> y_pred = np.array([[0, 1, 0],
...                    [0, 0, 1],
...                    [1, 1, 0]])
>>> mcm = multilabel_confusion_matrix(y_true, y_pred)
>>> tn = mcm[:, 0, 0]
>>> tp = mcm[:, 1, 1]
>>> fn = mcm[:, 1, 0]
>>> fp = mcm[:, 0, 1]
>>> tp / (tp + fn)
array([1. , 0.5, 0. ])

Calculating specificity (also called the true negative rate) for each class:

>>> tn / (tn + fp)
array([1. , 0. , 0.5])

Calculating fall out (also called the false positive rate) for each class:

>>> fp / (fp + tn)
array([0. , 1. , 0.5])

Calculating miss rate (also called the false negative rate) for each class:

>>> fn / (fn + tp)
array([0. , 0.5, 1. ])
3.4.4.15. Receiver operating characteristic (ROC)#

The function roc_curve computes the receiver operating characteristic curve, or ROC curve. Quoting Wikipedia:
“A receiver operating characteristic (ROC), or simply ROC curve, is agraphical plot which illustrates the performance of a binary classifiersystem as its discrimination threshold is varied. It is created by plottingthe fraction of true positives out of the positives (TPR = true positiverate) vs. the fraction of false positives out of the negatives (FPR = falsepositive rate), at various threshold settings. TPR is also known assensitivity, and FPR is one minus the specificity or true negative rate.”
This function requires the true binary value and the target scores, which can either be probability estimates of the positive class, confidence values, or binary decisions. Here is a small example of how to use the roc_curve function:

>>> import numpy as np
>>> from sklearn.metrics import roc_curve
>>> y = np.array([1, 1, 2, 2])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = roc_curve(y, scores, pos_label=2)
>>> fpr
array([0. , 0. , 0.5, 0.5, 1. ])
>>> tpr
array([0. , 0.5, 0.5, 1. , 1. ])
>>> thresholds
array([ inf, 0.8 , 0.4 , 0.35, 0.1 ])

Compared to metrics such as the subset accuracy, the Hamming loss, or the F1 score, ROC doesn't require optimizing a threshold for each label.

The roc_auc_score function, denoted by ROC-AUC or AUROC, computes the area under the ROC curve. By doing so, the curve information is summarized in one number.

The following figure shows the ROC curve and ROC-AUC score for a classifier aimed to distinguish the virginica flower from the rest of the species in the Iris plants dataset:

For more information see the Wikipedia article on AUC.
3.4.4.15.1. Binary case#

In the binary case, you can either provide the probability estimates, using the classifier.predict_proba() method, or the non-thresholded decision values given by the classifier.decision_function() method. In the case of providing the probability estimates, the probability of the class with the "greater label" should be provided. The "greater label" corresponds to classifier.classes_[1] and thus classifier.predict_proba(X)[:, 1]. Therefore, the y_score parameter is of size (n_samples,).

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import roc_auc_score
>>> X, y = load_breast_cancer(return_X_y=True)
>>> clf = LogisticRegression().fit(X, y)
>>> clf.classes_
array([0, 1])

We can use the probability estimates corresponding to clf.classes_[1].

>>> y_score = clf.predict_proba(X)[:, 1]
>>> roc_auc_score(y, y_score)
0.99
Otherwise, we can use the non-thresholded decision values
>>> roc_auc_score(y, clf.decision_function(X))
0.99
3.4.4.15.2. Multi-class case#

The roc_auc_score function can also be used in multi-class classification. Two averaging strategies are currently supported: the one-vs-one algorithm computes the average of the pairwise ROC AUC scores, and the one-vs-rest algorithm computes the average of the ROC AUC scores for each class against all other classes. In both cases, the predicted labels are provided in an array with values from 0 to n_classes, and the scores correspond to the probability estimates that a sample belongs to a particular class. The OvO and OvR algorithms support weighting uniformly (average='macro') and by prevalence (average='weighted').
One-vs-one Algorithm#

Computes the average AUC of all possible pairwise combinations of classes. [HT2001] defines a multiclass AUC metric weighted uniformly:

where \(c\) is the number of classes and \(\text{AUC}(j | k)\) is the AUC with class \(j\) as the positive class and class \(k\) as the negative class. In general, \(\text{AUC}(j | k) \neq \text{AUC}(k | j)\) in the multiclass case. This algorithm is used by setting the keyword argument multiclass to 'ovo' and average to 'macro'.
The [HT2001] multiclass AUC metric can be extended to be weighted by the prevalence:

where \(c\) is the number of classes. This algorithm is used by setting the keyword argument multiclass to 'ovo' and average to 'weighted'. The 'weighted' option returns a prevalence-weighted average as described in [FC2009].
One-vs-rest Algorithm#

Computes the AUC of each class against the rest [PD2000]. The algorithm is functionally the same as the multilabel case. To enable this algorithm set the keyword argument multiclass to 'ovr'. In addition to 'macro' [F2006] and 'weighted' [F2001] averaging, OvR supports 'micro' averaging.
In applications where a high false positive rate is not tolerable, the parameter max_fpr of roc_auc_score can be used to summarize the ROC curve up to the given limit.
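A rough sketch on the Iris dataset (the classifier choice and the training-set evaluation are arbitrary here, so the resulting scores are not shown):

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import roc_auc_score
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegression(max_iter=1000).fit(X, y)
>>> y_proba = clf.predict_proba(X)  # shape (n_samples, n_classes)
>>> macro_ovr_auc = roc_auc_score(y, y_proba, multi_class="ovr", average="macro")
>>> weighted_ovo_auc = roc_auc_score(y, y_proba, multi_class="ovo", average="weighted")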
The following figure shows the micro-averaged ROC curve and its corresponding ROC-AUC score for a classifier aimed to distinguish the different species in the Iris plants dataset:

3.4.4.15.3. Multi-label case#

In multi-label classification, the roc_auc_score function is extended by averaging over the labels as above. In this case, you should provide a y_score of shape (n_samples, n_classes). Thus, when using the probability estimates, one needs to select the probability of the class with the greater label for each output.

>>> from sklearn.datasets import make_multilabel_classification
>>> from sklearn.multioutput import MultiOutputClassifier
>>> X, y = make_multilabel_classification(random_state=0)
>>> inner_clf = LogisticRegression(random_state=0)
>>> clf = MultiOutputClassifier(inner_clf).fit(X, y)
>>> y_score = np.transpose([y_pred[:, 1] for y_pred in clf.predict_proba(X)])
>>> roc_auc_score(y, y_score, average=None)
array([0.828, 0.851, 0.94, 0.87, 0.95])
And the decision values do not require such processing.
>>> from sklearn.linear_model import RidgeClassifierCV
>>> clf = RidgeClassifierCV().fit(X, y)
>>> y_score = clf.decision_function(X)
>>> roc_auc_score(y, y_score, average=None)
array([0.82, 0.85, 0.93, 0.87, 0.94])
Examples
See Multiclass Receiver Operating Characteristic (ROC) for an example of using ROC to evaluate the quality of the output of a classifier.

See Receiver Operating Characteristic (ROC) with cross validation for an example of using ROC to evaluate classifier output quality, using cross-validation.

See Species distribution modeling for an example of using ROC to model species distribution.
References
Hand, D.J. and Till, R.J., (2001). A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning, 45(2), pp. 171-186.
Ferri, Cèsar & Hernandez-Orallo, Jose & Modroiu, R. (2009). An Experimental Comparison of Performance Measures for Classification. Pattern Recognition Letters. 30. 27-38.
Provost, F., Domingos, P. (2000). Well-trained PETs: Improving probability estimation trees (Section 6.2), CeDER Working Paper #IS-00-04, Stern School of Business, New York University.
Fawcett, T., 2006. An introduction to ROC analysis. Pattern Recognition Letters, 27(8), pp. 861-874.
Fawcett, T., 2001. Using rule sets to maximize ROC performance. In Data Mining, 2001. Proceedings IEEE International Conference, pp. 131-138.
3.4.4.16. Detection error tradeoff (DET)#

The function det_curve computes the detection error tradeoff (DET) curve [WikipediaDET2017]. Quoting Wikipedia:
“A detection error tradeoff (DET) graph is a graphical plot of error ratesfor binary classification systems, plotting false reject rate vs. falseaccept rate. The x- and y-axes are scaled non-linearly by their standardnormal deviates (or just by logarithmic transformation), yielding tradeoffcurves that are more linear than ROC curves, and use most of the image areato highlight the differences of importance in the critical operating region.”
DET curves are a variation of receiver operating characteristic (ROC) curves where False Negative Rate is plotted on the y-axis instead of True Positive Rate. DET curves are commonly plotted in normal deviate scale by transformation with \(\phi^{-1}\) (with \(\phi\) being the cumulative distribution function). The resulting performance curves explicitly visualize the tradeoff of error types for given classification algorithms. See [Martin1997] for examples and further motivation.
This figure compares the ROC and DET curves of two example classifiers on thesame classification task:

Properties#
DET curves form a linear curve in normal deviate scale if the detection scores are normally (or close-to normally) distributed. It was shown by [Navratil2007] that the reverse is not necessarily true and even more general distributions are able to produce linear DET curves.

The normal deviate scale transformation spreads out the points such that a comparatively larger space of plot is occupied. Therefore curves with similar classification performance might be easier to distinguish on a DET plot.

With False Negative Rate being "inverse" to True Positive Rate the point of perfection for DET curves is the origin (in contrast to the top left corner for ROC curves).
Applications and limitations#
DET curves are intuitive to read and hence allow quick visual assessment of a classifier’s performance. Additionally, DET curves can be consulted for threshold analysis and operating point selection. This is particularly helpful if a comparison of error types is required.
On the other hand, DET curves do not summarize their metric in a single number. Therefore, for automated evaluation or comparison with other classification tasks, derived metrics such as the area under the ROC curve may be better suited.
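As a minimal sketch, the points of a DET curve can be computed with det_curve from hypothetical toy labels and scores; the resulting false positive and false negative rates can then be plotted, e.g. with DetCurveDisplay:

>>> import numpy as np
>>> from sklearn.metrics import det_curve
>>> y_true = np.array([0, 0, 1, 1])            # hypothetical binary labels
>>> y_score = np.array([0.1, 0.4, 0.35, 0.8])  # hypothetical decision scores
>>> fpr, fnr, thresholds = det_curve(y_true, y_score)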
Examples
See Detection error tradeoff (DET) curve for an example comparison between receiver operating characteristic (ROC) curves and detection error tradeoff (DET) curves.
References
Wikipedia contributors. Detection error tradeoff. Wikipedia, The Free Encyclopedia. September 4, 2017, 23:33 UTC. Available at: https://en.wikipedia.org/w/index.php?title=Detection_error_tradeoff&oldid=798982054. Accessed February 19, 2018.
A. Martin, G. Doddington, T. Kamm, M. Ordowski, and M. Przybocki, The DET Curve in Assessment of Detection Task Performance, NIST 1997.
J. Navratil and D. Klusacek, “On Linear DETs”, 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP ’07, Honolulu, HI, 2007, pp. IV-229-IV-232.
3.4.4.17.Zero one loss#
The zero_one_loss function computes the sum or the average of the 0-1 classification loss (\(L_{0-1}\)) over \(n_{\text{samples}}\). By default, the function normalizes over the samples. To get the sum of the \(L_{0-1}\), set normalize to False.
In multilabel classification, the zero_one_loss scores a subset as one if its labels strictly match the predictions, and as zero if there are any errors. By default, the function returns the percentage of imperfectly predicted subsets. To get the count of such subsets instead, set normalize to False.
If \(\hat{y}_i\) is the predicted value of the \(i\)-th sample and \(y_i\) is the corresponding true value, then the 0-1 loss \(L_{0-1}\) is defined as:
\[L_{0-1}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i \not= y_i)\]
where \(1(x)\) is the indicator function. The zero-one loss can also be computed as \(\text{zero-one loss} = 1 - \text{accuracy}\).
>>> from sklearn.metrics import zero_one_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> zero_one_loss(y_true, y_pred)
0.25
>>> zero_one_loss(y_true, y_pred, normalize=False)
1.0
In the multilabel case with binary label indicators, where the first label set [0,1] has an error:
>>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5
>>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)), normalize=False)
1.0
Examples
See Recursive feature elimination with cross-validation for an example of zero one loss usage to perform recursive feature elimination with cross-validation.
3.4.4.18.Brier score loss#
The brier_score_loss function computes the Brier score for binary and multiclass probabilistic predictions and is equivalent to the mean squared error. Quoting Wikipedia:
“The Brier score is a strictly proper scoring rule that measures the accuracy of probabilistic predictions. […] [It] is applicable to tasks in which predictions must assign probabilities to a set of mutually exclusive discrete outcomes or classes.”
Let the true labels for a set of \(N\) data points be encoded as a 1-of-K binary indicator matrix \(Y\), i.e., \(y_{i,k} = 1\) if sample \(i\) has label \(k\) taken from a set of \(K\) labels. Let \(\hat{P}\) be a matrix of probability estimates with elements \(\hat{p}_{i,k} \approx \operatorname{Pr}(y_{i,k} = 1)\). Following the original definition by [Brier1950], the Brier score is given by:
\[BS = \frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} (y_{i,k} - \hat{p}_{i,k})^2\]
The Brier score lies in the interval \([0, 2]\) and the lower the value the better the probability estimates are (the mean squared difference is smaller). Actually, the Brier score is a strictly proper scoring rule, meaning that it achieves the best score only when the estimated probabilities equal the true ones.
Note that in the binary case, the Brier score is usually divided by two and ranges between \([0, 1]\). For binary targets \(y_i \in \{0, 1\}\) and probability estimates \(\hat{p}_i \approx \operatorname{Pr}(y_i = 1)\) for the positive class, the Brier score is then equal to:
\[BS = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{p}_i)^2\]
The brier_score_loss function computes the Brier score given the ground-truth labels and predicted probabilities, as returned by an estimator's predict_proba method. The scale_by_half parameter controls which of the two above definitions to follow.
>>> import numpy as np
>>> from sklearn.metrics import brier_score_loss
>>> y_true = np.array([0, 1, 1, 0])
>>> y_true_categorical = np.array(["spam", "ham", "ham", "spam"])
>>> y_prob = np.array([0.1, 0.9, 0.8, 0.4])
>>> brier_score_loss(y_true, y_prob)
0.055
>>> brier_score_loss(y_true, 1 - y_prob, pos_label=0)
0.055
>>> brier_score_loss(y_true_categorical, y_prob, pos_label="ham")
0.055
>>> brier_score_loss(
...     ["eggs", "ham", "spam"],
...     [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.2, 0.2, 0.6]],
...     labels=["eggs", "ham", "spam"],
... )
0.146
The Brier score can be used to assess how well a classifier is calibrated. However, a lower Brier score loss does not always mean a better calibration. This is because, by analogy with the bias-variance decomposition of the mean squared error, the Brier score loss can be decomposed as the sum of calibration loss and refinement loss [Bella2012]. Calibration loss is defined as the mean squared deviation from empirical probabilities derived from the slope of ROC segments. Refinement loss can be defined as the expected optimal loss as measured by the area under the optimal cost curve. Refinement loss can change independently from calibration loss, thus a lower Brier score loss does not necessarily mean a better calibrated model. “Only when refinement loss remains the same does a lower Brier score loss always mean better calibration” [Bella2012], [Flach2008].
Examples
See Probability calibration of classifiers for an example of Brier score loss usage to perform probability calibration of classifiers.
References
G. Brier, Verification of forecasts expressed in terms of probability, Monthly Weather Review 78.1 (1950).
Bella, Ferri, Hernández-Orallo, and Ramírez-Quintana, “Calibration of Machine Learning Models”, in Khosrow-Pour, M., “Machine learning: concepts, methodologies, tools and applications.” Hershey, PA: Information Science Reference (2012).
Flach, Peter, and Edson Matsubara, “On classification, ranking, and probability estimation.” Dagstuhl Seminar Proceedings. Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2008).
3.4.4.19.Class likelihood ratios#
The class_likelihood_ratios function computes the positive and negative likelihood ratios \(LR_\pm\) for binary classes, which can be interpreted as the ratio of post-test to pre-test odds, as explained below. As a consequence, this metric is invariant w.r.t. the class prevalence (the number of samples in the positive class divided by the total number of samples) and can be extrapolated between populations regardless of any possible class imbalance.
The \(LR_\pm\) metrics are therefore very useful in settings where the data available to learn and evaluate a classifier is a study population with nearly balanced classes, such as a case-control study, while the target application, i.e. the general population, has very low prevalence.
The positive likelihood ratio \(LR_+\) is the probability of a classifier to correctly predict that a sample belongs to the positive class divided by the probability of predicting the positive class for a sample belonging to the negative class:
\[LR_+ = \frac{\operatorname{PR}(P+|T+)}{\operatorname{PR}(P+|T-)}\]
The notation here refers to the predicted (\(P\)) or true (\(T\)) label, and the signs \(+\) and \(-\) refer to the positive and negative class, respectively, e.g. \(P+\) stands for “predicted positive”.
Analogously, the negative likelihood ratio \(LR_-\) is the probability of a sample of the positive class being classified as belonging to the negative class divided by the probability of a sample of the negative class being correctly classified:
\[LR_- = \frac{\operatorname{PR}(P-|T+)}{\operatorname{PR}(P-|T-)}\]
For classifiers above chance, \(LR_+\) is above 1 and higher is better, while \(LR_-\) ranges from 0 to 1 and lower is better. Values of \(LR_\pm \approx 1\) correspond to chance level.
Notice that probabilities differ from counts, for instance \(\operatorname{PR}(P+|T+)\) is not equal to the number of true positive counts tp (see the Wikipedia page for the actual formulas).
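As a minimal usage sketch on hypothetical toy labels and hard predictions, class_likelihood_ratios returns the pair \((LR_+, LR_-)\):

>>> import numpy as np
>>> from sklearn.metrics import class_likelihood_ratios
>>> y_true = np.array([1, 0, 1, 1, 0, 0])  # hypothetical ground truth
>>> y_pred = np.array([1, 0, 1, 0, 0, 1])  # hypothetical hard predictions
>>> pos_lr, neg_lr = class_likelihood_ratios(y_true, y_pred)  # LR+ == 2.0, LR- == 0.5 here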
Examples
Interpretation across varying prevalence#
Both class likelihood ratios are interpretable in terms of an odds ratio (pre-test and post-test):
Odds are in general related to probabilities via
\[\text{odds} = \frac{\text{probability}}{1 - \text{probability}},\]
or equivalently
\[\text{probability} = \frac{\text{odds}}{1 + \text{odds}}.\]
On a given population, the pre-test probability is given by the prevalence. By converting odds to probabilities, the likelihood ratios can be translated into a probability of truly belonging to either class before and after a classifier prediction:
\[\text{post-test odds} = \text{likelihood ratio} \times \text{pre-test odds},\]
\[\text{post-test probability} = \frac{\text{post-test odds}}{1 + \text{post-test odds}}.\]
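For instance, a minimal numeric sketch of this conversion, with a hypothetical prevalence and positive likelihood ratio:

>>> prevalence = 0.10  # hypothetical pre-test probability
>>> lr_plus = 8.0      # hypothetical positive likelihood ratio
>>> pre_test_odds = prevalence / (1 - prevalence)
>>> post_test_odds = lr_plus * pre_test_odds
>>> round(post_test_odds / (1 + post_test_odds), 3)  # post-test probability
0.471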
Mathematical divergences#
The positive likelihood ratio (LR+) is undefined when \(fp=0\), meaning the classifier does not misclassify any negative labels as positives. This condition can either indicate a perfect identification of all the negative cases or, if there are also no true positive predictions (\(tp=0\)), that the classifier does not predict the positive class at all. In the first case, LR+ can be interpreted as np.inf, in the second case (for instance, with highly imbalanced data) it can be interpreted as np.nan.
The negative likelihood ratio (LR-) is undefined when \(tn=0\). Such divergence is invalid, as \(LR_- > 1.0\) would indicate an increase in the odds of a sample belonging to the positive class after being classified as negative, as if the act of classifying caused the positive condition. This includes the case of a DummyClassifier that always predicts the positive class (i.e. when \(tn=fn=0\)).
Both class likelihood ratios (LR+ and LR-) are undefined when \(tp=fn=0\), which means that no samples of the positive class were present in the test set. This can happen when cross-validating on highly imbalanced data and also leads to a division by zero.
If a division by zero occurs and raise_warning is set to True (default), class_likelihood_ratios raises an UndefinedMetricWarning and returns np.nan by default to avoid pollution when averaging over cross-validation folds. Users can set return values in case of a division by zero with the replace_undefined_by param.
For a worked-out demonstration of the class_likelihood_ratios function, see the example below.
References#
Brenner, H., & Gefeller, O. (1997). Variation of sensitivity, specificity, likelihood ratios and predictive values with disease prevalence. Statistics in Medicine, 16(9), 981-991.
3.4.4.20.D² score for classification#
The D² score computes the fraction of deviance explained. It is a generalization of R², where the squared error is generalized and replaced by a classification deviance of choice \(\text{dev}(y, \hat{y})\) (e.g., Log loss). D² is a form of a skill score. It is calculated as
\[D^2(y, \hat{y}) = 1 - \frac{\text{dev}(y, \hat{y})}{\text{dev}(y, y_{\text{null}})} \,.\]
Where \(y_{\text{null}}\) is the optimal prediction of an intercept-only model (e.g., the per-class proportion of y_true in the case of the Log loss).
Like R², the best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts \(y_{\text{null}}\), disregarding the input features, would get a D² score of 0.0.
D2 log loss score#
The d2_log_loss_score function implements the special case of D² with the log loss, see Log loss, i.e.:
\[\text{dev}(y, \hat{y}) = \text{log\_loss}(y, \hat{y}).\]
Here are some usage examples of the d2_log_loss_score function:
>>> from sklearn.metrics import d2_log_loss_score
>>> y_true = [1, 1, 2, 3]
>>> y_pred = [
...     [0.5, 0.25, 0.25],
...     [0.5, 0.25, 0.25],
...     [0.5, 0.25, 0.25],
...     [0.5, 0.25, 0.25],
... ]
>>> d2_log_loss_score(y_true, y_pred)
0.0
>>> y_true = [1, 2, 3]
>>> y_pred = [
...     [0.98, 0.01, 0.01],
...     [0.01, 0.98, 0.01],
...     [0.01, 0.01, 0.98],
... ]
>>> d2_log_loss_score(y_true, y_pred)
0.981
>>> y_true = [1, 2, 3]
>>> y_pred = [
...     [0.1, 0.6, 0.3],
...     [0.1, 0.6, 0.3],
...     [0.4, 0.5, 0.1],
... ]
>>> d2_log_loss_score(y_true, y_pred)
-0.552
3.4.5.Multilabel ranking metrics#
In multilabel learning, each sample can have any number of ground truth labels associated with it. The goal is to give high scores and better rank to the ground truth labels.
3.4.5.1.Coverage error#
The coverage_error function computes the average number of labels that have to be included in the final prediction such that all true labels are predicted. This is useful if you want to know how many top-scored labels you have to predict on average without missing any true one. The best value of this metric is thus the average number of true labels.
Note
Our implementation’s score is 1 greater than the one given in Tsoumakas et al., 2010. This extends it to handle the degenerate case in which an instance has 0 true labels.
Formally, given a binary indicator matrix of the ground truth labels \(y \in \left\{0, 1\right\}^{n_\text{samples} \times n_\text{labels}}\) and the score associated with each label \(\hat{f} \in \mathbb{R}^{n_\text{samples} \times n_\text{labels}}\), the coverage is defined as
\[coverage(y, \hat{f}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} \max_{j:y_{ij} = 1} \text{rank}_{ij}\]
with \(\text{rank}_{ij} = \left|\left\{k: \hat{f}_{ik} \geq \hat{f}_{ij} \right\}\right|\). Given the rank definition, ties in y_score are broken by giving the maximal rank that would have been assigned to all tied values.
Here is a small example of usage of this function:
>>> import numpy as np
>>> from sklearn.metrics import coverage_error
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
>>> coverage_error(y_true, y_score)
2.5
3.4.5.2.Label ranking average precision#
The label_ranking_average_precision_score function implements label ranking average precision (LRAP). This metric is linked to the average_precision_score function, but is based on the notion of label ranking instead of precision and recall.
Label ranking average precision (LRAP) averages over the samples the answer to the following question: for each ground truth label, what fraction of higher-ranked labels were true labels? This performance measure will be higher if you are able to give better rank to the labels associated with each sample. The obtained score is always strictly greater than 0, and the best value is 1. If there is exactly one relevant label per sample, label ranking average precision is equivalent to the mean reciprocal rank.
Formally, given a binary indicator matrix of the ground truth labels \(y \in \left\{0, 1\right\}^{n_\text{samples} \times n_\text{labels}}\) and the score associated with each label \(\hat{f} \in \mathbb{R}^{n_\text{samples} \times n_\text{labels}}\), the average precision is defined as
\[LRAP(y, \hat{f}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} \frac{1}{||y_i||_0} \sum_{j:y_{ij} = 1} \frac{|\mathcal{L}_{ij}|}{\text{rank}_{ij}}\]
where \(\mathcal{L}_{ij} = \left\{k: y_{ik} = 1, \hat{f}_{ik} \geq \hat{f}_{ij} \right\}\), \(\text{rank}_{ij} = \left|\left\{k: \hat{f}_{ik} \geq \hat{f}_{ij} \right\}\right|\), \(|\cdot|\) computes the cardinality of the set (i.e., the number of elements in the set), and \(||\cdot||_0\) is the \(\ell_0\) “norm” (which computes the number of nonzero elements in a vector).
Here is a small example of usage of this function:
>>> import numpy as np
>>> from sklearn.metrics import label_ranking_average_precision_score
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
>>> label_ranking_average_precision_score(y_true, y_score)
0.416
3.4.5.3.Ranking loss#
The label_ranking_loss function computes the ranking loss which averages over the samples the number of label pairs that are incorrectly ordered, i.e. true labels have a lower score than false labels, weighted by the inverse of the number of ordered pairs of false and true labels. The lowest achievable ranking loss is zero.
Formally, given a binary indicator matrix of the ground truth labels \(y \in \left\{0, 1\right\}^{n_\text{samples} \times n_\text{labels}}\) and the score associated with each label \(\hat{f} \in \mathbb{R}^{n_\text{samples} \times n_\text{labels}}\), the ranking loss is defined as
\[ranking\_loss(y, \hat{f}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} \frac{1}{||y_i||_0(n_\text{labels} - ||y_i||_0)} \left|\left\{(k, l): \hat{f}_{ik} \leq \hat{f}_{il}, y_{ik} = 1, y_{il} = 0 \right\}\right|\]
where \(|\cdot|\) computes the cardinality of the set (i.e., the number of elements in the set) and \(||\cdot||_0\) is the \(\ell_0\) “norm” (which computes the number of nonzero elements in a vector).
Here is a small example of usage of this function:
>>> import numpy as np
>>> from sklearn.metrics import label_ranking_loss
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
>>> label_ranking_loss(y_true, y_score)
0.75
>>> # With the following prediction, we have perfect and minimal loss
>>> y_score = np.array([[1.0, 0.1, 0.2], [0.1, 0.2, 0.9]])
>>> label_ranking_loss(y_true, y_score)
0.0
References#
Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data Mining and Knowledge Discovery Handbook (pp. 667-685). Springer US.
3.4.5.4.Normalized Discounted Cumulative Gain#
Discounted Cumulative Gain (DCG) and Normalized Discounted Cumulative Gain (NDCG) are ranking metrics implemented in dcg_score and ndcg_score; they compare a predicted order to ground-truth scores, such as the relevance of answers to a query.
From the Wikipedia page for Discounted Cumulative Gain:
“Discounted cumulative gain (DCG) is a measure of ranking quality. In information retrieval, it is often used to measure effectiveness of web search engine algorithms or related applications. Using a graded relevance scale of documents in a search-engine result set, DCG measures the usefulness, or gain, of a document based on its position in the result list. The gain is accumulated from the top of the result list to the bottom, with the gain of each result discounted at lower ranks.”
DCG orders the true targets (e.g. relevance of query answers) in the predicted order, then multiplies them by a logarithmic decay and sums the result. The sum can be truncated after the first \(K\) results, in which case we call it DCG@K. NDCG, or NDCG@K, is DCG divided by the DCG obtained by a perfect prediction, so that it is always between 0 and 1. Usually, NDCG is preferred to DCG.
Compared with the ranking loss, NDCG can take into account relevance scores, rather than a ground-truth ranking. So if the ground-truth consists only of an ordering, the ranking loss should be preferred; if the ground-truth consists of actual usefulness scores (e.g. 0 for irrelevant, 1 for relevant, 2 for very relevant), NDCG can be used.
For one sample, given the vector of continuous ground-truth values for each target \(y \in \mathbb{R}^{M}\), where \(M\) is the number of outputs, and the prediction \(\hat{y}\), which induces the ranking function \(f\), the DCG score is
\[\sum_{r=1}^{\min(K, M)} \frac{y_{f(r)}}{\log(1 + r)}\]
and the NDCG score is the DCG score divided by the DCG score obtained for \(y\).
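A minimal usage sketch with hypothetical graded relevances for a single query (dcg_score and ndcg_score expect 2D arrays of shape (n_samples, n_labels)):

>>> import numpy as np
>>> from sklearn.metrics import ndcg_score
>>> true_relevance = np.asarray([[10, 0, 0, 1, 5]])  # hypothetical relevances for one query
>>> scores = np.asarray([[0.1, 0.2, 0.3, 4, 70]])    # hypothetical predicted scores
>>> ndcg = ndcg_score(true_relevance, scores)        # roughly 0.70 for this toy input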
References#
Jarvelin, K., & Kekalainen, J. (2002). Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4), 422-446.
Wang, Y., Wang, L., Li, Y., He, D., Chen, W., & Liu, T. Y. (2013, May). A theoretical analysis of NDCG ranking measures. In Proceedings of the 26th Annual Conference on Learning Theory (COLT 2013).
McSherry, F., & Najork, M. (2008, March). Computing information retrieval performance measures efficiently in the presence of tied scores. In European Conference on Information Retrieval (pp. 414-421). Springer, Berlin, Heidelberg.
3.4.6.Regression metrics#
The sklearn.metrics module implements several loss, score, and utility functions to measure regression performance. Some of those have been enhanced to handle the multioutput case: mean_squared_error, mean_absolute_error, r2_score, explained_variance_score, mean_pinball_loss, d2_pinball_score and d2_absolute_error_score.
These functions have a multioutput keyword argument which specifies the way the scores or losses for each individual target should be averaged. The default is 'uniform_average', which specifies a uniformly weighted mean over outputs. If an ndarray of shape (n_outputs,) is passed, then its entries are interpreted as weights and an according weighted average is returned. If multioutput is 'raw_values', then all unaltered individual scores or losses will be returned in an array of shape (n_outputs,).
The r2_score and explained_variance_score accept an additional value 'variance_weighted' for the multioutput parameter. This option leads to a weighting of each individual score by the variance of the corresponding target variable. This setting quantifies the globally captured unscaled variance. If the target variables are of different scale, then this score puts more importance on explaining the higher variance variables.
3.4.6.1.R² score, the coefficient of determination#
The r2_score function computes the coefficient of determination, usually denoted as \(R^2\).
It represents the proportion of variance (of y) that has been explained by the independent variables in the model. It provides an indication of goodness of fit and therefore a measure of how well unseen samples are likely to be predicted by the model, through the proportion of explained variance.
As such variance is dataset dependent, \(R^2\) may not be meaningfully comparable across different datasets. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected (average) value of y, disregarding the input features, would get an \(R^2\) score of 0.0.
Note: when the prediction residuals have zero mean, the \(R^2\) score and the Explained variance score are identical.
If \(\hat{y}_i\) is the predicted value of the \(i\)-th sample and \(y_i\) is the corresponding true value for a total of \(n\) samples, the estimated \(R^2\) is defined as:
\[R^2(y, \hat{y}) = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}\]
where \(\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i\) and \(\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} \epsilon_i^2\).
Note that r2_score calculates unadjusted \(R^2\) without correcting for bias in sample variance of y.
In the particular case where the true target is constant, the \(R^2\) score is not finite: it is either NaN (perfect predictions) or -Inf (imperfect predictions). Such non-finite scores may prevent correct model optimization such as grid-search cross-validation to be performed correctly. For this reason the default behaviour of r2_score is to replace them with 1.0 (perfect predictions) or 0.0 (imperfect predictions). If force_finite is set to False, this score falls back on the original \(R^2\) definition.
Here is a small example of usage of the r2_score function:
>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred)
0.948
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred, multioutput='variance_weighted')
0.938
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred, multioutput='uniform_average')
0.936
>>> r2_score(y_true, y_pred, multioutput='raw_values')
array([0.965, 0.908])
>>> r2_score(y_true, y_pred, multioutput=[0.3, 0.7])
0.925
>>> y_true = [-2, -2, -2]
>>> y_pred = [-2, -2, -2]
>>> r2_score(y_true, y_pred)
1.0
>>> r2_score(y_true, y_pred, force_finite=False)
nan
>>> y_true = [-2, -2, -2]
>>> y_pred = [-2, -2, -2 + 1e-8]
>>> r2_score(y_true, y_pred)
0.0
>>> r2_score(y_true, y_pred, force_finite=False)
-inf
Examples
See L1-based models for Sparse Signals for an example of R² score usage to evaluate Lasso and Elastic Net on sparse signals.
3.4.6.2.Mean absolute error#
The mean_absolute_error function computes mean absolute error, a risk metric corresponding to the expected value of the absolute error loss or \(l1\)-norm loss.
If \(\hat{y}_i\) is the predicted value of the \(i\)-th sample, and \(y_i\) is the corresponding true value, then the mean absolute error (MAE) estimated over \(n_{\text{samples}}\) is defined as
\[\text{MAE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \left| y_i - \hat{y}_i \right|.\]
Here is a small example of usage of the mean_absolute_error function:
>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
array([0.5, 1. ])
>>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.85
3.4.6.3.Mean squared error#
The mean_squared_error function computes mean squared error, a risk metric corresponding to the expected value of the squared (quadratic) error or loss.
If \(\hat{y}_i\) is the predicted value of the \(i\)-th sample, and \(y_i\) is the corresponding true value, then the mean squared error (MSE) estimated over \(n_{\text{samples}}\) is defined as
\[\text{MSE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (y_i - \hat{y}_i)^2.\]
Here is a small example of usage of the mean_squared_error function:
>>> from sklearn.metrics import mean_squared_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y_true, y_pred)
0.375
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_squared_error(y_true, y_pred)
0.7083
Examples
See Gradient Boosting regression for an example of mean squared error usage to evaluate gradient boosting regression.
Taking the square root of the MSE, called the root mean squared error (RMSE), is another common metric that provides a measure in the same units as the target variable. RMSE is available through the root_mean_squared_error function.
3.4.6.4.Mean squared logarithmic error#
The mean_squared_log_error function computes a risk metric corresponding to the expected value of the squared logarithmic (quadratic) error or loss.
If \(\hat{y}_i\) is the predicted value of the \(i\)-th sample, and \(y_i\) is the corresponding true value, then the mean squared logarithmic error (MSLE) estimated over \(n_{\text{samples}}\) is defined as
\[\text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log_e (1 + y_i) - \log_e (1 + \hat{y}_i) )^2.\]
Where \(\log_e (x)\) means the natural logarithm of \(x\). This metric is best to use when targets have exponential growth, such as population counts, average sales of a commodity over a span of years, etc. Note that this metric penalizes an under-predicted estimate more than an over-predicted estimate.
Here is a small example of usage of the mean_squared_log_error function:
>>> from sklearn.metrics import mean_squared_log_error
>>> y_true = [3, 5, 2.5, 7]
>>> y_pred = [2.5, 5, 4, 8]
>>> mean_squared_log_error(y_true, y_pred)
0.0397
>>> y_true = [[0.5, 1], [1, 2], [7, 6]]
>>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]]
>>> mean_squared_log_error(y_true, y_pred)
0.044
The root mean squared logarithmic error (RMSLE) is available through the root_mean_squared_log_error function.
3.4.6.5.Mean absolute percentage error#
The mean_absolute_percentage_error (MAPE), also known as mean absolute percentage deviation (MAPD), is an evaluation metric for regression problems. The idea of this metric is to be sensitive to relative errors. It is for example not changed by a global scaling of the target variable.
If \(\hat{y}_i\) is the predicted value of the \(i\)-th sample and \(y_i\) is the corresponding true value, then the mean absolute percentage error (MAPE) estimated over \(n_{\text{samples}}\) is defined as
\[\text{MAPE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \frac{\left| y_i - \hat{y}_i \right|}{\max(\epsilon, \left| y_i \right|)}\]
where \(\epsilon\) is an arbitrarily small yet strictly positive number to avoid undefined results when y is zero.
The mean_absolute_percentage_error function supports multioutput.
Here is a small example of usage of the mean_absolute_percentage_error function:
>>> from sklearn.metrics import mean_absolute_percentage_error
>>> y_true = [1, 10, 1e6]
>>> y_pred = [0.9, 15, 1.2e6]
>>> mean_absolute_percentage_error(y_true, y_pred)
0.2666
In the above example, if we had used mean_absolute_error, it would have ignored the small magnitude values and only reflected the error in the prediction of the highest magnitude value. This problem is resolved with MAPE because it computes the relative error with respect to the actual output.
Note
The MAPE formula here does not represent the common “percentage” definition: the percentage in the range [0, 100] is converted to a relative value in the range [0, 1] by dividing by 100. Thus, an error of 200% corresponds to a relative error of 2. The motivation here is to have a range of values that is more consistent with other error metrics in scikit-learn, such as accuracy_score.
To obtain the mean absolute percentage error as per the Wikipedia formula, multiply the mean_absolute_percentage_error computed here by 100.
3.4.6.6.Median absolute error#
The median_absolute_error is particularly interesting because it is robust to outliers. The loss is calculated by taking the median of all absolute differences between the target and the prediction.
If \(\hat{y}_i\) is the predicted value of the \(i\)-th sample and \(y_i\) is the corresponding true value, then the median absolute error (MedAE) estimated over \(n_{\text{samples}}\) is defined as
\[\text{MedAE}(y, \hat{y}) = \text{median}(\mid y_1 - \hat{y}_1 \mid, \ldots, \mid y_n - \hat{y}_n \mid).\]
The median_absolute_error does not support multioutput.
Here is a small example of usage of the median_absolute_error function:
>>> from sklearn.metrics import median_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> median_absolute_error(y_true, y_pred)
0.5
3.4.6.7.Max error#
The max_error function computes the maximum residual error, a metric that captures the worst case error between the predicted value and the true value. In a perfectly fitted single output regression model, max_error would be 0 on the training set and, though this would be highly unlikely in the real world, this metric shows the extent of error that the model had when it was fitted.
If \(\hat{y}_i\) is the predicted value of the \(i\)-th sample, and \(y_i\) is the corresponding true value, then the max error is defined as
\[\text{Max Error}(y, \hat{y}) = \max(| y_i - \hat{y}_i |).\]
Here is a small example of usage of the max_error function:
>>> from sklearn.metrics import max_error
>>> y_true = [3, 2, 7, 1]
>>> y_pred = [9, 2, 7, 1]
>>> max_error(y_true, y_pred)
6.0
The max_error does not support multioutput.
3.4.6.8.Explained variance score#
The explained_variance_score computes the explained variance regression score.
If \(\hat{y}\) is the estimated target output, \(y\) the corresponding (correct) target output, and \(Var\) is Variance, the square of the standard deviation, then the explained variance is estimated as follows:
\[explained\_variance(y, \hat{y}) = 1 - \frac{Var\{ y - \hat{y}\}}{Var\{y\}}\]
The best possible score is 1.0, lower values are worse.
Link to R² score, the coefficient of determination
The difference between the explained variance score and the R² score, the coefficient of determination, is that the explained variance score does not account for systematic offset in the prediction. For this reason, the R² score, the coefficient of determination, should be preferred in general.
In the particular case where the true target is constant, the explained variance score is not finite: it is either NaN (perfect predictions) or -Inf (imperfect predictions). Such non-finite scores may prevent correct model optimization such as grid-search cross-validation to be performed correctly. For this reason the default behaviour of explained_variance_score is to replace them with 1.0 (perfect predictions) or 0.0 (imperfect predictions). You can set the force_finite parameter to False to prevent this fix from happening and fall back on the original explained variance score.
Here is a small example of usage of the explained_variance_score function:
>>> from sklearn.metrics import explained_variance_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> explained_variance_score(y_true, y_pred)
0.957
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> explained_variance_score(y_true, y_pred, multioutput='raw_values')
array([0.967, 1. ])
>>> explained_variance_score(y_true, y_pred, multioutput=[0.3, 0.7])
0.990
>>> y_true = [-2, -2, -2]
>>> y_pred = [-2, -2, -2]
>>> explained_variance_score(y_true, y_pred)
1.0
>>> explained_variance_score(y_true, y_pred, force_finite=False)
nan
>>> y_true = [-2, -2, -2]
>>> y_pred = [-2, -2, -2 + 1e-8]
>>> explained_variance_score(y_true, y_pred)
0.0
>>> explained_variance_score(y_true, y_pred, force_finite=False)
-inf
3.4.6.9.Mean Poisson, Gamma, and Tweedie deviances#
The mean_tweedie_deviance function computes the mean Tweedie deviance error with a power parameter (\(p\)). This is a metric that elicits predicted expectation values of regression targets.
The following special cases exist:
when power=0 it is equivalent to mean_squared_error.
when power=1 it is equivalent to mean_poisson_deviance.
when power=2 it is equivalent to mean_gamma_deviance.
If \(\hat{y}_i\) is the predicted value of the \(i\)-th sample, and \(y_i\) is the corresponding true value, then the mean Tweedie deviance error (D) for power \(p\), estimated over \(n_{\text{samples}}\), is defined as
\[\text{D}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1}
\begin{cases}
(y_i-\hat{y}_i)^2, & \text{for } p=0 \text{ (Normal)}\\
2(y_i \log(y_i/\hat{y}_i) + \hat{y}_i - y_i), & \text{for } p=1 \text{ (Poisson)}\\
2(\log(\hat{y}_i/y_i) + y_i/\hat{y}_i - 1), & \text{for } p=2 \text{ (Gamma)}\\
2\left(\frac{\max(y_i,0)^{2-p}}{(1-p)(2-p)} - \frac{y_i\,\hat{y}_i^{1-p}}{1-p} + \frac{\hat{y}_i^{2-p}}{2-p}\right), & \text{otherwise}
\end{cases}\]
Tweedie deviance is a homogeneous function of degree 2-power. Thus, Gamma distribution with power=2 means that simultaneously scaling y_true and y_pred has no effect on the deviance. For Poisson distribution power=1 the deviance scales linearly, and for Normal distribution (power=0), quadratically. In general, the higher power the less weight is given to extreme deviations between true and predicted targets.
For instance, let’s compare the two predictions 1.5 and 150 that are both 50% larger than their corresponding true value.
The mean squared error (power=0) is very sensitive to the prediction difference of the second point:
>>> from sklearn.metrics import mean_tweedie_deviance
>>> mean_tweedie_deviance([1.0], [1.5], power=0)
0.25
>>> mean_tweedie_deviance([100.], [150.], power=0)
2500.0
If we increase power to 1:
>>> mean_tweedie_deviance([1.0], [1.5], power=1)
0.189
>>> mean_tweedie_deviance([100.], [150.], power=1)
18.9
the difference in errors decreases. Finally, by setting power=2:
>>> mean_tweedie_deviance([1.0], [1.5], power=2)
0.144
>>> mean_tweedie_deviance([100.], [150.], power=2)
0.144
we would get identical errors. The deviance when power=2 is thus only sensitive to relative errors.
3.4.6.10.Pinball loss#
The mean_pinball_loss function is used to evaluate the predictive performance of quantile regression models.
\[\text{pinball}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \alpha \max(y_i - \hat{y}_i, 0) + (1 - \alpha) \max(\hat{y}_i - y_i, 0)\]
The value of the pinball loss is equivalent to half of mean_absolute_error when the quantile parameter alpha is set to 0.5.
Here is a small example of usage of the mean_pinball_loss function:
>>> from sklearn.metrics import mean_pinball_loss
>>> y_true = [1, 2, 3]
>>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.1)
0.033
>>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.1)
0.3
>>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.9)
0.3
>>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.9)
0.033
>>> mean_pinball_loss(y_true, y_true, alpha=0.1)
0.0
>>> mean_pinball_loss(y_true, y_true, alpha=0.9)
0.0
It is possible to build a scorer object with a specific choice of alpha:
>>> from sklearn.metrics import make_scorer
>>> mean_pinball_loss_95p = make_scorer(mean_pinball_loss, alpha=0.95)
Such a scorer can be used to evaluate the generalization performance of a quantile regressor via cross-validation:
>>> from sklearn.datasets import make_regression
>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.ensemble import GradientBoostingRegressor
>>>
>>> X, y = make_regression(n_samples=100, random_state=0)
>>> estimator = GradientBoostingRegressor(
...     loss="quantile",
...     alpha=0.95,
...     random_state=0,
... )
>>> cross_val_score(estimator, X, y, cv=5, scoring=mean_pinball_loss_95p)
array([13.6, 9.7, 23.3, 9.5, 10.4])
It is also possible to build scorer objects for hyper-parameter tuning. The sign of the loss must be switched to ensure that greater means better, as explained in the example linked below.
Examples
See Prediction Intervals for Gradient Boosting Regression for an example of using the pinball loss to evaluate and tune the hyper-parameters of quantile regression models on data with non-symmetric noise and outliers.
3.4.6.11.D² score#
The D² score computes the fraction of deviance explained. It is a generalization of R², where the squared error is generalized and replaced by a deviance of choice \(\text{dev}(y, \hat{y})\) (e.g., Tweedie, pinball or mean absolute error). D² is a form of a skill score. It is calculated as
\[D^2(y, \hat{y}) = 1 - \frac{\text{dev}(y, \hat{y})}{\text{dev}(y, y_{\text{null}})} \,.\]
Where \(y_{\text{null}}\) is the optimal prediction of an intercept-only model (e.g., the mean of y_true for the Tweedie case, the median for absolute error, and the alpha-quantile for pinball loss).
Like R², the best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts \(y_{\text{null}}\), disregarding the input features, would get a D² score of 0.0.
D² Tweedie score#
The d2_tweedie_score function implements the special case of D² where \(\text{dev}(y, \hat{y})\) is the Tweedie deviance, see Mean Poisson, Gamma, and Tweedie deviances. It is also known as D² Tweedie and is related to McFadden’s likelihood ratio index.
The argument power defines the Tweedie power as for mean_tweedie_deviance. Note that for power=0, d2_tweedie_score equals r2_score (for single targets).
A scorer object with a specific choice of power can be built by:
>>> from sklearn.metrics import d2_tweedie_score, make_scorer
>>> d2_tweedie_score_15 = make_scorer(d2_tweedie_score, power=1.5)
D² pinball score#
The d2_pinball_score function implements the special case of D² with the pinball loss, see Pinball loss, i.e.:
\[\text{dev}(y, \hat{y}) = \text{pinball}(y, \hat{y}).\]
The argument alpha defines the slope of the pinball loss as for mean_pinball_loss (Pinball loss). It determines the quantile level alpha for which the pinball loss and also D² are optimal. Note that for alpha=0.5 (the default) d2_pinball_score equals d2_absolute_error_score.
A scorer object with a specific choice of alpha can be built by:
>>> from sklearn.metrics import d2_pinball_score, make_scorer
>>> d2_pinball_score_08 = make_scorer(d2_pinball_score, alpha=0.8)
D² absolute error score#
The d2_absolute_error_score function implements the special case of D² with the mean absolute error:
\[\text{dev}(y, \hat{y}) = \text{MAE}(y, \hat{y}).\]
Here are some usage examples of the d2_absolute_error_score function:
>>> from sklearn.metrics import d2_absolute_error_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> d2_absolute_error_score(y_true, y_pred)
0.764
>>> y_true = [1, 2, 3]
>>> y_pred = [1, 2, 3]
>>> d2_absolute_error_score(y_true, y_pred)
1.0
>>> y_true = [1, 2, 3]
>>> y_pred = [2, 2, 2]
>>> d2_absolute_error_score(y_true, y_pred)
0.0
3.4.6.12.Visual evaluation of regression models#
Among methods to assess the quality of regression models, scikit-learn provides the PredictionErrorDisplay class. It allows visually inspecting the prediction errors of a model in two different manners.

The plot on the left shows the actual values vs. the predicted values. For a noise-free regression task aiming to predict the (conditional) expectation of y, a perfect regression model would display data points on the diagonal defined by predicted equal to actual values. The further away from this optimal line, the larger the error of the model. In a more realistic setting with irreducible noise, that is, when not all the variations of y can be explained by features in X, then the best model would lead to a cloud of points densely arranged around the diagonal.
Note that the above only holds when the predicted values are the expected value of y given X. This is typically the case for regression models that minimize the mean squared error objective function or, more generally, the mean Tweedie deviance for any value of its “power” parameter.
When plotting the predictions of an estimator that predicts a quantile of y given X, e.g. QuantileRegressor or any other model minimizing the pinball loss, a fraction of the points are expected to lie either above or below the diagonal depending on the estimated quantile level.
All in all, while intuitive to read, this plot does not really inform us on what to do to obtain a better model.
The right-hand side plot shows the residuals (i.e. the difference between the actual and the predicted values) vs. the predicted values.
This plot makes it easier to visualize whether the residuals follow a homoscedastic or heteroscedastic distribution.
In particular, if the true distribution of y|X is Poisson or Gamma distributed, it is expected that the variance of the residuals of the optimal model would grow with the predicted value of E[y|X] (either linearly for Poisson or quadratically for Gamma).
When fitting a linear least squares regression model (see LinearRegression and Ridge), we can use this plot to check if some of the model assumptions are met, in particular that the residuals should be uncorrelated, their expected value should be null and that their variance should be constant (homoscedasticity).
If this is not the case, and in particular if the residuals plot shows some banana-shaped structure, this is a hint that the model is likely mis-specified and that non-linear feature engineering or switching to a non-linear regression model might be useful.
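As a minimal sketch of the plotting API (assuming matplotlib is available, and using a hypothetical Ridge model on synthetic data), both views shown above can be produced with the kind parameter:

>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import Ridge
>>> from sklearn.metrics import PredictionErrorDisplay
>>> X, y = make_regression(n_samples=200, noise=10, random_state=0)
>>> est = Ridge().fit(X, y)
>>> # left-hand plot: actual vs. predicted values
>>> disp = PredictionErrorDisplay.from_estimator(est, X, y, kind="actual_vs_predicted")
>>> # right-hand plot: residuals vs. predicted values
>>> disp = PredictionErrorDisplay.from_estimator(est, X, y, kind="residual_vs_predicted")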
Refer to the example below to see a model evaluation that makes use of this display.
Examples
See Effect of transforming the targets in regression model for an example on how to use PredictionErrorDisplay to visualize the prediction quality improvement of a regression model obtained by transforming the target before learning.
3.4.7.Clustering metrics#
The sklearn.metrics module implements several loss, score, and utility functions to measure clustering performance. For more information see the Clustering performance evaluation section for instance clustering, and Biclustering evaluation for biclustering.
3.4.8.Dummy estimators#
When doing supervised learning, a simple sanity check consists of comparing one’s estimator against simple rules of thumb. DummyClassifier implements several such simple strategies for classification:
stratified generates random predictions by respecting the training set class distribution.
most_frequent always predicts the most frequent label in the training set.
prior always predicts the class that maximizes the class prior (like most_frequent) and predict_proba returns the class prior.
uniform generates predictions uniformly at random.
constant always predicts a constant label that is provided by the user. A major motivation of this method is F1-scoring, when the positive class is in the minority.
Note that with all these strategies, the predict method completely ignores the input data!
To illustrate DummyClassifier, first let’s create an imbalanced dataset:
>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split
>>> X, y = load_iris(return_X_y=True)
>>> y[y != 1] = -1
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
Next, let’s compare the accuracy of SVC and most_frequent:
>>> from sklearn.dummy import DummyClassifier
>>> from sklearn.svm import SVC
>>> clf = SVC(kernel='linear', C=1).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.63
>>> clf = DummyClassifier(strategy='most_frequent', random_state=0)
>>> clf.fit(X_train, y_train)
DummyClassifier(random_state=0, strategy='most_frequent')
>>> clf.score(X_test, y_test)
0.579
We see that SVC doesn’t do much better than a dummy classifier. Now, let’s change the kernel:
>>> clf = SVC(kernel='rbf', C=1).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.94
We see that the accuracy was boosted to almost 100%. A cross-validation strategy is recommended for a better estimate of the accuracy, if it is not too CPU costly. For more information see the Cross-validation: evaluating estimator performance section. Moreover, if you want to optimize over the parameter space, it is highly recommended to use an appropriate methodology; see the Tuning the hyper-parameters of an estimator section for details.
More generally, when the accuracy of a classifier is too close to random, it probably means that something went wrong: features are not helpful, a hyperparameter is not correctly tuned, the classifier is suffering from class imbalance, etc…
DummyRegressor also implements four simple rules of thumb for regression:
mean always predicts the mean of the training targets.
median always predicts the median of the training targets.
quantile always predicts a user provided quantile of the training targets.
constant always predicts a constant value that is provided by the user.
In all these strategies, the predict method completely ignores the input data.