recall_score

sklearn.metrics.recall_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn')

Compute the recall.

The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.

The best value is 1 and the worst value is 0.

Support beyond binary targets is achieved by treating multiclass and multilabel data as a collection of binary problems, one for each label. For the binary case, setting average='binary' will return recall for pos_label. If average is not 'binary', pos_label is ignored and recall for both classes is computed, then averaged or both returned (when average=None). Similarly, for multiclass and multilabel targets, recall for all labels is either returned or averaged depending on the average parameter. Use labels to specify the set of labels to calculate recall for.
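A minimal sketch of the binary behavior described above, using illustrative toy data: with average='binary' (the default), only the recall of pos_label is reported, and switching pos_label switches which class is scored.

```python
from sklearn.metrics import recall_score

# Illustrative binary targets.
y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1]

# Recall for the positive class (pos_label=1): tp=3, fn=1 -> 0.75
r_pos = recall_score(y_true, y_pred, pos_label=1)

# Recall for class 0 instead: tp=2, fn=0 -> 1.0
r_neg = recall_score(y_true, y_pred, pos_label=0)

print(r_pos, r_neg)
```

With average=None the same call returns both per-class recalls as an array instead.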

Read more in the User Guide.

Parameters:
y_true : 1d array-like, or label indicator array / sparse matrix

Ground truth (correct) target values.

y_pred : 1d array-like, or label indicator array / sparse matrix

Estimated targets as returned by a classifier.

labels : array-like, default=None

The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example in multiclass classification to exclude a “negative class”. Labels not present in the data can be included and will be “assigned” 0 samples. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order.

Changed in version 0.17: Parameter labels improved for multiclass problem.

pos_label : int, float, bool or str, default=1

The class to report if average='binary' and the data is binary, otherwise this parameter is ignored. For multiclass or multilabel targets, set labels=[pos_label] and average != 'binary' to report metrics for one label only.
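As a sketch of the labels=[pos_label] trick described above (toy data is illustrative): restricting labels to a single class while using a non-binary average reports the metric for that one label.

```python
from sklearn.metrics import recall_score

# Illustrative multiclass targets.
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# Report recall for label 0 only: restrict `labels` and use a
# non-binary average. Class 0 has tp=2, fn=0 -> recall 1.0.
r = recall_score(y_true, y_pred, labels=[0], average='macro')

print(r)
```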

average : {'micro', 'macro', 'samples', 'weighted', 'binary'} or None, default='binary'

This parameter is required for multiclass/multilabel targets. If None, the metrics for each class are returned. Otherwise, this determines the type of averaging performed on the data:

'binary':

Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary.

'micro':

Calculate metrics globally by counting the total true positives, false negatives and false positives.

'macro':

Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.

'weighted':

Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters 'macro' to account for label imbalance; it can result in an F-score that is not between precision and recall. Weighted recall is equal to accuracy.

'samples':

Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score).
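The averaging modes above can be checked by hand from the per-class recalls; a sketch on illustrative multiclass data:

```python
import numpy as np
from sklearn.metrics import recall_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# Per-class recalls (average=None).
per_class = recall_score(y_true, y_pred, average=None)

# 'macro': unweighted mean of per-class recalls.
macro = per_class.mean()

# 'weighted': mean weighted by support (true instances per class).
support = np.bincount(y_true)
weighted = np.average(per_class, weights=support)

# 'micro': global tp / (tp + fn); for single-label multiclass input
# this coincides with accuracy.
micro = np.mean(np.asarray(y_true) == np.asarray(y_pred))

assert np.isclose(macro, recall_score(y_true, y_pred, average='macro'))
assert np.isclose(weighted, recall_score(y_true, y_pred, average='weighted'))
assert np.isclose(micro, recall_score(y_true, y_pred, average='micro'))
```

Here all classes have equal support, so 'macro' and 'weighted' agree; with imbalanced classes they diverge.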

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

zero_division : {“warn”, 0.0, 1.0, np.nan}, default=”warn”

Sets the value to return when there is a zero division.

Notes:

  • If set to “warn”, this acts like 0, but a warning is also raised.

  • If set to np.nan, such values will be excluded from the average.

Added in version 1.3: np.nan option was added.

Returns:
recall : float (if average is not None) or array of float of shape (n_unique_labels,)

Recall of the positive class in binary classification or weighted average of the recall of each class for the multiclass task.

See also

precision_recall_fscore_support

Compute precision, recall, F-measure and support for each class.

precision_score

Compute the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives.

balanced_accuracy_score

Compute balanced accuracy to deal with imbalanced datasets.

multilabel_confusion_matrix

Compute a confusion matrix for each class or sample.

PrecisionRecallDisplay.from_estimator

Plot precision-recall curve given an estimator and some data.

PrecisionRecallDisplay.from_predictions

Plot precision-recall curve given binary class predictions.

Notes

When true positive + false negative == 0, recall returns 0 and raises UndefinedMetricWarning. This behavior can be modified with zero_division.

Examples

>>> import numpy as np
>>> from sklearn.metrics import recall_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> recall_score(y_true, y_pred, average='macro')
0.33...
>>> recall_score(y_true, y_pred, average='micro')
0.33...
>>> recall_score(y_true, y_pred, average='weighted')
0.33...
>>> recall_score(y_true, y_pred, average=None)
array([1., 0., 0.])
>>> y_true = [0, 0, 0, 0, 0, 0]
>>> recall_score(y_true, y_pred, average=None)
array([0.5, 0. , 0. ])
>>> recall_score(y_true, y_pred, average=None, zero_division=1)
array([0.5, 1. , 1. ])
>>> recall_score(y_true, y_pred, average=None, zero_division=np.nan)
array([0.5, nan, nan])

>>> # multilabel classification
>>> y_true = [[0, 0, 0], [1, 1, 1], [0, 1, 1]]
>>> y_pred = [[0, 0, 0], [1, 1, 1], [1, 1, 0]]
>>> recall_score(y_true, y_pred, average=None)
array([1. , 1. , 0.5])

Gallery examples

Probability Calibration curves

Post-tuning the decision threshold for cost-sensitive learning

Precision-Recall