coverage_error

sklearn.metrics.coverage_error(y_true, y_score, *, sample_weight=None)

Coverage error measure.

Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in y_true per sample.

Ties in y_score are broken by giving the maximal rank that would have been assigned to all tied values.

Note: Our implementation’s score is 1 greater than the one given in Tsoumakas et al., 2010. This extends it to handle the degenerate case in which an instance has 0 true labels.
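As a small illustration of the tie-breaking rule described above (the input values here are chosen for illustration; they are not from the reference):

```python
from sklearn.metrics import coverage_error

# One sample with a single true label (index 0) whose score of 0.5
# ties with label 1. Because tied values all receive the maximal rank,
# both tied labels must be traversed to cover the true label, so the
# coverage for this sample is 2 rather than 1.
y_true = [[1, 0, 0]]
y_score = [[0.5, 0.5, 0.2]]
print(coverage_error(y_true, y_score))  # 2.0
```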

Read more in the User Guide.

Parameters:
y_true : array-like of shape (n_samples, n_labels)

True binary labels in binary indicator format.

y_score : array-like of shape (n_samples, n_labels)

Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers). For decision_function scores, values greater than or equal to zero should indicate the positive class.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:
coverage_error : float

The coverage error.

References

[1]

Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.

Examples

>>> from sklearn.metrics import coverage_error
>>> y_true = [[1, 0, 0], [0, 1, 1]]
>>> y_score = [[1, 0, 0], [0, 1, 1]]
>>> coverage_error(y_true, y_score)
1.5
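A brief sketch of how the sample_weight parameter enters the result, assuming (as the per-sample structure of the metric suggests) that per-sample coverages are combined as a weighted average; the weight values here are illustrative:

```python
from sklearn.metrics import coverage_error

y_true = [[1, 0, 0], [0, 1, 1]]
y_score = [[1, 0, 0], [0, 1, 1]]
# The per-sample coverages are 1 and 2. With weights [1, 3] the weighted
# average is (1*1 + 3*2) / (1 + 3) = 1.75, versus the unweighted 1.5.
print(coverage_error(y_true, y_score, sample_weight=[1, 3]))  # 1.75
```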