classification_report_imbalanced

imblearn.metrics.classification_report_imbalanced(y_true, y_pred, *, labels=None, target_names=None, sample_weight=None, digits=2, alpha=0.1, output_dict=False, zero_division='warn')

Build a classification report based on metrics used with imbalanced datasets.

Specific metrics have been proposed to evaluate classification performance on imbalanced datasets. This report compiles the state-of-the-art metrics: precision/recall/specificity, geometric mean, and index balanced accuracy of the geometric mean.

Read more in the User Guide.

Parameters:
y_true : 1d array-like, or label indicator array / sparse matrix

Ground truth (correct) target values.

y_pred : 1d array-like, or label indicator array / sparse matrix

Estimated targets as returned by a classifier.

labels : array-like of shape (n_labels,), default=None

Optional list of label indices to include in the report.

target_names : list of str of shape (n_labels,), default=None

Optional display names matching the labels (same order).
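
As an illustration (a sketch reusing the arrays from the Examples section below), labels restricts the report to a subset of classes while target_names renames the corresponding rows:

>>> from imblearn.metrics import classification_report_imbalanced
>>> y_true = [0, 1, 2, 2, 2]
>>> y_pred = [0, 0, 2, 2, 1]
>>> # Restrict the report to classes 0 and 2, shown under custom names.
>>> report = classification_report_imbalanced(
...     y_true, y_pred, labels=[0, 2], target_names=['first', 'third']
... )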

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

digits : int, default=2

Number of digits for formatting output floating point values. When output_dict is True, this will be ignored and the returned values will not be rounded.

alpha : float, default=0.1

Weighting factor used by the index balanced accuracy.
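
For reference, this is the same weighting factor accepted by make_index_balanced_accuracy, which weights a metric by (1 + alpha * dominance). A minimal sketch of the equivalent standalone computation (binary labels assumed for simplicity):

>>> from imblearn.metrics import geometric_mean_score, make_index_balanced_accuracy
>>> # Decorate the geometric mean; with squared=True the wrapped metric
>>> # returns (1 + alpha * dominance) * gmean**2.
>>> iba_gmean = make_index_balanced_accuracy(alpha=0.1, squared=True)(
...     geometric_mean_score
... )
>>> score = iba_gmean([0, 0, 0, 1, 1], [0, 1, 0, 1, 1])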

output_dict : bool, default=False

If True, return output as dict.

New in version 0.8.

zero_division : "warn" or {0, 1}, default="warn"

Sets the value to return when there is a zero division. If set to “warn”, this acts as 0, but warnings are also raised.

New in version 0.8.
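
For illustration (a sketch with hypothetical arrays): a class that never appears in y_pred makes its precision an undefined 0/0; setting zero_division=0 reports 0 without raising a warning:

>>> from imblearn.metrics import classification_report_imbalanced
>>> y_true = [0, 0, 1]
>>> y_pred = [0, 0, 0]  # class 1 is never predicted, so its precision is 0/0
>>> report = classification_report_imbalanced(y_true, y_pred, zero_division=0)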

Returns:
report : str or dict

Text summary of the precision, recall, specificity, geometric mean, and index balanced accuracy. Dictionary returned if output_dict is True. Dictionary has the following structure:

{'label 1': {'pre':0.5,
             'rec':1.0,
             ...
            },
 'label 2': { ... },
  ...
}
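
One way to consume the dict form programmatically; a minimal sketch that only assumes the per-label structure shown above (the exact key names of any averaged entries may differ):

>>> from imblearn.metrics import classification_report_imbalanced
>>> y_true = [0, 1, 2, 2, 2]
>>> y_pred = [0, 0, 2, 2, 1]
>>> d = classification_report_imbalanced(y_true, y_pred, output_dict=True)
>>> # Collect per-label precision; the isinstance guard skips any
>>> # scalar (averaged) entries that may also live in the dict.
>>> precisions = {label: m['pre'] for label, m in d.items()
...               if isinstance(m, dict)}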

Examples

>>> import numpy as np
>>> from imblearn.metrics import classification_report_imbalanced
>>> y_true = [0, 1, 2, 2, 2]
>>> y_pred = [0, 0, 2, 2, 1]
>>> target_names = ['class 0', 'class 1', 'class 2']
>>> print(classification_report_imbalanced(y_true, y_pred,
...                                        target_names=target_names))
                   pre       rec       spe        f1       geo       iba       sup

    class 0       0.50      1.00      0.75      0.67      0.87      0.77         1
    class 1       0.00      0.00      0.75      0.00      0.00      0.00         1
    class 2       1.00      0.67      1.00      0.80      0.82      0.64         3

avg / total       0.70      0.60      0.90      0.61      0.66      0.54         5

Examples using imblearn.metrics.classification_report_imbalanced

Multiclass classification with under-sampling

Example of topic classification in text documents

Evaluate classification by compiling a report