.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/model_selection/plot_instance_hardness_cv.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_model_selection_plot_instance_hardness_cv.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_model_selection_plot_instance_hardness_cv.py:


====================================================
Distribute hard-to-classify datapoints over CV folds
====================================================

'Instance hardness' refers to how difficult an instance is to classify. The way
hard-to-classify instances are distributed over the train and test sets has a
significant effect on the test set performance metrics. In this example we show
how to deal with this problem. We compare against the standard
:class:`~sklearn.model_selection.StratifiedKFold` cross-validation splitter.

.. GENERATED FROM PYTHON SOURCE LINES 12-16

.. code-block:: Python

    # Authors: Frits Hermans, https://fritshermans.github.io
    # License: MIT

.. GENERATED FROM PYTHON SOURCE LINES 17-19

.. code-block:: Python

    print(__doc__)

.. GENERATED FROM PYTHON SOURCE LINES 20-26

Create an imbalanced dataset with instance hardness
----------------------------------------------------

We create an imbalanced dataset using scikit-learn's
:func:`~sklearn.datasets.make_blobs` function and set the class imbalance
ratio to 5%.

.. GENERATED FROM PYTHON SOURCE LINES 26-33

.. code-block:: Python

    import numpy as np
    from matplotlib import pyplot as plt

    from sklearn.datasets import make_blobs

    X, y = make_blobs(n_samples=[950, 50], centers=((-3, 0), (3, 0)), random_state=10)
    _ = plt.scatter(X[:, 0], X[:, 1], c=y)

.. image-sg:: /auto_examples/model_selection/images/sphx_glr_plot_instance_hardness_cv_001.png
   :alt: plot instance hardness cv
   :srcset: /auto_examples/model_selection/images/sphx_glr_plot_instance_hardness_cv_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 34-35

To introduce instance hardness into our dataset, we add some hard-to-classify samples:

.. GENERATED FROM PYTHON SOURCE LINES 35-41

.. code-block:: Python

    X_hard, y_hard = make_blobs(
        n_samples=10, centers=((3, 0), (-3, 0)), cluster_std=1, random_state=10
    )
    X, y = np.vstack((X, X_hard)), np.hstack((y, y_hard))
    _ = plt.scatter(X[:, 0], X[:, 1], c=y)

.. image-sg:: /auto_examples/model_selection/images/sphx_glr_plot_instance_hardness_cv_002.png
   :alt: plot instance hardness cv
   :srcset: /auto_examples/model_selection/images/sphx_glr_plot_instance_hardness_cv_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 42-61

Compare cross validation scores using `StratifiedKFold` and `InstanceHardnessCV`
----------------------------------------------------------------------------------

Now, we want to assess a linear predictive model and therefore use
cross-validation. The most important aspect of cross-validation is to create
train and test splits that are representative of the data encountered in
production, so that the measured performance reflects what one can expect in
production. By applying a standard
:class:`~sklearn.model_selection.StratifiedKFold` cross-validation splitter, we
do not control in which folds the hard-to-classify samples end up. The
:class:`~imblearn.model_selection.InstanceHardnessCV` splitter makes it possible
to control the distribution of the hard-to-classify samples over the folds.
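The notion of instance hardness can be made concrete before running the
comparison. A minimal sketch, reusing the ``X`` and ``y`` created above, scores
each sample by one minus the cross-validated predicted probability of its true
class; the samples placed on the "wrong" side of each blob receive the highest
scores. This is only an illustration of the concept, not necessarily the exact
computation performed by :class:`~imblearn.model_selection.InstanceHardnessCV`.

.. code-block:: Python

    import numpy as np

    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    # Out-of-fold predicted probabilities for every sample.
    proba = cross_val_predict(
        LogisticRegression(), X, y, cv=5, method="predict_proba"
    )
    # Hardness: one minus the probability assigned to the true class;
    # values close to 1 indicate hard-to-classify samples.
    hardness = 1 - proba[np.arange(len(y)), y]
    print(np.sort(hardness)[-10:])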
Let's run an experiment to compare the results that we get with both splitters.
We use a :class:`~sklearn.linear_model.LogisticRegression` classifier and
:func:`~sklearn.model_selection.cross_validate` to calculate the cross
validation scores. We use average precision for scoring.

.. GENERATED FROM PYTHON SOURCE LINES 61-84

.. code-block:: Python

    import pandas as pd

    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_validate

    from imblearn.model_selection import InstanceHardnessCV

    logistic_regression = LogisticRegression()

    results = {}
    for cv in (
        StratifiedKFold(n_splits=5, shuffle=True, random_state=10),
        InstanceHardnessCV(estimator=LogisticRegression()),
    ):
        result = cross_validate(
            logistic_regression,
            X,
            y,
            cv=cv,
            scoring="average_precision",
        )
        results[cv.__class__.__name__] = result["test_score"]
    results = pd.DataFrame(results)

.. GENERATED FROM PYTHON SOURCE LINES 85-92

.. code-block:: Python

    ax = results.plot.box(vert=False, whis=[0, 100])
    _ = ax.set(
        xlabel="Average precision",
        title="Cross validation scores with different splitters",
        xlim=(0, 1),
    )

.. image-sg:: /auto_examples/model_selection/images/sphx_glr_plot_instance_hardness_cv_003.png
   :alt: Cross validation scores with different splitters
   :srcset: /auto_examples/model_selection/images/sphx_glr_plot_instance_hardness_cv_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 93-98

The boxplot shows that the :class:`~imblearn.model_selection.InstanceHardnessCV`
splitter results in less variation of the average precision than the
:class:`~sklearn.model_selection.StratifiedKFold` splitter. When doing
hyperparameter tuning or feature selection using a wrapper method (like
:class:`~sklearn.feature_selection.RFECV`), this will give more stable results.


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 1.533 seconds)

**Estimated memory usage:** 204 MB


.. _sphx_glr_download_auto_examples_model_selection_plot_instance_hardness_cv.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_instance_hardness_cv.ipynb <plot_instance_hardness_cv.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_instance_hardness_cv.py <plot_instance_hardness_cv.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_instance_hardness_cv.zip <plot_instance_hardness_cv.zip>`

.. only:: html

  .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_