imblearn.under_sampling.ClusterCentroids

class imblearn.under_sampling.ClusterCentroids(sampling_strategy='auto', random_state=None, estimator=None, voting='auto', n_jobs=1, ratio=None)[source]

Perform under-sampling by generating centroids based on clustering methods.

Method that under-samples the majority class by replacing a cluster of majority samples with the cluster centroid of a KMeans algorithm. This algorithm keeps N majority samples by fitting the KMeans algorithm with N clusters to the majority class and using the coordinates of the N cluster centroids as the new majority samples.

Read more in the User Guide.
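
As a hedged illustration of that idea (not the sampler's internal code), the sketch below fits a KMeans model with as many clusters as there are minority samples and stacks the resulting centroids with the untouched minority class; the synthetic dataset and class labels are assumptions of this example.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

X, y = make_classification(n_classes=2, weights=[0.1, 0.9],
                           n_samples=1000, random_state=0)
X_min, X_maj = X[y == 0], X[y == 1]  # class 0 is the minority here

# One cluster per minority sample, so both classes end up roughly the same size.
kmeans = KMeans(n_clusters=len(X_min), random_state=0).fit(X_maj)

X_res = np.vstack([X_min, kmeans.cluster_centers_])
y_res = np.hstack([np.zeros(len(X_min)), np.ones(len(X_min))])
# X_res.shape is roughly (200, 20): the majority samples have been replaced
# by one KMeans centroid per kept cluster.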

Parameters:
sampling_strategy : float, str, dict or callable (default=’auto’)

Sampling information to sample the data set (a short usage sketch follows this list).

  • When float, it corresponds to the desired ratio of the number of samples in the minority class over the number of samples in the majority class after resampling. Therefore, the ratio is expressed as \alpha_{us} = N_{m} / N_{rM} where N_{m} and N_{rM} are the number of samples in the minority class and the number of samples in the majority class after resampling, respectively.

    Warning

    float is only available for binary classification. An error is raised for multi-class classification.

  • When str, specify the class targeted by the resampling. The number of samples in the different classes will be equalized. Possible choices are:

    'majority': resample only the majority class;

    'not minority': resample all classes but the minority class;

    'not majority': resample all classes but the majority class;

    'all': resample all classes;

    'auto': equivalent to 'not minority'.

  • When dict, the keys correspond to the targeted classes. The values correspond to the desired number of samples for each targeted class.

  • When callable, a function taking y and returning a dict. The keys correspond to the targeted classes. The values correspond to the desired number of samples for each class.
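
A few hedged examples of these forms are sketched below; the class labels, counts and the halve_majority helper are illustrative assumptions, not values required by the sampler.

from collections import Counter
from imblearn.under_sampling import ClusterCentroids

# float (binary problems only): keep the majority class at twice the size of
# the minority class, i.e. minority / majority = 0.5 after resampling.
cc_float = ClusterCentroids(sampling_strategy=0.5, random_state=0)

# str: resample every class but the minority one (same as the 'auto' default).
cc_str = ClusterCentroids(sampling_strategy='not minority', random_state=0)

# dict: assume class 1 is a majority class and reduce it to 100 samples.
cc_dict = ClusterCentroids(sampling_strategy={1: 100}, random_state=0)

# callable: compute the targets from y at fit time (hypothetical helper).
def halve_majority(y):
    counts = Counter(y)
    majority = max(counts, key=counts.get)
    return {majority: counts[majority] // 2}

cc_callable = ClusterCentroids(sampling_strategy=halve_majority, random_state=0)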

random_state : int, RandomState instance or None, optional (default=None)

Control the randomization of the algorithm.

  • If int, random_state is the seed used by the random number generator;
  • If RandomState instance, random_state is the random number generator;
  • If None, the random number generator is the RandomState instance used by np.random.
estimator : object, optional (default=KMeans())

Pass a sklearn.cluster.KMeans estimator.
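
For instance, a pre-configured KMeans can be passed to control the clustering itself; the settings below are illustrative, and the sampler is expected to set n_clusters per targeted class at fit time.

from sklearn.cluster import KMeans
from imblearn.under_sampling import ClusterCentroids

# Illustrative settings; n_clusters is not given here because the sampler is
# expected to override it for each targeted class when fitting.
cc = ClusterCentroids(
    estimator=KMeans(max_iter=100, random_state=0),
    random_state=0,
)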

voting : str, optional (default=’auto’)

Voting strategy to generate the new samples (see the example after this list):

  • If 'hard', the nearest neighbors of the centroids found by the clustering algorithm will be used.
  • If 'soft', the centroids found by the clustering algorithm will be used.
  • If 'auto', 'hard' will be used when the input is sparse; otherwise, 'soft' will be used.

New in version 0.3.0.
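
A hedged illustration of the three options on a synthetic dataset: 'soft' yields synthetic centroid coordinates, while 'hard' keeps only original samples, which is also what 'auto' is documented to fall back to for sparse input.

from scipy import sparse
from sklearn.datasets import make_classification
from imblearn.under_sampling import ClusterCentroids

X, y = make_classification(n_classes=2, weights=[0.1, 0.9],
                           n_samples=1000, random_state=0)

# 'soft': the new majority points are the raw KMeans centroids.
X_soft, y_soft = ClusterCentroids(voting='soft', random_state=0).fit_resample(X, y)

# 'hard': each centroid is replaced by its nearest original majority sample.
X_hard, y_hard = ClusterCentroids(voting='hard', random_state=0).fit_resample(X, y)

# 'auto' with a sparse input falls back to 'hard', so only original samples
# are kept and the result can stay sparse.
X_auto, y_auto = ClusterCentroids(random_state=0).fit_resample(sparse.csr_matrix(X), y)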

n_jobs : int, optional (default=1)

The number of threads to use, if possible.

ratio : str, dict, or callable

Deprecated since version 0.4: Use the parameter sampling_strategy instead. It will be removed in 0.6.

Notes

Supports multi-class resampling by sampling each class independently.

Examples

>>> from collections import Counter
>>> from sklearn.datasets import make_classification
>>> from imblearn.under_sampling import ClusterCentroids # doctest: +NORMALIZE_WHITESPACE
>>> X, y = make_classification(n_classes=2, class_sep=2,
... weights=[0.1, 0.9], n_informative=3, n_redundant=1, flip_y=0,
... n_features=20, n_clusters_per_class=1, n_samples=1000, random_state=10)
>>> print('Original dataset shape %s' % Counter(y))
Original dataset shape Counter({1: 900, 0: 100})
>>> cc = ClusterCentroids(random_state=42)
>>> X_res, y_res = cc.fit_resample(X, y)
>>> print('Resampled dataset shape %s' % Counter(y_res))
... # doctest: +ELLIPSIS
Resampled dataset shape Counter({...})
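
As a hedged follow-up to this example, the sampler is typically chained with a classifier through imblearn's Pipeline so that resampling is applied only to the training folds during cross-validation; the classifier and scoring choices below are illustrative.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from imblearn.pipeline import make_pipeline
from imblearn.under_sampling import ClusterCentroids

X, y = make_classification(n_classes=2, class_sep=2, weights=[0.1, 0.9],
                           n_informative=3, n_redundant=1, flip_y=0,
                           n_features=20, n_clusters_per_class=1,
                           n_samples=1000, random_state=10)

# Resampling happens only on the training folds; the held-out fold is untouched.
pipe = make_pipeline(ClusterCentroids(random_state=42),
                     LogisticRegression(solver='lbfgs'))
scores = cross_val_score(pipe, X, y, cv=5, scoring='balanced_accuracy')
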
__init__(sampling_strategy='auto', random_state=None, estimator=None, voting='auto', n_jobs=1, ratio=None)[source]

Initialize self. See help(type(self)) for accurate signature.

fit(X, y)[source]

Check inputs and statistics of the sampler.

You should use fit_resample in all cases.

Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)

Data array.

y : array-like, shape (n_samples,)

Target array.

Returns:
self : object

Return the instance itself.

fit_resample(X, y)[source]

Resample the dataset.

Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)

Matrix containing the data which have to be sampled.

y : array-like, shape (n_samples,)

Corresponding label for each sample in X.

Returns:
X_resampled : {array-like, sparse matrix}, shape (n_samples_new, n_features)

The array containing the resampled data.

y_resampled : array-like, shape (n_samples_new,)

The corresponding label of X_resampled.

fit_sample(X, y)[source]

Resample the dataset.

Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)

Matrix containing the data which have to be sampled.

y : array-like, shape (n_samples,)

Corresponding label for each sample in X.

Returns:
X_resampled : {array-like, sparse matrix}, shape (n_samples_new, n_features)

The array containing the resampled data.

y_resampled : array-like, shape (n_samples_new,)

The corresponding label of X_resampled.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
self
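
For example, parameters of a user-supplied KMeans can be reached through the estimator__ prefix; the names follow the standard scikit-learn convention and the specific values below are illustrative.

from sklearn.cluster import KMeans
from imblearn.under_sampling import ClusterCentroids

cc = ClusterCentroids(estimator=KMeans(random_state=0))
cc.set_params(voting='hard', estimator__max_iter=50)
# get_params(deep=True) now reports estimator__max_iter == 50.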
