imblearn.over_sampling.KMeansSMOTE

class imblearn.over_sampling.KMeansSMOTE(sampling_strategy='auto', random_state=None, k_neighbors=2, n_jobs=1, kmeans_estimator=None, cluster_balance_threshold='auto', density_exponent='auto')[source]

Apply KMeans clustering before over-sampling using SMOTE.

This is an implementation of the algorithm described in [1].

Read more in the User Guide.

Parameters:
sampling_strategy : float, str, dict or callable, (default=’auto’)

Sampling information to resample the data set.

  • When float, it corresponds to the desired ratio of the number of samples in the minority class over the number of samples in the majority class after resampling. Therefore, the ratio is expressed as \alpha_{os} = N_{rm} / N_{M} where N_{rm} is the number of samples in the minority class after resampling and N_{M} is the number of samples in the majority class.

    Warning

    float is only available for binary classification. An error is raised for multi-class classification.

  • When str, specify the class targeted by the resampling. The number of samples in the different classes will be equalized. Possible choices are:

    'minority': resample only the minority class;

    'not minority': resample all classes but the minority class;

    'not majority': resample all classes but the majority class;

    'all': resample all classes;

    'auto': equivalent to 'not majority'.

  • When dict, the keys correspond to the targeted classes. The values correspond to the desired number of samples for each targeted class.

  • When callable, a function taking y and returning a dict. The keys correspond to the targeted classes. The values correspond to the desired number of samples for each class (see the sketch after this list).
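The following sketch (not part of the original docstring; the class labels and target counts are purely illustrative) shows how each form of sampling_strategy is passed to the constructor:

>>> from imblearn.over_sampling import KMeansSMOTE
>>> # float (binary problems only): minority class grows to 80% of the majority class
>>> sm = KMeansSMOTE(sampling_strategy=0.8)
>>> # str: resample every class except the majority one
>>> sm = KMeansSMOTE(sampling_strategy='not majority')
>>> # dict: explicit number of samples per targeted class after resampling
>>> sm = KMeansSMOTE(sampling_strategy={0: 500, 1: 800})
>>> # callable: compute the per-class targets from y at fit time
>>> def equalize(y):
...     from collections import Counter
...     counts = Counter(y)
...     return {label: max(counts.values()) for label in counts}
>>> sm = KMeansSMOTE(sampling_strategy=equalize)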

random_state : int, RandomState instance or None, optional (default=None)

Control the randomization of the algorithm.

  • If int, random_state is the seed used by the random number generator;
  • If RandomState instance, random_state is the random number generator;
  • If None, the random number generator is the RandomState instance used by np.random.
k_neighbors : int or object, optional (default=2)

If int, the number of nearest neighbours used to construct synthetic samples. If object, an estimator that inherits from sklearn.neighbors.base.KNeighborsMixin and that will be used to find the k nearest neighbours.
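As an illustration (an assumption about typical usage, not taken from the docstring), a configured sklearn.neighbors.NearestNeighbors instance can be passed in place of the integer:

>>> from sklearn.neighbors import NearestNeighbors
>>> from imblearn.over_sampling import KMeansSMOTE
>>> # an int asks for that many neighbours; an estimator object gives full control
>>> sm = KMeansSMOTE(k_neighbors=NearestNeighbors(n_neighbors=3))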

n_jobs : int, optional (default=1)

The number of threads to use during the computation, if possible.

kmeans_estimator : int or object, optional (default=MiniBatchKMeans())

A KMeans instance or the number of clusters to use. By default, sklearn.cluster.MiniBatchKMeans is used, which tends to scale better with a large number of samples.
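For example (a sketch, not from the original docstring; the cluster counts are illustrative), either an integer or a configured clustering estimator can be supplied:

>>> from sklearn.cluster import MiniBatchKMeans
>>> from imblearn.over_sampling import KMeansSMOTE
>>> # an int is taken as the number of clusters for the default clustering method
>>> sm = KMeansSMOTE(kmeans_estimator=50)
>>> # or pass a configured clustering estimator directly
>>> sm = KMeansSMOTE(kmeans_estimator=MiniBatchKMeans(n_clusters=50, random_state=0))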

cluster_balance_threshold : str or float, optional (default=”auto”)

The threshold at which a cluster is considered balanced and therefore eligible to receive synthetic samples of the class selected for SMOTE. If “auto”, the threshold is determined from the class ratios; otherwise it can be set manually as a float.

density_exponent : str or float, optional (default=”auto”)

This exponent is used to determine the density of a cluster. Leaving it at “auto” uses an exponent based on the number of features.
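A hedged configuration sketch combining the two parameters above; the numeric values are purely illustrative and not recommended settings:

>>> from imblearn.over_sampling import KMeansSMOTE
>>> # only clusters that reach the balance threshold receive synthetic samples;
>>> # a fixed density exponent replaces the "auto" heuristic
>>> sm = KMeansSMOTE(cluster_balance_threshold=0.2, density_exponent=1.5)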

References

[1] Felix Last, Georgios Douzas, Fernando Bacao, “Oversampling for Imbalanced Learning Based on K-Means and SMOTE”, https://arxiv.org/abs/1711.00837

Examples

>>> import numpy as np
>>> from imblearn.over_sampling import KMeansSMOTE
>>> from sklearn.datasets import make_blobs
>>> blobs = [100, 800, 100]
>>> X, y  = make_blobs(blobs, centers=[(-10, 0), (0,0), (10, 0)])
>>> # Add a single 0 sample in the middle blob
>>> X = np.concatenate([X, [[0, 0]]])
>>> y = np.append(y, 0)
>>> # Make this a binary classification problem
>>> y = y == 1
>>> sm = KMeansSMOTE(random_state=42)
>>> X_res, y_res = sm.fit_resample(X, y)
>>> # Find the number of new samples in the middle blob
>>> n_res_in_middle = ((X_res[:, 0] > -5) & (X_res[:, 0] < 5)).sum()
>>> print("Samples in the middle blob: %s" % n_res_in_middle)
Samples in the middle blob: 801
>>> print("Middle blob unchanged: %s" % (n_res_in_middle == blobs[1] + 1))
Middle blob unchanged: True
>>> print("More 0 samples: %s" % ((y_res == 0).sum() > (y == 0).sum()))
More 0 samples: True

Attributes:
kmeans_estimator_ : estimator

The fitted clustering estimator used before applying SMOTE.

nn_k_ : estimator

The fitted k-NN estimator used in SMOTE.

cluster_balance_threshold_ : float

The threshold used during fit for calling a cluster balanced.

__init__(self, sampling_strategy='auto', random_state=None, k_neighbors=2, n_jobs=1, kmeans_estimator=None, cluster_balance_threshold='auto', density_exponent='auto')[source]

Initialize self. See help(type(self)) for accurate signature.

fit(self, X, y)[source]

Check inputs and statistics of the sampler.

You should use fit_resample in all cases.

Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)

Data array.

y : array-like, shape (n_samples,)

Target array.

Returns:
self : object

Return the instance itself.

fit_resample(self, X, y)[source]

Resample the dataset.

Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)

Matrix containing the data to be resampled.

y : array-like, shape (n_samples,)

Corresponding label for each sample in X.

Returns:
X_resampled : {array-like, sparse matrix}, shape (n_samples_new, n_features)

The array containing the resampled data.

y_resampled : array-like, shape (n_samples_new,)

The corresponding label of X_resampled.
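A minimal usage sketch on a toy two-blob dataset (the blob sizes and centers are arbitrary choices for illustration, not from the original documentation):

>>> from collections import Counter
>>> from sklearn.datasets import make_blobs
>>> from imblearn.over_sampling import KMeansSMOTE
>>> X, y = make_blobs(n_samples=[600, 100], centers=[(-5, 0), (5, 0)], random_state=0)
>>> sorted(Counter(y).items())
[(0, 600), (1, 100)]
>>> sm = KMeansSMOTE(random_state=0)
>>> X_res, y_res = sm.fit_resample(X, y)
>>> # the minority class has been over-sampled; no original sample is removed
>>> (y_res == 1).sum() > (y == 1).sum()
True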

fit_sample(self, X, y)[source]

Resample the dataset.

Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)

Matrix containing the data to be resampled.

y : array-like, shape (n_samples,)

Corresponding label for each sample in X.

Returns:
X_resampled : {array-like, sparse matrix}, shape (n_samples_new, n_features)

The array containing the resampled data.

y_resampled : array-like, shape (n_samples_new,)

The corresponding label of X_resampled.

get_params(self, deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

set_params(self, **params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
self
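A short sketch of the nested-parameter form, using an imblearn.pipeline.Pipeline; the step names below are arbitrary choices for illustration:

>>> from imblearn.pipeline import Pipeline
>>> from imblearn.over_sampling import KMeansSMOTE
>>> from sklearn.linear_model import LogisticRegression
>>> pipe = Pipeline([('smote', KMeansSMOTE()), ('clf', LogisticRegression())])
>>> # parameters of nested estimators are addressed as <component>__<parameter>
>>> _ = pipe.set_params(smote__k_neighbors=5, smote__random_state=0)
>>> pipe.get_params()['smote__k_neighbors']
5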
