KMeansSMOTE
- class imblearn.over_sampling.KMeansSMOTE(*, sampling_strategy='auto', random_state=None, k_neighbors=2, n_jobs=None, kmeans_estimator=None, cluster_balance_threshold='auto', density_exponent='auto')
Apply a KMeans clustering before over-sampling using SMOTE.
This is an implementation of the algorithm described in [1].
Read more in the User Guide.
Added in version 0.5.
- Parameters:
- sampling_strategy : float, str, dict or callable, default='auto'
Sampling information to resample the data set.
When `float`, it corresponds to the desired ratio of the number of samples in the minority class over the number of samples in the majority class after resampling. Therefore, the ratio is expressed as \(\alpha_{os} = N_{rm} / N_{M}\) where \(N_{rm}\) is the number of samples in the minority class after resampling and \(N_{M}\) is the number of samples in the majority class.
Warning: `float` is only available for binary classification. An error is raised for multi-class classification.
When `str`, specify the class targeted by the resampling. The number of samples in the different classes will be equalized. Possible choices are:
  - 'minority': resample only the minority class;
  - 'not minority': resample all classes but the minority class;
  - 'not majority': resample all classes but the majority class;
  - 'all': resample all classes;
  - 'auto': equivalent to 'not majority'.
When `dict`, the keys correspond to the targeted classes. The values correspond to the desired number of samples for each targeted class.
When callable, a function taking `y` and returning a `dict`. The keys correspond to the targeted classes. The values correspond to the desired number of samples for each class. (A usage sketch combining this and the other parameters follows the parameter list.)
- random_state : int, RandomState instance, default=None
Control the randomization of the algorithm.
  - If int, `random_state` is the seed used by the random number generator;
  - If `RandomState` instance, `random_state` is the random number generator;
  - If `None`, the random number generator is the `RandomState` instance used by `np.random`.
- k_neighbors : int or object, default=2
The nearest neighbors used to define the neighborhood of samples to use to generate the synthetic samples. You can pass:
  - an `int` corresponding to the number of neighbors to use. A `~sklearn.neighbors.NearestNeighbors` instance will be fitted in this case;
  - an instance of a compatible nearest neighbors algorithm that should implement both methods `kneighbors` and `kneighbors_graph`. For instance, it could correspond to a `NearestNeighbors` instance but could be extended to any compatible class.
- n_jobs : int, default=None
Number of CPU cores used during the cross-validation loop. `None` means 1 unless in a `joblib.parallel_backend` context. `-1` means using all processors. See Glossary for more details.
- kmeans_estimator : int or object, default=None
A KMeans instance or the number of clusters to be used. By default, a `MiniBatchKMeans` is used, which tends to perform better with a large number of samples.
- cluster_balance_threshold : "auto" or float, default="auto"
The threshold at which a cluster is called balanced and where samples of the class selected for SMOTE will be oversampled. If "auto", this will be determined by the ratio for each class, or it can be set manually.
- density_exponent : "auto" or float, default="auto"
This exponent is used to determine the density of a cluster. Leaving this to "auto" will use a feature-length based exponent.
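As an illustration of how these parameters fit together, the sketch below builds the sampler with an explicit clustering estimator, a pre-built nearest-neighbors object, and a `dict` sampling strategy. The toy dataset, the numbers of clusters and neighbors, and the target counts are assumptions chosen for illustration; on other data, KMeansSMOTE may raise an error if no cluster is balanced enough, in which case `cluster_balance_threshold` or the number of clusters typically needs tuning.

>>> from sklearn.cluster import MiniBatchKMeans
>>> from sklearn.datasets import make_blobs
>>> from sklearn.neighbors import NearestNeighbors
>>> from imblearn.over_sampling import KMeansSMOTE
>>> # Toy imbalanced data: 100 samples of class 0 vs 800 samples of class 1.
>>> X, y = make_blobs(n_samples=[100, 800], centers=[(-10, 0), (10, 0)],
...                   random_state=0)
>>> sm = KMeansSMOTE(
...     sampling_strategy={0: 400},                   # 400 samples of class 0 after resampling
...     k_neighbors=NearestNeighbors(n_neighbors=5),  # pre-built neighbors estimator
...     kmeans_estimator=MiniBatchKMeans(n_clusters=10, n_init=1, random_state=0),
...     cluster_balance_threshold=0.5,                # only clusters where class 0 is at least half the samples
...     random_state=42,
... )
>>> X_res, y_res = sm.fit_resample(X, y)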
- Attributes:
- sampling_strategy_ : dict
Dictionary containing the information to sample the dataset. The keys correspond to the class labels from which to sample and the values are the number of samples to sample.
- kmeans_estimator_ : estimator
The fitted clustering method used before applying SMOTE.
- nn_k_ : estimator
The fitted k-NN estimator used in SMOTE.
- cluster_balance_threshold_ : float
The threshold used during `fit` for calling a cluster balanced.
- n_features_in_ : int
Number of features in the input dataset.
Added in version 0.9.
- feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during `fit`. Defined only when `X` has feature names that are all strings.
Added in version 0.10.
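These attributes are only defined after fitting. The sketch below shows where each one can be inspected after `fit_resample`; the two-blob toy dataset and the values noted in the comments are assumptions for illustration and will vary with the data and the clustering.

>>> from sklearn.cluster import MiniBatchKMeans
>>> from sklearn.datasets import make_blobs
>>> from imblearn.over_sampling import KMeansSMOTE
>>> X, y = make_blobs(n_samples=[100, 800], centers=[(-10, 0), (10, 0)],
...                   random_state=0)
>>> sm = KMeansSMOTE(
...     kmeans_estimator=MiniBatchKMeans(n_clusters=10, n_init=1, random_state=0),
...     random_state=42,
... )
>>> X_res, y_res = sm.fit_resample(X, y)
>>> sm.sampling_strategy_           # e.g. {0: 700}: samples to generate per class
>>> sm.kmeans_estimator_            # the fitted MiniBatchKMeans
>>> sm.nn_k_                        # the fitted nearest-neighbors estimator used by SMOTE
>>> sm.cluster_balance_threshold_   # the threshold actually used during fit
>>> sm.n_features_in_               # 2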
See also
SMOTE
Over-sample using SMOTE.
SMOTENC
Over-sample using SMOTE for continuous and categorical features.
SMOTEN
Over-sample using the SMOTE variant specifically for categorical features only.
SVMSMOTE
Over-sample using SVM-SMOTE variant.
BorderlineSMOTE
Over-sample using Borderline-SMOTE variant.
ADASYN
Over-sample using ADASYN.
References
[1] Felix Last, Georgios Douzas, Fernando Bacao, “Oversampling for Imbalanced Learning Based on K-Means and SMOTE”. https://arxiv.org/abs/1711.00837
Examples
>>> import numpy as np
>>> from sklearn.cluster import MiniBatchKMeans
>>> from sklearn.datasets import make_blobs
>>> from imblearn.over_sampling import KMeansSMOTE
>>> blobs = [100, 800, 100]
>>> X, y = make_blobs(blobs, centers=[(-10, 0), (0, 0), (10, 0)], random_state=0)
>>> # Add a single 0 sample in the middle blob
>>> X = np.concatenate([X, [[0, 0]]])
>>> y = np.append(y, 0)
>>> # Make this a binary classification problem
>>> y = y == 1
>>> sm = KMeansSMOTE(
...     kmeans_estimator=MiniBatchKMeans(n_init=1, random_state=0), random_state=42
... )
>>> X_res, y_res = sm.fit_resample(X, y)
>>> # Find the number of new samples in the middle blob
>>> n_res_in_middle = ((X_res[:, 0] > -5) & (X_res[:, 0] < 5)).sum()
>>> print("Samples in the middle blob: %s" % n_res_in_middle)
Samples in the middle blob: 801
>>> print("Middle blob unchanged: %s" % (n_res_in_middle == blobs[1] + 1))
Middle blob unchanged: True
>>> print("More 0 samples: %s" % ((y_res == 0).sum() > (y == 0).sum()))
More 0 samples: True
Methods
- fit(X, y, **params): Check inputs and statistics of the sampler.
- fit_resample(X, y, **params): Resample the dataset.
- get_feature_names_out([input_features]): Get output feature names for transformation.
- get_metadata_routing(): Get metadata routing of this object.
- get_params([deep]): Get parameters for this estimator.
- set_params(**params): Set the parameters of this estimator.
- fit(X, y, **params)
Check inputs and statistics of the sampler.
You should use `fit_resample` in all cases.
- Parameters:
- X : {array-like, dataframe, sparse matrix} of shape (n_samples, n_features)
Data array.
- y : array-like of shape (n_samples,)
Target array.
- Returns:
- self : object
Return the instance itself.
- fit_resample(X, y, **params)
Resample the dataset.
- Parameters:
- X : {array-like, dataframe, sparse matrix} of shape (n_samples, n_features)
Matrix containing the data which have to be sampled.
- y : array-like of shape (n_samples,)
Corresponding label for each sample in X.
- Returns:
- X_resampled : {array-like, dataframe, sparse matrix} of shape (n_samples_new, n_features)
The array containing the resampled data.
- y_resampled : array-like of shape (n_samples_new,)
The corresponding label of `X_resampled`.
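A minimal sketch of the call, assuming a toy two-blob dataset with a roughly 1:8 class imbalance; the shapes and class counts noted in the comments are illustrative, not guaranteed outputs.

>>> import numpy as np
>>> from sklearn.datasets import make_blobs
>>> from imblearn.over_sampling import KMeansSMOTE
>>> X, y = make_blobs(n_samples=[100, 800], centers=[(-10, 0), (10, 0)],
...                   random_state=0)
>>> sm = KMeansSMOTE(random_state=0)
>>> X_res, y_res = sm.fit_resample(X, y)
>>> # X_res stacks the original rows with the synthetic minority samples,
>>> # so it has n_samples_new rows and y_res holds the matching labels.
>>> print(X.shape, X_res.shape)                # e.g. (900, 2) (1600, 2)
>>> print(np.bincount(y), np.bincount(y_res))  # e.g. [100 800] [800 800]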
- get_feature_names_out(input_features=None)
Get output feature names for transformation.
- Parameters:
- input_features : array-like of str or None, default=None
Input features.
If `input_features` is `None`, then `feature_names_in_` is used as feature names in. If `feature_names_in_` is not defined, then the following input feature names are generated: `["x0", "x1", ..., "x(n_features_in_ - 1)"]`.
If `input_features` is an array-like, then `input_features` must match `feature_names_in_` if `feature_names_in_` is defined.
- Returns:
- feature_names_out : ndarray of str objects
Same as input features.
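For instance, when the sampler is fitted on a pandas DataFrame, the output names simply mirror the input column names; the column names below are hypothetical and chosen only for illustration.

>>> import pandas as pd
>>> from sklearn.datasets import make_blobs
>>> from imblearn.over_sampling import KMeansSMOTE
>>> X, y = make_blobs(n_samples=[100, 800], centers=[(-10, 0), (10, 0)],
...                   random_state=0)
>>> X_df = pd.DataFrame(X, columns=["width", "height"])  # hypothetical feature names
>>> sm = KMeansSMOTE(random_state=0)
>>> X_res, y_res = sm.fit_resample(X_df, y)
>>> sm.get_feature_names_out()  # e.g. array(['width', 'height'], dtype=object)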
- get_metadata_routing()
Get metadata routing of this object.
Please check the User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A `MetadataRequest` encapsulating routing information.
- get_params(deep=True)
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as `Pipeline`). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.
- Parameters:
- **params : dict
Estimator parameters.
- Returns:
- self : estimator instance
Estimator instance.
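A minimal sketch of the nested-parameter convention, assuming the sampler is a step named "smote" inside an imblearn `Pipeline`; the step names and parameter values are arbitrary.

>>> from sklearn.linear_model import LogisticRegression
>>> from imblearn.pipeline import Pipeline
>>> from imblearn.over_sampling import KMeansSMOTE
>>> pipe = Pipeline([
...     ("smote", KMeansSMOTE(random_state=0)),
...     ("clf", LogisticRegression()),
... ])
>>> # Nested parameters follow the <component>__<parameter> convention.
>>> _ = pipe.set_params(smote__k_neighbors=5, smote__cluster_balance_threshold=0.3)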
Examples using imblearn.over_sampling.KMeansSMOTE
Compare over-sampling samplers