imblearn.ensemble.EasyEnsembleClassifier

class imblearn.ensemble.EasyEnsembleClassifier(n_estimators=10, base_estimator=None, warm_start=False, sampling_strategy='auto', replacement=False, n_jobs=1, random_state=None, verbose=0)[source]

Bag of balanced boosted learners, also known as EasyEnsemble.

This algorithm is known as EasyEnsemble [1]. The classifier is an ensemble of AdaBoost learners trained on different balanced bootstrap samples. The balancing is achieved by random under-sampling.

Read more in the User Guide.
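
Conceptually, each member of the ensemble is an AdaBoost classifier fitted on its own balanced subset of the training data; predictions are then aggregated as in bagging. A minimal sketch of the idea (not the library's internal implementation), pairing imblearn's RandomUnderSampler with scikit-learn's AdaBoostClassifier:

>>> from sklearn.datasets import make_classification
>>> from sklearn.ensemble import AdaBoostClassifier
>>> from imblearn.under_sampling import RandomUnderSampler
>>> X, y = make_classification(n_classes=2, weights=[0.1, 0.9],
...                            n_samples=1000, random_state=10)
>>> learners = []
>>> for seed in range(10):  # one balanced subset per inner learner
...     rus = RandomUnderSampler(random_state=seed)
...     X_res, y_res = rus.fit_resample(X, y)  # both classes equally represented
...     learners.append(AdaBoostClassifier(random_state=seed).fit(X_res, y_res))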

Parameters:
n_estimators : int, optional (default=10)

Number of AdaBoost learners in the ensemble.

base_estimator : object, optional (default=AdaBoostClassifier())

The base AdaBoost classifier used in the inner ensemble. Note that you can set the number of inner learners by passing your own instance.
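
For example, to run 20 boosting rounds inside each bag while keeping 10 bags (a small sketch; the values are illustrative):

>>> from sklearn.ensemble import AdaBoostClassifier
>>> from imblearn.ensemble import EasyEnsembleClassifier
>>> eec = EasyEnsembleClassifier(n_estimators=10,
...                              base_estimator=AdaBoostClassifier(n_estimators=20))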

warm_start : bool, optional (default=False)

When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, fit a whole new ensemble.
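
A minimal sketch of this pattern, assuming X_train and y_train are already defined:

>>> eec = EasyEnsembleClassifier(n_estimators=5, warm_start=True,
...                              random_state=42)
>>> eec = eec.fit(X_train, y_train)      # fits 5 AdaBoost learners
>>> eec = eec.set_params(n_estimators=10)
>>> eec = eec.fit(X_train, y_train)      # adds 5 more, keeping the first 5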

sampling_strategy : float, str, dict, callable, (default='auto')

Sampling information to resample the data set; each accepted form is illustrated in the sketch after this list.

  • When float, it corresponds to the desired ratio of the number of samples in the minority class over the number of samples in the majority class after resampling. Therefore, the ratio is expressed as \alpha_{us} = N_{m} / N_{rM} where N_{m} is the number of samples in the minority class and N_{rM} is the number of samples in the majority class after resampling.

    Warning

    float is only available for binary classification. An error is raised for multi-class classification.

  • When str, specify the class targeted by the resampling. The number of samples in the different classes will be equalized. Possible choices are:

    'majority': resample only the majority class;

    'not minority': resample all classes but the minority class;

    'not majority': resample all classes but the majority class;

    'all': resample all classes;

    'auto': equivalent to 'not minority'.

  • When dict, the keys correspond to the targeted classes. The values correspond to the desired number of samples for each targeted class.

  • When callable, a function taking y and returning a dict. The keys correspond to the targeted classes. The values correspond to the desired number of samples for each class.
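
A few hedged illustrations of these forms, assuming a binary problem with classes 0 and 1 (the sample counts are placeholders and the helper name is made up for illustration):

>>> from imblearn.ensemble import EasyEnsembleClassifier
>>> # str: equalize every class against the minority class (the default)
>>> eec = EasyEnsembleClassifier(sampling_strategy='auto')
>>> # float (binary only): minority/majority ratio after resampling,
>>> # e.g. 0.5 keeps the majority class at twice the minority size
>>> eec = EasyEnsembleClassifier(sampling_strategy=0.5)
>>> # dict: explicit number of samples to keep per class
>>> eec = EasyEnsembleClassifier(sampling_strategy={0: 100, 1: 100})
>>> # callable: build such a dict from y
>>> def balance_to_minority(y):
...     from collections import Counter
...     n_min = min(Counter(y).values())
...     return {label: n_min for label in set(y)}
>>> eec = EasyEnsembleClassifier(sampling_strategy=balance_to_minority)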

replacement : bool, optional (default=False)

Whether to sample randomly with replacement or not.

n_jobs : int, optional (default=1)

The number of jobs to run in parallel for both fit and predict. If -1, then the number of jobs is set to the number of cores.

random_state : int, RandomState instance or None, optional (default=None)

Control the randomization of the algorithm.

  • If int, random_state is the seed used by the random number generator;
  • If RandomState instance, random_state is the random number generator;
  • If None, the random number generator is the RandomState instance used by np.random.

verbose : int, optional (default=0)

Controls the verbosity of the building process.

Notes

The method is described in [1].

Supports multi-class resampling by sampling each class independently.

References

[1] X. Y. Liu, J. Wu and Z. H. Zhou, “Exploratory Undersampling for Class-Imbalance Learning,” in IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 2, pp. 539-550, April 2009.

Examples

>>> from collections import Counter
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import confusion_matrix
>>> from imblearn.ensemble import EasyEnsembleClassifier # doctest: +NORMALIZE_WHITESPACE
>>> X, y = make_classification(n_classes=2, class_sep=2,
... weights=[0.1, 0.9], n_informative=3, n_redundant=1, flip_y=0,
... n_features=20, n_clusters_per_class=1, n_samples=1000, random_state=10)
>>> print('Original dataset shape %s' % Counter(y))
Original dataset shape Counter({1: 900, 0: 100})
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
...                                                     random_state=0)
>>> eec = EasyEnsembleClassifier(random_state=42)
>>> eec.fit(X_train, y_train) # doctest: +ELLIPSIS
EasyEnsembleClassifier(...)
>>> y_pred = eec.predict(X_test)
>>> print(confusion_matrix(y_test, y_pred))
[[ 23   0]
 [  2 225]]
Attributes:
base_estimator_ : estimator

The base estimator from which the ensemble is grown.

estimators_ : list of estimators

The collection of fitted base estimators.

classes_ : array, shape (n_classes,)

The class labels.

n_classes_ : int or list

The number of classes.

__init__(n_estimators=10, base_estimator=None, warm_start=False, sampling_strategy='auto', replacement=False, n_jobs=1, random_state=None, verbose=0)[source]

Initialize self. See help(type(self)) for accurate signature.

decision_function(X)[source]

Average of the decision functions of the base classifiers.

Parameters:
X : {array-like, sparse matrix} of shape = [n_samples, n_features]

The input samples. Sparse matrices are accepted only if they are supported by the base estimator.

Returns:
score : array, shape = [n_samples, k]

The decision function of the input samples. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. Regression and binary classification are special cases with k == 1, otherwise k == n_classes.
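
A short usage sketch, assuming the fitted eec and the X_test split from the Examples section:

>>> scores = eec.decision_function(X_test)  # one row per sample in X_test
>>> # columns (if more than one) follow the order of eec.classes_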

estimators_samples_

The subset of drawn samples for each base estimator.

Returns a dynamically generated list of indices identifying the samples used for fitting each member of the ensemble, i.e., the in-bag samples.

Note: the list is re-created at each call to the property in order to reduce the object memory footprint by not storing the sampling data. Thus fetching the property may be slower than expected.
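
For instance, to check the size of each in-bag subset (a sketch assuming a fitted eec; with under-sampling each subset should be balanced):

>>> in_bag_sizes = [len(idx) for idx in eec.estimators_samples_]
>>> # one entry per fitted member, i.e. len(eec.estimators_) entries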

fit(X, y)[source]

Build a bagging ensemble of AdaBoost classifiers trained on balanced bootstrap samples generated by random under-sampling.

Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)

The training input samples.

y : array-like, shape (n_samples,)

The target values.

Returns:
self : object

Returns self.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

predict(X)[source]

Predict class for X.

The predicted class of an input sample is computed as the class with the highest mean predicted probability. If base estimators do not implement a predict_proba method, then it resorts to voting.

Parameters:
X : {array-like, sparse matrix} of shape = [n_samples, n_features]

The input samples. Sparse matrices are accepted only if they are supported by the base estimator.

Returns:
y : array of shape = [n_samples]

The predicted classes.

predict_log_proba(X)[source]

Predict class log-probabilities for X.

The predicted class log-probabilities of an input sample are computed as the log of the mean predicted class probabilities of the base estimators in the ensemble.

Parameters:
X : {array-like, sparse matrix} of shape = [n_samples, n_features]

The input samples. Sparse matrices are accepted only if they are supported by the base estimator.

Returns:
p : array of shape = [n_samples, n_classes]

The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.

predict_proba(X)[source]

Predict class probabilities for X.

The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the base estimators in the ensemble. If base estimators do not implement a predict_proba method, then it resorts to voting and the predicted class probabilities of an input sample represent the proportion of estimators predicting each class.

Parameters:
X : {array-like, sparse matrix} of shape = [n_samples, n_features]

The input samples. Sparse matrices are accepted only if they are supported by the base estimator.

Returns:
p : array of shape = [n_samples, n_classes]

The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
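
A brief sketch, assuming the fitted eec and X_test from the Examples section:

>>> proba = eec.predict_proba(X_test)  # shape (n_samples, n_classes)
>>> # proba[:, i] is the estimated probability of class eec.classes_[i];
>>> # each row sums to one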

score(X, y, sample_weight=None)[source]

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires each label set to be correctly predicted for each sample.

Parameters:
X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:
score : float

Mean accuracy of self.predict(X) with respect to y.
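
For example, with the test split from the Examples section:

>>> acc = eec.score(X_test, y_test)
>>> # same value as sklearn.metrics.accuracy_score(y_test, eec.predict(X_test))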

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
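
For instance, the inner AdaBoost learner can be tuned through the base_estimator__ prefix; a sketch (nested access requires base_estimator to be an actual estimator instance rather than None):

>>> from sklearn.ensemble import AdaBoostClassifier
>>> from imblearn.ensemble import EasyEnsembleClassifier
>>> eec = EasyEnsembleClassifier(base_estimator=AdaBoostClassifier())
>>> eec = eec.set_params(n_estimators=20, base_estimator__n_estimators=5)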

Returns:
self
