make_pipeline

imblearn.pipeline.make_pipeline(*steps, memory=None, verbose=False)

Construct a Pipeline from the given estimators.

This is a shorthand for the Pipeline constructor; it does not require, and does not permit, naming the estimators. Instead, their names will be set to the lowercase of their types automatically.
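As an illustrative sketch (the sampler and classifier below are arbitrary choices), the generated step names are simply the lowercased class names:

>>> from imblearn.pipeline import make_pipeline
>>> from imblearn.under_sampling import RandomUnderSampler
>>> from sklearn.linear_model import LogisticRegression
>>> pipe = make_pipeline(RandomUnderSampler(), LogisticRegression())
>>> [name for name, _ in pipe.steps]  # names derived from the class names
['randomundersampler', 'logisticregression']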

Parameters:
*steps : list of estimators

A list of estimators.

memory : None, str or object with the joblib.Memory interface, default=None

Used to cache the fitted transformers of the pipeline. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute named_steps or steps to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming.
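A minimal caching sketch, assuming a throwaway directory created with tempfile purely for illustration:

>>> import tempfile
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.linear_model import LogisticRegression
>>> from imblearn.pipeline import make_pipeline
>>> cache_dir = tempfile.mkdtemp()  # fitted transformers will be cached here
>>> pipe = make_pipeline(StandardScaler(), LogisticRegression(), memory=cache_dir)
>>> pipe.memory == cache_dir
True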

verbose : bool, default=False

If True, the time elapsed while fitting each step will be printed as it is completed.

Returns:
p : Pipeline

Returns an imbalanced-learn Pipeline instance that handles samplers.
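For instance, a sampler can be placed directly in the pipeline; a hedged sketch on a toy imbalanced dataset (the sampler, classifier, and data are illustrative only):

>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from imblearn.over_sampling import RandomOverSampler
>>> from imblearn.pipeline import make_pipeline
>>> X, y = make_classification(weights=[0.9, 0.1], random_state=0)
>>> pipe = make_pipeline(RandomOverSampler(random_state=0), LogisticRegression())
>>> _ = pipe.fit(X, y)  # resampling happens during fit, not at predict time
>>> pipe.named_steps['randomoversampler']
RandomOverSampler(random_state=0)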

See also

imblearn.pipeline.Pipeline

Class for creating a pipeline of transforms with a final estimator.

Examples

>>> from imblearn.pipeline import make_pipeline
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.preprocessing import StandardScaler
>>> make_pipeline(StandardScaler(), GaussianNB(priors=None))
Pipeline(steps=[('standardscaler', StandardScaler()),
                ('gaussiannb', GaussianNB())])
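
The same shorthand also accepts imbalanced-learn samplers among the steps; a sketch (the chosen sampler and synthetic data are illustrative only):

>>> from sklearn.datasets import make_classification
>>> from sklearn.naive_bayes import GaussianNB
>>> from imblearn.under_sampling import RandomUnderSampler
>>> from imblearn.pipeline import make_pipeline
>>> X, y = make_classification(weights=[0.8, 0.2], random_state=42)
>>> pipe = make_pipeline(RandomUnderSampler(random_state=42), GaussianNB())
>>> pipe.fit(X, y).predict(X).shape  # end-to-end fit and predict
(100,)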

Examples using imblearn.pipeline.make_pipeline

Multiclass classification with under-sampling

Example of topic classification in text documents

Customized sampler to implement an outlier rejections estimator

Benchmark over-sampling methods in a face recognition task

Fitting model on imbalanced datasets and how to fight bias

Compare sampler combining over- and under-sampling

Evaluate classification by compiling a report

Metrics specific to imbalanced learning

Plotting Validation Curves

Compare over-sampling samplers

Usage of pipeline embedding samplers

Compare under-sampling samplers