Pipeline#
- class sklearn.pipeline.Pipeline(steps, *, transform_input=None, memory=None, verbose=False)[source]#
A sequence of data transformers with an optional final predictor.
Pipeline allows you to sequentially apply a list of transformers to preprocess the data and, if desired, conclude the sequence with a final predictor for predictive modeling.
Intermediate steps of the pipeline must be transformers, that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the memory argument.
The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__', as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to 'passthrough' or None.
For an example use case of Pipeline combined with GridSearchCV, refer to Selecting dimensionality reduction with Pipeline and GridSearchCV. The example Pipelining: chaining a PCA and a logistic regression shows how to grid search on a pipeline using '__' as a separator in the parameter names.
Read more in the User Guide.
Added in version 0.5.
- Parameters:
- steps : list of tuples
List of (name of step, estimator) tuples that are to be chained in sequential order. To be compatible with the scikit-learn API, all steps must define fit. All non-last steps must also define transform. See Combining Estimators for more details.
- transform_input : list of str, default=None
The names of the metadata parameters that should be transformed by the pipeline before being passed to the step that consumes them.
This enables input arguments to fit (other than X) to be transformed by the steps of the pipeline up to the step which requires them. The requirement is defined via metadata routing. For instance, this can be used to pass a validation set through the pipeline.
You can only set this if metadata routing is enabled, which you can enable using sklearn.set_config(enable_metadata_routing=True).
Added in version 1.6.
- memory : str or object with the joblib.Memory interface, default=None
Used to cache the fitted transformers of the pipeline. The last step will never be cached, even if it is a transformer. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute named_steps or steps to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming. See Caching nearest neighbors for an example on how to enable caching.
- verbose : bool, default=False
If True, the time elapsed while fitting each step will be printed as it is completed.
- Attributes:
named_steps : Bunch
Access the steps by name.
classes_ : ndarray of shape (n_classes,)
The class labels.
n_features_in_ : int
Number of features seen during the first step's fit method.
feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during the first step's fit method.
See also
make_pipeline : Convenience function for simplified pipeline construction.
Examples
>>> from sklearn.svm import SVC
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.pipeline import Pipeline
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
...                                                     random_state=0)
>>> pipe = Pipeline([('scaler', StandardScaler()), ('svc', SVC())])
>>> # The pipeline can be used as any other estimator
>>> # and avoids leaking the test set into the train set
>>> pipe.fit(X_train, y_train).score(X_test, y_test)
0.88
>>> # An estimator's parameter can be set using '__' syntax
>>> pipe.set_params(svc__C=10).fit(X_train, y_train).score(X_test, y_test)
0.76
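As a complement to the example above, here is a hedged sketch (not part of the original docstring) of disabling a step with 'passthrough' and caching fitted transformers via memory; the clone import, the temporary cache directory, and the no_scale/cached_pipe names are illustrative additions, and pipe, X_train and y_train are reused from the example.
>>> import tempfile
>>> from sklearn.base import clone
>>> # Replace the scaler with 'passthrough' on a cloned pipeline
>>> no_scale = clone(pipe).set_params(scaler='passthrough')
>>> _ = no_scale.fit(X_train, y_train)
>>> # Cache fitted transformers in a temporary directory
>>> cached_pipe = Pipeline([('scaler', StandardScaler()), ('svc', SVC())],
...                        memory=tempfile.mkdtemp())
>>> _ = cached_pipe.fit(X_train, y_train)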
- decision_function(X, **params)[source]#
Transform the data, and apply decision_function with the final estimator.
Call transform of each transformer in the pipeline. The transformed data are finally passed to the final estimator, whose decision_function method is then called. Only valid if the final estimator implements decision_function.
- Parameters:
- X : iterable
Data to predict on. Must fulfill input requirements of the first step of the pipeline.
- **params : dict of str -> object
Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them.
Added in version 1.4: Only available if enable_metadata_routing=True. See Metadata Routing User Guide for more details.
- Returns:
- y_score : ndarray of shape (n_samples, n_classes)
Result of calling decision_function on the final estimator.
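A minimal illustrative sketch (an addition to the docstring), reusing pipe, X_train, y_train and X_test from the Examples section; SVC exposes decision_function.
>>> _ = pipe.fit(X_train, y_train)
>>> scores = pipe.decision_function(X_test)   # one decision value per test sample for a binary SVC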
- fit(X, y=None, **params)[source]#
Fit the model.
Fit all the transformers one after the other and sequentially transform the data. Finally, fit the transformed data using the final estimator.
- Parameters:
- X : iterable
Training data. Must fulfill input requirements of the first step of the pipeline.
- y : iterable, default=None
Training targets. Must fulfill label requirements for all steps of the pipeline.
- **params : dict of str -> object
If enable_metadata_routing=False (default): Parameters passed to the fit method of each step, where each parameter name is prefixed such that parameter p for step s has key s__p.
If enable_metadata_routing=True: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them.
Changed in version 1.4: Parameters are now passed to the transform method of the intermediate steps as well, if requested, and if enable_metadata_routing=True is set via set_config.
See Metadata Routing User Guide for more details.
- Returns:
- self : object
Pipeline with fitted steps.
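A hedged sketch of the default (routing disabled) s__p convention, reusing pipe, X_train and y_train from the Examples section; the uniform weights are purely illustrative.
>>> import numpy as np
>>> w = np.ones(len(y_train))                              # illustrative sample weights
>>> _ = pipe.fit(X_train, y_train, svc__sample_weight=w)   # forwarded to SVC.fit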
- fit_predict(X, y=None, **params)[source]#
Transform the data, and apply fit_predict with the final estimator.
Call fit_transform of each transformer in the pipeline. The transformed data are finally passed to the final estimator, whose fit_predict method is then called. Only valid if the final estimator implements fit_predict.
- Parameters:
- X : iterable
Training data. Must fulfill input requirements of the first step of the pipeline.
- y : iterable, default=None
Training targets. Must fulfill label requirements for all steps of the pipeline.
- **params : dict of str -> object
If enable_metadata_routing=False (default): Parameters passed to the predict call at the end of all transformations in the pipeline.
If enable_metadata_routing=True: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them.
Added in version 0.20.
Changed in version 1.4: Parameters are now passed to the transform method of the intermediate steps as well, if requested, and if enable_metadata_routing=True.
See Metadata Routing User Guide for more details.
Note that while this may be used to return uncertainties from some models with return_std or return_cov, uncertainties that are generated by the transformations in the pipeline are not propagated to the final estimator.
- Returns:
- y_pred : ndarray
Result of calling fit_predict on the final estimator.
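An illustrative sketch (added here, not from the original docstring) with a clusterer as the final step, since KMeans implements fit_predict; X_train is assumed from the Examples section.
>>> from sklearn.cluster import KMeans
>>> clusterer = Pipeline([('scaler', StandardScaler()),
...                       ('kmeans', KMeans(n_clusters=2, n_init=10, random_state=0))])
>>> labels = clusterer.fit_predict(X_train)   # one cluster label per sample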
- fit_transform(X, y=None, **params)[source]#
Fit the model and transform with the final estimator.
Fit all the transformers one after the other and sequentially transform the data. Only valid if the final estimator either implements fit_transform or fit and transform.
- Parameters:
- X : iterable
Training data. Must fulfill input requirements of the first step of the pipeline.
- y : iterable, default=None
Training targets. Must fulfill label requirements for all steps of the pipeline.
- **params : dict of str -> object
If enable_metadata_routing=False (default): Parameters passed to the fit method of each step, where each parameter name is prefixed such that parameter p for step s has key s__p.
If enable_metadata_routing=True: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them.
Changed in version 1.4: Parameters are now passed to the transform method of the intermediate steps as well, if requested, and if enable_metadata_routing=True.
See Metadata Routing User Guide for more details.
- Returns:
- Xt : ndarray of shape (n_samples, n_transformed_features)
Transformed samples.
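A minimal sketch (an addition to the docstring) with PCA as an illustrative final transformer; X_train is assumed from the Examples section.
>>> from sklearn.decomposition import PCA
>>> reducer = Pipeline([('scaler', StandardScaler()), ('pca', PCA(n_components=2))])
>>> X_2d = reducer.fit_transform(X_train)   # shape (n_samples, 2)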
- get_feature_names_out(input_features=None)[source]#
Get output feature names for transformation.
Transform input features using the pipeline.
- Parameters:
- input_features : array-like of str or None, default=None
Input features.
- Returns:
- feature_names_out : ndarray of str objects
Transformed feature names.
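A hedged sketch (added here; the two-feature slice, the names 'a' and 'b', and PolynomialFeatures are illustrative choices), with X_train assumed from the Examples section.
>>> from sklearn.preprocessing import PolynomialFeatures
>>> feats = Pipeline([('scaler', StandardScaler()),
...                   ('poly', PolynomialFeatures(degree=2, include_bias=False))])
>>> _ = feats.fit(X_train[:, :2])
>>> names = feats.get_feature_names_out(['a', 'b'])   # e.g. ['a', 'b', 'a^2', 'a b', 'b^2']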
- get_metadata_routing()[source]#
Get metadata routing of this object.
Please checkUser Guide on how the routingmechanism works.
- Returns:
- routing : MetadataRouter
A MetadataRouter encapsulating routing information.
- get_params(deep=True)[source]#
Get parameters for this estimator.
Returns the parameters given in the constructor as well as the estimators contained within the steps of the Pipeline.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : mapping of string to any
Parameter names mapped to their values.
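A small illustrative check (added here), assuming pipe from the Examples section.
>>> params = pipe.get_params()
>>> assert 'svc__C' in params   # nested parameters use '<step>__<param>' keys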
- inverse_transform(X, **params)[source]#
Apply inverse_transform for each step in reverse order.
All estimators in the pipeline must support inverse_transform.
- Parameters:
- X : array-like of shape (n_samples, n_transformed_features)
Data samples, where n_samples is the number of samples and n_features is the number of features. Must fulfill input requirements of the last step of the pipeline's inverse_transform method.
- **params : dict of str -> object
Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them.
Added in version 1.4: Only available if enable_metadata_routing=True. See Metadata Routing User Guide for more details.
- Returns:
- X_original : ndarray of shape (n_samples, n_features)
Inverse transformed data, that is, data in the original feature space.
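A minimal sketch (an addition) with two invertible scalers; MinMaxScaler is an illustrative second step and X_train is reused from the Examples section.
>>> from sklearn.preprocessing import MinMaxScaler
>>> scalers = Pipeline([('std', StandardScaler()), ('minmax', MinMaxScaler())])
>>> Xt = scalers.fit_transform(X_train)
>>> X_back = scalers.inverse_transform(Xt)   # back in the original feature space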
- property named_steps#
Access the steps by name.
Read-only attribute to access any step by its given name. Keys are step names and values are the step objects.
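A small illustrative check (added here), assuming pipe from the Examples section.
>>> scaler = pipe.named_steps['scaler']   # dict-style access
>>> same = pipe.named_steps.scaler        # attribute-style access via the Bunch
>>> assert scaler is same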
- predict(X, **params)[source]#
Transform the data, and apply predict with the final estimator.
Call transform of each transformer in the pipeline. The transformed data are finally passed to the final estimator, whose predict method is then called. Only valid if the final estimator implements predict.
- Parameters:
- X : iterable
Data to predict on. Must fulfill input requirements of the first step of the pipeline.
- **params : dict of str -> object
If enable_metadata_routing=False (default): Parameters passed to the predict call at the end of all transformations in the pipeline.
If enable_metadata_routing=True: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them.
Added in version 0.20.
Changed in version 1.4: Parameters are now passed to the transform method of the intermediate steps as well, if requested, and if enable_metadata_routing=True is set via set_config.
See Metadata Routing User Guide for more details.
Note that while this may be used to return uncertainties from some models with return_std or return_cov, uncertainties that are generated by the transformations in the pipeline are not propagated to the final estimator.
- Returns:
- y_pred : ndarray
Result of calling predict on the final estimator.
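A hedged sketch of the return_std note above (an addition to the docstring), using GaussianProcessRegressor, whose predict accepts return_std; the regression data and names are illustrative, and routing is left at its default so **params go to the final estimator's predict.
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.datasets import make_regression
>>> Xr, yr = make_regression(n_samples=50, n_features=3, random_state=0)
>>> gpr_pipe = Pipeline([('scaler', StandardScaler()),
...                      ('gpr', GaussianProcessRegressor(random_state=0))])
>>> _ = gpr_pipe.fit(Xr, yr)
>>> y_mean, y_std = gpr_pipe.predict(Xr, return_std=True)   # return_std is forwarded to the final step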
- predict_log_proba(X, **params)[source]#
Transform the data, and apply predict_log_proba with the final estimator.
Call transform of each transformer in the pipeline. The transformed data are finally passed to the final estimator, whose predict_log_proba method is then called. Only valid if the final estimator implements predict_log_proba.
- Parameters:
- X : iterable
Data to predict on. Must fulfill input requirements of the first step of the pipeline.
- **params : dict of str -> object
If enable_metadata_routing=False (default): Parameters passed to the predict_log_proba call at the end of all transformations in the pipeline.
If enable_metadata_routing=True: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them.
Added in version 0.20.
Changed in version 1.4: Parameters are now passed to the transform method of the intermediate steps as well, if requested, and if enable_metadata_routing=True.
See Metadata Routing User Guide for more details.
- Returns:
- y_log_proba : ndarray of shape (n_samples, n_classes)
Result of calling predict_log_proba on the final estimator.
- predict_proba(X, **params)[source]#
Transform the data, and apply predict_proba with the final estimator.
Call transform of each transformer in the pipeline. The transformed data are finally passed to the final estimator, whose predict_proba method is then called. Only valid if the final estimator implements predict_proba.
- Parameters:
- X : iterable
Data to predict on. Must fulfill input requirements of the first step of the pipeline.
- **params : dict of str -> object
If enable_metadata_routing=False (default): Parameters passed to the predict_proba call at the end of all transformations in the pipeline.
If enable_metadata_routing=True: Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them.
Added in version 0.20.
Changed in version 1.4: Parameters are now passed to the transform method of the intermediate steps as well, if requested, and if enable_metadata_routing=True.
See Metadata Routing User Guide for more details.
- Returns:
- y_proba : ndarray of shape (n_samples, n_classes)
Result of calling predict_proba on the final estimator.
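A minimal sketch (added here) covering both predict_proba and predict_log_proba above, with LogisticRegression as an illustrative probabilistic classifier; X_train, y_train and X_test are assumed from the Examples section.
>>> from sklearn.linear_model import LogisticRegression
>>> clf = Pipeline([('scaler', StandardScaler()), ('lr', LogisticRegression())])
>>> _ = clf.fit(X_train, y_train)
>>> proba = clf.predict_proba(X_test)           # shape (n_samples, n_classes)
>>> log_proba = clf.predict_log_proba(X_test)   # natural log of the probabilities above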
- score(X, y=None, sample_weight=None, **params)[source]#
Transform the data, and apply score with the final estimator.
Call transform of each transformer in the pipeline. The transformed data are finally passed to the final estimator, whose score method is then called. Only valid if the final estimator implements score.
- Parameters:
- X : iterable
Data to predict on. Must fulfill input requirements of the first step of the pipeline.
- y : iterable, default=None
Targets used for scoring. Must fulfill label requirements for all steps of the pipeline.
- sample_weight : array-like, default=None
If not None, this argument is passed as the sample_weight keyword argument to the score method of the final estimator.
- **params : dict of str -> object
Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them.
Added in version 1.4: Only available if enable_metadata_routing=True. See Metadata Routing User Guide for more details.
- Returns:
- score : float
Result of calling score on the final estimator.
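A minimal sketch (an addition), assuming the fitted pipe, X_test and y_test from the Examples section; the uniform weights are illustrative.
>>> import numpy as np
>>> w_test = np.ones(len(y_test))                           # illustrative weights
>>> acc = pipe.score(X_test, y_test, sample_weight=w_test)  # forwarded to the final estimator's score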
- score_samples(X)[source]#
Transform the data, and apply score_samples with the final estimator.
Call transform of each transformer in the pipeline. The transformed data are finally passed to the final estimator, whose score_samples method is then called. Only valid if the final estimator implements score_samples.
- Parameters:
- X : iterable
Data to predict on. Must fulfill input requirements of the first step of the pipeline.
- Returns:
- y_score : ndarray of shape (n_samples,)
Result of calling score_samples on the final estimator.
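An illustrative sketch (added here) with IsolationForest, which implements score_samples, as the final step; X_train is assumed from the Examples section.
>>> from sklearn.ensemble import IsolationForest
>>> detector = Pipeline([('scaler', StandardScaler()),
...                      ('iforest', IsolationForest(random_state=0))])
>>> _ = detector.fit(X_train)
>>> anomaly_scores = detector.score_samples(X_train)   # lower values are more anomalous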
- set_output(*, transform=None)[source]#
Set the output container when "transform" and "fit_transform" are called.
Calling set_output will set the output of all estimators in steps.
- Parameters:
- transform : {"default", "pandas", "polars"}, default=None
Configure output of transform and fit_transform.
"default": Default output format of a transformer
"pandas": DataFrame output
"polars": Polars output
None: Transform configuration is unchanged
Added in version 1.4: "polars" option was added.
- Returns:
- self : estimator instance
Estimator instance.
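A minimal sketch (added here; it assumes pandas is installed and reuses X_train from the Examples section, with PCA as an illustrative final transformer).
>>> from sklearn.decomposition import PCA
>>> df_pipe = Pipeline([('scaler', StandardScaler()), ('pca', PCA(n_components=2))])
>>> _ = df_pipe.set_output(transform="pandas")
>>> frame = df_pipe.fit_transform(X_train)   # now a pandas DataFrame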
- set_params(**kwargs)[source]#
Set the parameters of this estimator.
Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in steps.
- Parameters:
- **kwargs : dict
Parameters of this estimator or parameters of estimators contained in steps. Parameters of the steps may be set using their name and the parameter name separated by a '__'.
- Returns:
- self : object
Pipeline class instance.
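A small illustrative sketch (an addition), assuming pipe from the Examples section.
>>> _ = pipe.set_params(svc__C=1.0)               # tune a nested parameter
>>> _ = pipe.set_params(scaler=StandardScaler())  # or replace an entire step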
- set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → Pipeline[source]#
Configure whether metadata should be requested to be passed to the score method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to score.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
- Parameters:
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.
- Returns:
- self : object
The updated object.
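A hedged sketch (an addition): routing must be enabled before the request can be set, and the temporary toggling of set_config shown here is purely illustrative; pipe is assumed from the Examples section.
>>> from sklearn import set_config
>>> set_config(enable_metadata_routing=True)    # required before requesting metadata
>>> _ = pipe.set_score_request(sample_weight=True)
>>> set_config(enable_metadata_routing=False)   # restore the default afterwards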
- transform(X, **params)[source]#
Transform the data, and apply transform with the final estimator.
Call transform of each transformer in the pipeline. The transformed data are finally passed to the final estimator, whose transform method is then called. Only valid if the final estimator implements transform.
This also works when the final estimator is None, in which case all prior transformations are applied.
- Parameters:
- X : iterable
Data to transform. Must fulfill input requirements of the first step of the pipeline.
- **params : dict of str -> object
Parameters requested and accepted by steps. Each step must have requested certain metadata for these parameters to be forwarded to them.
Added in version 1.4: Only available if enable_metadata_routing=True. See Metadata Routing User Guide for more details.
- Returns:
- Xt : ndarray of shape (n_samples, n_transformed_features)
Transformed data.
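A minimal sketch (added here) of an all-transformer pipeline, with PCA as an illustrative second step; X_train and X_test are assumed from the Examples section.
>>> from sklearn.decomposition import PCA
>>> transformers_only = Pipeline([('scaler', StandardScaler()),
...                               ('pca', PCA(n_components=2))])
>>> _ = transformers_only.fit(X_train)
>>> Xt = transformers_only.transform(X_test)   # shape (n_samples, 2)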
Gallery examples#
Column Transformer with Heterogeneous Data Sources
Selecting dimensionality reduction with Pipeline and GridSearchCV
Pipelining: chaining a PCA and a logistic regression
Permutation Importance vs Random Forest Feature Importance (MDI)
Explicit feature map approximation for RBF kernels
Balance model complexity and cross-validated score
Sample pipeline for text feature extraction and evaluation
Comparing Nearest Neighbors with and without Neighborhood Components Analysis
Restricted Boltzmann Machine features for digit classification