MaxAbsScaler#

class sklearn.preprocessing.MaxAbsScaler(*, copy=True)[source]#

Scale each feature by its maximum absolute value.

This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity.

This scaler can also be applied to sparse CSR or CSC matrices.

MaxAbsScaler doesn’t reduce the effect of outliers; it only linearly scales them down. For an example visualization, refer to Compare MaxAbsScaler with other scalers.
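As a sketch of what the estimator computes (not taken from the scikit-learn docs), max-abs scaling is equivalent to dividing each column by its maximum absolute value, and sparse input stays sparse:

```python
import numpy as np
from scipy import sparse
from sklearn.preprocessing import MaxAbsScaler

X = np.array([[1.0, -2.0], [3.0, 4.0], [0.0, -8.0]])

# MaxAbsScaler divides each column by that column's maximum absolute value.
scaler = MaxAbsScaler()
X_scaled = scaler.fit_transform(X)
manual = X / np.abs(X).max(axis=0)
print(np.allclose(X_scaled, manual))  # True

# Sparse input stays sparse: zero entries are left untouched.
X_sparse = sparse.csr_matrix(X)
X_sparse_scaled = scaler.fit_transform(X_sparse)
print(sparse.issparse(X_sparse_scaled))  # True
```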

Added in version 0.17.

Parameters:
copy bool, default=True

Set to False to perform inplace scaling and avoid a copy (if the input is already a numpy array).
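A small sketch of the copy parameter: with copy=False and input that is already a suitable float NumPy array, the transform can be done in place, so the output shares memory with the input (this is an illustration, not a guarantee for every input type):

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler

X = np.array([[1.0, -4.0], [2.0, 2.0]])
scaled = MaxAbsScaler(copy=False).fit_transform(X)

# Column max abs values are 2 and 4, so the result is [[0.5, -1.], [1., 0.5]].
print(scaled)
# When no copy was needed, the result aliases the input buffer.
print(np.shares_memory(scaled, X))
```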

Attributes:
scale_ ndarray of shape (n_features,)

Per feature relative scaling of the data.

Added in version 0.17: scale_ attribute.

max_abs_ ndarray of shape (n_features,)

Per feature maximum absolute value.

n_features_in_ int

Number of features seen during fit.

Added in version 0.24.

feature_names_in_ ndarray of shape (n_features_in_,)

Names of features seen during fit. Defined only when X has feature names that are all strings.

Added in version 1.0.

n_samples_seen_ int

The number of samples processed by the estimator. Will be reset on new calls to fit, but increments across partial_fit calls.
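The counting behavior described above can be sketched as follows (illustrative values, not from the docs):

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler

scaler = MaxAbsScaler()
scaler.partial_fit(np.array([[1.0], [-3.0]]))
scaler.partial_fit(np.array([[5.0]]))
print(scaler.n_samples_seen_)  # 3 -- accumulates across partial_fit calls
print(scaler.max_abs_)         # [5.]

# A new call to fit resets the running state.
scaler.fit(np.array([[2.0], [-4.0]]))
print(scaler.n_samples_seen_)  # 2
```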

See also

maxabs_scale

Equivalent function without the estimator API.

Notes

NaNs are treated as missing values: disregarded in fit, and maintained in transform.
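A short sketch of this NaN behavior (illustrative data, not from the docs):

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler

X = np.array([[1.0, np.nan], [-4.0, 2.0]])
scaler = MaxAbsScaler().fit(X)

# The NaN is ignored when computing the per-column maximum absolute value.
print(scaler.max_abs_)      # [4. 2.]
# The NaN is preserved in the transformed output.
print(scaler.transform(X))
```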

Examples

>>> from sklearn.preprocessing import MaxAbsScaler
>>> X = [[ 1., -1.,  2.],
...      [ 2.,  0.,  0.],
...      [ 0.,  1., -1.]]
>>> transformer = MaxAbsScaler().fit(X)
>>> transformer
MaxAbsScaler()
>>> transformer.transform(X)
array([[ 0.5, -1. ,  1. ],
       [ 1. ,  0. ,  0. ],
       [ 0. ,  1. , -0.5]])
fit(X, y=None)[source]#

Compute the maximum absolute value to be used for later scaling.

Parameters:
X {array-like, sparse matrix} of shape (n_samples, n_features)

The data used to compute the per-feature maximum absolute value used for later scaling along the features axis.

y None

Ignored.

Returns:
self object

Fitted scaler.

fit_transform(X, y=None, **fit_params)[source]#

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
X array-like of shape (n_samples, n_features)

Input samples.

y array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params dict

Additional fit parameters.

Returns:
X_new ndarray of shape (n_samples, n_features_new)

Transformed array.

get_feature_names_out(input_features=None)[source]#

Get output feature names for transformation.

Parameters:
input_features array-like of str or None, default=None

Input features.

  • If input_features is None, then feature_names_in_ is used as the input feature names. If feature_names_in_ is not defined, then the following input feature names are generated: ["x0", "x1", ..., "x(n_features_in_-1)"].

  • If input_features is an array-like, then input_features must match feature_names_in_ if feature_names_in_ is defined.

Returns:
feature_names_out ndarray of str objects

Same as input features.

get_metadata_routing()[source]#

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:
deep bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params dict

Parameter names mapped to their values.

inverse_transform(X)[source]#

Scale back the data to the original representation.

Parameters:
X {array-like, sparse matrix} of shape (n_samples, n_features)

The data that should be transformed back.

Returns:
X_original {ndarray, sparse matrix} of shape (n_samples, n_features)

The data scaled back to the original representation.
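A round-trip sketch: inverse_transform exactly undoes transform on the training data (example values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler

X = np.array([[1.0, -1.0, 2.0], [2.0, 0.0, 0.0], [0.0, 1.0, -1.0]])
scaler = MaxAbsScaler().fit(X)

X_scaled = scaler.transform(X)
X_back = scaler.inverse_transform(X_scaled)
print(np.allclose(X_back, X))  # True -- the scaling is exactly undone
```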

partial_fit(X, y=None)[source]#

Online computation of max absolute value of X for later scaling.

All of X is processed as a single batch. This is intended for cases when fit is not feasible due to a very large number of n_samples or because X is read from a continuous stream.

Parameters:
X {array-like, sparse matrix} of shape (n_samples, n_features)

The data used to compute the maximum absolute value used for later scaling along the features axis.

y None

Ignored.

Returns:
self object

Fitted scaler.

set_output(*, transform=None)[source]#

Set output container.

See Introducing the set_output API for an example on how to use the API.

Parameters:
transform {"default", "pandas", "polars"}, default=None

Configure output of transform and fit_transform.

  • "default": Default output format of a transformer

  • "pandas": DataFrame output

  • "polars": Polars output

  • None: Transform configuration is unchanged

Added in version 1.4: "polars" option was added.

Returns:
selfestimator instance

Estimator instance.

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params dict

Estimator parameters.

Returns:
selfestimator instance

Estimator instance.
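The nested <component>__<parameter> form can be sketched with a one-step Pipeline (the step name "scale" is made up for illustration):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MaxAbsScaler

pipe = Pipeline([("scale", MaxAbsScaler())])

# Nested parameters are addressed as <component>__<parameter>.
pipe.set_params(scale__copy=False)
print(pipe.named_steps["scale"].copy)  # False
```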

transform(X)[source]#

Scale the data.

Parameters:
X {array-like, sparse matrix} of shape (n_samples, n_features)

The data that should be scaled.

Returns:
X_tr {ndarray, sparse matrix} of shape (n_samples, n_features)

Transformed array.

Gallery examples#

Compare the effect of different scalers on data with outliers
