Normalizer#

class sklearn.preprocessing.Normalizer(norm='l2', *, copy=True)[source]#

Normalize samples individually to unit norm.

Each sample (i.e. each row of the data matrix) with at least one non-zero component is rescaled independently of other samples so that its norm (l1, l2 or inf) equals one.

This transformer is able to work both with dense numpy arrays and scipy.sparse matrices (use CSR format if you want to avoid the burden of a copy / conversion).

Scaling inputs to unit norms is a common operation for text classification or clustering. For instance, the dot product of two l2-normalized TF-IDF vectors is the cosine similarity of the vectors and is the base similarity metric for the Vector Space Model commonly used by the Information Retrieval community.
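As a sketch of that relationship: after l2 normalization, the dot product of two rows equals the cosine similarity of the original rows (the small vectors below are made up to stand in for TF-IDF rows).

```python
import numpy as np
from sklearn.preprocessing import Normalizer

# Two raw feature vectors; stand-ins for TF-IDF rows.
X = np.array([[1.0, 2.0, 2.0],
              [2.0, 0.0, 1.0]])

# Rescale each row to unit l2 norm.
X_unit = Normalizer(norm="l2").fit_transform(X)

# Dot product of the normalized rows ...
dot = float(X_unit[0] @ X_unit[1])

# ... equals the cosine similarity of the raw rows.
cosine = float(X[0] @ X[1]) / (np.linalg.norm(X[0]) * np.linalg.norm(X[1]))
print(np.isclose(dot, cosine))  # True
```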

For an example visualization, refer to Compare Normalizer with other scalers.

Read more in the User Guide.

Parameters:
norm : {'l1', 'l2', 'max'}, default='l2'

The norm to use to normalize each non-zero sample. If norm='max' is used, values will be rescaled by the maximum of the absolute values.

copy : bool, default=True

Set to False to perform inplace row normalization and avoid a copy (if the input is already a numpy array or a scipy.sparse CSR matrix).
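To make the three norm options concrete, here is a minimal sketch (the single row [3, -4] is made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import Normalizer

x = np.array([[3.0, -4.0]])

# 'l1' divides by |3| + |-4| = 7, 'l2' by sqrt(3**2 + 4**2) = 5,
# and 'max' by max(|3|, |-4|) = 4.
results = {norm: Normalizer(norm=norm).fit_transform(x)[0]
           for norm in ("l1", "l2", "max")}
print(results["l2"])  # [ 0.6 -0.8]
```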

Attributes:
n_features_in_ : int

Number of features seen during fit.

Added in version 0.24.

feature_names_in_ : ndarray of shape (n_features_in_,)

Names of features seen during fit. Defined only when X has feature names that are all strings.

Added in version 1.0.

See also

normalize

Equivalent function without the estimator API.

Notes

This estimator is stateless and does not need to be fitted. However, we recommend calling fit_transform instead of transform, as parameter validation is only performed in fit.
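A minimal illustration of that statelessness (the toy row below is made up): fit learns nothing from the data, so fit_transform can safely be called on any batch on its own.

```python
import numpy as np
from sklearn.preprocessing import Normalizer

X = np.array([[4.0, 3.0]])

# fit only validates parameters; no per-feature statistics are stored,
# so each batch is normalized independently of any other batch.
out = Normalizer().fit_transform(X)
print(out)  # [[0.8 0.6]]
```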

Examples

>>> from sklearn.preprocessing import Normalizer
>>> X = [[4, 1, 2, 2],
...      [1, 3, 9, 3],
...      [5, 7, 5, 1]]
>>> transformer = Normalizer().fit(X)  # fit does nothing.
>>> transformer
Normalizer()
>>> transformer.transform(X)
array([[0.8, 0.2, 0.4, 0.4],
       [0.1, 0.3, 0.9, 0.3],
       [0.5, 0.7, 0.5, 0.1]])
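As a sketch of the sparse-input path mentioned above, CSR input avoids a format-conversion copy, and copy=False additionally normalizes the rows in place (data reused from the example):

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.preprocessing import Normalizer

X = csr_matrix([[4.0, 1.0, 2.0, 2.0],
                [1.0, 3.0, 9.0, 3.0]])

# CSR stays CSR throughout; no densification or format conversion occurs.
X_out = Normalizer(copy=False).fit_transform(X)
print(X_out.toarray())
```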
fit(X, y=None)[source]#

Only validates estimator’s parameters.

This method (i) validates the estimator's parameters and (ii) keeps the estimator consistent with the scikit-learn transformer API.

Parameters:
X : {array-like, sparse matrix} of shape (n_samples, n_features)

The data to estimate the normalization parameters.

y : Ignored

Not used, present here for API consistency by convention.

Returns:
self : object

Fitted transformer.

fit_transform(X, y=None, **fit_params)[source]#

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
X : array-like of shape (n_samples, n_features)

Input samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params : dict

Additional fit parameters.

Returns:
X_new : ndarray of shape (n_samples, n_features_new)

Transformed array.

get_feature_names_out(input_features=None)[source]#

Get output feature names for transformation.

Parameters:
input_features : array-like of str or None, default=None

Input features.

  • If input_features is None, then feature_names_in_ is used as the input feature names. If feature_names_in_ is not defined, then the following input feature names are generated: ["x0", "x1", ..., "x(n_features_in_-1)"].

  • If input_features is an array-like, then input_features must match feature_names_in_ if feature_names_in_ is defined.

Returns:
feature_names_out : ndarray of str objects

Same as input features.

get_metadata_routing()[source]#

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

set_output(*, transform=None)[source]#

Set output container.

See Introducing the set_output API for an example on how to use the API.

Parameters:
transform : {"default", "pandas", "polars"}, default=None

Configure output of transform and fit_transform.

  • "default": Default output format of a transformer

  • "pandas": DataFrame output

  • "polars": Polars output

  • None: Transform configuration is unchanged

Added in version 1.4: "polars" option was added.

Returns:
self : estimator instance

Estimator instance.

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

set_transform_request(*, copy: bool | None | str = '$UNCHANGED$') → Normalizer[source]#

Configure whether metadata should be requested to be passed to the transform method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters:
copy : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for the copy parameter in transform.

Returns:
selfobject

The updated object.

transform(X, copy=None)[source]#

Scale each non-zero row of X to unit norm.

Parameters:
X : {array-like, sparse matrix} of shape (n_samples, n_features)

The data to normalize, row by row. scipy.sparse matrices should be in CSR format to avoid an unnecessary copy.

copy : bool, default=None

Copy the input X or not. If None, the value of the copy constructor parameter is used.

Returns:
X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features)

Transformed array.

Gallery examples#

Scalable learning with polynomial kernel approximation

Compare the effect of different scalers on data with outliers

Clustering text documents using k-means