The ML.STANDARD_SCALER function

This document describes the ML.STANDARD_SCALER function, which lets you scale a numerical expression by using z-scores.

When used in the TRANSFORM clause, the standard deviation and mean values calculated to standardize the expression are automatically used in prediction.
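The behavior described above can be sketched outside of BigQuery as well. The following minimal Python class is illustrative only (BigQuery ML handles this automatically inside the TRANSFORM clause): statistics computed at training time are stored and reused unchanged when scaling prediction inputs.

```python
import math

class StandardScaler:
    """Illustrative sketch of TRANSFORM-clause scaling: the mean and
    sample standard deviation computed at training time are stored
    and reused, unchanged, for prediction inputs."""

    def fit(self, values):
        n = len(values)
        self.mean = sum(values) / n
        # Sample standard deviation (n - 1 denominator).
        self.std = math.sqrt(
            sum((v - self.mean) ** 2 for v in values) / (n - 1)
        )
        return self

    def transform(self, values):
        # Uses the stored training-time statistics, not the
        # statistics of the new values.
        return [(v - self.mean) / self.std for v in values]

scaler = StandardScaler().fit([1, 2, 3, 4, 5])
print(scaler.transform([3, 6]))
```

Note that `transform` deliberately does not recompute the mean or standard deviation; this mirrors how the values calculated during model training are applied at prediction time.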

You can use this function with models that support manual feature preprocessing. For more information, see the BigQuery ML documentation on feature preprocessing.

Syntax

ML.STANDARD_SCALER(numerical_expression) OVER()

Arguments

ML.STANDARD_SCALER takes the following argument:

  • numerical_expression: the numerical expression to scale.

Output

ML.STANDARD_SCALER returns a FLOAT64 value that represents the scaled numerical expression.

Example

The following example scales a set of numerical expressions to have a mean of 0 and a standard deviation of 1:

SELECT f, ML.STANDARD_SCALER(f) OVER() AS output
FROM UNNEST([1, 2, 3, 4, 5]) AS f;

The output looks similar to the following:

+---+---------------------+
| f |       output        |
+---+---------------------+
| 1 | -1.2649110640673518 |
| 5 |  1.2649110640673518 |
| 2 | -0.6324555320336759 |
| 4 |  0.6324555320336759 |
| 3 |                 0.0 |
+---+---------------------+
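The values in the output above follow the standard z-score formula, (x - mean) / stddev, where the standard deviation is the sample standard deviation (n - 1 denominator). As a sketch, the same arithmetic can be reproduced in plain Python (illustrative only; it is not how BigQuery computes the function server-side):

```python
import math

def standard_scale(values):
    """Scale values to z-scores using the mean and the sample
    standard deviation, reproducing the example output above."""
    n = len(values)
    mean = sum(values) / n
    # Sample standard deviation (n - 1 denominator), which matches
    # the values shown in the example output.
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [(v - mean) / std for v in values]

print(standard_scale([1, 2, 3, 4, 5]))
```

For the input [1, 2, 3, 4, 5], the mean is 3 and the sample standard deviation is sqrt(2.5) ≈ 1.5811, so for example (1 - 3) / 1.5811... = -1.2649110640673518, matching the first row of the table.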

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-11-24 UTC.