BigQuery Explainable AI overview
This document describes how BigQuery ML supports explainable artificial intelligence (AI), sometimes called XAI.
Explainable AI helps you understand the results that your predictive machine learning model generates for classification and regression tasks by defining how each feature in a row of data contributed to the predicted result. This information is often referred to as feature attribution. You can use this information to verify that the model is behaving as expected, to recognize biases in your models, and to inform ways to improve your model and your training data.
BigQuery ML and Vertex AI both provide Explainable AI offerings with feature-based explanations. You can perform explainability in BigQuery ML, or you can register your model in Vertex AI and perform explainability there.
Local versus global explainability
There are two types of explainability: local explainability and global explainability. These are also known, respectively, as local feature importance and global feature importance.
- Local explainability returns feature attribution values for each explained example. These values describe how much a particular feature affected the prediction relative to the baseline prediction.
- Global explainability returns the feature's overall influence on the model and is often obtained by aggregating the feature attributions over the entire dataset. A higher absolute value indicates the feature had a greater influence on the model's predictions.
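As a minimal sketch of the difference, the following queries assume a supervised model named mydataset.sample_model that was trained with the enable_global_explain option, and an input table mydataset.sample_table; both names are placeholders.

```sql
-- Local explainability: top feature attributions for each row being predicted.
SELECT *
FROM ML.EXPLAIN_PREDICT(
  MODEL `mydataset.sample_model`,
  (SELECT * FROM `mydataset.sample_table`),
  STRUCT(3 AS top_k_features));

-- Global explainability: one aggregated attribution value per feature.
-- Requires the model to have been trained with enable_global_explain = TRUE.
SELECT *
FROM ML.GLOBAL_EXPLAIN(MODEL `mydataset.sample_model`);
```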
Explainable AI offerings in BigQuery ML
Explainable AI in BigQuery ML supports a variety of machine learning models, including both time series and non-time series models. Each of the models takes advantage of a different explainability method.
| Model category | Model types | Explainability method | Basic explanation of the method | Local explain functions | Global explain functions |
|---|---|---|---|---|---|
| Supervised models | Linear & logistic regression | Shapley values | Shapley values for linear models are equal to model weight * feature value, where feature values are standardized and model weights are trained with the standardized feature values. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | Linear & logistic regression | Standard errors and p-values | Standard errors and p-values are used for significance testing against the model weights. | N/A | ML.ADVANCED_WEIGHTS⁴ |
| Supervised models | Boosted trees, Random forest | Tree SHAP | Tree SHAP is an algorithm to compute exact SHAP values for decision tree-based models. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | Boosted trees, Random forest | Approximate feature contribution | Approximates the feature contribution values. It is faster and simpler than Tree SHAP. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | Boosted trees, Random forest | Gini index-based feature importance | A global feature importance score that indicates how useful or valuable each feature was in the construction of the boosted tree or random forest model during training. | N/A | ML.FEATURE_IMPORTANCE |
| Supervised models | Deep neural network (DNN), Wide-and-Deep | Integrated gradients | A gradients-based method that efficiently computes feature attributions with the same axiomatic properties as the Shapley value. It provides a sampling approximation of exact feature attributions. Its accuracy is controlled by the integrated_gradients_num_steps parameter. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | AutoML Tables | Sampled Shapley | Sampled Shapley assigns credit for the model's outcome to each feature and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values. | N/A | ML.GLOBAL_EXPLAIN² |
| Time series models | ARIMA_PLUS | Time series decomposition | Decomposes the time series into multiple components if those components are present in the time series. The components include trend, seasonality, holiday effects, step changes, and spikes and dips. See the ARIMA_PLUS modeling pipeline for more details. | ML.EXPLAIN_FORECAST³ | N/A |
| Time series models | ARIMA_PLUS_XREG | Time series decomposition and Shapley values | Decomposes the time series into multiple components, including trend, seasonality, holiday effects, step changes, and spikes and dips (similar to ARIMA_PLUS). The attribution of each external regressor is calculated based on Shapley values, which equal model weight * feature value. | ML.EXPLAIN_FORECAST³ | N/A |
¹ ML.EXPLAIN_PREDICT is an extended version of ML.PREDICT.
² ML.GLOBAL_EXPLAIN returns the global explainability obtained by taking the mean absolute attribution that each feature receives for all the rows in the evaluation dataset.
³ ML.EXPLAIN_FORECAST is an extended version of ML.FORECAST.
⁴ ML.ADVANCED_WEIGHTS is an extended version of ML.WEIGHTS.
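For the time series case, the following query is a sketch of how ML.EXPLAIN_FORECAST returns the decomposed components (trend, seasonality, holiday effects, and so on) alongside the forecast. The model name and the horizon and confidence_level values are placeholders for an ARIMA_PLUS model you have already trained.

```sql
-- Explain a forecast from a hypothetical ARIMA_PLUS model. The output rows
-- include the decomposed time series components for both the historical data
-- and the forecast horizon.
SELECT *
FROM ML.EXPLAIN_FORECAST(
  MODEL `mydataset.sample_arima_plus_model`,
  STRUCT(30 AS horizon, 0.9 AS confidence_level));
```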
Explainable AI in Vertex AI
Explainable AI is available in Vertex AI for the following subset of exportable supervised learning models:
| Model type | Explainable AI method |
|---|---|
| dnn_classifier | Integrated gradients |
| dnn_regressor | Integrated gradients |
| dnn_linear_combined_classifier | Integrated gradients |
| dnn_linear_combined_regressor | Integrated gradients |
| boosted_tree_regressor | Sampled Shapley |
| boosted_tree_classifier | Sampled Shapley |
| random_forest_regressor | Sampled Shapley |
| random_forest_classifier | Sampled Shapley |
See Feature Attribution Methods to learn more about these methods.
Enable Explainable AI in Model Registry
When your BigQuery ML model is registered in Model Registry, and if it is a type of model that supports Explainable AI, you can enable Explainable AI on the model when deploying it to an endpoint. When you register your BigQuery ML model, all of the associated metadata is populated for you.
Note: Explainable AI incurs a minor additional cost. See Vertex AI pricing to learn more.
- Register your BigQuery ML model to the Model Registry. (For a SQL-based way to register the model at training time, see the sketch after these steps.)
- Go to the Model Registry page from the BigQuery section in the Google Cloud console.
- From the Model Registry, select the BigQuery ML model and click the model version to open the model detail page.
- Select More actions from the model version.
- Click Deploy to endpoint.
- Define your endpoint: create an endpoint name and click Continue.
- Select a machine type, for example, n1-standard-2.
- Under Model settings, in the logging section, select the checkbox to enable Explainability options.
- Click Done, and then Continue to deploy to the endpoint.
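As a sketch of the first step, a BigQuery ML model can also be registered in the Model Registry when it is created, by setting the model_registry option in the CREATE MODEL statement. The dataset, table, label, and model ID names below are placeholders.

```sql
-- Train a boosted tree classifier, enable global explanations, and register
-- the resulting model in the Vertex AI Model Registry in one statement.
CREATE OR REPLACE MODEL `mydataset.sample_classifier`
OPTIONS (
  model_type = 'BOOSTED_TREE_CLASSIFIER',
  enable_global_explain = TRUE,
  model_registry = 'vertex_ai',
  vertex_ai_model_id = 'sample_classifier',
  input_label_cols = ['label']
) AS
SELECT * FROM `mydataset.training_data`;
```

Once the model is registered this way, it appears as a model version in the Model Registry, and the remaining deployment steps are the same as in the console flow above.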
To learn how to use XAI on your models from the Model Registry, see Get an online explanation using your deployed model. To learn more about XAI in Vertex AI, see Get explanations.
What's next
- Learn how to manage BigQuery ML models in Vertex AI.
- For more information about supported SQL statements and functions for models that support explainability, see End-to-end user journeys for ML models.