ModelExplanation
Aggregated explanation metrics for a Model over a set of instances.
meanAttributions[] object (Attribution) Output only. Aggregated attributions explaining the Model's prediction outputs over the set of instances. The attributions are grouped by outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
The baselineOutputValue, instanceOutputValue and featureAttributions fields are averaged over the test data.
NOTE: Currently AutoML tabular classification Models produce only one attribution, which averages attributions over all the classes it predicts. Attribution.approximation_error is not populated.
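To make the averaging concrete, here is a minimal sketch (not the service implementation) of how per-instance attributions could be aggregated into the single mean attribution described above. The field names mirror the Attribution resource; the data values and the helper function are invented for illustration.

```python
# Sketch: averaging baselineOutputValue, instanceOutputValue and
# featureAttributions over a set of per-instance attributions,
# as ModelExplanation.meanAttributions describes.
def mean_attribution(attributions):
    """Aggregate a list of per-instance attribution dicts into one."""
    n = len(attributions)
    features = attributions[0]["featureAttributions"].keys()
    return {
        "baselineOutputValue": sum(a["baselineOutputValue"] for a in attributions) / n,
        "instanceOutputValue": sum(a["instanceOutputValue"] for a in attributions) / n,
        "featureAttributions": {
            f: sum(a["featureAttributions"][f] for a in attributions) / n
            for f in features
        },
    }

# Two hypothetical per-instance attributions for a one-output Model.
per_instance = [
    {"baselineOutputValue": 0.1, "instanceOutputValue": 0.9,
     "featureAttributions": {"age": 0.5, "income": 0.3}},
    {"baselineOutputValue": 0.3, "instanceOutputValue": 0.7,
     "featureAttributions": {"age": 0.1, "income": 0.3}},
]
agg = mean_attribution(per_instance)
```

The aggregate keeps the same shape as a single Attribution, which is why the fields below apply equally to meanAttributions entries.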
| JSON representation |
|---|
| {"meanAttributions": [{object (Attribution)}]} |
Attribution
Attribution that explains a particular prediction output.
baselineOutputValue number Output only. Model predicted output if the input instance is constructed from the baselines of all the features defined in ExplanationMetadata.inputs. The field name of the output is determined by the key in ExplanationMetadata.outputs.
If the Model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by outputIndex.
If there are multiple baselines, their output values are averaged.
instanceOutputValue number Output only. Model predicted output on the corresponding explanation instance (ExplainRequest.instances). The field name of the output is determined by the key in ExplanationMetadata.outputs.
If the Model's predicted output has multiple dimensions, this is the value in the output located by outputIndex.
featureAttributions value (Value format) Output only. Attributions of each explained feature. Features are extracted from the prediction instances according to the explanation metadata for inputs.
The value is a struct whose keys are the names of the features. The values are how much the feature in the instance contributed to the predicted result.
The format of the value is determined by the feature's input format:
- If the feature is a scalar value, the attribution value is a floating-point number.
- If the feature is an array of scalar values, the attribution value is an array.
- If the feature is a struct, the attribution value is a struct. The keys in the attribution value struct are the same as the keys in the feature struct. The formats of the values in the attribution struct are determined by the formats of the values in the feature struct.
If populated, the ExplanationMetadata.feature_attributions_schema_uri field, pointed to by the ExplanationSpec field of the Endpoint.deployed_models object, points to the schema file that describes the features and their attribution values.
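A hypothetical featureAttributions value illustrating the three formats in the list above; the feature names and numbers are invented for the example, and each attribution value mirrors the shape of its feature.

```python
# Example featureAttributions struct: one scalar feature, one array
# feature, and one struct feature (keys match the feature struct).
feature_attributions = {
    "age": 0.42,                       # scalar feature -> floating-point number
    "pixel_row": [0.01, -0.02, 0.05],  # array feature -> array of numbers
    "address": {                       # struct feature -> struct, same keys
        "city": 0.10,
        "zip_code": -0.03,
    },
}
```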
outputIndex[] integer Output only. The index that locates the explained prediction output.
If the prediction output is a scalar value, outputIndex is not populated. If the prediction output has multiple dimensions, the length of the outputIndex list is the same as the number of dimensions of the output. The i-th element in outputIndex is the element index of the i-th dimension of the output vector. Indices start from 0.
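The indexing rule above can be sketched as follows; the 2x3 prediction output and the index values are invented for illustration.

```python
# Sketch: resolving outputIndex against a multi-dimensional prediction
# output. Each entry in outputIndex indexes one dimension, 0-based.
output = [
    [0.1, 0.2, 0.7],
    [0.6, 0.3, 0.1],
]
output_index = [1, 2]  # length matches the number of output dimensions

value = output
for i in output_index:
    value = value[i]
# value now holds output[1][2], the explained scalar output
```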
outputDisplayName string Output only. The display name of the output identified by outputIndex. For example, the class name predicted by a multi-classification Model.
This field is populated only if the Model predicts display names as a separate field along with the explained output. The predicted display name must have the same shape as the explained output, and can be located using outputIndex.
approximationError number Output only. Error of featureAttributions caused by the approximation used in the explanation method. A lower value means more precise attributions.
- For Sampled Shapley attribution, increasing pathCount might reduce the error.
- For Integrated Gradients attribution, increasing stepCount might reduce the error.
- For XRAI attribution, increasing stepCount might reduce the error.
See this introduction for more information.
outputName string Output only. Name of the explained output. Specified as the key in ExplanationMetadata.outputs.
| JSON representation |
|---|
| {"baselineOutputValue": number, "instanceOutputValue": number, "featureAttributions": value, "outputIndex": [integer], "outputDisplayName": string, "approximationError": number, "outputName": string} |
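As a usage sketch, the JSON representation above can be parsed and inspected with the standard library; the sample values below are invented, and ranking features by attribution is one common way to read the result.

```python
import json

# Sketch: reading the fields of a single Attribution from its JSON
# representation (all values here are hypothetical).
raw = """{
  "baselineOutputValue": 0.12,
  "instanceOutputValue": 0.87,
  "featureAttributions": {"age": 0.5, "income": 0.25},
  "outputIndex": [2],
  "outputDisplayName": "cat",
  "approximationError": 0.001,
  "outputName": "scores"
}"""
attr = json.loads(raw)

# The feature with the largest attribution contributed most to the
# difference between instanceOutputValue and baselineOutputValue.
top_feature = max(attr["featureAttributions"], key=attr["featureAttributions"].get)
```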
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-06-27 UTC.