Evaluate AutoML forecast models
This page shows you how to evaluate your AutoML forecast models using model evaluation metrics. These metrics provide quantitative measurements of how your model performed on the test set. How you interpret and use these metrics depends on your business need and the problem your model is trained to solve. For example, you might have a lower tolerance for false positives than for false negatives, or the other way around. These kinds of questions affect which metrics you focus on.
Before you begin
Before you can evaluate a model, you must train it and wait for the training to complete.
Use the console or the API to check the status of your training job.
Google Cloud console
In the Google Cloud console, in the Vertex AI section, go to the Training page.
If the status of your training job is "Training", continue to wait for the training job to finish. If the status of your training job is "Finished", you are ready to begin model evaluation.
API
Select a tab that corresponds to your language or environment:
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where your model is stored.
- PROJECT: Your project ID.
- TRAINING_PIPELINE_ID: ID of the training pipeline.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines/TRAINING_PIPELINE_ID
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines/TRAINING_PIPELINE_ID"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/trainingPipelines/TRAINING_PIPELINE_ID" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
If the model is still being trained:
{ "name": "projects/PROJECT_NUMBER/locations/LOCATION/trainingPipelines/TRAINING_PIPELINE_ID", ... "modelToUpload": { "displayName": "MODEL_DISPLAY_NAME" }, "state": "PIPELINE_STATE_RUNNING", ... } If the model training is complete:
{ "name": "projects/PROJECT_NUMBER/locations/LOCATION/trainingPipelines/TRAINING_PIPELINE_ID", ... "modelToUpload": { "name": "projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID", "displayName": "MODEL_DISPLAY_NAME", "versionID": "1" }, "state": "PIPELINE_STATE_SUCCEEDED", ... }
Get evaluation metrics
You can get an aggregate set of evaluation metrics for your model. The following content describes how to get these metrics using the Google Cloud console or API.
Google Cloud console
In the Google Cloud console, in the Vertex AI section, go to the Models page.
In theRegion drop-down, select the region where your model is located.
From the list of models, select your model.
Select your model's version number.
In the Evaluate tab, you can view your model's aggregate evaluation metrics.
API
To view aggregate model evaluation metrics, use the projects.locations.models.evaluations.list method.
Select a tab that corresponds to your language or environment:
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where your model is stored.
- PROJECT: Your project ID.
- MODEL_ID: The ID of the model resource. The MODEL_ID appears in the training pipeline after model training is successfully completed. Refer to the Before you begin section to get the MODEL_ID.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "modelEvaluations": [ { "name": "projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID", "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/forecasting_metrics_1.0.0.yaml", "metrics": { "rootMeanSquaredError": 719.0045, "meanAbsoluteError": 487.0792, "meanAbsolutePercentageError": "Infinity", "rSquared": 0.8837288, "rootMeanSquaredLogError": "NaN", "quantileMetrics": [ { "quantile": 0.2, "scaledPinballLoss": 157.34422, "observedQuantile": 0.15918367346938775 }, { "quantile": 0.5, "scaledPinballLoss": 243.5396, "observedQuantile": 0.45551020408163267 }, { "quantile": 0.8, "scaledPinballLoss": 175.39418, "observedQuantile": 0.81183673469387752 } ] }, "createTime": "2021-04-13T01:00:54.091953Z" } ]}Model evaluation metrics
A schema file determines which evaluation metrics Vertex AI provides for each objective.
You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/
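As an illustrative sketch (not shown on this page), the google-cloud-storage Python client can list these schema files; this assumes the bucket allows anonymous reads, so no credentials are needed just to browse it.

from google.cloud import storage

# Anonymous client: assumes the schema bucket is publicly readable.
client = storage.Client.create_anonymous_client()
for blob in client.list_blobs("google-cloud-aiplatform", prefix="schema/modelevaluation/"):
    print(blob.name)  # e.g. schema/modelevaluation/forecasting_metrics_1.0.0.yaml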
The evaluation metrics for forecasting models are:
- MAE: The mean absolute error (MAE) is the average absolute difference between the target values and the predicted values. This metric ranges from zero to infinity; a lower value indicates a higher quality model.
- MAPE: Mean absolute percentage error (MAPE) is the average absolute percentage difference between the labels and the predicted values. This metric ranges between zero and infinity; a lower value indicates a higher quality model.
MAPE is not shown if the target column contains any 0 values. In this case, MAPE is undefined.
- RMSE: The root-mean-squared error (RMSE) is the square root of the average squared difference between the target and predicted values. RMSE is more sensitive to outliers than MAE, so if you're concerned about large errors, then RMSE can be a more useful metric to evaluate. Similar to MAE, a smaller value indicates a higher quality model (0 represents a perfect predictor).
- RMSLE: The root-mean-squared logarithmic error metric is similar to RMSE, except that it uses the natural logarithm of the predicted and actual values plus 1. RMSLE penalizes under-inference more heavily than over-inference. It can also be a good metric when you don't want to penalize differences for large inference values more heavily than for small inference values. This metric ranges from zero to infinity; a lower value indicates a higher quality model. The RMSLE evaluation metric is returned only if all label and predicted values are non-negative.
- r^2: r squared (r^2) is the square of the Pearson correlation coefficient between the labels and predicted values. This metric ranges between zero and one. A higher value indicates a closer fit to the regression line.
- Quantile: The percent quantile, which indicates the probability that an observed value will be below the predicted value. For example, at the 0.2 quantile, the observed values are expected to be lower than the predicted values 20% of the time. Vertex AI provides this metric if you specify minimize-quantile-loss for the optimization objective.
- Observed quantile: Shows the percentage of true values that were less than the predicted value for a given quantile. Vertex AI provides this metric if you specify minimize-quantile-loss for the optimization objective.
- Scaled pinball loss: The scaled pinball loss at a particular quantile. A lower value indicates a higher quality model at the given quantile. Vertex AI provides this metric if you specify minimize-quantile-loss for the optimization objective.
- Model feature attributions: Vertex AI shows you how much each feature impacts a model. The values are provided as a percentage for each feature: the higher the percentage, the more impact the feature had on model training. Review this information to ensure that all of the most important features make sense for your data and business problem. To learn more, see Feature attributions for forecasting.
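To work with these metrics programmatically rather than reading the raw JSON, the following is a minimal sketch using the Vertex AI SDK for Python; it wraps the same evaluations endpoint shown in the REST example, and PROJECT, LOCATION, and MODEL_ID are the same placeholders used there.

from google.cloud import aiplatform

aiplatform.init(project="PROJECT", location="LOCATION")

# Load the trained model by ID and fetch its evaluations. An AutoML
# forecast model typically has a single aggregate evaluation.
model = aiplatform.Model("MODEL_ID")
for evaluation in model.list_model_evaluations():
    # The metrics carry the fields described above, such as
    # rootMeanSquaredError, meanAbsoluteError, and quantileMetrics.
    print(evaluation.metrics)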
What's next