Evaluate AutoML classification and regression models
This page shows you how to evaluate your AutoML classification and regression models.
Vertex AI provides model evaluation metrics, such as precision and recall, to help you determine the performance of your models. Vertex AI calculates evaluation metrics by using the test set.
Before you begin
Before you evaluate your model, train the model.
How you use model evaluation metrics
Model evaluation metrics provide quantitative measurements of how your model performed on the test set. How you interpret and use those metrics depends on your business need and the problem your model is trained to solve. For example, you might have a lower tolerance for false positives than for false negatives, or the other way around. These kinds of questions affect which metrics you focus on.
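To see how that tradeoff plays out numerically, the following sketch (plain Python, not part of the Vertex AI API; the confusion counts are invented for illustration) computes precision and recall from confusion-matrix counts:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Compute precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# A lenient classifier catches most positives but admits many false positives.
lenient = precision_recall(tp=90, fp=90, fn=10)  # precision 0.5, recall 0.9

# A stricter classifier trades recall for precision: far fewer false
# positives, but more true positives are missed (false negatives).
strict = precision_recall(tp=60, fp=10, fn=40)   # precision ~0.86, recall 0.6
```

If missing a positive case is costly, the lenient operating point is preferable; if acting on a false alarm is costly, the strict one is.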
Get evaluation metrics
You can get an aggregate set of evaluation metrics for your model and, for some objectives, evaluation metrics for a particular class or label. Evaluation metrics for a particular class or label are also known as an evaluation slice. The following content describes how to get aggregate evaluation metrics and evaluation slices by using the Google Cloud console or API.
Google Cloud console
In the Google Cloud console, in the Vertex AI section, go to the Models page.
In the Region drop-down, select the region where your model is located.
From the list of models, click your model, which opens the model's Evaluate tab.
In the Evaluate tab, you can view your model's aggregate evaluation metrics, such as the Average precision and Recall.
If the model objective has evaluation slices, the console shows a list of labels. You can click a label to view evaluation metrics for that label.

API
API requests for getting evaluation metrics are the same for each data type and objective, but the outputs are different. The following samples show the same request but different responses.
Get aggregate model evaluation metrics
The aggregate model evaluation metrics provide information about the model as a whole. To see information about a specific slice, list the model evaluation slices.
To view aggregate model evaluation metrics, use the projects.locations.models.evaluations.get method.
Select the tab below for your objective:
Classification
Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.
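For instance, once you have the response, you can scan the confidenceMetrics array to choose an operating threshold. The following sketch (plain Python; the metric values are invented for illustration, and pick_threshold is a hypothetical helper, not part of the SDK) selects the highest threshold that still meets a recall target:

```python
# A few confidenceMetrics entries in the shape the API returns them
# (values invented for illustration). Note that the entry for threshold 0
# omits the confidenceThreshold field.
confidence_metrics = [
    {"recall": 1.0, "precision": 0.50},
    {"confidenceThreshold": 0.1, "recall": 0.99, "precision": 0.80},
    {"confidenceThreshold": 0.5, "recall": 0.92, "precision": 0.95},
    {"confidenceThreshold": 0.9, "recall": 0.60, "precision": 0.99},
]

def pick_threshold(metrics, min_recall):
    """Return the entry with the highest threshold whose recall meets min_recall."""
    eligible = [m for m in metrics if m.get("recall", 0.0) >= min_recall]
    return max(eligible, key=lambda m: m.get("confidenceThreshold", 0.0))

# The most precise operating point that keeps recall at or above 0.9.
best = pick_threshold(confidence_metrics, min_recall=0.9)
```

Here `best` is the 0.5-threshold entry: raising the threshold further would push precision to 0.99 but drop recall to 0.6.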
Select a tab that corresponds to your language or environment:
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where your model is stored.
- PROJECT: Your project ID.
- MODEL_ID: The ID of the model resource.
- PROJECT_NUMBER: Your project's automatically generated project number.
- EVALUATION_ID: ID for the model evaluation (appears in the response).
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{
  "modelEvaluations": [
    {
      "name": "projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID",
      "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml",
      "metrics": {
        "auPrc": 0.97762364,
        "auRoc": 0.97566897,
        "logLoss": 0.19153881,
        "confidenceMetrics": [
          {
            "recall": 1,
            "precision": 0.5,
            "falsePositiveRate": 1,
            "f1Score": 0.6666667,
            "recallAt1": 0.90911126,
            "precisionAt1": 0.90911126,
            "falsePositiveRateAt1": 0.09088874,
            "f1ScoreAt1": 0.90911126,
            "truePositiveCount": "4467",
            "falsePositiveCount": "4467"
          },
          {
            "confidenceThreshold": 0.003269856,
            "recall": 0.9997761,
            "precision": 0.56993365,
            "falsePositiveRate": 0.7544213,
            "f1Score": 0.7260018,
            "recallAt1": 0.90911126,
            "precisionAt1": 0.90911126,
            "falsePositiveRateAt1": 0.09088874,
            "f1ScoreAt1": 0.90911126,
            "truePositiveCount": "4466",
            "falsePositiveCount": "3370",
            "falseNegativeCount": "1",
            "trueNegativeCount": "1097"
          },
          {
            "confidenceThreshold": 0.1103351,
            "recall": 0.9899261,
            "precision": 0.79819494,
            "falsePositiveRate": 0.25027984,
            "f1Score": 0.8837814,
            "recallAt1": 0.90911126,
            "precisionAt1": 0.90911126,
            "falsePositiveRateAt1": 0.09088874,
            "f1ScoreAt1": 0.90911126,
            "truePositiveCount": "4422",
            "falsePositiveCount": "1118",
            "falseNegativeCount": "45",
            "trueNegativeCount": "3349"
          },
          ...
        ],
        "confusionMatrix": {
          "annotationSpecs": [
            { "displayName": "1" },
            { "displayName": "2" }
          ],
          "rows": [
            [ 3817, 140 ],
            [ 266, 244 ]
          ]
        }
      },
      "createTime": "2020-10-09T00:19:15.463930Z",
      "sliceDimensions": [ "annotationSpec" ],
      "modelExplanation": {
        "meanAttributions": [
          {
            "featureAttributions": {
              "Age": 0.022972771897912025,
              "Job": 0.031542550772428513,
              "MaritalStatus": 0.015506803058087826,
              "Education": 0.019189134240150452,
              "Default": 0.00021766019926872104,
              "Balance": 0.031217793002724648,
              "Housing": 0.06786702573299408,
              "Loan": 0.0072592208161950111,
              "Contact": 0.083566240966320038,
              "Day": 0.074894927442073822,
              "Month": 0.19679982960224152,
              "Duration": 0.35500210523605347,
              "Campaign": 0.033425047993659973,
              "PDays": 0.013902961276471615,
              "Previous": 0.0061685866676270962,
              "POutcome": 0.040467333048582077
            }
          }
        ]
      }
    }
  ]
}
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationTabularClassificationSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    // To obtain evaluationId run the code block below after setting modelServiceSettings.
    //
    // try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings))
    // {
    //   String location = "us-central1";
    //   ModelName modelFullId = ModelName.of(project, location, modelId);
    //   ListModelEvaluationsRequest modelEvaluationsrequest =
    //       ListModelEvaluationsRequest.newBuilder().setParent(modelFullId.toString()).build();
    //   for (ModelEvaluation modelEvaluation :
    //       modelServiceClient.listModelEvaluations(modelEvaluationsrequest).iterateAll()) {
    //     System.out.format("Model Evaluation Name: %s%n", modelEvaluation.getName());
    //   }
    // }
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    getModelEvaluationTabularClassification(project, modelId, evaluationId);
  }

  static void getModelEvaluationTabularClassification(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);
      ModelEvaluation modelEvaluation =
          modelServiceClient.getModelEvaluation(modelEvaluationName);
      System.out.println("Get Model Evaluation Tabular Classification Response");
      System.out.format("\tName: %s\n", modelEvaluation.getName());
      System.out.format("\tMetrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("\tMetrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("\tCreate Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("\tSlice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
/**
 * TODO(developer): Uncomment these variables before running the sample
 * (not necessary if passing values as arguments). To obtain evaluationId,
 * instantiate the client and run the following commands.
 */
// const parentName = `projects/${project}/locations/${location}/models/${modelId}`;
// const evalRequest = {
//   parent: parentName
// };
// const [evalResponse] = await modelServiceClient.listModelEvaluations(evalRequest);
// console.log(evalResponse);

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationTabularClassification() {
  // Configure the parent resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation tabular classification response');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);
  console.log(`\tSlice dimensions : ${response.sliceDimensions}`);

  const modelExplanation = response.modelExplanation;
  console.log('\tModel explanation');
  if (!modelExplanation) {
    console.log('\t\t{}');
  } else {
    const meanAttributions = modelExplanation.meanAttributions;
    if (!meanAttributions) {
      console.log('\t\t\t []');
    } else {
      for (const meanAttribution of meanAttributions) {
        console.log('\t\tMean attribution');
        console.log(`\t\t\tBaseline output value : ${meanAttribution.baselineOutputValue}`);
        console.log(`\t\t\tInstance output value : ${meanAttribution.instanceOutputValue}`);
        console.log(
          `\t\t\tFeature attributions : ${JSON.stringify(meanAttribution.featureAttributions)}`
        );
        console.log(`\t\t\tOutput index : ${meanAttribution.outputIndex}`);
        console.log(`\t\t\tOutput display name : ${meanAttribution.outputDisplayName}`);
        console.log(`\t\t\tApproximation error : ${meanAttribution.approximationError}`);
      }
    }
  }
}
getModelEvaluationTabularClassification();
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
from google.cloud import aiplatform


def get_model_evaluation_tabular_classification_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    """
    To obtain evaluation_id run the following commands where LOCATION
    is the region where the model is stored, PROJECT is the project ID,
    and MODEL_ID is the ID of your model.

    model_client = aiplatform.gapic.ModelServiceClient(
        client_options={
            'api_endpoint': 'LOCATION-aiplatform.googleapis.com'
        }
    )
    evaluations = model_client.list_model_evaluations(parent='projects/PROJECT/locations/LOCATION/models/MODEL_ID')
    print("evaluations:", evaluations)
    """
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)


Regression
Select a tab that corresponds to your language or environment:
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where your model is stored.
- PROJECT: Your project ID.
- MODEL_ID: The ID of the model resource.
- PROJECT_NUMBER: Your project's automatically generated project number.
- EVALUATION_ID: ID for the model evaluation (appears in the response).
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{
  "modelEvaluations": [
    {
      "name": "projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID",
      "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/regression_metrics_1.0.0.yaml",
      "metrics": {
        "rootMeanSquaredError": 2553.6309,
        "meanAbsoluteError": 1373.3932,
        "meanAbsolutePercentageError": "Infinity",
        "rSquared": 0.060764354,
        "rootMeanSquaredLogError": "NaN"
      },
      "createTime": "2020-10-09T01:20:37.045482Z",
      "modelExplanation": {
        "meanAttributions": [
          {
            "featureAttributions": {
              "Age": 0.22535169124603271,
              "Job": 0.049311652779579163,
              "MaritalStatus": 0.033439181745052338,
              "Education": 0.10934026539325714,
              "Default": 0.021301545202732086,
              "Housing": 0.0631907731294632,
              "Loan": 0.055760543793439865,
              "Contact": 0.010930608958005905,
              "Day": 0.14066702127456665,
              "Month": 0.17570944130420685,
              "Duration": 0.054339192807674408,
              "Campaign": 0.015468073077499866,
              "PDays": 0.020416950806975365,
              "Previous": 0.0037290120963007212,
              "POutcome": 0.0040646209381520748,
              "Deposit": 0.016979435458779335
            }
          }
        ]
      }
    }
  ]
}
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationTabularRegressionSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    // To obtain evaluationId run the code block below after setting modelServiceSettings.
    //
    // try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings))
    // {
    //   String location = "us-central1";
    //   ModelName modelFullId = ModelName.of(project, location, modelId);
    //   ListModelEvaluationsRequest modelEvaluationsrequest =
    //       ListModelEvaluationsRequest.newBuilder().setParent(modelFullId.toString()).build();
    //   for (ModelEvaluation modelEvaluation :
    //       modelServiceClient.listModelEvaluations(modelEvaluationsrequest).iterateAll()) {
    //     System.out.format("Model Evaluation Name: %s%n", modelEvaluation.getName());
    //   }
    // }
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    getModelEvaluationTabularRegression(project, modelId, evaluationId);
  }

  static void getModelEvaluationTabularRegression(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);
      ModelEvaluation modelEvaluation =
          modelServiceClient.getModelEvaluation(modelEvaluationName);
      System.out.println("Get Model Evaluation Tabular Regression Response");
      System.out.format("\tName: %s\n", modelEvaluation.getName());
      System.out.format("\tMetrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("\tMetrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("\tCreate Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("\tSlice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
/**
 * TODO(developer): Uncomment these variables before running the sample
 * (not necessary if passing values as arguments). To obtain evaluationId,
 * instantiate the client and run the following commands.
 */
// const parentName = `projects/${project}/locations/${location}/models/${modelId}`;
// const evalRequest = {
//   parent: parentName
// };
// const [evalResponse] = await modelServiceClient.listModelEvaluations(evalRequest);
// console.log(evalResponse);

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationTabularRegression() {
  // Configure the parent resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation tabular regression response');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);
  console.log(`\tSlice dimensions : ${response.sliceDimensions}`);

  const modelExplanation = response.modelExplanation;
  console.log('\tModel explanation');
  if (!modelExplanation) {
    console.log('\t\t{}');
  } else {
    const meanAttributions = modelExplanation.meanAttributions;
    if (!meanAttributions) {
      console.log('\t\t\t []');
    } else {
      for (const meanAttribution of meanAttributions) {
        console.log('\t\tMean attribution');
        console.log(`\t\t\tBaseline output value : ${meanAttribution.baselineOutputValue}`);
        console.log(`\t\t\tInstance output value : ${meanAttribution.instanceOutputValue}`);
        console.log(
          `\t\t\tFeature attributions : ${JSON.stringify(meanAttribution.featureAttributions)}`
        );
        console.log(`\t\t\tOutput index : ${meanAttribution.outputIndex}`);
        console.log(`\t\t\tOutput display name : ${meanAttribution.outputDisplayName}`);
        console.log(`\t\t\tApproximation error : ${meanAttribution.approximationError}`);
      }
    }
  }
}
getModelEvaluationTabularRegression();
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
from google.cloud import aiplatform


def get_model_evaluation_tabular_regression_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    """
    To obtain evaluation_id run the following commands where LOCATION
    is the region where the model is stored, PROJECT is the project ID,
    and MODEL_ID is the ID of your model.

    model_client = aiplatform.gapic.ModelServiceClient(
        client_options={
            'api_endpoint': 'LOCATION-aiplatform.googleapis.com'
        }
    )
    evaluations = model_client.list_model_evaluations(parent='projects/PROJECT/locations/LOCATION/models/MODEL_ID')
    print("evaluations:", evaluations)
    """
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)


List all evaluation slices (classification models only)
The projects.locations.models.evaluations.slices.list method lists all evaluation slices for your model. You must have the model's evaluation ID, which you can get when you view the aggregated evaluation metrics.
You can use model evaluation slices to determine how the model performed on a specific label. The value field tells you which label the metrics are for.
Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.
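As an illustration of how you might use the listed slices (plain Python; the two entries below mirror the shape of the sample response, with metric values rounded for readability), you can rank labels by a metric to find where the model is weakest:

```python
# Evaluation slices in the shape returned by evaluations.slices.list
# (illustrative values, rounded).
slices = [
    {"slice": {"dimension": "annotationSpec", "value": "1"}, "metrics": {"auPrc": 0.99}},
    {"slice": {"dimension": "annotationSpec", "value": "2"}, "metrics": {"auPrc": 0.61}},
]

# Sort labels from weakest to strongest area under the precision-recall curve.
ranked = sorted(slices, key=lambda s: s["metrics"]["auPrc"])
weakest = ranked[0]["slice"]["value"]  # the label that needs the most attention
```

A label with a much lower per-slice auPrc than the aggregate metric is a common signal of class imbalance or noisy labels for that class.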
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where Model is located. For example, us-central1.
- PROJECT: Your project ID.
- MODEL_ID: The ID of your model.
- EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{
  "modelEvaluationSlices": [
    {
      "name": "projects/693884908213/locations/us-central1/models/705305922892726272/evaluations/4515484958386859492/slices/1785244630562158241",
      "slice": {
        "dimension": "annotationSpec",
        "value": "2"
      },
      "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml",
      "metrics": {
        "auPrc": 0.6108714,
        "auRoc": 0.9362428,
        "logLoss": 0.9680687,
        "confidenceMetrics": [
          {
            "recall": 1,
            "precision": 0.11417058,
            "falsePositiveRate": 1,
            "f1Score": 0.20494273,
            "recallAt1": 0.47843137,
            "precisionAt1": 0.6354167,
            "falsePositiveRateAt1": 0.035380337,
            "f1ScoreAt1": 0.5458613,
            "truePositiveCount": "510",
            "falsePositiveCount": "3957"
          },
          {
            "confidenceThreshold": 0.003269856,
            "recall": 0.9980392,
            "precision": 0.15108341,
            "falsePositiveRate": 0.7227698,
            "f1Score": 0.26243877,
            "recallAt1": 0.47843137,
            "precisionAt1": 0.6354167,
            "falsePositiveRateAt1": 0.035380337,
            "f1ScoreAt1": 0.5458613,
            "truePositiveCount": "509",
            "falsePositiveCount": "2860",
            "falseNegativeCount": "1",
            "trueNegativeCount": "1097"
          },
          {
            "confidenceThreshold": 0.016592776,
            "recall": 0.9882353,
            "precision": 0.23344141,
            "falsePositiveRate": 0.41824615,
            "f1Score": 0.37766954,
            "recallAt1": 0.47843137,
            "precisionAt1": 0.6354167,
            "falsePositiveRateAt1": 0.035380337,
            "f1ScoreAt1": 0.5458613,
            "truePositiveCount": "504",
            "falsePositiveCount": "1655",
            "falseNegativeCount": "6",
            "trueNegativeCount": "2302"
          },
          ...
        ]
      },
      "createTime": "2020-10-09T00:19:15.480435Z"
    },
    {
      "name": "projects/693884908213/locations/us-central1/models/705305922892726272/evaluations/4515484958386859492/slices/8107013027312442123",
      "slice": {
        "dimension": "annotationSpec",
        "value": "1"
      },
      "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml",
      "metrics": {
        "auPrc": 0.9916441,
        "auRoc": 0.93830043,
        "logLoss": 0.09145534,
        "confidenceMetrics": [
          {
            "recall": 1,
            "precision": 0.8858294,
            "falsePositiveRate": 1,
            "f1Score": 0.93945867,
            "recallAt1": 0.96461964,
            "precisionAt1": 0.9348518,
            "falsePositiveRateAt1": 0.52156866,
            "f1ScoreAt1": 0.94950247,
            "truePositiveCount": "3957",
            "falsePositiveCount": "510"
          },
          {
            "confidenceThreshold": 0.064618945,
            "recall": 0.9997473,
            "precision": 0.88639927,
            "falsePositiveRate": 0.9941176,
            "f1Score": 0.93966746,
            "recallAt1": 0.96461964,
            "precisionAt1": 0.9348518,
            "falsePositiveRateAt1": 0.52156866,
            "f1ScoreAt1": 0.94950247,
            "truePositiveCount": "3956",
            "falsePositiveCount": "507",
            "falseNegativeCount": "1",
            "trueNegativeCount": "3"
          },
          ...
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class ListModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    // To obtain evaluationId run the code block below after setting modelServiceSettings.
    //
    // try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings))
    // {
    //   String location = "us-central1";
    //   ModelName modelFullId = ModelName.of(project, location, modelId);
    //   ListModelEvaluationsRequest modelEvaluationsrequest =
    //       ListModelEvaluationsRequest.newBuilder().setParent(modelFullId.toString()).build();
    //   for (ModelEvaluation modelEvaluation :
    //       modelServiceClient.listModelEvaluations(modelEvaluationsrequest).iterateAll()) {
    //     System.out.format("Model Evaluation Name: %s%n", modelEvaluation.getName());
    //   }
    // }
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    listModelEvaluationSliceSample(project, modelId, evaluationId);
  }

  static void listModelEvaluationSliceSample(String project, String modelId, String evaluationId)
      throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      for (ModelEvaluationSlice modelEvaluationSlice :
          modelServiceClient.listModelEvaluationSlices(modelEvaluationName).iterateAll()) {
        System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
        System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
        System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
        System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

        Slice slice = modelEvaluationSlice.getSlice();
        System.out.format("Slice Dimensions: %s\n", slice.getDimension());
        System.out.format("Slice Value: %s\n\n", slice.getValue());
      }
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
/**
 * TODO(developer): Uncomment these variables before running the sample
 * (not necessary if passing values as arguments). To obtain evaluationId,
 * instantiate the client and run the following commands.
 */
// const parentName = `projects/${project}/locations/${location}/models/${modelId}`;
// const evalRequest = {
//   parent: parentName
// };
// const [evalResponse] = await modelServiceClient.listModelEvaluations(evalRequest);
// console.log(evalResponse);

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function listModelEvaluationSlices() {
  // Configure the parent resources
  const parent = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    parent,
  };

  // Get and print out a list of all the evaluation slices for this resource
  const [response] = await modelServiceClient.listModelEvaluationSlices(request);
  console.log('List model evaluation response', response);
  console.log(response);
}
listModelEvaluationSlices();
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    """
    To obtain evaluation_id run the following commands where LOCATION
    is the region where the model is stored, PROJECT is the project ID,
    and MODEL_ID is the ID of your model.

    model_client = aiplatform.gapic.ModelServiceClient(
        client_options={
            'api_endpoint': 'LOCATION-aiplatform.googleapis.com'
        }
    )
    evaluations = model_client.list_model_evaluations(
        parent='projects/PROJECT/locations/LOCATION/models/MODEL_ID')
    print("evaluations:", evaluations)
    """
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.list_model_evaluation_slices(parent=parent)
    for model_evaluation_slice in response:
        print("model_evaluation_slice:", model_evaluation_slice)

Get metrics for a single slice
To view evaluation metrics for a single slice, use the projects.locations.models.evaluations.slices.get method. You must have the slice ID, which is provided when you list all slices. The following sample applies to all data types and objectives.
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where Model is located. For example, us-central1.
- PROJECT: Your project ID.
- MODEL_ID: The ID of your model.
- EVALUATION_ID: ID of the model evaluation that contains the evaluation slice to retrieve.
- SLICE_ID: ID of an evaluation slice to get.
- PROJECT_NUMBER: Your project's automatically generated project number.
- EVALUATION_METRIC_SCHEMA_FILE_NAME: The name of a schema file that defines the evaluation metrics to return, such as classification_metrics_1.0.0.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{
  "name": "projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID",
  "slice": {
    "dimension": "annotationSpec",
    "value": "a particular class or label"
  },
  "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/EVALUATION_METRIC_SCHEMA_FILE_NAME.yaml",
  "metrics": {
    evaluation metrics for the slice
  },
  "createTime": "2020-10-08T23:35:54.770876Z"
}

Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSliceName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    // To obtain evaluationId run the code block below after setting modelServiceSettings.
    //
    // try (ModelServiceClient modelServiceClient =
    //     ModelServiceClient.create(modelServiceSettings)) {
    //   String location = "us-central1";
    //   ModelName modelFullId = ModelName.of(project, location, modelId);
    //   ListModelEvaluationsRequest modelEvaluationsrequest =
    //       ListModelEvaluationsRequest.newBuilder().setParent(modelFullId.toString()).build();
    //   for (ModelEvaluation modelEvaluation :
    //       modelServiceClient.listModelEvaluations(modelEvaluationsrequest).iterateAll()) {
    //     System.out.format("Model Evaluation Name: %s%n", modelEvaluation.getName());
    //   }
    // }
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    String sliceId = "YOUR_SLICE_ID";
    getModelEvaluationSliceSample(project, modelId, evaluationId, sliceId);
  }

  static void getModelEvaluationSliceSample(
      String project, String modelId, String evaluationId, String sliceId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationSliceName modelEvaluationSliceName =
          ModelEvaluationSliceName.of(project, location, modelId, evaluationId, sliceId);

      ModelEvaluationSlice modelEvaluationSlice =
          modelServiceClient.getModelEvaluationSlice(modelEvaluationSliceName);

      System.out.println("Get Model Evaluation Slice Response");
      System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
      System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
      System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
      System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

      Slice slice = modelEvaluationSlice.getSlice();
      System.out.format("Slice Dimensions: %s\n", slice.getDimension());
      System.out.format("Slice Value: %s\n", slice.getValue());
    }
  }
}

Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
/**
 * TODO(developer): Uncomment these variables before running the sample
 * (not necessary if passing values as arguments). To obtain evaluationId,
 * instantiate the client and run the following commands.
 */
// const parentName = `projects/${project}/locations/${location}/models/${modelId}`;
// const evalRequest = {
//   parent: parentName
// };
// const [evalResponse] = await modelServiceClient.listModelEvaluations(evalRequest);
// console.log(evalResponse);

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const sliceId = 'YOUR_SLICE_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationSlice() {
  // Configure the resource name
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}/slices/${sliceId}`;
  const request = {
    name,
  };

  // Get and print out the requested evaluation slice
  const [response] = await modelServiceClient.getModelEvaluationSlice(request);

  console.log('Get model evaluation slice');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics_Schema_Uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);

  console.log('Slice');
  const slice = response.slice;
  console.log(`\tDimension : ${slice.dimension}`);
  console.log(`\tValue : ${slice.value}`);
}
getModelEvaluationSlice();

Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
from google.cloud import aiplatform


def get_model_evaluation_slice_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    slice_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    """
    To obtain evaluation_id run the following commands where LOCATION
    is the region where the model is stored, PROJECT is the project ID,
    and MODEL_ID is the ID of your model.

    model_client = aiplatform.gapic.ModelServiceClient(
        client_options={
            'api_endpoint': 'LOCATION-aiplatform.googleapis.com'
        }
    )
    evaluations = model_client.list_model_evaluations(
        parent='projects/PROJECT/locations/LOCATION/models/MODEL_ID')
    print("evaluations:", evaluations)
    """
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_slice_path(
        project=project,
        location=location,
        model=model_id,
        evaluation=evaluation_id,
        slice=slice_id,
    )
    response = client.get_model_evaluation_slice(name=name)
    print("response:", response)

Model evaluation metrics
Vertex AI returns several different evaluation metrics, such as precision, recall, and confidence thresholds. The metrics that Vertex AI returns depend on your model's objective. For example, Vertex AI provides different evaluation metrics for an image classification model compared to an image object detection model.
A schema file determines which evaluation metrics Vertex AIprovides for each objective.
You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/
The evaluation metrics are:
Classification
- AuPRC: The area under the precision-recall (PR) curve, also referred to as average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
- AuROC: The area under the receiver operating characteristic (ROC) curve. This value ranges from zero to one, where a higher value indicates a higher-quality model.
- Log loss: The cross-entropy between the model inferences and the target values. This value ranges from zero to infinity, where a lower value indicates a higher-quality model.
- Confidence threshold: A confidence score that determines which inferences to return. A model returns inferences that are at this value or higher. A higher confidence threshold increases precision but lowers recall. Vertex AI returns confidence metrics at different threshold values to show how the threshold affects precision and recall.
- Recall: The fraction of inferences with this class that the model correctly predicted. Also called the true positive rate.
- Recall at 1: The recall (true positive rate) when only considering the label that has the highest inference score and not below the confidence threshold for each example.
- Precision: The fraction of classification inferences produced by the model that were correct.
- Precision at 1: The precision when only considering the label that has the highest inference score and not below the confidence threshold for each example.
- F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you're looking for a balance between precision and recall and there's an uneven class distribution.
- F1 score at 1: The harmonic mean of recall at 1 and precision at 1.
- True negative count: The number of times a model correctly predicted a negative class.
- True positive count: The number of times a model correctly predicted a positive class.
- False negative count: The number of times a model mistakenly predicted a negative class.
- False positive count: The number of times a model mistakenly predicted a positive class.
- False positive rate: The fraction of actual negatives that the model incorrectly predicted as positive, that is, FP / (FP + TN).
- False positive rate at 1: The false positive rate when only considering the label that has the highest inference score and not below the confidence threshold for each example.
- Confusion matrix: A confusion matrix shows how often a model correctly predicted a result. For incorrectly predicted results, the matrix shows what the model predicted instead. The confusion matrix helps you understand where your model is "confusing" two results.
- Model feature attributions: Vertex AI shows you how much each feature impacts a model. The values are provided as a percentage for each feature: the higher the percentage, the more impact the feature had on model training. Review this information to ensure that all of the most important features make sense for your data and business problem. To learn more, see Feature attributions for classification and regression.
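The count metrics and the ratio metrics above are related by standard formulas. As a quick illustration (plain Python with hypothetical counts, not Vertex AI code), precision, recall, F1, and false positive rate can all be derived from the four confusion counts at a given confidence threshold:

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard ratio metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)  # correct positive inferences / all positive inferences
    recall = tp / (tp + fn)     # correct positive inferences / all actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    fpr = fp / (fp + tn)        # actual negatives incorrectly inferred as positive
    return {"precision": precision, "recall": recall, "f1": f1, "fpr": fpr}

# Hypothetical counts at one confidence threshold.
print(classification_metrics(tp=80, fp=20, tn=90, fn=10))
```

Recomputing these ratios while sweeping the confidence threshold is what produces the precision and recall values that the threshold-level metrics above describe.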
Regression
- MAE: The mean absolute error (MAE) is the average absolute difference between the target values and the predicted values. This metric ranges from zero to infinity; a lower value indicates a higher-quality model.
- RMSE: The root-mean-squared error is the square root of the average squared difference between the target and predicted values. RMSE is more sensitive to outliers than MAE, so if you're concerned about large errors, then RMSE can be a more useful metric to evaluate. Similar to MAE, a smaller value indicates a higher-quality model (0 represents a perfect predictor).
- RMSLE: The root-mean-squared logarithmic error metric is similar to RMSE, except that it uses the natural logarithm of the predicted and actual values plus 1. RMSLE penalizes under-inference more heavily than over-inference. It can also be a good metric when you don't want to penalize differences for large inference values more heavily than for small inference values. This metric ranges from zero to infinity; a lower value indicates a higher-quality model. The RMSLE evaluation metric is returned only if all label and predicted values are non-negative.
- r^2: r squared (r^2) is the square of the Pearson correlation coefficient between the labels and predicted values. This metric ranges from zero to one. A higher value indicates a closer fit to the regression line.
- MAPE: Mean absolute percentage error (MAPE) is the average absolute percentage difference between the labels and the predicted values. This metric ranges from zero to infinity; a lower value indicates a higher-quality model. MAPE is not shown if the target column contains any 0 values; in this case, MAPE is undefined.
- Model feature attributions: Vertex AI shows you how much each feature impacts a model. The values are provided as a percentage for each feature: the higher the percentage, the more impact the feature had on model training. Review this information to ensure that all of the most important features make sense for your data and business problem. To learn more, see Feature attributions for classification and regression.
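To make the regression definitions concrete, the following sketch computes MAE, RMSE, RMSLE, MAPE, and r^2 for a few hypothetical label/prediction pairs using the standard formulas described above. This is plain Python for illustration, not Vertex AI code, and the values are invented:

```python
import math

def regression_metrics(labels, preds):
    """Standard regression metrics, following the definitions above."""
    n = len(labels)
    mae = sum(abs(y - p) for y, p in zip(labels, preds)) / n
    rmse = math.sqrt(sum((y - p) ** 2 for y, p in zip(labels, preds)) / n)
    # RMSLE uses log(value + 1); defined only for non-negative values.
    rmsle = math.sqrt(
        sum((math.log1p(p) - math.log1p(y)) ** 2 for y, p in zip(labels, preds)) / n
    )
    # MAPE is undefined if any label is 0.
    mape = 100 * sum(abs((y - p) / y) for y, p in zip(labels, preds)) / n
    # r^2 as the squared Pearson correlation between labels and predictions.
    mean_y, mean_p = sum(labels) / n, sum(preds) / n
    cov = sum((y - mean_y) * (p - mean_p) for y, p in zip(labels, preds))
    var_y = sum((y - mean_y) ** 2 for y in labels)
    var_p = sum((p - mean_p) ** 2 for p in preds)
    r2 = cov * cov / (var_y * var_p)
    return {"mae": mae, "rmse": rmse, "rmsle": rmsle, "mape": mape, "r2": r2}

# Hypothetical labels and predictions.
print(regression_metrics([3.0, 5.0, 2.5, 7.0], [2.5, 5.0, 3.0, 8.0]))
```

Note how the single larger error (7.0 vs. 8.0) pulls RMSE above MAE here, reflecting RMSE's greater sensitivity to outliers.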
What's next
Once you're ready to make predictions with your classification or regression model, you have two options:
- Make online (real-time) predictions using your model.
- Get batch predictions directly from your model.
Additionally, you can:
- View the architecture of your model.
- Learn how to export your model.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-18 UTC.