Export AutoML Edge models
This page describes how to use Vertex AI to export your image AutoML Edge models to Cloud Storage.
For information about exporting tabular models, see Exporting an AutoML tabular model.
Introduction
After you have trained an AutoML Edge model you can, in some cases, export the model in different formats, depending on how you want to use it. The exported model files are saved in a Cloud Storage bucket, and they can be used for prediction in the environment of your choosing.
You cannot use an Edge model in Vertex AI to serve predictions; you must deploy the Edge model to an external device to get predictions.
Export a model
Use the following code samples to identify an AutoML Edge model,specify an output file storage location, and then send the export model request.
Image
Select the tab below for your objective:
Classification
Trained AutoML Edge image classification models can be exported in the following formats:
- TF Lite - Export your model as a TF Lite package to run your model on edge or mobile devices.
- Edge TPU TF Lite - Export your model as a TF Lite package to run your model on Edge TPU devices.
- Container - Export your model as a TF Saved Model to run on a Docker container.
- Core ML - Export an .mlmodel file to run your model on iOS and macOS devices.
- TensorFlow.js - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
Select the tab below for your language or environment:
Console
- In the Google Cloud console, in the Vertex AI section, go to the Models page.
- Click the version number of the AutoML Edge model you want to export to open its details page.
- Click Export.
- In the Export model side window, specify the location in Cloud Storage to store the Edge model export output.
- Click Export.
- Click Done to close the Export model side window.
REST
Before using any of the request data, make the following replacements:
- LOCATION: Your project's location.
- PROJECT: Your project ID.
- MODEL_ID: The ID number of the trained AutoML Edge model you are exporting.
- EXPORT_FORMAT: The type of Edge model you are exporting. For this objective the options are:
  - tflite (TF Lite) - Export your model as a TF Lite package to run your model on edge or mobile devices.
  - edgetpu-tflite (Edge TPU TF Lite) - Export your model as a TF Lite package to run your model on Edge TPU devices.
  - tf-saved-model (Container) - Export your model as a TF Saved Model to run on a Docker container.
  - core-ml (Core ML) - Export an .mlmodel file to run your model on iOS and macOS devices.
  - tf-js (TensorFlow.js) - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
- OUTPUT_BUCKET: The path to the Cloud Storage bucket directory where you want to store your Edge model files.
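The EXPORT_FORMAT values above are easy to mistype, and the API rejects unknown format IDs. A minimal sketch of a validation helper (hypothetical, not part of the Vertex AI SDK) that checks a format ID against the list above before you build the request:

```python
# Valid exportFormatId values for AutoML Edge image classification,
# taken from the list above. This helper is illustrative only.
CLASSIFICATION_FORMATS = {
    "tflite", "edgetpu-tflite", "tf-saved-model", "core-ml", "tf-js",
}


def check_export_format(export_format: str) -> str:
    """Return the format ID unchanged if valid, else raise ValueError."""
    if export_format not in CLASSIFICATION_FORMATS:
        raise ValueError(
            f"Unknown export format {export_format!r}; "
            f"expected one of {sorted(CLASSIFICATION_FORMATS)}"
        )
    return export_format
```

Note that the set of valid formats differs per objective; the object detection list later on this page, for example, does not include core-ml or edgetpu-tflite.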
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export
Request JSON body:
{
  "outputConfig": {
    "exportFormatId": "EXPORT_FORMAT",
    "artifactDestination": {
      "outputUriPrefix": "gs://OUTPUT_BUCKET/"
    }
  }
}

To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export" | Select-Object -Expand Content
The response contains information about specifications as well as the OPERATION_ID.
Response
{
  "name": "projects/PROJECT_NUMBER/locations/LOCATION/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelOperationMetadata",
    "genericMetadata": {
      "createTime": "2020-07-16T20:06:33.679353Z",
      "updateTime": "2020-07-16T20:06:33.679353Z"
    },
    "outputInfo": {
      "artifactOutputUri": "gs://OUTPUT_BUCKET/model-MODEL_ID/EXPORT_FORMAT/YYYY-MM-DDThh:mm:ss.sssZ"
    }
  }
}

You can get the status of the export operation to see when it finishes.
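The OPERATION_ID you need for the status request is the trailing segment of the operation name in the response. A small sketch (hypothetical helper, not part of the SDK) that extracts it:

```python
def operation_id_from_name(operation_name: str) -> str:
    """Extract the trailing OPERATION_ID from a long-running operation name,
    e.g. "projects/P/locations/L/operations/123" -> "123"."""
    _, sep, op_id = operation_name.rpartition("/operations/")
    if not sep or not op_id:
        raise ValueError(f"Not an operation name: {operation_name!r}")
    return op_id


# Example with a made-up operation name:
name = "projects/123/locations/us-central1/operations/456789"
op_id = operation_id_from_name(name)  # "456789"
```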
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
import com.google.api.gax.longrunning.OperationFuture;
import com.google.cloud.aiplatform.v1.ExportModelOperationMetadata;
import com.google.cloud.aiplatform.v1.ExportModelRequest;
import com.google.cloud.aiplatform.v1.ExportModelResponse;
import com.google.cloud.aiplatform.v1.GcsDestination;
import com.google.cloud.aiplatform.v1.ModelName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ExportModelSample {

  public static void main(String[] args)
      throws IOException, InterruptedException, ExecutionException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String gcsDestinationOutputUriPrefix = "gs://YOUR_GCS_SOURCE_BUCKET/path_to_your_destination/";
    String exportFormat = "YOUR_EXPORT_FORMAT";
    exportModelSample(project, modelId, gcsDestinationOutputUriPrefix, exportFormat);
  }

  static void exportModelSample(
      String project, String modelId, String gcsDestinationOutputUriPrefix, String exportFormat)
      throws IOException, InterruptedException, ExecutionException, TimeoutException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      GcsDestination.Builder gcsDestination = GcsDestination.newBuilder();
      gcsDestination.setOutputUriPrefix(gcsDestinationOutputUriPrefix);

      ModelName modelName = ModelName.of(project, location, modelId);
      ExportModelRequest.OutputConfig outputConfig =
          ExportModelRequest.OutputConfig.newBuilder()
              .setExportFormatId(exportFormat)
              .setArtifactDestination(gcsDestination)
              .build();

      OperationFuture<ExportModelResponse, ExportModelOperationMetadata> exportModelResponseFuture =
          modelServiceClient.exportModelAsync(modelName, outputConfig);
      System.out.format(
          "Operation name: %s\n", exportModelResponseFuture.getInitialFuture().get().getName());
      System.out.println("Waiting for operation to finish...");
      ExportModelResponse exportModelResponse = exportModelResponseFuture.get(300, TimeUnit.SECONDS);
      System.out.format("Export Model Response: %s\n", exportModelResponse);
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
/**
 * TODO(developer): Uncomment these variables before running the sample
 * (not necessary if passing values as arguments).
 */
// const modelId = 'YOUR_MODEL_ID';
// const gcsDestinationOutputUriPrefix = 'YOUR_GCS_DEST_OUTPUT_URI_PREFIX';
// eg. "gs://<your-gcs-bucket>/destination_path"
// const exportFormat = 'YOUR_EXPORT_FORMAT';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function exportModel() {
  // Configure the name resources
  const name = `projects/${project}/locations/${location}/models/${modelId}`;
  // Configure the outputConfig resources
  const outputConfig = {
    exportFormatId: exportFormat,
    gcsDestination: {
      outputUriPrefix: gcsDestinationOutputUriPrefix,
    },
  };
  const request = {
    name,
    outputConfig,
  };

  // Export Model request
  const [response] = await modelServiceClient.exportModel(request);
  console.log(`Long running operation: ${response.name}`);

  // Wait for operation to complete
  await response.promise();
  const result = response.result;
  console.log(`Export model response: ${JSON.stringify(result)}`);
}
exportModel();
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
from google.cloud import aiplatform


def export_model_sample(
    project: str,
    model_id: str,
    gcs_destination_output_uri_prefix: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
    timeout: int = 300,
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    output_config = {
        "artifact_destination": {
            "output_uri_prefix": gcs_destination_output_uri_prefix
        },
        # For information about export formats: https://cloud.google.com/ai-platform-unified/docs/export/export-edge-model#aiplatform_export_model_sample-drest
        "export_format_id": "tf-saved-model",
    }
    name = client.model_path(project=project, location=location, model=model_id)
    response = client.export_model(name=name, output_config=output_config)
    print("Long running operation:", response.operation.name)
    print("output_info:", response.metadata.output_info)
    export_model_response = response.result(timeout=timeout)
    print("export_model_response:", export_model_response)
Object detection
Trained AutoML Edge image object detection models can be exported in the following formats:
- TF Lite - Export your model as a TF Lite package to run your model on edge or mobile devices.
- Container - Export your model as a TF Saved Model to run on a Docker container.
- TensorFlow.js - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
Select the tab below for your language or environment:
Console
- In the Google Cloud console, in the Vertex AI section, go to the Models page.
- Click the version number of the AutoML Edge model you want to export to open its details page.
- Select the Deploy & Test tab to view the available export formats.
- Select your desired export model format from the Use your edge-optimized model section.
- In the Export model side window, specify the location in Cloud Storage to store the Edge model export output.
- Click Export.
- Click Done to close the Export model side window.
REST
Before using any of the request data, make the following replacements:
- LOCATION: Your project's location.
- PROJECT: Your project ID.
- MODEL_ID: The ID number of the trained AutoML Edge model you are exporting.
- EXPORT_FORMAT: The type of Edge model you are exporting. For this objective the options are:
  - tflite (TF Lite) - Export your model as a TF Lite package to run your model on edge or mobile devices.
  - tf-saved-model (Container) - Export your model as a TF Saved Model to run on a Docker container.
  - tf-js (TensorFlow.js) - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
- OUTPUT_BUCKET: The path to the Cloud Storage bucket directory where you want to store your Edge model files.
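The request body used by the curl and PowerShell samples below is plain JSON, so it can also be generated instead of hand-edited. A hedged sketch that writes request.json from the placeholder values (the bucket name here is illustrative):

```python
import json


def build_export_body(export_format: str, output_bucket: str) -> dict:
    """Build the ExportModel request body with the structure shown on this page."""
    return {
        "outputConfig": {
            "exportFormatId": export_format,
            "artifactDestination": {
                # outputUriPrefix must be a gs:// URI ending in "/"
                "outputUriPrefix": f"gs://{output_bucket}/",
            },
        },
    }


# "my-edge-models" is a made-up bucket name; substitute your own.
body = build_export_body("tflite", "my-edge-models")
with open("request.json", "w") as f:
    json.dump(body, f, indent=2)
```

The resulting request.json can then be passed to curl with `-d @request.json` exactly as in the sample below.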
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export
Request JSON body:
{
  "outputConfig": {
    "exportFormatId": "EXPORT_FORMAT",
    "artifactDestination": {
      "outputUriPrefix": "gs://OUTPUT_BUCKET/"
    }
  }
}

To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export" | Select-Object -Expand Content
The response contains information about specifications as well as the OPERATION_ID.
Response
{
  "name": "projects/PROJECT_NUMBER/locations/LOCATION/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelOperationMetadata",
    "genericMetadata": {
      "createTime": "2020-07-16T20:06:33.679353Z",
      "updateTime": "2020-07-16T20:06:33.679353Z"
    },
    "outputInfo": {
      "artifactOutputUri": "gs://OUTPUT_BUCKET/model-MODEL_ID/EXPORT_FORMAT/YYYY-MM-DDThh:mm:ss.sssZ"
    }
  }
}

You can get the status of the export operation to see when it finishes.
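The artifactOutputUri in the response follows the predictable layout gs://OUTPUT_BUCKET/model-MODEL_ID/EXPORT_FORMAT/TIMESTAMP/. A minimal sketch (hypothetical helper) that splits such a URI into its parts, assuming that layout:

```python
def parse_artifact_uri(uri: str) -> dict:
    """Split an artifactOutputUri of the form
    gs://OUTPUT_BUCKET/model-MODEL_ID/EXPORT_FORMAT/TIMESTAMP/ into parts."""
    if not uri.startswith("gs://"):
        raise ValueError(f"Expected a gs:// URI, got {uri!r}")
    bucket, _, path = uri[len("gs://"):].partition("/")
    # The last three path segments are model directory, format, and timestamp;
    # any extra leading segments are just directories inside the bucket.
    model_dir, export_format, timestamp = path.rstrip("/").split("/")[-3:]
    model_id = model_dir[len("model-"):] if model_dir.startswith("model-") else model_dir
    return {
        "bucket": bucket,
        "model_id": model_id,
        "export_format": export_format,
        "timestamp": timestamp,
    }
```

This can be handy for locating the freshly exported files in Cloud Storage after the operation finishes.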
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
import com.google.api.gax.longrunning.OperationFuture;
import com.google.cloud.aiplatform.v1.ExportModelOperationMetadata;
import com.google.cloud.aiplatform.v1.ExportModelRequest;
import com.google.cloud.aiplatform.v1.ExportModelResponse;
import com.google.cloud.aiplatform.v1.GcsDestination;
import com.google.cloud.aiplatform.v1.ModelName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ExportModelSample {

  public static void main(String[] args)
      throws IOException, InterruptedException, ExecutionException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String gcsDestinationOutputUriPrefix = "gs://YOUR_GCS_SOURCE_BUCKET/path_to_your_destination/";
    String exportFormat = "YOUR_EXPORT_FORMAT";
    exportModelSample(project, modelId, gcsDestinationOutputUriPrefix, exportFormat);
  }

  static void exportModelSample(
      String project, String modelId, String gcsDestinationOutputUriPrefix, String exportFormat)
      throws IOException, InterruptedException, ExecutionException, TimeoutException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      GcsDestination.Builder gcsDestination = GcsDestination.newBuilder();
      gcsDestination.setOutputUriPrefix(gcsDestinationOutputUriPrefix);

      ModelName modelName = ModelName.of(project, location, modelId);
      ExportModelRequest.OutputConfig outputConfig =
          ExportModelRequest.OutputConfig.newBuilder()
              .setExportFormatId(exportFormat)
              .setArtifactDestination(gcsDestination)
              .build();

      OperationFuture<ExportModelResponse, ExportModelOperationMetadata> exportModelResponseFuture =
          modelServiceClient.exportModelAsync(modelName, outputConfig);
      System.out.format(
          "Operation name: %s\n", exportModelResponseFuture.getInitialFuture().get().getName());
      System.out.println("Waiting for operation to finish...");
      ExportModelResponse exportModelResponse = exportModelResponseFuture.get(300, TimeUnit.SECONDS);
      System.out.format("Export Model Response: %s\n", exportModelResponse);
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
/**
 * TODO(developer): Uncomment these variables before running the sample
 * (not necessary if passing values as arguments).
 */
// const modelId = 'YOUR_MODEL_ID';
// const gcsDestinationOutputUriPrefix = 'YOUR_GCS_DEST_OUTPUT_URI_PREFIX';
// eg. "gs://<your-gcs-bucket>/destination_path"
// const exportFormat = 'YOUR_EXPORT_FORMAT';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function exportModel() {
  // Configure the name resources
  const name = `projects/${project}/locations/${location}/models/${modelId}`;
  // Configure the outputConfig resources
  const outputConfig = {
    exportFormatId: exportFormat,
    gcsDestination: {
      outputUriPrefix: gcsDestinationOutputUriPrefix,
    },
  };
  const request = {
    name,
    outputConfig,
  };

  // Export Model request
  const [response] = await modelServiceClient.exportModel(request);
  console.log(`Long running operation: ${response.name}`);

  // Wait for operation to complete
  await response.promise();
  const result = response.result;
  console.log(`Export model response: ${JSON.stringify(result)}`);
}
exportModel();
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
from google.cloud import aiplatform


def export_model_sample(
    project: str,
    model_id: str,
    gcs_destination_output_uri_prefix: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
    timeout: int = 300,
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    output_config = {
        "artifact_destination": {
            "output_uri_prefix": gcs_destination_output_uri_prefix
        },
        # For information about export formats: https://cloud.google.com/ai-platform-unified/docs/export/export-edge-model#aiplatform_export_model_sample-drest
        "export_format_id": "tf-saved-model",
    }
    name = client.model_path(project=project, location=location, model=model_id)
    response = client.export_model(name=name, output_config=output_config)
    print("Long running operation:", response.operation.name)
    print("output_info:", response.metadata.output_info)
    export_model_response = response.result(timeout=timeout)
    print("export_model_response:", export_model_response)
Get status of the operation
Image
Use the following code to get the status of the export operation. This code is the same for all objectives:
REST
Before using any of the request data, make the following replacements:
- LOCATION: Your project's location.
- PROJECT: Your project ID.
- OPERATION_ID: The ID of the target operation. This ID is typically contained in the response to the original request.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/operations/OPERATION_ID
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/operations/OPERATION_ID"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/operations/OPERATION_ID" | Select-Object -Expand Content
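However you send the request, you can check for completion programmatically by inspecting the operation JSON it returns (a sample response follows below). A minimal sketch, assuming you have already saved the response body as a string; the project, bucket, and IDs in the embedded sample are placeholders:

```python
import json

# A saved operation response, shaped like the JSON returned by the GET call above.
# All identifiers here are illustrative placeholders.
response_text = """
{
  "name": "projects/my-project/locations/us-central1/models/123/operations/456",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelOperationMetadata",
    "outputInfo": {
      "artifactOutputUri": "gs://my-bucket/model-123/tf-saved-model/2020-10-12T20:53:40.130785Z"
    }
  },
  "done": true,
  "response": {"@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelResponse"}
}
"""

operation = json.loads(response_text)

# A long-running operation is finished when "done" is true; until then the
# field is absent or false, and you should poll the endpoint again.
if operation.get("done"):
    if "error" in operation:
        raise RuntimeError(f"Export failed: {operation['error']}")
    artifact_uri = operation["metadata"]["outputInfo"]["artifactOutputUri"]
    print("Exported model artifacts:", artifact_uri)
else:
    print("Export still running; poll again later.")
```

The `artifactOutputUri` value is the Cloud Storage directory that holds the exported files described in the next section.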
{
  "name": "projects/PROJECT/locations/LOCATION/models/MODEL_ID/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelOperationMetadata",
    "genericMetadata": {
      "createTime": "2020-10-12T20:53:40.130785Z",
      "updateTime": "2020-10-12T20:53:40.793983Z"
    },
    "outputInfo": {
      "artifactOutputUri": "gs://OUTPUT_BUCKET/model-MODEL_ID/EXPORT_FORMAT/YYYY-MM-DDThh:mm:ss.sssZ"
    }
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelResponse"
  }
}

Output files
Image
Select the tab below for your model format:
TF Lite
The output files are stored in the OUTPUT_BUCKET you specified in the request, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/tflite/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
model.tflite: A file containing a version of the model that is ready to be used with TensorFlow Lite.
Note: A separate label file (dict.txt) is no longer included as one of the output files. This information is now included in the .tflite file itself. For information about extracting this information, see the TensorFlow documentation on TensorFlow Lite metadata.
Edge TPU
The output files are stored in the OUTPUT_BUCKET you specified in the request, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/edgetpu-tflite/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
edgetpu_model.tflite: A file containing a version of the model for TensorFlow Lite, passed through the Edge TPU compiler to be compatible with the Edge TPU.
Note: A separate label file (dict.txt) is no longer included as one of the output files. This information is now included in the .tflite file itself. For information about extracting this information, see the TensorFlow documentation on TensorFlow Lite metadata.
Container
The output files are stored in the OUTPUT_BUCKET you specified in the request, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/tf-saved-model/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
saved_model.pb: A protocol buffer file containing the graph definition and the weights of the model.
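The exported SavedModel can be served from a Docker container with TensorFlow Serving. A minimal sketch of the command line; the local path, the model name mymodel, and the port mapping are illustrative assumptions, not values the export produces:

```shell
# Mount the exported tf-saved-model directory as version 1 of "mymodel"
# and expose TensorFlow Serving's REST port. Assumes Docker and the public
# tensorflow/serving image; replace the host path with your download location.
docker run -t --rm -p 8501:8501 \
  -v "/path/to/export/tf-saved-model:/models/mymodel/1" \
  -e MODEL_NAME=mymodel \
  tensorflow/serving
```

Once the container is running, predictions are served over REST at a URL such as http://localhost:8501/v1/models/mymodel:predict.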
Core ML
The output files are stored in the OUTPUT_BUCKET you specified in the request, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/core-ml/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
dict.txt: A label file. Each line in the label file dict.txt represents a label of the predictions returned by the model, in the same order they were requested.

Sample dict.txt:

roses
daisy
tulips
dandelion
sunflowers
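Because the model's output scores follow the same order as the lines in dict.txt, you can map a prediction index back to a label by reading the file line by line. A minimal sketch; the in-memory file below mirrors the sample labels above, and the score vector is a hypothetical model output, not something produced by the export:

```python
from io import StringIO

# Stand-in for the exported dict.txt; in practice, use open("dict.txt").
dict_txt = StringIO("roses\ndaisy\ntulips\ndandelion\nsunflowers\n")

# One label per line, in the same order as the model's output scores.
labels = [line.strip() for line in dict_txt if line.strip()]

# Hypothetical score vector (one score per label); pick the best label.
scores = [0.02, 0.01, 0.90, 0.03, 0.04]
best_index = max(range(len(scores)), key=scores.__getitem__)
print(labels[best_index])  # → tulips
```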
model.mlmodel: A file specifying a Core ML model.
Tensorflow.js
The output files are stored in the OUTPUT_BUCKET you specified in the request, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/tf-js/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
dict.txt: A label file. Each line in the label file dict.txt represents a label of the predictions returned by the model, in the same order they were requested.

Sample dict.txt:

roses
daisy
tulips
dandelion
sunflowers
group1-shard1of3.bin: A binary file.
group1-shard2of3.bin: A binary file.
group1-shard3of3.bin: A binary file.
model.json: A JSON file representation of a model.

Sample model.json (shortened for clarity):

{
  "format": "graph-model",
  "generatedBy": "2.4.0",
  "convertedBy": "TensorFlow.js Converter v1.7.0",
  "userDefinedMetadata": {
    "signature": {
      "inputs": {
        "image:0": {
          "name": "image:0",
          "dtype": "DT_FLOAT",
          "tensorShape": {
            "dim": [
              { "size": "1" },
              { "size": "224" },
              { "size": "224" },
              { "size": "3" }
            ]
          }
        }
      },
      "outputs": {
        "scores:0": {
          "name": "scores:0",
          "dtype": "DT_FLOAT",
          "tensorShape": {
            "dim": [
              { "size": "1" },
              { "size": "5" }
            ]
          }
        }
      }
    }
  },
  "modelTopology": {
    "node": [
      {
        "name": "image",
        "op": "Placeholder",
        "attr": {
          "dtype": { "type": "DT_FLOAT" },
          "shape": {
            "shape": {
              "dim": [
                { "size": "1" },
                { "size": "224" },
                { "size": "224" },
                { "size": "3" }
              ]
            }
          }
        }
      },
      {
        "name": "mnas_v4_a_1/feature_network/feature_extractor/Mean/reduction_indices",
        "op": "Const",
        "attr": {
          "value": {
            "tensor": {
              "dtype": "DT_INT32",
              "tensorShape": { "dim": [ { "size": "2" } ] }
            }
          },
          "dtype": { "type": "DT_INT32" }
        }
      },
      ...
      {
        "name": "scores",
        "op": "Identity",
        "input": [ "Softmax" ],
        "attr": { "T": { "type": "DT_FLOAT" } }
      }
    ],
    "library": {},
    "versions": {}
  },
  "weightsManifest": [
    {
      "paths": [
        "group1-shard1of3.bin",
        "group1-shard2of3.bin",
        "group1-shard3of3.bin"
      ],
      "weights": [
        {
          "name": "mnas_v4_a_1/feature_network/feature_extractor/Mean/reduction_indices",
          "shape": [ 2 ],
          "dtype": "int32"
        },
        {
          "name": "mnas_v4_a/output/fc/tf_layer/kernel",
          "shape": [ 1280, 5 ],
          "dtype": "float32"
        },
        ...
        {
          "name": "mnas_v4_a_1/feature_network/lead_cell_17/op_0/conv2d_0/Conv2D_weights",
          "shape": [ 1, 1, 320, 1280 ],
          "dtype": "float32"
        },
        {
          "name": "mnas_v4_a_1/feature_network/cell_14/op_0/expand_0/Conv2D_bn_offset",
          "shape": [ 1152 ],
          "dtype": "float32"
        }
      ]
    }
  ]
}
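When loading the exported model in your own tooling, the signature block in model.json tells you the expected input and output tensor shapes. A minimal Python sketch that reads the shapes from a trimmed-down signature like the one above (the embedded JSON is an abbreviated stand-in for the real file):

```python
import json

# Trimmed-down stand-in for model.json, keeping only the signature block.
model_json = json.loads("""
{
  "format": "graph-model",
  "userDefinedMetadata": {
    "signature": {
      "inputs": {
        "image:0": {
          "dtype": "DT_FLOAT",
          "tensorShape": {"dim": [{"size": "1"}, {"size": "224"}, {"size": "224"}, {"size": "3"}]}
        }
      },
      "outputs": {
        "scores:0": {
          "dtype": "DT_FLOAT",
          "tensorShape": {"dim": [{"size": "1"}, {"size": "5"}]}
        }
      }
    }
  }
}
""")


def tensor_shape(spec):
    # Dimension sizes are serialized as strings in the JSON; convert to ints.
    return [int(d["size"]) for d in spec["tensorShape"]["dim"]]


signature = model_json["userDefinedMetadata"]["signature"]
input_shape = tensor_shape(signature["inputs"]["image:0"])
output_shape = tensor_shape(signature["outputs"]["scores:0"])
print(input_shape)   # → [1, 224, 224, 3]
print(output_shape)  # → [1, 5]
```

Here the model expects a single 224×224 RGB image and returns one score per label, matching the five lines of the sample dict.txt.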
Last updated 2025-12-15 UTC.