Vertex AI client library

This document describes the Vertex AI Neural Architecture Search client library.

The Neural Architecture Search client (vertex_nas_cli.py) wraps the job management API and facilitates Neural Architecture Search development. It provides the following subcommands:

  • vertex_nas_cli.py build: builds Neural Architecture Search containers and pushes them to Artifact Registry.
  • vertex_nas_cli.py run_latency_calculator_local: runs the latency calculator locally for a Neural Architecture Search stage-1 search job.
  • vertex_nas_cli.py search_in_local: runs a Neural Architecture Search job locally on your machine with a randomly sampled architecture.
  • vertex_nas_cli.py search: runs a Neural Architecture Search job with stage-1 search and stage-2 training on Google Cloud.
  • vertex_nas_cli.py search_resume: resumes a previous Neural Architecture Search job on Google Cloud.
  • vertex_nas_cli.py list_trials: lists Neural Architecture Search trials for a specific job.
  • vertex_nas_cli.py train: trains a searched model architecture (trial) on Google Cloud.

Build

Run the following command to see the list of arguments supported by vertex_nas_cli.py build:

python3 vertex_nas_cli.py build -h
Note: Instead of building with a Dockerfile, you can also use other tools like bazel to build the trainer and use it with the Neural Architecture Search service.

If --trainer_docker_id is specified, the command builds the trainer Docker image from the Dockerfile specified by the flag --trainer_docker_file. The image is built with the full URI gcr.io/project_id/trainer_docker_id and pushed to Artifact Registry.

If --latency_calculator_docker_id is specified, the command builds the latency calculator Docker image from the Dockerfile specified by the flag --latency_calculator_docker_file. The image is built with the full URI gcr.io/project_id/latency_calculator_docker_id and pushed to Artifact Registry.
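
For example, a build invocation that uses the flags described above might look like the following sketch. The ${...} values are placeholders you must set, and --project_id is assumed here by analogy with the other subcommands on this page:

# Sketch: --project_id is an assumption; the remaining flags are documented above.
python3 vertex_nas_cli.py build \
--project_id=${PROJECT_ID} \
--trainer_docker_id=${TRAINER_DOCKER_ID} \
--trainer_docker_file=${TRAINER_DOCKER_FILE} \
--latency_calculator_docker_id=${LATENCY_CALCULATOR_DOCKER_ID} \
--latency_calculator_docker_file=${LATENCY_CALCULATOR_DOCKER_FILE}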


Run latency calculator local

Run the following command to see the list of arguments supported by vertex_nas_cli.py run_latency_calculator_local:

python3 vertex_nas_cli.py run_latency_calculator_local -h

Search in local

Run the following command to see the list of arguments supported by vertex_nas_cli.py search_in_local:

python3 vertex_nas_cli.py search_in_local -h

You need to specify either --search_space_module or --prebuilt_search_space so that vertex_nas_cli.py internally generates a random model architecture to use.

This command runs the Docker image gcr.io/project_id/trainer_docker_id:latest on your local machine with a randomly sampled architecture.

You can pass through the flags to be used by the container after --search_docker_flags. For example, you can pass through training_data_path and validation_data_path to the container:

python3 vertex_nas_cli.py search_in_local \
--project_id=${PROJECT_ID} \
--trainer_docker_id=${TRAINER_DOCKER_ID} \
--prebuilt_search_space=spinenet \
--use_prebuilt_trainer=True \
--local_output_dir=${JOB_DIR} \
--search_docker_flags \
training_data_path=/test_data/test-coco.tfrecord \
validation_data_path=/test_data/test-coco.tfrecord \
model=retinanet

Search

Run the following command to see the list of arguments supported by vertex_nas_cli.py search:

python3 vertex_nas_cli.py search -h

You need to specify either --search_space_module or --prebuilt_search_space so that vertex_nas_cli.py internally creates search_space_spec.

The machines used to run Neural Architecture Search jobs can be specified by --accelerator_type. For more information, or to customize for your own needs such as using more GPUs, see add_machine_configurations.

Use the flags with the prefix train_ to set the stage-2 training-related parameters.
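
For illustration, a search launch might look like the following sketch. It reuses flag names that appear elsewhere on this page (--project_id, --region, --job_name, --trainer_docker_id, --prebuilt_search_space, --accelerator_type, --root_output_dir, and the trial-count flags from the search_resume example), so treat it as an assumption-laden template rather than a definitive invocation:

# Sketch: flag names are reused from other examples on this page; verify with "search -h".
python3 vertex_nas_cli.py search \
--project_id=${PROJECT_ID} \
--region=${REGION} \
--job_name="${JOB_NAME}" \
--trainer_docker_id=${TRAINER_DOCKER_ID} \
--prebuilt_search_space=spinenet \
--accelerator_type=${ACCELERATOR_TYPE} \
--root_output_dir=${GCS_ROOT_DIR} \
--max_nas_trial=2 \
--max_parallel_nas_trial=2 \
--max_failed_nas_trial=2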

Search resume

Run the following command to see the list of arguments supported by vertex_nas_cli.py search_resume:

python3 vertex_nas_cli.py search_resume -h

You can resume a previously run search job by passing previous_nas_job_id and optionally previous_latency_job_id. The previous_latency_job_id flag is needed only if your previous search job involved a Google Cloud latency job. If instead of a Google Cloud latency job you used an on-premises latency calculator, then you have to run that on-premises latency calculator job separately again. The previous search job should not itself be a resume job. The region for the search resume job should be the same as for the previous search job. An example search_resume command looks like the following:

python3 vertex_nas_cli.py search_resume \
--project_id=${PROJECT} \
--region=${REGION} \
--job_name="${JOB_NAME}" \
--previous_nas_job_id=${previous_nas_job_id} \
--previous_latency_job_id=${previous_latency_job_id} \
--root_output_dir=${GCS_ROOT_DIR} \
--max_nas_trial=2 \
--max_parallel_nas_trial=2 \
--max_failed_nas_trial=2

List trials

Run the following command to see the list of arguments supported by vertex_nas_cli.py list_trials:

python3 vertex_nas_cli.py list_trials -h
Note: The job_id flag is different from the job_name flag. The job_id is a unique numeric ID assigned to the Vertex AI Neural Architecture Search job.
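
For example, to list the trials of a job, an invocation might look like the following sketch; it assumes list_trials accepts the --project_id and --region flags used by the other subcommands, together with the job_id flag described in the note above:

# Sketch: --project_id and --region are assumptions by analogy with the other subcommands.
python3 vertex_nas_cli.py list_trials \
--project_id=${PROJECT_ID} \
--region=${REGION} \
--job_id=${JOB_ID}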

Train

Run the following command to see the list of arguments supported by vertex_nas_cli.py train:

python3 vertex_nas_cli.py train -h

Proxy-task variance measurement

Run the following command to see the list of arguments supported by vertex_nas_cli.py measure_proxy_task_variance:

python3 vertex_nas_cli.py measure_proxy_task_variance -h

Proxy-task model selection

Run the following command to see the list of arguments supported by vertex_nas_cli.py select_proxy_task_models:

python3 vertex_nas_cli.py select_proxy_task_models -h

Proxy-task search

Run the following command to see the list of arguments supported by vertex_nas_cli.py search_proxy_task:

python3 vertex_nas_cli.py search_proxy_task -h
