Configure compute resources for inference

Vertex AI allocates nodes to handle online and batch inferences. When you deploy a custom-trained model or AutoML model to an Endpoint resource to serve online inferences, or when you request batch inferences, you can customize the type of virtual machine that the inference service uses for these nodes. You can optionally configure inference nodes to use GPUs.

Machine types differ in a few ways:

  • Number of virtual CPUs (vCPUs) per node
  • Amount of memory per node
  • Pricing

By selecting a machine type with more computing resources, you can serve inferences with lower latency or handle more inference requests at the same time.

Manage cost and availability

To help manage costs or ensure availability of VM resources, Vertex AI provides the following:

  • To help ensure that you pay only for the computing resources that you need, you can use Vertex AI Inference autoscaling. For more information, see Scale inference nodes for Vertex AI Inference.

  • To make sure that VM resources are available when your inference jobs need them, you can use Compute Engine reservations. Reservations provide a high level of assurance in obtaining capacity for Compute Engine resources. For more information, see Use reservations with inference.

  • To reduce the cost of running your inference jobs, you can use Spot VMs. Spot VMs are virtual machine (VM) instances that are excess Compute Engine capacity. Spot VMs have significant discounts, but Compute Engine might preemptively stop or delete Spot VMs to reclaim the capacity at any time. For more information, see Use Spot VMs with inference.

Where to specify compute resources

Online inference

If you want to use a custom-trained model or an AutoML tabular model to serve online inferences, you must specify a machine type when you deploy the Model resource as a DeployedModel to an Endpoint. For other types of AutoML models, Vertex AI configures the machine types automatically.

Specify the machine type (and, optionally, GPU configuration) in the dedicatedResources.machineSpec field of your DeployedModel.
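
For illustration, the following Vertex AI SDK for Python sketch sets an explicit machine type when deploying a model to an endpoint. The project, region, resource IDs, and machine type are hypothetical placeholders.

    from google.cloud import aiplatform

    # Hypothetical project, region, and resource IDs.
    aiplatform.init(project="my-project", location="us-central1")

    model = aiplatform.Model("MODEL_ID")
    endpoint = aiplatform.Endpoint("ENDPOINT_ID")

    # machine_type populates dedicatedResources.machineSpec on the DeployedModel.
    endpoint.deploy(
        model=model,
        machine_type="n2-standard-4",
        min_replica_count=1,
        max_replica_count=2,
    )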

Learn how to deploy each model type:

Batch inference

If you want to get batch inferences from a custom-trained model or an AutoML tabular model, you must specify a machine type when you create a BatchPredictionJob resource. Specify the machine type (and, optionally, GPU configuration) in the dedicatedResources.machineSpec field of your BatchPredictionJob.
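
As a hedged example, the following Vertex AI SDK for Python sketch sets the machine type and an optional GPU on a batch inference job; the Cloud Storage paths, model ID, and project are hypothetical.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    model = aiplatform.Model("MODEL_ID")

    # machine_type, accelerator_type, and accelerator_count populate
    # dedicatedResources.machineSpec on the BatchPredictionJob.
    batch_job = model.batch_predict(
        job_display_name="example-batch-job",
        gcs_source="gs://my-bucket/input.jsonl",
        gcs_destination_prefix="gs://my-bucket/output/",
        machine_type="n1-standard-4",
        accelerator_type="NVIDIA_TESLA_T4",
        accelerator_count=1,
        starting_replica_count=1,
        max_replica_count=4,
    )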

Machine types

The following tables compare the available machine types for serving inferences from custom-trained models and AutoML tabular models.

For information about TPU accelerator types, see Deploy a model to Cloud TPU VMs.

Machine types: CPU

E2 Series

Name | vCPUs | Memory (GB)
e2-standard-2 | 2 | 8
e2-standard-4 | 4 | 16
e2-standard-8 | 8 | 32
e2-standard-16 | 16 | 64
e2-standard-32 | 32 | 128
e2-highmem-2 | 2 | 16
e2-highmem-4 | 4 | 32
e2-highmem-8 | 8 | 64
e2-highmem-16 | 16 | 128
e2-highcpu-2 | 2 | 2
e2-highcpu-4 | 4 | 4
e2-highcpu-8 | 8 | 8
e2-highcpu-16 | 16 | 16
e2-highcpu-32 | 32 | 32

N1 Series

Name | vCPUs | Memory (GB)
n1-standard-2 | 2 | 7.5
n1-standard-4 | 4 | 15
n1-standard-8 | 8 | 30
n1-standard-16 | 16 | 60
n1-standard-32 | 32 | 120
n1-highmem-2 | 2 | 13
n1-highmem-4 | 4 | 26
n1-highmem-8 | 8 | 52
n1-highmem-16 | 16 | 104
n1-highmem-32 | 32 | 208
n1-highcpu-4 | 4 | 3.6
n1-highcpu-8 | 8 | 7.2
n1-highcpu-16 | 16 | 14.4
n1-highcpu-32 | 32 | 28.8

N2 Series

Name | vCPUs | Memory (GB)
n2-standard-2 | 2 | 8
n2-standard-4 | 4 | 16
n2-standard-8 | 8 | 32
n2-standard-16 | 16 | 64
n2-standard-32 | 32 | 128
n2-standard-48 | 48 | 192
n2-standard-64 | 64 | 256
n2-standard-80 | 80 | 320
n2-standard-96 | 96 | 384
n2-standard-128 | 128 | 512
n2-highmem-2 | 2 | 16
n2-highmem-4 | 4 | 32
n2-highmem-8 | 8 | 64
n2-highmem-16 | 16 | 128
n2-highmem-32 | 32 | 256
n2-highmem-48 | 48 | 384
n2-highmem-64 | 64 | 512
n2-highmem-80 | 80 | 640
n2-highmem-96 | 96 | 768
n2-highmem-128 | 128 | 864
n2-highcpu-2 | 2 | 2
n2-highcpu-4 | 4 | 4
n2-highcpu-8 | 8 | 8
n2-highcpu-16 | 16 | 16
n2-highcpu-32 | 32 | 32
n2-highcpu-48 | 48 | 48
n2-highcpu-64 | 64 | 64
n2-highcpu-80 | 80 | 80
n2-highcpu-96 | 96 | 96

N2D Series

Name | vCPUs | Memory (GB)
n2d-standard-2 | 2 | 8
n2d-standard-4 | 4 | 16
n2d-standard-8 | 8 | 32
n2d-standard-16 | 16 | 64
n2d-standard-32 | 32 | 128
n2d-standard-48 | 48 | 192
n2d-standard-64 | 64 | 256
n2d-standard-80 | 80 | 320
n2d-standard-96 | 96 | 384
n2d-standard-128 | 128 | 512
n2d-standard-224 | 224 | 896
n2d-highmem-2 | 2 | 16
n2d-highmem-4 | 4 | 32
n2d-highmem-8 | 8 | 64
n2d-highmem-16 | 16 | 128
n2d-highmem-32 | 32 | 256
n2d-highmem-48 | 48 | 384
n2d-highmem-64 | 64 | 512
n2d-highmem-80 | 80 | 640
n2d-highmem-96 | 96 | 768
n2d-highcpu-2 | 2 | 2
n2d-highcpu-4 | 4 | 4
n2d-highcpu-8 | 8 | 8
n2d-highcpu-16 | 16 | 16
n2d-highcpu-32 | 32 | 32
n2d-highcpu-48 | 48 | 48
n2d-highcpu-64 | 64 | 64
n2d-highcpu-80 | 80 | 80
n2d-highcpu-96 | 96 | 96
n2d-highcpu-128 | 128 | 128
n2d-highcpu-224 | 224 | 224

C2 Series

Name | vCPUs | Memory (GB)
c2-standard-4 | 4 | 16
c2-standard-8 | 8 | 32
c2-standard-16 | 16 | 64
c2-standard-30 | 30 | 120
c2-standard-60 | 60 | 240

C2D Series

Name | vCPUs | Memory (GB)
c2d-standard-2 | 2 | 8
c2d-standard-4 | 4 | 16
c2d-standard-8 | 8 | 32
c2d-standard-16 | 16 | 64
c2d-standard-32 | 32 | 128
c2d-standard-56 | 56 | 224
c2d-standard-112 | 112 | 448
c2d-highcpu-2 | 2 | 4
c2d-highcpu-4 | 4 | 8
c2d-highcpu-8 | 8 | 16
c2d-highcpu-16 | 16 | 32
c2d-highcpu-32 | 32 | 64
c2d-highcpu-56 | 56 | 112
c2d-highcpu-112 | 112 | 224
c2d-highmem-2 | 2 | 16
c2d-highmem-4 | 4 | 32
c2d-highmem-8 | 8 | 64
c2d-highmem-16 | 16 | 128
c2d-highmem-32 | 32 | 256
c2d-highmem-56 | 56 | 448
c2d-highmem-112 | 112 | 896

C3 Series

Name | vCPUs | Memory (GB)
c3-highcpu-4 | 4 | 8
c3-highcpu-8 | 8 | 16
c3-highcpu-22 | 22 | 44
c3-highcpu-44 | 44 | 88
c3-highcpu-88 | 88 | 176
c3-highcpu-176 | 176 | 352

Machine types: GPU

A2 Series

Name | vCPUs | Memory (GB) | GPUs (NVIDIA A100)
a2-highgpu-1g | 12 | 85 | 1 (A100 40GB)
a2-highgpu-2g | 24 | 170 | 2 (A100 40GB)
a2-highgpu-4g | 48 | 340 | 4 (A100 40GB)
a2-highgpu-8g | 96 | 680 | 8 (A100 40GB)
a2-megagpu-16g | 96 | 1360 | 16 (A100 40GB)
a2-ultragpu-1g | 12 | 170 | 1 (A100 80GB)
a2-ultragpu-2g | 24 | 340 | 2 (A100 80GB)
a2-ultragpu-4g | 48 | 680 | 4 (A100 80GB)
a2-ultragpu-8g | 96 | 1360 | 8 (A100 80GB)

A3 Series

Name | vCPUs | Memory (GB) | GPUs (NVIDIA H100 or H200)
a3-highgpu-1g | 26 | 234 | 1 (H100 80GB)
a3-highgpu-2g | 52 | 468 | 2 (H100 80GB)
a3-highgpu-4g | 104 | 936 | 4 (H100 80GB)
a3-highgpu-8g | 208 | 1872 | 8 (H100 80GB)
a3-edgegpu-8g | 208 | 1872 | 8 (H100 80GB)
a3-ultragpu-8g | 224 | 2952 | 8 (H200 141GB)

A4 Series

Name | vCPUs | Memory (GB) | GPUs (NVIDIA B200)
a4-highgpu-8g | 224 | 3,968 | 8

A4X Series

Name | vCPUs | Memory (GB) | GPUs (NVIDIA GB200)
a4x-highgpu-4g | 140 | 884 | 4

G2 Series

Name | vCPUs | Memory (GB) | GPUs (NVIDIA L4)
g2-standard-4 | 4 | 16 | 1
g2-standard-8 | 8 | 32 | 1
g2-standard-12 | 12 | 48 | 1
g2-standard-16 | 16 | 64 | 1
g2-standard-24 | 24 | 96 | 2
g2-standard-32 | 32 | 128 | 1
g2-standard-48 | 48 | 192 | 4
g2-standard-96 | 96 | 384 | 8

G4 Series

Name | vCPUs | Memory (GB) | GPUs (NVIDIA RTX PRO 6000)
g4-standard-48 | 48 | 180 | 1
g4-standard-96 | 96 | 360 | 2
g4-standard-192 | 192 | 720 | 4
g4-standard-384 | 384 | 1440 | 8

Learn about pricing for each machine type. Read more about the detailed specifications of these machine types in the Compute Engine documentation about machine types.

Find the ideal machine type

Online inference

To find the ideal machine type for your use case, we recommend loading your model on multiple machine types and measuring characteristics such as the latency, cost, concurrency, and throughput.

One way to do this is to run this notebook on multiple machine types and compare the results to find the one that works best for you.
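
If you prefer a quick script instead of the notebook, the following sketch sends concurrent requests to an already deployed endpoint and reports rough latency and throughput figures. The project, endpoint ID, and request payload are hypothetical; adapt the instances to your model's input format.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    endpoint = aiplatform.Endpoint("ENDPOINT_ID")  # hypothetical endpoint ID

    instances = [{"feature_1": 1.0, "feature_2": "a"}]  # hypothetical payload

    def timed_request(_):
        start = time.perf_counter()
        endpoint.predict(instances=instances)
        return time.perf_counter() - start

    # Send 100 requests through 8 concurrent workers and summarize latency.
    with ThreadPoolExecutor(max_workers=8) as pool:
        latencies = sorted(pool.map(timed_request, range(100)))

    p50 = latencies[len(latencies) // 2]
    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"p50={p50:.3f}s  p95={p95:.3f}s  approx throughput={8 / p50:.1f} req/s")

Repeating the run against deployments on different machine types gives you comparable numbers to weigh against the per-hour price of each type.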

Vertex AI reserves approximately 1 vCPU on each replica for running system processes. This means that running the notebook on a single-core machine type would be comparable to using a 2-core machine type for serving inferences.

When considering inference costs, remember that although larger machines cost more, they can lower overall cost because fewer replicas are required to serve the same workload. This is particularly evident for GPUs, which tend to cost more per hour, but can both provide lower latency and cost less overall.
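
As a worked example with hypothetical numbers: suppose a CPU machine type costs $0.20 per node-hour and sustains 50 requests per second, while a GPU machine type costs $1.50 per node-hour and sustains 600 requests per second. Serving 1,200 requests per second would take 24 CPU replicas (24 × $0.20 = $4.80 per hour) but only 2 GPU replicas (2 × $1.50 = $3.00 per hour), so the pricier machine type is cheaper overall for that workload.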

Batch inference

For more information, see Choose machine type and replica count.

Optional GPU accelerators

Some configurations, such as the A2 series and G2 series, have a fixed number of GPUs built in.

The A4X (a4x-highgpu-4g) series requires a minimum replica count of 18. This machine is purchased per rack, and has a minimum of 18 VMs.

Other configurations, such as the N1 series, let you optionally add GPUs to accelerate each inference node.

Note: GPUs are not recommended for use with AutoML tabular models. For this type of model, GPUs don't provide a worthwhile performance benefit. Specifying GPUs during AutoML model deployment isn't supported in the Google Cloud console.

To add optional GPU accelerators, you must account for several requirements:

The following table shows the optional GPUs that are available for online inference and how many of each type of GPU you can use with each Compute Engine machine type:

Valid numbers of GPUs for each machine type
Machine type | NVIDIA Tesla P100 | NVIDIA Tesla V100 | NVIDIA Tesla P4 | NVIDIA Tesla T4
n1-standard-2 | 1, 2, 4 | 1, 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-standard-4 | 1, 2, 4 | 1, 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-standard-8 | 1, 2, 4 | 1, 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-standard-16 | 1, 2, 4 | 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-standard-32 | 2, 4 | 4, 8 | 2, 4 | 2, 4
n1-highmem-2 | 1, 2, 4 | 1, 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-highmem-4 | 1, 2, 4 | 1, 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-highmem-8 | 1, 2, 4 | 1, 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-highmem-16 | 1, 2, 4 | 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-highmem-32 | 2, 4 | 4, 8 | 2, 4 | 2, 4
n1-highcpu-2 | 1, 2, 4 | 1, 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-highcpu-4 | 1, 2, 4 | 1, 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-highcpu-8 | 1, 2, 4 | 1, 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-highcpu-16 | 1, 2, 4 | 2, 4, 8 | 1, 2, 4 | 1, 2, 4
n1-highcpu-32 | 2, 4 | 4, 8 | 2, 4 | 2, 4

Optional GPUs incur additional costs.
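
As a hedged sketch of attaching optional GPUs, the following Vertex AI SDK for Python call deploys onto n1-standard-8 nodes with two NVIDIA Tesla T4 GPUs each, which is a valid combination according to the table above. The project and resource IDs are hypothetical.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    endpoint = aiplatform.Endpoint("ENDPOINT_ID")

    # n1-standard-8 accepts 1, 2, or 4 T4 GPUs per node (see table above).
    endpoint.deploy(
        model=aiplatform.Model("MODEL_ID"),
        machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4",
        accelerator_count=2,
        min_replica_count=1,
        max_replica_count=3,
    )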

Coschedule multiple replicas on a single VM

Preview

This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

To optimize the cost of your deployment, you can deploy multiple replicas of the same model onto a single VM equipped with multiple GPU hardware accelerators, such as the a3-highgpu-8g VM, which has eight NVIDIA H100 GPUs. Each model replica can be assigned to one or more GPUs.

For smaller workloads, you can also partition a single GPU into multiple smaller instances using NVIDIA multi-instance GPUs (MIG). This lets you assign resources at a sub-GPU level, maximizing the utilization of each accelerator. For more information on multi-instance GPUs, see the NVIDIA multi-instance GPU user guide.

Both of these capabilities are designed to provide more efficient resource utilization and greater cost-effectiveness for your serving workloads.

Limitations

This feature is subject to the following limitations:

  • All of the coscheduled model replicas must be the same model version.
  • Using deployment resource pools to share resources across deployments isn't supported.

Supported machine types

The following machine types are supported. Note that, for machine types that only have one GPU, no coscheduling is needed.

Machine type | Coschedule | Coschedule + MIG
a2-highgpu-1g | N/A | Yes
a2-highgpu-2g | Yes | Yes
a2-highgpu-4g | Yes | Yes
a2-highgpu-8g | Yes | Yes
a2-highgpu-16g | Yes | Yes
a2-ultragpu-1g | N/A | Yes
a2-ultragpu-2g | Yes | Yes
a2-ultragpu-4g | Yes | Yes
a2-ultragpu-8g | Yes | Yes
a3-edgegpu-8g | Yes | Yes
a3-highgpu-1g | N/A | Yes
a3-highgpu-2g | Yes | Yes
a3-highgpu-4g | Yes | Yes
a3-highgpu-8g | Yes | Yes
a3-megagpu-8g | Yes | Yes
a3-ultragpu-8g | Yes | Yes
a4-highgpu-8g | Yes | Yes
a4x-highgpu-8g | Yes | Yes
g4-standard-48 | N/A | Yes
g4-standard-96 | Yes | Yes
g4-standard-192 | Yes | Yes
g4-standard-384 | Yes | Yes

Prerequisites

Before using this feature, read Deploy a model by using the gcloud CLI or Vertex AI API.

Deploying the model replicas

The following samples demonstrate how to deploy coscheduled model replicas.

Note: In this preview, NVIDIA MIG is supported for the Vertex AI API and REST API, but not for the Google Cloud CLI.

Note: When MIG is enabled, you can't use GPU sharing, because each replica is limited to consuming MIG in a single GPU instance. Therefore, the accelerator_count must be set to 1 when a gpu_partition_size is specified.

gcloud

Use the following gcloud command to deploy coscheduled model replicas on a VM:

gcloud ai endpoints deploy-model ENDPOINT_ID \
  --region=LOCATION_ID \
  --model=MODEL_ID \
  --display-name=DEPLOYED_MODEL_NAME \
  --min-replica-count=MIN_REPLICA_COUNT \
  --max-replica-count=MAX_REPLICA_COUNT \
  --machine-type=MACHINE_TYPE \
  --accelerator=type=ACC_TYPE,count=ACC_COUNT \
  --traffic-split=0=100

Replace the following:

  • ENDPOINT_ID: The ID for the endpoint.
  • LOCATION_ID: The region where you are using Vertex AI.
  • MODEL_ID: The model ID for the model to be deployed.
  • DEPLOYED_MODEL_NAME: A name for the DeployedModel. You can use the display name of the Model for the DeployedModel as well.
  • MIN_REPLICA_COUNT: The minimum number of nodes for this deployment. The node count can be increased or decreased as required by the inference load, up to the maximum number of nodes and never fewer than this number of nodes.
  • MAX_REPLICA_COUNT: The maximum number of nodes for this deployment. The node count can be increased or decreased as required by the inference load, up to this number of nodes and never fewer than the minimum number of nodes. One VM is required for every two replicas to be deployed.
  • MACHINE_TYPE: The type of VM to use for this deployment. Must be from the accelerator-optimized family.
  • ACC_TYPE: The GPU accelerator type. Should correspond to the MACHINE_TYPE. For a3-highgpu-8g, use nvidia-h100-80gb.
  • ACC_COUNT: The number of GPUs that each replica can use. Must be at least 1 and no more than the total number of GPUs in the machine.
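
For example, on an a3-highgpu-8g VM, setting ACC_COUNT to 4 lets two coscheduled replicas share the VM's eight H100 GPUs; with MIN_REPLICA_COUNT=2 and MAX_REPLICA_COUNT=4, the deployment then scales between one and two VMs.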

REST

Before using any of the request data, make the following replacements:

HTTP method and URL:

POST https://LOCATION_ID-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION_ID/endpoints/ENDPOINT_ID:deployModel

Request JSON body:

{  "deployedModel": {    "model": "projects/PROJECT_NUMBER/locations/LOCATION_ID/models/MODEL_ID",    "displayName": "DEPLOYED_MODEL_NAME",    "dedicatedResources": {      "machineSpec": {        "machineType": "MACHINE_TYPE",        "acceleratorType": "ACC_TYPE",        "gpuPartitionSize": "GPU_PARTITION_SIZE",        "acceleratorCount": "ACC_COUNT""      },      "minReplicaCount":MIN_REPLICA_COUNT,      "maxReplicaCount":MAX_REPLICA_COUNT,      "autoscalingMetricSpecs": [        {          "metricName": "aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle",          "target": 70        }      ]    }  }}

To send your request, expand one of these options:

curl (Linux, macOS, or Cloud Shell)

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION_ID-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION_ID/endpoints/ENDPOINT_ID:deployModel"

PowerShell (Windows)

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION_ID-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION_ID/endpoints/ENDPOINT_ID:deployModel" | Select-Object -Expand Content

You should receive a successful status code (2xx) and an empty response.

Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

Use the following Python command to deploy coscheduled model replicas on a VM.

endpoint.deploy(
    model=MODEL,
    machine_type=MACHINE_TYPE,
    min_replica_count=MIN_REPLICA_COUNT,
    max_replica_count=MAX_REPLICA_COUNT,
    accelerator_type=ACC_TYPE,
    gpu_partition_size=GPU_PARTITION_SIZE,
    accelerator_count=ACC_COUNT,
)

Replace the following:

  • MODEL: The model object returned by the following API call:

    model=aiplatform.Model(model_name=model_name)
  • MACHINE_TYPE: The type of VM to use for this deployment. Must be from the accelerator-optimized family. In the preview, only a3-highgpu-8g is supported.

  • MIN_REPLICA_COUNT: The minimum number of nodes for this deployment. The node count can be increased or decreased as required by the inference load, up to the maximum number of nodes and never fewer than this number of nodes.

  • MAX_REPLICA_COUNT: The maximum number of nodes for this deployment. The node count can be increased or decreased as required by the inference load, up to this number of nodes and never fewer than the minimum number of nodes.

  • ACC_TYPE: The GPU accelerator type. Should correspond to the GPU_PARTITION_SIZE.

  • GPU_PARTITION_SIZE: The GPU partition size. For example, "1g.10gb". For a comprehensive list of supported partition sizes for each GPU type, see Multi-instance GPU partitions.

  • ACC_COUNT: The number of GPUs that each replica can use. Must be at least 1 and no more than the total number of GPUs in the machine. For a3-highgpu-8g, specify between 1 and 8.
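
Putting these pieces together, a self-contained sketch that coschedules MIG-partitioned replicas might look like the following. The project, endpoint, and model IDs are hypothetical, and the accelerator type string assumes the SDK's AcceleratorType enum naming; per the note above, accelerator_count is 1 because a partition size is specified.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    model = aiplatform.Model(model_name="MODEL_ID")
    endpoint = aiplatform.Endpoint("ENDPOINT_ID")

    # Each replica consumes one 1g.10gb MIG slice of an H100 GPU, so many
    # replicas can be coscheduled onto a single a3-highgpu-8g VM.
    endpoint.deploy(
        model=model,
        machine_type="a3-highgpu-8g",
        accelerator_type="NVIDIA_H100_80GB",  # assumed enum-style type string
        gpu_partition_size="1g.10gb",
        accelerator_count=1,
        min_replica_count=2,
        max_replica_count=8,
    )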

Monitor VM usage

Use the following instructions to monitor the actual machine count for yourdeployed replicas in the Metrics Explorer.

  1. In the Google Cloud console, go to theMetrics Explorer page.

    Go to Metrics Explorer

  2. Select the project you want to view metrics for.

  3. From the Metric drop-down menu, click Select a metric.

  4. In the Filter by resource or metric name search bar, enter Vertex AI Endpoint.

  5. Select the Vertex AI Endpoint > Prediction metric category. Under Active metrics, select Machine count.

  6. ClickApply.

Billing

Billing is based on the number of VMs that are used, not the number of GPUs. You can monitor your VM usage by using Metrics Explorer.

High availability

Because more than one replica is coscheduled on the same VM, Vertex AI Inference can't spread your deployment across multiple VMs, and therefore multiple zones, until your replica count exceeds the number of replicas that fit on a single VM node. For high availability purposes, Google recommends deploying on at least two nodes (VMs).

What's next
