Accelerator-optimized machine family

This document describes the accelerator-optimized machine family, which provides you with Compute Engine instances that have pre-attached NVIDIA GPUs. These instances are designed specifically for artificial intelligence (AI), machine learning (ML), high performance computing (HPC), and graphics-intensive applications.

The accelerator-optimized machine family is available in the following machine series: A4X Max, A4X, A4, A3, A2, G4, and G2. Each machine type within a series has a specific model and number of NVIDIA GPUs attached. You can also attach some GPU models to N1 general-purpose machine types.

Recommended machine series by workload type

The following list provides the recommended machine series based on your GPU workload type:

  • Pre-training models: A4X Max, A4X, A4, A3 Ultra, A3 Mega, A3 High, and A2. To identify the best fit, see Recommendations for pre-training models in the AI Hypercomputer documentation.
  • Fine-tuning models: A4X Max, A4X, A4, A3 Ultra, A3 Mega, A3 High, A2, and G4. To identify the best fit, see Recommendations for fine-tuning models in the AI Hypercomputer documentation.
  • Serving inference: A4X Max, A4X, A4, A3 Ultra, A3 Mega, A3 High, A3 Edge, A2, and G4. To identify the best fit, see Recommendations for serving inference in the AI Hypercomputer documentation.
  • Graphics-intensive workloads: G4, G2, and N1+T4.
  • High performance computing: any accelerator-optimized machine series works well. The best fit depends on the amount of computation that must be offloaded to the GPU. For more information, see Recommendations for HPC in the AI Hypercomputer documentation.

Pricing and consumption options

Consumption options refer to the ways that you get and use compute resources. Google Cloud bills accelerator-optimized machine types for their attached GPUs, predefined vCPUs, memory, and bundled Local SSD (if applicable). For more pricing information for accelerator-optimized instances, see the Accelerator-optimized machine type family section on the VM instance pricing page.

Discounts for accelerator-optimized instances vary based on the consumption option you choose:

  • On-demand: You can receive committed use discounts (CUDs) for some resources by purchasing resource-based commitments. However, GPUs and Local SSD disks that you use with the on-demand option are ineligible for CUDs. To receive CUDs for GPUs and Local SSD disks, use one of the reservation options instead.
  • Spot: Spot VMs automatically receive discounts through Spot VMs pricing.
  • Flex-start: Instances provisioned by using the Flex-start consumption option automatically receive discounts through Dynamic Workload Scheduler pricing.
  • Reservations: You can receive CUDs for your accelerator-optimized machine type resources by purchasing resource-based commitments. Commitments for GPUs and Local SSD disks require attached reservations for those resources.

Consumption option availability by machine type

The following table summarizes the availability of each consumption option by machine type. For more information about how to choose a consumption option, see Choose a consumption model in the AI Hypercomputer documentation.

Note: Before you create and submit a future reservation request for a supported GPU machine type, you must contact your account team or the sales team to discuss your request. Otherwise, Google Cloud is likely to decline it.

Machine type | On-demand | Spot | Flex-start | On-demand reservations | Future reservations | Future reservations in calendar mode | Future reservations in AI Hypercomputer

The A4X Max and A4X machine series

Caution: The Compute Engine Service Level Agreement (SLA) doesn't apply to the A4X Max and A4X machine series.

The A4X Max and A4X machine series run on an exascale platform based on NVIDIA's rack-scale architecture and are optimized for compute- and memory-intensive, network-bound ML training and HPC workloads. A4X Max and A4X differ primarily in their GPU and networking components. A4X Max also offers bare metal instances, which provide direct access to the host server's CPU and memory, without Compute Engine's hypervisor in the middle.

All machine types in the A4X Max and A4X series have two sockets with NVIDIA Grace™ CPUs with Arm® Neoverse™ V2 cores. These CPUs connect to four GPUs with fast chip-to-chip NVLink-C2C communication.

NVLink domain

Both the A4X Max and A4X machine series are built on NVIDIA's NVL72 rack-scale architecture, which uses NVLink domains to enable large-scale, high-performance GPU computing. An NVLink domain is a group of interconnected NVIDIA NVSwitch chips and the GPUs that connect to them, forming a high-speed network fabric that allows for direct and fast communication between GPUs. For A4X Max and A4X, a single NVL72 NVLink domain is composed of 18 instances and 72 GPUs.

A4X Max and A4X comparison

The following list provides a detailed comparison of the A4X Max and A4X machine types:

GPU acceleration
  • A4X Max: instances have NVIDIA GB300 Ultra Superchips automatically attached. These Superchips feature NVIDIA B300 GPUs, offering up to 20 TB of total GPU memory per NVL72 domain, which provides roughly 279 GB per GPU.
  • A4X: instances have NVIDIA GB200 Superchips automatically attached. These Superchips have NVIDIA B200 GPUs and offer 186 GB of memory per GPU.

Enhanced networking with RoCE
  • A4X Max: RDMA over Converged Ethernet (RoCE) increases network performance by combining NVIDIA ConnectX-8 (CX-8) SuperNICs and Google's datacenter-wide network, which features eight-way rail-alignment. This configuration delivers up to 3,200 Gbps of bandwidth, optimized for demanding large-scale training and HPC tasks. For general purpose networking, each instance also has up to 400 Gbps of bandwidth.
  • A4X: RoCE increases network performance by combining NVIDIA ConnectX-7 (CX-7) NICs and Google's datacenter-wide network, which features four-way rail-alignment. This architecture provides up to 1,600 Gbps of bandwidth, enabling high-throughput, low-latency communication for large-scale distributed workloads. For general purpose networking, each instance also has up to 400 Gbps of bandwidth.

Performance
  • A4X Max: the NVIDIA GB300 Ultra Superchips provide 15 PetaFLOPS of dense FP4 performance. For large-scale FP4 inference, the GB300 Ultra Superchips are expected to deliver 20-40% higher performance than the GB200 Superchips.
  • A4X: the NVIDIA GB200 Superchips provide 10 PetaFLOPS of dense FP4 performance.

Bare metal and VM support
  • A4X Max: bare metal instances only.
  • A4X: VM instances only.

OS support
  • A4X Max: instances support a range of Linux OS images. However, because bare metal instances use the IDPF network driver, your OS image must support IDPF. If you want to use an OS image that is available on Compute Engine, see OS images that support IDPF.
  • A4X: instances support a range of Linux OS images. For a complete list of supported operating systems on Compute Engine, see OS support for GPUs.

CPU platform
Both A4X Max and A4X machine types use the NVIDIA Grace CPU platform with Arm® Neoverse™ V2 cores. For more details about the platform, see CPU platforms.

NVLink scalability
For both A4X Max and A4X machine types, multi-node NVLink scales up to 72 GPUs in a single domain and provides GPU NVLink bandwidth of 1,800 GBps, bidirectionally, per GPU.

Disk support
A4X Max and A4X instances support Local SSD for fast scratch disks, which is useful for feeding data into GPUs while preventing I/O bottlenecks. 12,000 GiB of Local SSD is automatically added to A4X Max and A4X instances. For durable storage, you can also attach up to 512 TiB of Hyperdisk storage. For more information about disk types, see Choose a disk type.

Dense allocation and topology-aware scheduling support
Both A4X Max and A4X machine types support requesting blocks of densely allocated capacity. Your host machines are allocated physically close to each other, provisioned as blocks of resources, and interconnected with a dynamic ML network fabric to minimize network hops and optimize for low latency. Additionally, for A4X Max and A4X instances, you can get topology information at the node and cluster level that you can use for job placement.

A4X Max machine type (bare metal)

A4X Max accelerator-optimized machine types use NVIDIA GB300 Grace Blackwell Ultra Superchips (nvidia-gb300) and are ideal for foundation model training and serving. A4X Max machine types are available as bare metal instances.

A4X Max is an exascale platform based on NVIDIA GB300 NVL72. Each machine has two sockets with NVIDIA Grace CPUs with Arm Neoverse V2 cores. These CPUs are connected to four NVIDIA B300 Blackwell GPUs with fast chip-to-chip (NVLink-C2C) communication.

Note: To get started with A4X Max machine types, contact your account team.
Attached NVIDIA GB300 Grace Blackwell Ultra Superchips
Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3e)
a4x-maxgpu-4g-metal | 144 | 960 | 12,000 | 6 | 3,600 | 4 | 1,116

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A4X machine type

A4X accelerator-optimized machine types use NVIDIA GB200 Grace Blackwell Superchips (nvidia-gb200) and are ideal for foundation model training and serving.

A4X is an exascale platform based on NVIDIA GB200 NVL72. Each machine has two sockets with NVIDIA Grace CPUs with Arm Neoverse V2 cores. These CPUs are connected to four NVIDIA B200 Blackwell GPUs with fast chip-to-chip (NVLink-C2C) communication.

Note: When provisioning A4X instances, you must reserve capacity to create instances and clusters. You can then create instances that use the features and services available from AI Hypercomputer. For more information, see Deployment options overview in the AI Hypercomputer documentation.
Attached NVIDIA GB200 Grace Blackwell Superchips
Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3e)
a4x-highgpu-4g | 140 | 884 | 12,000 | 6 | 2,000 | 4 | 744

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A4X Max and A4X limitations

The following limitations apply to A4X Max and A4X instances:

Caution: The Compute Engine Service Level Agreement (SLA) doesn't apply to the A4X Max and A4X machine series.

Supported disk types for A4X Max and A4X instances

A4X Max

A4X Max instances can use the following block storage types:

  • Hyperdisk Balanced (hyperdisk-balanced): this is the only disk type that is supported for the boot disk
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Local SSD: automatically added to instances created by using any of the A4X Max machine types
Maximum number of disks per instance¹
Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD
a4x-maxgpu-4g-metal | 32 | 32 | 32 | 0 | 0 | 0 | 4

A4X

A4X instances can use the following block storage types:

  • Hyperdisk Balanced (hyperdisk-balanced): this is the only disk type that is supported for the boot disk
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Hyperdisk ML (hyperdisk-ml)
  • Local SSD: automatically added to instances created by using any of the A4X machine types
Maximum number of disks per instance¹
Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD
a4x-highgpu-4g | 128 | 128 | 0 | 0 | 128 | 8 | 4

¹ Hyperdisk usage is charged separately from machine type pricing. For disk pricing, see Hyperdisk pricing.

Disk and capacity limits

You can attach a mixture of different Hyperdisk types to an instance, but the maximum total disk capacity (in TiB) across all disk types can't exceed 512 TiB for all Hyperdisks.

For details about the capacity limits, see Hyperdisk size and attachment limits.

The A4 machine series

The A4 machine series offers machine types with up to 224 vCPUs and 3,968 GB of memory. A4 instances provide up to 3x the performance of previous GPU instance types for most GPU-accelerated workloads. A4 is recommended for ML training workloads, especially at large scales, for example, hundreds or thousands of GPUs. The A4 machine series is available in a single machine type.


A4 machine type

A4 accelerator-optimized machine types have NVIDIA B200 Blackwell GPUs (nvidia-b200) attached and are ideal for foundation model training and serving.

Note: When provisioning A4 machine types, you must reserve capacity to create instances or clusters, use Spot VMs, use Flex-start VMs, or create a resize request in a MIG. For instructions on how to create A4 instances, see Create an A3 Ultra or A4 instance.
Attached NVIDIA B200 Blackwell GPUs
Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3e)
a4-highgpu-8g | 224 | 3,968 | 12,000 | 10 | 3,600 | 8 | 1,440

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A4 limitations

  • You can only request capacity by using the supported consumption options for an A4 machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use an A4 machine type.
  • You can only use an A4 machine type in certain regions and zones.
  • You can't use Persistent Disk (regional or zonal). You can only use Google Cloud Hyperdisk.
  • The A4 machine type is only available on the Emerald Rapids CPU platform.
  • You can't change the machine type of an instance to or from the A4 machine type. You must create a new instance with this machine type.
  • A4 machine types don't support sole-tenancy.
  • You can't run Windows operating systems on an A4 machine type.
  • For A4 instances, when you use ethtool -S to monitor GPU networking, physical port counters that end in _phy don't update. This is expected behavior for instances that use the MRDMA Virtual Function (VF) architecture (see the example check after this list). For more information, see MRDMA functions and network monitoring tools.
  • You can't attach Hyperdisk ML disks that were created before February 4, 2026 to A4 machine types.
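For example, a check like the following shows that behavior. This is a minimal sketch; the interface name enp0s12 is a placeholder for the GPU-networking interface on your instance:

    # Counters that end in _phy stay at zero on MRDMA VF instances, even
    # under load, which is expected. Replace enp0s12 with your interface
    # name (list interfaces with `ip link`).
    ethtool -S enp0s12 | grep _phy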

Supported disk types for A4 instances

A4 instances can use the following block storage types:

  • Hyperdisk Balanced (hyperdisk-balanced): this is the only disk type that is supported for the boot disk
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Hyperdisk ML (hyperdisk-ml)
  • Local SSD: automatically added to instances created by using any of the A4 machine types
Maximum number of disks per instance¹
Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD
a4-highgpu-8g | 128 | 128 | N/A | 128 | 8 | 32

¹ Hyperdisk usage is charged separately from machine type pricing. For disk pricing, see Hyperdisk pricing.

Disk and capacity limits

You can attach a mixture of different Hyperdisk types to an instance, but the maximum total disk capacity (in TiB) across all disk types can't exceed 512 TiB for all Hyperdisks.

For details about the capacity limits, see Hyperdisk size and attachment limits.

The A3 machine series

The A3 machine series has up to 224 vCPUs and 2,944 GB of memory. This machine series is optimized for compute- and memory-intensive, network-bound ML training and HPC workloads. The A3 machine series is available in A3 Ultra, A3 Mega, A3 High, and A3 Edge machine types.

VM instances created by using the A3 machine types provide the following features:

GPU acceleration
  • A3 Ultra: NVIDIA H200 SXM GPUs attached, which offer 141 GB of GPU memory per GPU and provide larger and faster memory for supporting large language models and HPC workloads.
  • A3 Mega, High, and Edge: NVIDIA H100 SXM GPUs attached, which offer 80 GB of GPU memory per GPU and are ideal for large transformer-based language models, databases, and HPC.

Intel Xeon Scalable processors
  • A3 Ultra: 5th generation Intel Xeon Scalable processors (Emerald Rapids) with up to 4.0 GHz sustained single-core max turbo frequency. For more information about this processor, see CPU platform.
  • A3 Mega, High, and Edge: 4th generation Intel Xeon Scalable processors (Sapphire Rapids) with up to 3.3 GHz sustained single-core max turbo frequency. For more information about this processor, see CPU platform.

Industry-leading NVLink scalability
  • A3 Ultra: NVIDIA H200 GPUs provide peak GPU NVLink bandwidth of 900 GB/s, unidirectionally. With an all-to-all NVLink topology between the 8 GPUs in a system, the aggregate NVLink bandwidth is up to 7.2 TB/s.
  • A3 Mega, High, and Edge: NVIDIA H100 GPUs provide peak GPU NVLink bandwidth of 450 GB/s, unidirectionally. With an all-to-all NVLink topology between the 8 GPUs in a system, the aggregate NVLink bandwidth is up to 7.2 TB/s.

Enhanced networking
  • A3 Ultra: RDMA over Converged Ethernet (RoCE) increases network performance by combining NVIDIA ConnectX-7 network interface cards (NICs) with our datacenter-wide, four-way rail-aligned network. By leveraging RoCE, the a3-ultragpu-8g machine type achieves much higher throughput between instances in a cluster when compared to other A3 machine types. Note: Because of the difference in network topology between A3 Ultra and the previous A3 machine types (A3 Mega, High, and Edge), you can't move workloads between instances that run on A3 Ultra and the previous A3 machine types.
  • A3 Mega: GPUDirect-TCPXO further improves on GPUDirect-TCPX by offloading the TCP protocol. By leveraging GPUDirect-TCPXO, the a3-megagpu-8g machine type doubles the network bandwidth when compared to the A3 High and A3 Edge machine types.
  • A3 Edge (a3-edgegpu-8g) and A3 High (a3-highgpu-8g): GPUDirect-TCPX increases network performance by allowing data packet payloads to transfer directly from GPU memory to the network interface. By leveraging GPUDirect-TCPX, these machine types achieve much higher throughput between instances in a cluster when compared to the A2 or G2 accelerator-optimized machine types.
Improved networking speeds
  • A3 Ultra: offers up to 4x the networking speed of the previous generation A2 machine series.
  • A3 Mega, High, and Edge: offer up to 2.5x the networking speed of the previous generation A2 machine series.

For more information about networking, see Network bandwidths and GPUs.

Virtualization optimizations
The Peripheral Component Interconnect Express (PCIe) topology of A3 instances provides more accurate locality information that workloads can use to optimize data transfers. The GPUs also expose Function Level Reset (FLR) for graceful recovery from failures and atomic operations support for concurrency improvements in certain scenarios.

Disk support
A3 instances support Local SSD for fast scratch disks, which is useful for feeding data into GPUs while preventing I/O bottlenecks. Local SSD is attached as follows:

  • 12,000 GiB of Local SSD is automatically added to A3 Ultra instances.
  • 6,000 GiB of Local SSD is automatically added to A3 Mega, High, and Edge instances.

For workloads that require durable block storage, you can also attach up to 512 TiB of combined Persistent Disk and Hyperdisk capacity to machine types in this series; Persistent Disk alone is supported up to 257 TiB. For more information about disk types, see Choose a disk type.

Compact placement policy support
Provides you with more control over the physical placement of your instances within data centers. This enables lower latency and higher bandwidth for instances that are located within a single availability zone. For more information, see About compact placement policies.

Caution: By default, you can't apply compact placement policies with a max distance value to A3 VMs in Compute Engine. To request access to this feature, contact your assigned Technical Account Manager (TAM) or the Sales team.
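Where compact placement is supported, the flow looks like the following sketch: create a placement policy, then apply it at instance creation. The policy name, instance name, region, zone, and machine type are placeholders:

    # Create a compact placement policy (placeholder name and region).
    gcloud compute resource-policies create group-placement my-compact-policy \
        --collocation=collocated \
        --region=us-central1

    # Apply the policy when creating an instance in the same region.
    gcloud compute instances create my-a3-vm \
        --zone=us-central1-a \
        --machine-type=a3-highgpu-8g \
        --resource-policies=my-compact-policy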

A3 Ultra machine type

A3 Ultra machine types have NVIDIA H200 SXM GPUs (nvidia-h200-141gb) attached and provide the highest network performance in the A3 series. A3 Ultra machine types are ideal for foundation model training and serving.

Note: When provisioning A3 Ultra machine types, you must reserve capacity to create instances or clusters, use Spot VMs, use Flex-start VMs, or create a resize request in a MIG. For more information about the parameters to set when creating an A3 Ultra instance, see Create an A3 Ultra or A4 instance.
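As a minimal sketch of the reservation path, a creation command might look like the following; the instance name, zone, and reservation name are placeholders:

    # Create an A3 Ultra instance against a specific reservation.
    gcloud compute instances create my-a3-ultra-vm \
        --zone=europe-west1-b \
        --machine-type=a3-ultragpu-8g \
        --reservation-affinity=specific \
        --reservation=my-a3-ultra-reservation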
Attached NVIDIA H200 GPUs
Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3e)
a3-ultragpu-8g | 224 | 2,952 | 12,000 | 10 | 3,600 | 8 | 1,128

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A3 Ultra limitations

  • You can only request capacity by using the supported consumption options for an A3 Ultra machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use an A3 Ultra machine type.
  • You can only use an A3 Ultra machine type in certain regions and zones.
  • You can't use Persistent Disk (regional or zonal). You can only use Google Cloud Hyperdisk.
  • The A3 Ultra machine type is only available on the Emerald Rapids CPU platform.
  • Machine type changes aren't supported for the A3 Ultra machine type. To switch to or from this machine type, you must create a new instance.
  • You can't run Windows operating systems on an A3 Ultra machine type.
  • A3 Ultra machine types don't support sole-tenancy.
  • For A3 Ultra instances, when you use ethtool -S to monitor GPU networking, physical port counters that end in _phy don't update. This is expected behavior for instances that use the MRDMA Virtual Function (VF) architecture. For more information, see MRDMA functions and network monitoring tools.

A3 Mega machine type

A3 Mega machine types have NVIDIA H100 SXM GPUs attached and are ideal for large model training and multi-host inference.

Note: When provisioning a3-megagpu-8g machine types, we recommend using a cluster of these instances and deploying with a scheduler such as Google Kubernetes Engine (GKE) or Slurm. For detailed instructions on either of these options, see the AI Hypercomputer documentation.
Attached NVIDIA H100 GPUs
Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3)
a3-megagpu-8g | 208 | 1,872 | 6,000 | 9 | 1,800 | 8 | 640

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.


A3 High machine type

A3 High machine types have NVIDIA H100 SXM GPUs attached and are well-suited for both large model inference and model fine-tuning.

Note: When provisioning a3-highgpu-1g, a3-highgpu-2g, or a3-highgpu-4g machine types, you must create instances by using Spot VMs or Flex-start VMs; a sketch of the Spot VM option follows this note.
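The following is a minimal sketch of the Spot VM option; the instance name and zone are placeholders, and Spot capacity varies by zone:

    # Create an a3-highgpu-1g instance as a Spot VM.
    gcloud compute instances create my-a3-high-spot \
        --zone=us-central1-a \
        --machine-type=a3-highgpu-1g \
        --provisioning-model=SPOT \
        --instance-termination-action=DELETE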
Attached NVIDIA H100 GPUs
Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3)
a3-highgpu-1g | 26 | 234 | 750 | 1 | 25 | 1 | 80
a3-highgpu-2g | 52 | 468 | 1,500 | 1 | 50 | 2 | 160
a3-highgpu-4g | 104 | 936 | 3,000 | 1 | 100 | 4 | 320
a3-highgpu-8g | 208 | 1,872 | 6,000 | 5 | 1,000 | 8 | 640

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.


A3 Edge machine type

A3 Edge machine types have NVIDIA H100 SXM GPUs attached, are designed specifically for serving, and are available in a limited set of regions.

Note: To get started with A3 Edge instances, see Create an A3 VM with GPUDirect-TCPX enabled.
Attached NVIDIA H100 GPUs
Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3)
a3-edgegpu-8g | 208 | 1,872 | 6,000 | 5 | 600 (asia-south1 and northamerica-northeast2); 400 (all other A3 Edge regions) | 8 | 640

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.


Supported disk types for A3 instances

A3 Ultra

A3 Ultra instances can use the following block storage types:

  • Hyperdisk Balanced (hyperdisk-balanced): this is the only disk type that is supported for the boot disk
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Local SSD: automatically added to instances created by using any of the A3 machine types
Maximum number of disks per instance¹
Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD disks
a3-ultragpu-8g | 128 | 128 | 128 | N/A | N/A | 8 | 32

¹ Hyperdisk usage is charged separately from machine type pricing. For disk pricing, see Hyperdisk pricing.

A3 Mega

A3 Mega instances can use the following block storage types:

  • Balanced Persistent Disk (pd-balanced)
  • SSD (performance) Persistent Disk (pd-ssd)
  • Hyperdisk Balanced (hyperdisk-balanced)
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Hyperdisk ML (hyperdisk-ml)
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Hyperdisk Throughput (hyperdisk-throughput)
  • Local SSD: automatically added to instances created by using any of the A3 machine types
Maximum number of disks per instance¹
Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD disks
a3-megagpu-8g | 128 | 32 | 32 | 64 | 64 | 8 | 16

¹ Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.

A3 High

A3 High instances can use the following block storage types:

  • Balanced Persistent Disk (pd-balanced)
  • SSD (performance) Persistent Disk (pd-ssd)
  • Hyperdisk Balanced (hyperdisk-balanced)
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Hyperdisk ML (hyperdisk-ml)
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Hyperdisk Throughput (hyperdisk-throughput)
  • Local SSD: automatically added to instances created by using any of the A3 machine types
Maximum number of disks per instance¹
Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD disks
a3-highgpu-1g | 128 | 32 | 32 | 64 | 64 | N/A | 2
a3-highgpu-2g | 128 | 32 | 32 | 64 | 64 | N/A | 4
a3-highgpu-4g | 128 | 32 | 32 | 64 | 64 | 8 | 8
a3-highgpu-8g | 128 | 32 | 32 | 64 | 64 | 8 | 16

¹ Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.

A3 Edge

A3 Edge instances can use the following block storage types:

  • Balanced Persistent Disk (pd-balanced)
  • SSD (performance) Persistent Disk (pd-ssd)
  • Hyperdisk Balanced (hyperdisk-balanced)
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Hyperdisk ML (hyperdisk-ml)
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Hyperdisk Throughput (hyperdisk-throughput)
  • Local SSD: automatically added to instances created by using any of the A3 machine types
Maximum number of disks per instance¹
Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD
a3-edgegpu-8g | 128 | 32 | 32 | 64 | 64 | 8 | 16

¹ Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.

Disk and capacity limits

If supported by the machine type, you can attach a mixture of Hyperdisk and Persistent Disk volumes to an instance, but the following restrictions apply:

  • The combined number of both Hyperdisk and Persistent Disk volumes can't exceed 128 per instance.
  • The maximum total disk capacity (in TiB) across all disk types can't exceed:

    • For machine types with less than 32 vCPUs:

      • 257 TiB for all Hyperdisk or all Persistent Disk
      • 257 TiB for a mixture of Hyperdisk and Persistent Disk
    • For machine types with 32 or more vCPUs:

      • 512 TiB for all Hyperdisk
      • 512 TiB for a mixture of Hyperdisk and Persistent Disk
      • 257 TiB for all Persistent Disk

For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.

The A2 machine series

The A2 machine series is available in A2 Standard and A2 Ultra machine types. These machine types have 12 to 96 vCPUs and up to 1,360 GB of memory.

VM instances created by using the A2 machine types provide the followingfeatures:

  • GPU acceleration: each A2 instance has NVIDIA A100 GPUs, available in both A100 40GB and A100 80GB options.

  • Industry-leading NVLink scale that provides peak GPU-to-GPU NVLink bandwidth of 600 GBps. For example, systems with 16 GPUs have an aggregate NVLink bandwidth of up to 9.6 TBps. These 16 GPUs can be used as a single high performance accelerator with unified memory space to deliver up to 10 petaFLOPS of compute power and up to 20 petaFLOPS of inference compute power that can be used for artificial intelligence, deep learning, and machine learning workloads.

  • Improved computing speeds: the attached NVIDIA A100 GPUs offer up to 10x improvements in computing speed when compared to previous generation NVIDIA V100 GPUs.

    With the A2 machine series, you can get up to 100 Gbps network bandwidth.

  • Disk support: A2 instances support Local SSD for fast scratch disks, which is useful for feeding data into GPUs while preventing I/O bottlenecks. For durable storage, you can attach Persistent Disk and Hyperdisk volumes.

    Local SSD is supported as follows:

    • For A2 Standard machine types, you can add up to 3,000 GiB of Local SSD when you create an instance.
    • For A2 Ultra machine types, Local SSD is automatically attached when you create an instance.

    For workloads that require durable block storage, you can attach up to 257 TiB of Persistent Disk and 512 TiB of Hyperdisk volumes to A2 instances. For more information about disk types, see Choose a disk type.

  • Compact placement policy support: provides you with more control over the physical placement of your instances within data centers. This enables lower latency and higher bandwidth for instances that are located within a single availability zone. For more information, see Reduce latency by using compact placement policies.

The following machine types are available for the A2 machine series.

A2 Ultra machine types

These machine types have a fixed number of A100 80GB GPUs. Local SSD is automatically attached to instances created by using the A2 Ultra machine types.

Attached NVIDIA A100 80GB GPUs
Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM2e)
a2-ultragpu-1g | 12 | 170 | 375 | 24 | 1 | 80
a2-ultragpu-2g | 24 | 340 | 750 | 32 | 2 | 160
a2-ultragpu-4g | 48 | 680 | 1,500 | 50 | 4 | 320
a2-ultragpu-8g | 96 | 1,360 | 3,000 | 100 | 8 | 640

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A2 Ultra limitations

  • You can only request capacity by using the supported consumption options for an A2 Ultra machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use an A2 Ultra machine type.
  • You can only use an A2 Ultra machine type in certain regions and zones.
  • The A2 Ultra machine type is only available on the Cascade Lake platform.
  • If your instance uses an A2 Ultra machine type, you can't change the machine type. If you need to use a different A2 Ultra machine type, or any other machine type, you must create a new instance.
  • You can't change any other machine type to an A2 Ultra machine type. If you need an instance that uses an A2 Ultra machine type, you must create a new instance.
  • You can't do a quick format of the attached Local SSDs on Windows instances that use A2 Ultra machine types. To format these Local SSDs, you must do a full format by using the diskpart utility and specifying format fs=ntfs label=tmpfs, as sketched after this list.
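The following is a minimal sketch of that full format from an administrative command prompt; the disk number (1) is a placeholder, so confirm the Local SSD's number with list disk before selecting it:

    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> clean
    DISKPART> create partition primary
    DISKPART> format fs=ntfs label=tmpfs
    DISKPART> assign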

A2 Standard machine types

These machine types have a fixed number of A100 40GB GPUs. You can also add Local SSD disks when creating an A2 Standard instance. For the number of disks you can attach, see Machine types that require you to choose a number of Local SSD disks.

Attached NVIDIA A100 40GB GPUs
Machine type | vCPU count¹ | Instance memory (GB) | Local SSD supported | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM2)
a2-highgpu-1g | 12 | 85 | Yes | 24 | 1 | 40
a2-highgpu-2g | 24 | 170 | Yes | 32 | 2 | 80
a2-highgpu-4g | 48 | 340 | Yes | 50 | 4 | 160
a2-highgpu-8g | 96 | 680 | Yes | 100 | 8 | 320
a2-megagpu-16g | 96 | 1,360 | Yes | 100 | 16 | 640

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A2 Standard limitations

  • You can only request capacity by using the supported consumption options for an A2 Standard machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use an A2 Standard machine type.
  • You can only use an A2 Standard machine type in certain regions and zones.
  • The A2 Standard machine type is only available on the Cascade Lake platform.
  • If your instance uses an A2 Standard machine type, you can only switch from one A2 Standard machine type to another A2 Standard machine type. You can't change to any other machine type. For more information, see Modify accelerator-optimized instances.
  • You can't use the Windows operating system with the a2-megagpu-16g machine type. When using a Windows operating system, choose a different A2 Standard machine type.
  • You can't do a quick format of the attached Local SSDs on Windows instances that use A2 Standard machine types. To format these Local SSDs, you must do a full format by using the diskpart utility and specifying format fs=ntfs label=tmpfs.

Supported disk types for A2 instances

A2 instances can use the following block storage types:

  • Hyperdisk ML (hyperdisk-ml)
  • Balanced Persistent Disk (pd-balanced)
  • SSD (performance) Persistent Disk (pd-ssd)
  • Standard Persistent Disk (pd-standard)
  • Local SSD: automatically attached to instances created by using the A2 Ultra machine types.

A2 Ultra

Maximum number of disks per instance¹
Machine types | All disks² | Hyperdisk ML | Attached Local SSD
a2-ultragpu-1g | 128 | 32 | 1
a2-ultragpu-2g | 128 | 48 | 2
a2-ultragpu-4g | 128 | 64 | 4
a2-ultragpu-8g | 128 | 64 | 8

¹ Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.
² This limit applies to Persistent Disk and Hyperdisk, but doesn't include Local SSD disks.

A2 Standard

Maximum number of disks per instance¹
Machine types | All disks² | Hyperdisk ML | Local SSD
a2-highgpu-1g | 128 | 32 | 8
a2-highgpu-2g | 128 | 48 | 8
a2-highgpu-4g | 128 | 64 | 8
a2-highgpu-8g | 128 | 64 | 8
a2-megagpu-16g | 128 | 64 | 8

¹ Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.
² This limit applies to Persistent Disk and Hyperdisk, but doesn't include Local SSD disks.

If supported by the machine type, you can attach a mixture of Hyperdisk and Persistent Disk volumes to an instance, but the following restrictions apply:

  • The combined number of both Hyperdisk and Persistent Disk volumes can't exceed 128 per instance.
  • The maximum total disk capacity (in TiB) across all disk types can't exceed:

    • For machine types with less than 32 vCPUs:

      • 257 TiB for all Hyperdisk or all Persistent Disk
      • 257 TiB for a mixture of Hyperdisk and Persistent Disk
    • For machine types with 32 or more vCPUs:

      • 512 TiB for all Hyperdisk
      • 512 TiB for a mixture of Hyperdisk and Persistent Disk
      • 257 TiB for all Persistent Disk

For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.

The G4 machine series

The G4 machine series uses the AMD EPYC Turin CPU platform and features NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. This machine series offers significant improvements over the previous-generation G2 machine series, with considerably more GPU memory, increased GPU memory bandwidth, and higher networking bandwidth.

G4 instances have up to 384 vCPUs, 1,440 GB of memory, and 12 TiB of Titanium SSD disks attached. G4 instances also provide up to 400 Gbps of standard network performance.

This machine series is particularly intended for workloads such as NVIDIA Omniverse simulation workloads, graphics-intensive applications, video transcoding, and virtual desktops. The G4 machine series also provides a low-cost solution for performing single-host inference and model tuning compared with A series machine types.

Instances that use the G4 machine type provide the following features:

  • GPU acceleration with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs: G4 instances automatically attach NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, which offer 96 GB of GPU memory per GPU.

  • 5th Generation AMD EPYC Turin CPU platform: this platform offers up to 4.1 GHz sustained max boost frequency. For more information about this processor, see CPU platform.

  • Next generation graphics performance: the NVIDIA RTX PRO 6000 GPUs provide significant performance and feature upgrades over the NVIDIA L4 GPUs that are attached to the G2 machine series. These upgrades are as follows:

    • 5th-generation Tensor Cores: these cores introduce support for FP4 precision and DLSS 4 Multi Frame Generation. By using these 5th-generation Tensor Cores, NVIDIA RTX PRO 6000 GPUs offer improved performance to accelerate tasks like local LLM development and content creation, compared to NVIDIA L4 GPUs.
    • 4th-generation RT Cores: these cores deliver up to twice the ray-tracing performance of the previous generation NVIDIA L4 GPUs, accelerating rendering for design and manufacturing workloads.
    • Core count: the NVIDIA RTX PRO 6000 GPU includes 24,064 CUDA cores, 752 5th-gen Tensor Cores, and 188 4th-gen RT Cores. This represents a substantial increase over prior generations like the L4 GPU, which has 7,680 CUDA cores and 240 Tensor Cores.

  • Multi-Instance GPU (MIG): this feature allows a single GPU to be partitioned into up to four fully isolated GPU instances on a single VM instance. For more information about NVIDIA MIG, see NVIDIA Multi-Instance GPU in the NVIDIA documentation.

  • Peripheral Component Interconnect Express (PCIe) Gen 5 support: G4 instances support PCI Express Gen 5, which improves the data transfer speed from CPU memory to GPU compared to the PCIe Gen 3 used by G2 instances.

  • Disk support: G4 instances support Titanium SSD for fast scratch disks, which is useful for feeding data into GPUs while preventing I/O bottlenecks. For durable storage, you can attach Hyperdisk volumes.

    G4 instances support attaching up to 12,000 GiB of Titanium SSD. For workloads that require durable block storage, G4 instances also support attaching up to 512 TiB of Hyperdisk. For more information about disk types, see Choose a disk type.

  • GPU Peer-to-Peer (P2P) communication: G4 instances support GPU P2P communication, enabling direct data transfer between GPUs within the same instance. This can significantly improve performance for multi-GPU workloads by reducing data transfer latency and freeing up CPU resources. For more information, see G4 GPU peer-to-peer (P2P) communication.

G4 machine types

G4 accelerator-optimized machine types use NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs (nvidia-rtx-pro-6000) and are suitable for NVIDIA Omniverse simulation workloads, graphics-intensive applications, video transcoding, and virtual desktops. G4 machine types also provide a low-cost solution for performing single host inference and model tuning compared with A series machine types.

Note: To get started with G4 instances, see Create a G4 instance.
Attached NVIDIA RTX PRO 6000 GPUs
Machine type | vCPU count¹ | Instance memory (GB) | Maximum Titanium SSD supported (GiB)² | Physical NIC count | Maximum network bandwidth (Gbps)³ | GPU count | GPU memory⁴ (GB GDDR7)
g4-standard-48 | 48 | 180 | 1,500 | 1 | 50 | 1 | 96
g4-standard-96 | 96 | 360 | 3,000 | 1 | 100 | 2 | 192
g4-standard-192 | 192 | 720 | 6,000 | 1 | 200 | 4 | 384
g4-standard-384 | 384 | 1,440 | 12,000 | 2 | 400 | 8 | 768

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² You can add Titanium SSD disks when creating a G4 instance. For the number of disks you can attach, see Machine types that require you to choose a number of Local SSD disks.
³ Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
⁴ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

G4 limitations

  • You can only request capacity by using the supported consumption options for a G4 machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use a G4 machine type.
  • You can only use a G4 machine type in certain regions and zones.
  • You can't use Persistent Disk (regional or zonal) on an instance that uses a G4 machine type.
  • The G4 machine type is only available on the AMD EPYC Turin 5th Generation platform.
  • You can't create Confidential VM instances that use a G4 machine type.
  • You can't create G4 instances on sole-tenant nodes.
  • You can't use Windows operating systems on g4-standard-384 instances.
  • You can't attach Hyperdisk ML disks that were created before February 4, 2026 to G4 machine types.

Supported disk types for G4 instances

G4 instances can use the following block storage types:

  • Hyperdisk Balanced (hyperdisk-balanced): this is the only disk type that is supported for the boot disk
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Hyperdisk ML (hyperdisk-ml)
  • Hyperdisk Throughput (hyperdisk-throughput)
  • Titanium SSD: you can add Titanium SSD to instances created by using the G4 machine types.

Maximum number of disks per instance¹
Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Extreme | Hyperdisk ML | Hyperdisk Throughput | Titanium SSD
g4-standard-48 | 32 | 32 | 32 | 0 | 32 | 32 | 4
g4-standard-96 | 32 | 32 | 32 | 8 | 32 | 32 | 8
g4-standard-192 | 64 | 64 | 64 | 8 | 64 | 64 | 16
g4-standard-384 | 128 | 128 | 128 | 8 | 128 | 128 | 32

¹ Hyperdisk usage is charged separately from machine type pricing. For disk pricing, see Hyperdisk pricing.

You can attach a mixture of different Hyperdisk types to an instance, but the maximum total disk capacity (in TiB) across all disk types can't exceed 512 TiB for all Hyperdisks.

For details about the capacity limits, see Hyperdisk size and attachment limits.

G4 peer-to-peer (P2P) communication

G4 instances enhance multi-GPU workload performance by using direct GPU peer-to-peer (P2P) communication. This capability allows GPUs attached to the same G4 instance to exchange data directly over the PCIe bus, bypassing the need to transfer data through the CPU's main memory. This direct path reduces latency, lowers CPU utilization, and increases the effective bandwidth between GPUs. P2P communication significantly accelerates multi-GPU applications such as machine learning (ML) training and high performance computing (HPC).

This feature typically requires no modifications to your application code. You only need to configure NCCL to use P2P. To configure NCCL, before you run your workloads, set the NCCL_P2P_LEVEL environment variable on your G4 instance based on the machine type:

  • For G4 instances with 2 or 4 GPUs (g4-standard-96, g4-standard-192): set NCCL_P2P_LEVEL=PHB
  • For G4 instances with 8 GPUs (g4-standard-384): set NCCL_P2P_LEVEL=SYS

Set the environment variable using one of the following options:

  • On the command line, run the appropriate export command (for example, export NCCL_P2P_LEVEL=SYS) in the shell session where you plan to run your application. To make this setting persistent, add this command to your shell's startup script (for example, ~/.bashrc).
  • Add the appropriate setting (for example, NCCL_P2P_LEVEL=SYS) to the NCCL configuration file located at /etc/nccl.conf.
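As a minimal sketch of the first option, the following shell snippet derives the value from the instance's machine type by querying the Compute Engine metadata server at metadata.google.internal; the mapping mirrors the list above:

    #!/bin/bash
    # Read this instance's machine type from the metadata server; the
    # response ends with the machine type name, so keep the last path part.
    MACHINE_TYPE=$(curl -s -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/machine-type" \
        | awk -F/ '{print $NF}')

    # Map the G4 machine type to the recommended NCCL P2P setting.
    case "${MACHINE_TYPE}" in
      g4-standard-96|g4-standard-192) export NCCL_P2P_LEVEL=PHB ;;
      g4-standard-384)                export NCCL_P2P_LEVEL=SYS ;;
    esac

    echo "NCCL_P2P_LEVEL=${NCCL_P2P_LEVEL:-unset} for ${MACHINE_TYPE}"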

Key benefits and performance

  • Accelerates multi-GPU workloads on G4 instances with two or more GPUs: provides faster runtimes for applications running on g4-standard-96, g4-standard-192, and g4-standard-384 machine types.
  • Provides high-bandwidth communication: enables high data transfer speeds between GPUs.
  • Improves NCCL performance: provides significant performance improvements for applications that use the NVIDIA Collective Communication Library (NCCL) when compared to communication that doesn't use P2P. Google's hypervisor securely isolates this P2P communication within your instances.

    • On four-GPU instances (g4-standard-192), all GPUs are on a single NUMA node, allowing for the most efficient P2P communication. This can lead to performance improvements of up to 2.04x for collectives such as Allgather, Allreduce, and ReduceScatter.
    • On eight-GPU instances (g4-standard-384), GPUs are distributed across two NUMA nodes. P2P communication is accelerated for traffic both within and between these nodes, with performance improvements of up to 2.19x for the same collectives.

The G2 machine series

The G2 machine series is available in standard machine types that have 4 to 96 vCPUs and up to 432 GB of memory. This machine series is optimized for inference and graphics workloads. The G2 machine series is available in a single standard machine type with multiple configurations.

Instances created by using the G2 machine types provide the following features:

  • GPU acceleration: each G2 machine type has NVIDIA L4 GPUs.

  • Improved inference rates: the G2 machine type provides support for the FP8 (8-bit floating point) data type, which speeds up ML inference rates and reduces memory requirements.

  • Next generation graphics performance: NVIDIA L4 GPUs provide up to 3x improvement in graphics performance by using third-generation RT Cores and NVIDIA DLSS 3 (Deep Learning Super Sampling) technology.

  • High performance network bandwidth: with the G2 machine types, you can get up to 100 Gbps network bandwidth.

  • Disk support: G2 instances support Local SSD for fast scratch disks, which is useful for feeding data into GPUs while preventing I/O bottlenecks. For durable storage, you can attach Persistent Disk and Hyperdisk volumes.

    You can add up to 3,000 GiB of Local SSD to G2 instances. For workloads that require durable block storage, you can attach Hyperdisk and Persistent Disk volumes to G2 instances. The maximum storage capacity depends on the number of vCPUs the instance has. For more information about disk types, see Choose a disk type.

  • Compact placement policy support: provides you with more control over the physical placement of your instances within data centers. This enables lower latency and higher bandwidth for instances that are located within a single availability zone. For more information, see Reduce latency by using compact placement policies.

G2 machine types

G2 accelerator-optimized machine types have NVIDIA L4 GPUs attached and are ideal for cost-optimized inference, graphics-intensive, and high performance computing workloads.

Each G2 machine type also has a default memory and a custom memory range. The custom memory range defines the amount of memory that you can allocate to your instance for each machine type. You can also add Local SSD disks when creating a G2 instance. For the number of disks you can attach, see Machine types that require you to choose a number of Local SSD disks.

Attached NVIDIA L4 GPUs
Machine type | vCPU count¹ | Default instance memory (GB) | Custom instance memory range (GB) | Max Local SSD supported (GiB) | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB GDDR6)
g2-standard-4 | 4 | 16 | 16 to 32 | 375 | 10 | 1 | 24
g2-standard-8 | 8 | 32 | 32 to 54 | 375 | 16 | 1 | 24
g2-standard-12 | 12 | 48 | 48 to 54 | 375 | 16 | 1 | 24
g2-standard-16 | 16 | 64 | 54 to 64 | 375 | 32 | 1 | 24
g2-standard-24 | 24 | 96 | 96 to 108 | 750 | 32 | 2 | 48
g2-standard-32 | 32 | 128 | 96 to 128 | 375 | 32 | 1 | 24
g2-standard-48 | 48 | 192 | 192 to 216 | 1,500 | 50 | 4 | 96
g2-standard-96 | 96 | 384 | 384 to 432 | 3,000 | 100 | 8 | 192

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

G2 limitations

  • You can only request capacity by using the supported consumption options for a G2 machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use a G2 machine type.
  • You can only use a G2 machine type in certain regions and zones.
  • The G2 machine type is only available on the Cascade Lake platform.
  • Standard Persistent Disk (pd-standard) isn't supported on instances that use the G2 machine type. For supported disk types, see Supported disk types for G2.
  • You can't create Multi-Instance GPUs on an instance that uses a G2 machine type.
  • If you need to change the machine type of a G2 instance, review Modify accelerator-optimized instances.
  • You can't use Deep Learning VM Images as boot disks for instances that use the G2 machine type.
  • The current default driver for Container-Optimized OS doesn't support L4 GPUs running on G2 machine types. Also, Container-Optimized OS only supports a select set of drivers. If you want to use Container-Optimized OS on G2 machine types, review the following notes:
    • Use a Container-Optimized OS version that supports the minimum recommended NVIDIA driver version 525.60.13 or later. For more information, review the Container-Optimized OS release notes.
    • When you install the driver, specify the latest available version that works for the L4 GPUs. For example, sudo cos-extensions install gpu -- -version=525.60.13.
  • You must use the Google Cloud CLI or REST to create G2 instances for the following scenarios (a sketch follows this list):
    • You want to specify custom memory values.
    • You want to customize the number of visible CPU cores.
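The following is a sketch of that CLI path, not a verified command: it assumes the custom machine type string follows the FAMILY-custom-VCPUS-MEMORY_MB pattern used by other machine families, so check the G2 creation guide before relying on it. The --visible-core-count flag sets the number of visible CPU cores:

    # Sketch only: g2-custom-4-20480 (4 vCPUs, 20 GB memory) assumes the
    # FAMILY-custom-VCPUS-MEMORY_MB naming pattern; verify before use.
    gcloud compute instances create my-g2-vm \
        --zone=us-central1-a \
        --machine-type=g2-custom-4-20480 \
        --visible-core-count=2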

Supported disk types for G2 instances

G2 instances can use the following block storage types:

  • Balanced Persistent Disk (pd-balanced)
  • SSD (performance) Persistent Disk (pd-ssd)
  • Hyperdisk ML (hyperdisk-ml)
  • Hyperdisk Throughput (hyperdisk-throughput)
  • Local SSD: you can add Local SSD to instances created by using the G2 machine types.

Maximum number of disks per instance¹
Machine types | All disks² | Hyperdisk ML | Hyperdisk Throughput | Local SSD
g2-standard-4 | 128 | 24 | 24 | 1
g2-standard-8 | 128 | 32 | 32 | 1
g2-standard-12 | 128 | 32 | 32 | 1
g2-standard-16 | 128 | 48 | 48 | 1
g2-standard-24 | 128 | 48 | 48 | 2
g2-standard-32 | 128 | 64 | 64 | 1
g2-standard-48 | 128 | 64 | 64 | 4
g2-standard-96 | 128 | 64 | 64 | 8

¹ Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.
² This limit applies to Persistent Disk and Hyperdisk, but doesn't include Local SSD disks.

If supported by the machine type, you can attach a mixture of Hyperdisk and Persistent Disk volumes to an instance, but the following restrictions apply:

  • The combined number of both Hyperdisk and Persistent Disk volumes can't exceed 128 per instance.
  • The maximum total disk capacity (in TiB) across all disk types can't exceed:

    • For machine types with less than 32 vCPUs:

      • 257 TiB for all Hyperdisk or all Persistent Disk
      • 257 TiB for a mixture of Hyperdisk and Persistent Disk
    • For machine types with 32 or more vCPUs:

      • 512 TiB for all Hyperdisk
      • 512 TiB for a mixture of Hyperdisk and Persistent Disk
      • 257 TiB for all Persistent Disk

For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.
