Accelerator-optimized machine family

This document describes the accelerator-optimized machine family, which provides you with virtual machine (VM) instances that have pre-attached NVIDIA GPUs. These instances are designed specifically for artificial intelligence (AI), machine learning (ML), high performance computing (HPC), and graphics-intensive applications.

The accelerator-optimized machine family is available in the following machine series: A4X, A4, A3, A2, G4, and G2. Each machine type within a series has a specific model and number of NVIDIA GPUs attached. You can also attach some GPU models to N1 general-purpose machine types.

Recommended machine series by workload type

The following section provides the recommended machine series based on your GPU workloads:

  • Pre-training models: A4X, A4, A3 Ultra, A3 Mega, A3 High, and A2. To identify the best fit, see Recommendations for pre-training models in the AI Hypercomputer documentation.
  • Fine-tuning models: A4X, A4, A3 Ultra, A3 Mega, A3 High, A2, and G4. To identify the best fit, see Recommendations for fine-tuning models in the AI Hypercomputer documentation.
  • Serving inference: A4X, A4, A3 Ultra, A3 Mega, A3 High, A3 Edge, A2, and G4. To identify the best fit, see Recommendations for serving inference in the AI Hypercomputer documentation.
  • Graphics-intensive workloads: G4, G2, and N1+T4.
  • High performance computing: any accelerator-optimized machine series works well. The best fit depends on the amount of computation that must be offloaded to the GPU. For more information, see Recommendations for HPC in the AI Hypercomputer documentation.

Pricing and consumption options

Consumption options refers to the ways to get and use compute resources. Google Cloud bills accelerator-optimized machine types for their attached GPUs, predefined vCPUs, memory, and bundled Local SSD (if applicable). For more pricing information for accelerator-optimized instances, see the Accelerator-optimized machine type family section on the VM instance pricing page.

Discounts for accelerator-optimized instances vary based on the consumption option you choose:

  • On-demand: You can receive committed use discounts (CUDs) for some resources by purchasing resource-based commitments. However, GPUs and Local SSD disks that you use with the on-demand option are ineligible for CUDs. To receive CUDs for GPUs and Local SSD disks, use one of the reservation options instead.
  • Spot: Spot VMs automatically receive discounts through Spot VMs pricing.
  • Flex-start: Instances provisioned by using the Flex-start consumption option automatically receive discounts through Dynamic Workload Scheduler pricing.
  • Reservations: You can receive CUDs for your accelerator-optimized machine type resources by purchasing resource-based commitments. Commitments for GPUs and Local SSD disks require attached reservations for those resources.
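The discount rules above can be condensed into a small lookup. The following Python sketch is illustrative only; the option labels and notes are informal summaries of this page, not identifiers from any Google Cloud API:

```python
# Informal summary of the discount that applies to each consumption
# option, per the list above. Labels are illustrative, not API values.
DISCOUNTS = {
    "on-demand": "CUDs for some resources; GPUs and Local SSD are ineligible",
    "spot": "Spot VMs pricing",
    "flex-start": "Dynamic Workload Scheduler pricing",
    "reservations": "CUDs, including GPUs and Local SSD via attached reservations",
}

def discount_for(option: str) -> str:
    """Return the discount note for a consumption option."""
    return DISCOUNTS[option.lower()]

print(discount_for("Spot"))  # Spot VMs pricing
```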

Consumption option availability by machine type

The following table summarizes the availability of each consumption option by machine type. For more information about how to choose a consumption option, see Choose a consumption model in the AI Hypercomputer documentation.

Note: Before you create and submit a future reservation request for a supported GPU machine type, you must contact your account team or the sales team to discuss your request. Otherwise, Google Cloud is likely to decline it.
The table compares each machine type across the following consumption options: on-demand, Spot, Flex-start, on-demand reservations, future reservations, future reservations in calendar mode, and future reservations in AI Hypercomputer.

The A4X machine series

Caution: The Compute Engine Service Level Agreement (SLA) doesn't apply to the A4X machine series.

The A4X machine series runs on an exascale platform based on NVIDIA GB200 NVL72 rack-scale architecture and has up to 140 vCPUs and 884 GB of memory. This machine series is optimized for compute- and memory-intensive, network-bound ML training and HPC workloads. The A4X machine series is available in a single machine type.

VM instances created by using the A4X machine type provide the following features:

A4X machine type

A4X accelerator-optimized machine types use NVIDIA GB200 Grace Blackwell Superchips (nvidia-gb200) and are ideal for foundation model training and serving.

A4X is an exascale platform based on NVIDIA GB200 NVL72. Each machine has two sockets with NVIDIA Grace CPUs with Arm Neoverse V2 cores. These CPUs are connected to four NVIDIA B200 Blackwell GPUs with fast chip-to-chip (NVLink-C2C) communication.

Tip: When provisioning A4X instances, you must reserve capacity to create instances and clusters. You can then create instances that use the features and services available from AI Hypercomputer. For more information, see Deployment options overview in the AI Hypercomputer documentation.

Attached NVIDIA GB200 Grace Blackwell Superchips

| Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3e) |
|---|---|---|---|---|---|---|---|
| a4x-highgpu-4g | 140 | 884 | 12,000 | 6 | 2,000 | 4 | 744 |

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.
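As a quick sanity check on the table, the 744 GB total divides evenly across the four attached GPUs. This arithmetic sketch assumes an even split, which the table itself doesn't state:

```python
# Illustrative check: total GPU memory for a4x-highgpu-4g divided by
# its GPU count (assumes an even split across GPUs).
total_gpu_memory_gb = 744
gpu_count = 4
per_gpu_gb = total_gpu_memory_gb / gpu_count
print(per_gpu_gb)  # 186.0
```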

A4X limitations

Supported disk types for A4X instances

A4X instances can use the following block storage types:

  • Hyperdisk Balanced (hyperdisk-balanced): this is the only disk type that is supported for the boot disk
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Local SSD: automatically added to instances that are created by using any of the A4X machine types
Maximum number of disks per instance¹

| Machine type | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD |
|---|---|---|---|---|---|---|
| a4x-highgpu-4g | 128 | 128 | N/A | N/A | 8 | 32 |

¹ Hyperdisk usage is charged separately from machine type pricing. For disk pricing, see Hyperdisk pricing.
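These rules are easy to encode. The following sketch hard-codes the A4X disk-type rules from this section; the function and set names are illustrative, not part of any Google Cloud SDK. Local SSD is omitted because it is attached automatically rather than requested per disk:

```python
# Hyperdisk types that A4X instances support, per the list above.
# Only hyperdisk-balanced can serve as the boot disk.
A4X_DISK_TYPES = {"hyperdisk-balanced", "hyperdisk-extreme"}

def check_a4x_disk(disk_type: str, boot: bool = False) -> bool:
    """Return True if disk_type is usable on an A4X instance."""
    if boot:
        return disk_type == "hyperdisk-balanced"
    return disk_type in A4X_DISK_TYPES

print(check_a4x_disk("hyperdisk-balanced", boot=True))  # True
print(check_a4x_disk("hyperdisk-extreme", boot=True))   # False
```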

Disk and capacity limits

You can attach a mixture of different Hyperdisk types to an instance, but the maximum total disk capacity (in TiB) across all disk types can't exceed 512 TiB for all Hyperdisks.

For details about the capacity limits, see Hyperdisk size and attachment limits.
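A minimal sketch of the 512 TiB ceiling, assuming per-volume capacities are given in TiB (the function name is illustrative):

```python
def within_hyperdisk_limit(capacities_tib) -> bool:
    """True if total attached Hyperdisk capacity is within 512 TiB."""
    return sum(capacities_tib) <= 512

print(within_hyperdisk_limit([200, 200, 100]))  # True  (500 TiB)
print(within_hyperdisk_limit([300, 300]))       # False (600 TiB)
```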

The A4 machine series

The A4 machine series offers machine types with up to 224 vCPUs and 3,968 GB of memory. A4 instances provide up to 3x the performance of previous GPU instance types for most GPU-accelerated workloads. A4 is recommended for ML training workloads, especially at large scales, such as hundreds or thousands of GPUs. The A4 machine series is available in a single machine type.

VM instances created by using the A4 machine type provide the following features:

A4 machine type

A4 accelerator-optimized machine types have NVIDIA B200 Blackwell GPUs (nvidia-b200) attached and are ideal for foundation model training and serving.

Tip: When provisioning A4 machine types, you must reserve capacity to create instances or clusters, use Spot VMs, use Flex-start VMs, or create a resize request in a MIG. For instructions on how to create A4 instances, see Create an A3 Ultra or A4 instance.
Attached NVIDIA B200 Blackwell GPUs

| Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3e) |
|---|---|---|---|---|---|---|---|
| a4-highgpu-8g | 224 | 3,968 | 12,000 | 10 | 3,600 | 8 | 1,440 |

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A4 limitations

  • You can only request capacity by using the supported consumption options for an A4 machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use an A4 machine type.
  • You can only use an A4 machine type in certain regions and zones.
  • You can't use Persistent Disk (regional or zonal) on an instance that uses an A4 machine type.
  • The A4 machine type is only available on the Emerald Rapids CPU platform.
  • You can't change the machine type of an existing instance to an A4 machine type, and you can't change the machine type of an A4 instance after you create it. You can only create new A4 instances.
  • A4 machine types don't support sole-tenancy.
  • You can't run Windows operating systems on an A4 machine type.

Supported disk types for A4 instances

A4 instances can use the following block storage types:

  • Hyperdisk Balanced (hyperdisk-balanced): this is the only disk type that is supported for the boot disk
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Local SSD: automatically added to instances that are created by using any of the A4 machine types
Maximum number of disks per instance¹

| Machine type | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD |
|---|---|---|---|---|---|---|
| a4-highgpu-8g | 128 | 128 | N/A | N/A | 8 | 32 |

¹ Hyperdisk usage is charged separately from machine type pricing. For disk pricing, see Hyperdisk pricing.

Disk and capacity limits

You can attach a mixture of different Hyperdisk types to an instance, but the maximum total disk capacity (in TiB) across all disk types can't exceed 512 TiB for all Hyperdisks.

For details about the capacity limits, see Hyperdisk size and attachment limits.

The A3 machine series

The A3 machine series has up to 224 vCPUs and 2,944 GB of memory. This machine series is optimized for compute- and memory-intensive, network-bound ML training and HPC workloads. The A3 machine series is available in the A3 Ultra, A3 Mega, A3 High, and A3 Edge machine types.

VM instances created by using the A3 machine types provide the following features:

  • GPU acceleration:

    • A3 Ultra: NVIDIA H200 SXM GPUs attached, which offer 141 GB of GPU memory per GPU and provide larger and faster memory for supporting large language models and HPC workloads.
    • A3 Mega, High, and Edge: NVIDIA H100 SXM GPUs attached, which offer 80 GB of GPU memory per GPU and are ideal for large transformer-based language models, databases, and HPC.

  • Intel Xeon Scalable Processors:

    • A3 Ultra: 5th Generation Intel Xeon Scalable processor (Emerald Rapids), which offers up to 4.0 GHz sustained single-core max turbo frequency. For more information about this processor, see CPU platform.
    • A3 Mega, High, and Edge: 4th Generation Intel Xeon Scalable processor (Sapphire Rapids), which offers up to 3.3 GHz sustained single-core max turbo frequency. For more information about this processor, see CPU platform.

  • Industry-leading NVLink scalability:

    • A3 Ultra: NVIDIA H200 GPUs provide peak GPU NVLink bandwidth of 900 GB/s, unidirectionally. With an all-to-all NVLink topology between 8 GPUs in a system, the aggregate NVLink bandwidth is up to 7.2 TB/s.
    • A3 Mega, High, and Edge: NVIDIA H100 GPUs provide peak GPU NVLink bandwidth of 450 GB/s, unidirectionally. With an all-to-all NVLink topology between 8 GPUs in a system, the aggregate NVLink bandwidth is up to 7.2 TB/s.

  • Enhanced networking:

    • A3 Ultra: RDMA over Converged Ethernet (RoCE) increases the network performance by combining NVIDIA ConnectX-7 network interface cards (NICs) with our datacenter-wide four-way rail-aligned network. By leveraging RoCE, the a3-ultragpu-8g machine type achieves much higher throughput between instances in a cluster when compared to other A3 machine types. Note: Because of the difference in network topology between A3 Ultra and the previous A3 series (A3 Mega, High, and Edge), you can't move workloads between instances that run on A3 Ultra and the previous A3 series.
    • A3 Mega: GPUDirect-TCPXO further improves on GPUDirect-TCPX by offloading the TCP protocol. By leveraging GPUDirect-TCPXO, the a3-megagpu-8g machine type doubles the network bandwidth when compared to the A3 High and A3 Edge machine types.
    • A3 Edge (a3-edgegpu-8g) and A3 High (a3-highgpu-8g): GPUDirect-TCPX increases the network performance by allowing data packet payloads to transfer directly from GPU memory to the network interface. By leveraging GPUDirect-TCPX, these machine types achieve much higher throughput between instances in a cluster when compared to the A2 or G2 accelerator-optimized machine types.

  • Improved networking speeds: A3 Ultra offers up to 4x the networking speed of the previous generation A2 machine series; A3 Mega, High, and Edge offer up to 2.5x. For more information about networking, see Network bandwidths and GPUs.

  • Virtualization optimizations: the Peripheral Component Interconnect Express (PCIe) topology of A3 instances provides more accurate locality information that workloads can use to optimize data transfers. The GPUs also expose Function Level Reset (FLR) for graceful recovery from failures and atomic operations support for concurrency improvements in certain scenarios.

  • Local SSD, Persistent Disk, and Hyperdisk support: Local SSD can be used for fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks. Local SSD is attached as follows:

    • 12,000 GiB of Local SSD is automatically added to A3 Ultra instances.
    • 6,000 GiB of Local SSD is automatically added to A3 Mega, High, and Edge instances.

    You can also attach up to 512 TiB of Persistent Disk and Hyperdisk to machine types in these series for applications that require higher storage performance. For select machine types, up to 257 TiB of Persistent Disk is also supported.

  • Compact placement policy support: provides you with more control over the physical placement of your instances within data centers. This enables lower latency and higher bandwidth for instances that are located within a single availability zone. For more information, see About compact placement policies.

Caution: By default, you can't apply compact placement policies with a max distance value to A3 VMs in Compute Engine. To request access to this feature, contact your assigned Technical Account Manager (TAM) or the Sales team.

A3 Ultra machine type

A3 Ultra machine types have NVIDIA H200 SXM GPUs (nvidia-h200-141gb) attached and provide the highest network performance in the A3 series. A3 Ultra machine types are ideal for foundation model training and serving.

Tip: When provisioning A3 Ultra machine types, you must reserve capacity to create instances or clusters, use Spot VMs, use Flex-start VMs, or create a resize request in a MIG. For more information about the parameters to set when creating an A3 Ultra instance, see Create an A3 Ultra or A4 instance.
Attached NVIDIA H200 GPUs

| Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3e) |
|---|---|---|---|---|---|---|---|
| a3-ultragpu-8g | 224 | 2,952 | 12,000 | 10 | 3,600 | 8 | 1,128 |

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A3 Ultra limitations

  • You can only request capacity by using the supported consumption options for an A3 Ultra machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use an A3 Ultra machine type.
  • You can only use an A3 Ultra machine type in certain regions and zones.
  • You can't use Persistent Disk (regional or zonal) on an instance that uses an A3 Ultra machine type.
  • The A3 Ultra machine type is only available on the Emerald Rapids CPU platform.
  • You can't change the machine type of an existing instance to an A3 Ultra machine type, and you can't change the machine type of an A3 Ultra instance after you create it. You can only create new A3 Ultra instances.
  • You can't run Windows operating systems on an A3 Ultra machine type.
  • A3 Ultra machine types don't support sole-tenancy.

A3 Mega machine type

A3 Mega machine types have NVIDIA H100 SXM GPUs and are ideal for large model training and multi-host inference.

Tip: When provisioning a3-megagpu-8g machine types, we recommend using a cluster of these instances and deploying with a scheduler such as Google Kubernetes Engine (GKE) or Slurm.
Attached NVIDIA H100 GPUs

| Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3) |
|---|---|---|---|---|---|---|---|
| a3-megagpu-8g | 208 | 1,872 | 6,000 | 9 | 1,800 | 8 | 640 |

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A3 Mega limitations

  • You can only request capacity by using the supported consumption options for an A3 Mega machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use an A3 Mega machine type.
  • You can only use an A3 Mega machine type in certain regions and zones.
  • You can't use regional Persistent Disk on an instance that uses an A3 Mega machine type.
  • The A3 Mega machine type is only available on the Sapphire Rapids CPU platform.
  • You can't change the machine type of an existing instance to an A3 Mega machine type, and you can't change the machine type of an A3 Mega instance after you create it. You can only create new A3 Mega instances.
  • You can't run Windows operating systems on an A3 Mega machine type.

A3 High machine type

A3 High machine types have NVIDIA H100 SXM GPUs and are well-suited for both large model inference and model fine-tuning.

Tip: When provisioning a3-highgpu-1g, a3-highgpu-2g, or a3-highgpu-4g machine types, you must create instances by using Spot VMs or Flex-start VMs.
Attached NVIDIA H100 GPUs

| Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3) |
|---|---|---|---|---|---|---|---|
| a3-highgpu-1g | 26 | 234 | 750 | 1 | 25 | 1 | 80 |
| a3-highgpu-2g | 52 | 468 | 1,500 | 1 | 50 | 2 | 160 |
| a3-highgpu-4g | 104 | 936 | 3,000 | 1 | 100 | 4 | 320 |
| a3-highgpu-8g | 208 | 1,872 | 6,000 | 5 | 1,000 | 8 | 640 |

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A3 High limitations

A3 Edge machine type

A3 Edge machine types have NVIDIA H100 SXM GPUs, are designed specifically for serving, and are available in a limited set of regions.

Tip: To get started with A3 Edge instances, see Create an A3 VM with GPUDirect-TCPX enabled.
Attached NVIDIA H100 GPUs

| Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM3) |
|---|---|---|---|---|---|---|---|
| a3-edgegpu-8g | 208 | 1,872 | 6,000 | 5 | 800 for asia-south1 and northamerica-northeast2; 400 for all other A3 Edge regions | 8 | 640 |

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A3 Edge limitations

  • You can only request capacity by using the supported consumption options for an A3 Edge machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use an A3 Edge machine type.
  • You can only use an A3 Edge machine type in certain regions and zones.
  • You can't use regional Persistent Disk on an instance that uses an A3 Edge machine type.
  • The A3 Edge machine type is only available on the Sapphire Rapids CPU platform.
  • You can't change the machine type of an existing instance to an A3 Edge machine type, and you can't change the machine type of an A3 Edge instance after you create it. You can only create new A3 Edge instances.
  • You can't run Windows operating systems on an A3 Edge machine type.
  • A3 Edge machine types don't support sole-tenancy.

Supported disk types for A3 instances

A3 Ultra

A3 Ultra instances can use the following block storage types:

  • Hyperdisk Balanced (hyperdisk-balanced): this is the only disk type that is supported for the boot disk
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Local SSD: automatically added to instances that are created by using any of the A3 machine types
Maximum number of disks per instance¹

| Machine type | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD |
|---|---|---|---|---|---|---|---|
| a3-ultragpu-8g | 128 | 128 | 128 | N/A | N/A | 8 | 32 |

¹ Hyperdisk usage is charged separately from machine type pricing. For disk pricing, see Hyperdisk pricing.

A3 Mega

A3 Mega instances can use the following block storage types:

  • Balanced Persistent Disk (pd-balanced)
  • SSD (performance) Persistent Disk (pd-ssd)
  • Hyperdisk Balanced (hyperdisk-balanced)
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Hyperdisk ML (hyperdisk-ml)
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Hyperdisk Throughput (hyperdisk-throughput)
  • Local SSD: automatically added to instances that are created by using any of the A3 machine types
Maximum number of disks per instance¹

| Machine type | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD |
|---|---|---|---|---|---|---|---|
| a3-megagpu-8g | 128 | 32 | 32 | 64 | 64 | 8 | 16 |

¹ Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.

A3 High

A3 High instances can use the following block storage types:

  • Balanced Persistent Disk (pd-balanced)
  • SSD (performance) Persistent Disk (pd-ssd)
  • Hyperdisk Balanced (hyperdisk-balanced)
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Hyperdisk ML (hyperdisk-ml)
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Hyperdisk Throughput (hyperdisk-throughput)
  • Local SSD: automatically added to instances that are created by using any of the A3 machine types
Maximum number of disks per instance¹

| Machine type | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD |
|---|---|---|---|---|---|---|---|
| a3-highgpu-1g | 128 | 32 | 32 | 64 | 64 | N/A | 2 |
| a3-highgpu-2g | 128 | 32 | 32 | 64 | 64 | N/A | 4 |
| a3-highgpu-4g | 128 | 32 | 32 | 64 | 64 | 8 | 8 |
| a3-highgpu-8g | 128 | 32 | 32 | 64 | 64 | 8 | 16 |

¹ Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.

A3 Edge

A3 Edge instances can use the following block storage types:

  • Balanced Persistent Disk (pd-balanced)
  • SSD (performance) Persistent Disk (pd-ssd)
  • Hyperdisk Balanced (hyperdisk-balanced)
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Hyperdisk ML (hyperdisk-ml)
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Hyperdisk Throughput (hyperdisk-throughput)
  • Local SSD: automatically added to instances that are created by using any of the A3 machine types
Maximum number of disks per instance¹

| Machine type | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Attached Local SSD |
|---|---|---|---|---|---|---|---|
| a3-edgegpu-8g | 128 | 32 | 32 | 64 | 64 | 8 | 16 |

¹ Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.

Disk and capacity limits

If supported by the machine type, you can attach a mixture of Hyperdisk and Persistent Disk volumes to an instance, but the following restrictions apply:

  • The combined number of both Hyperdisk and Persistent Disk volumes can't exceed 128 per instance.
  • The maximum total disk capacity (in TiB) across all disk types can't exceed:

    • For machine types with less than 32 vCPUs:

      • 257 TiB for all Hyperdisk or all Persistent Disk
      • 257 TiB for a mixture of Hyperdisk and Persistent Disk
    • For machine types with 32 or more vCPUs:

      • 512 TiB for all Hyperdisk
      • 512 TiB for a mixture of Hyperdisk and Persistent Disk
      • 257 TiB for all Persistent Disk

For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.
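The restrictions above can be sketched as a validation helper. This is an illustrative reading of the limits (in particular, it assumes the 257 TiB all-Persistent-Disk cap also bounds the Persistent Disk portion of a mixed configuration), with capacities in TiB:

```python
def check_disk_limits(vcpus: int, hyperdisk_tib: float, pd_tib: float,
                      volume_count: int) -> bool:
    """Check the combined Hyperdisk/Persistent Disk limits listed above."""
    if volume_count > 128:          # at most 128 volumes per instance
        return False
    total = hyperdisk_tib + pd_tib
    if vcpus < 32:                  # smaller machine types: 257 TiB total
        return total <= 257
    if pd_tib > 257:                # Persistent Disk cap (assumption: also
        return False                # applies within a mixture)
    return total <= 512             # Hyperdisk or mixed cap

print(check_disk_limits(208, 300, 100, 20))  # True
print(check_disk_limits(208, 0, 300, 10))    # False
```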

The A2 machine series

The A2 machine series is available in A2 Standard and A2 Ultra machine types. These machine types have 12 to 96 vCPUs and up to 1,360 GB of memory.

VM instances created by using the A2 machine types provide the following features:

  • GPU acceleration: each A2 instance has NVIDIA A100 GPUs. These are available in both A100 40GB and A100 80GB options.

  • Industry-leading NVLink scale that provides peak GPU-to-GPU NVLink bandwidth of 600 GBps. For example, systems with 16 GPUs have an aggregate NVLink bandwidth of up to 9.6 TBps. These 16 GPUs can be used as a single high-performance accelerator with unified memory space to deliver up to 10 petaFLOPS of compute power and up to 20 petaFLOPS of inference compute power that can be used for artificial intelligence, deep learning, and machine learning workloads.

  • Improved computing speeds: the attached NVIDIA A100 GPUs offer up to 10x improvements in computing speed when compared to previous generation NVIDIA V100 GPUs.

    With the A2 machine series, you can get up to 100 Gbps network bandwidth.

  • Local SSD, Persistent Disk, and Hyperdisk support: for fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks, the A2 machine types support Local SSD as follows:

    • For the A2 Standard machine types, you can add up to 3,000 GiB of Local SSD when you create an instance.
    • For the A2 Ultra machine types, Local SSD is automatically attached when you create an instance.

    For applications that require higher storage performance, you can also attach up to 257 TiB of Persistent Disk and 512 TiB of Hyperdisk volumes to A2 instances.

  • Compact placement policy support: provides you with more control over the physical placement of your instances within data centers. This enables lower latency and higher bandwidth for instances that are located within a single availability zone. For more information, see Reduce latency by using compact placement policies.
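The 9.6 TBps aggregate quoted in the feature list follows directly from the per-GPU figure; an illustrative check:

```python
# 16 A100 GPUs at 600 GBps of peak per-GPU NVLink bandwidth, per the
# feature list above: aggregate = per-GPU bandwidth * GPU count.
per_gpu_gbps = 600
gpu_count = 16
aggregate_tbps = per_gpu_gbps * gpu_count / 1000
print(aggregate_tbps)  # 9.6
```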

The following machine types are available for the A2 machine series.

A2 Ultra machine types

These machine types have a fixed number of A100 80GB GPUs. Local SSD is automatically attached to instances created by using the A2 Ultra machine types.

Attached NVIDIA A100 80GB GPUs

| Machine type | vCPU count¹ | Instance memory (GB) | Attached Local SSD (GiB) | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM2e) |
|---|---|---|---|---|---|---|
| a2-ultragpu-1g | 12 | 170 | 375 | 24 | 1 | 80 |
| a2-ultragpu-2g | 24 | 340 | 750 | 32 | 2 | 160 |
| a2-ultragpu-4g | 48 | 680 | 1,500 | 50 | 4 | 320 |
| a2-ultragpu-8g | 96 | 1,360 | 3,000 | 100 | 8 | 640 |

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A2 Ultra limitations

  • You can only request capacity by using the supported consumption options for an A2 Ultra machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use an A2 Ultra machine type.
  • You can only use an A2 Ultra machine type in certain regions and zones.
  • The A2 Ultra machine type is only available on the Cascade Lake platform.
  • If your instance uses an A2 Ultra machine type, you can't change the machine type. If you need to use a different A2 Ultra machine type, or any other machine type, you must create a new instance.
  • You can't change any other machine type to an A2 Ultra machine type. If you need an instance that uses an A2 Ultra machine type, you must create a new instance.
  • You can't do a quick format of the attached Local SSDs on Windows instances that use A2 Ultra machine types. To format these Local SSDs, you must do a full format by using the diskpart utility and specifying format fs=ntfs label=tmpfs.

A2 Standard machine types

These machine types have a fixed number of A100 40GB GPUs. You can also add Local SSD disks when creating an A2 Standard instance. For the number of disks you can attach, see Machine types that require you to choose a number of Local SSD disks.

Attached NVIDIA A100 40GB GPUs

| Machine type | vCPU count¹ | Instance memory (GB) | Local SSD supported | Maximum network bandwidth (Gbps)² | GPU count | GPU memory³ (GB HBM2) |
|---|---|---|---|---|---|---|
| a2-highgpu-1g | 12 | 85 | Yes | 24 | 1 | 40 |
| a2-highgpu-2g | 24 | 170 | Yes | 32 | 2 | 80 |
| a2-highgpu-4g | 48 | 340 | Yes | 50 | 4 | 160 |
| a2-highgpu-8g | 96 | 680 | Yes | 100 | 8 | 320 |
| a2-megagpu-16g | 96 | 1,360 | Yes | 100 | 16 | 640 |

¹ A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
³ GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A2 Standard limitations

  • You can only request capacity by using the supported consumption options for an A2 Standard machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use an A2 Standard machine type.
  • You can only use an A2 Standard machine type in certain regions and zones.
  • The A2 Standard machine type is only available on the Cascade Lake platform.
  • If your instance uses an A2 Standard machine type, you can only switch from one A2 Standard machine type to another A2 Standard machine type. You can't change to any other machine type. For more information, see Modify accelerator-optimized instances.
  • You can't use the Windows operating system with the a2-megagpu-16g machine type. When using a Windows operating system, choose a different A2 Standard machine type.
  • You can't do a quick format of the attached Local SSDs on Windows instances that use A2 Standard machine types. To format these Local SSDs, you must do a full format by using the diskpart utility and specifying format fs=ntfs label=tmpfs.

Supported disk types for A2 instances

A2 instances can use the following block storage types:

  • Hyperdisk ML (hyperdisk-ml)
  • Balanced Persistent Disk (pd-balanced)
  • SSD (performance) Persistent Disk (pd-ssd)
  • Standard Persistent Disk (pd-standard)
  • Local SSD: automatically attached to instances created by using the A2 Ultra machine types.

A2 Ultra

Maximum number of disks per instance1

| Machine types | All disks2 | Hyperdisk ML | Attached Local SSD |
|---|---|---|---|
| a2-ultragpu-1g | 128 | 32 | 1 |
| a2-ultragpu-2g | 128 | 48 | 2 |
| a2-ultragpu-4g | 128 | 64 | 4 |
| a2-ultragpu-8g | 128 | 64 | 8 |

1 Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.
2 This limit applies to Persistent Disk and Hyperdisk, but doesn't include Local SSD disks.

A2 Standard

Maximum number of disks per instance1

| Machine types | All disks2 | Hyperdisk ML | Local SSD |
|---|---|---|---|
| a2-highgpu-1g | 128 | 32 | 8 |
| a2-highgpu-2g | 128 | 48 | 8 |
| a2-highgpu-4g | 128 | 64 | 8 |
| a2-highgpu-8g | 128 | 64 | 8 |
| a2-megagpu-16g | 128 | 64 | 8 |

1 Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.
2 This limit applies to Persistent Disk and Hyperdisk, but doesn't include Local SSD disks.

If supported by the machine type, you can attach a mixture of Hyperdisk and Persistent Disk volumes to an instance, but the following restrictions apply:

  • The combined number of both Hyperdisk and Persistent Disk volumes can't exceed 128 per instance.
  • The maximum total disk capacity (in TiB) across all disk types can't exceed:

    • For machine types with fewer than 32 vCPUs:

      • 257 TiB for all Hyperdisk or all Persistent Disk
      • 257 TiB for a mixture of Hyperdisk and Persistent Disk
    • For machine types with 32 or more vCPUs:

      • 512 TiB for all Hyperdisk
      • 512 TiB for a mixture of Hyperdisk and Persistent Disk
      • 257 TiB for all Persistent Disk

For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.
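The capacity rule above depends only on the vCPU count and the disk mix, so it can be sketched as a simple lookup. This is an illustrative sketch of the rule as stated, not an official tool; the function name and mix labels are hypothetical:

```shell
# Illustrative sketch of the capacity rule above: return the maximum total
# disk capacity (in TiB) for a given vCPU count and disk mix.
# disk_mix is one of: hyperdisk, persistent-disk, mixed.
max_disk_capacity_tib() {
  vcpus=$1
  disk_mix=$2
  if [ "$vcpus" -lt 32 ]; then
    # Under 32 vCPUs, every combination is capped at 257 TiB.
    echo 257
  else
    case "$disk_mix" in
      persistent-disk) echo 257 ;;  # all Persistent Disk stays at 257 TiB
      hyperdisk|mixed) echo 512 ;;  # all Hyperdisk, or a mixture, gets 512 TiB
      *) echo "unknown disk mix: $disk_mix" >&2; return 1 ;;
    esac
  fi
}

max_disk_capacity_tib 96 mixed   # prints 512
```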

The G4 machine series

The G4 machine series uses the AMD EPYC Turin CPU platform and features NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. This machine series offers significant improvements over the previous-generation G2 machine series, with considerably more GPU memory, increased GPU memory bandwidth, and higher networking bandwidth.

G4 instances have up to 384 vCPUs, 1,440 GB of memory, and 12 TiB of Titanium SSD disks attached. G4 instances also provide up to 400 Gbps of standard network performance.

This machine series is particularly intended for workloads such as NVIDIA Omniverse simulation workloads, graphics-intensive applications, video transcoding, and virtual desktops. The G4 machine series also provides a low-cost solution for performing single host inference and model tuning compared with A series machine types.

Instances that use the G4 machine type provide the following features:

  • GPU acceleration with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs: G4 instances automatically attach NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, which offer 96 GB of GPU memory per GPU.

  • 5th Generation AMD EPYC Turin CPU platform: this platform offers up to 4.1 GHz sustained max boost frequency. For more information about this processor, see CPU platform.

  • Next generation graphics performance: the NVIDIA RTX PRO 6000 GPUs provide significant performance and feature upgrades over the NVIDIA L4 GPUs that are attached to the G2 machine series. These upgrades are as follows:

    • 5th-Generation Tensor Cores: these cores introduce support for FP4 precision and DLSS 4 Multi Frame Generation. By using these 5th-Generation Tensor Cores, NVIDIA RTX PRO 6000 GPUs offer improved performance to accelerate tasks like local LLM development and content creation, compared to NVIDIA L4 GPUs.
    • 4th-Generation RT Cores: these cores deliver up to twice the ray-tracing performance of the previous generation NVIDIA L4 GPUs, accelerating rendering for design and manufacturing workloads.
    • Core count: the NVIDIA RTX PRO 6000 GPU includes 24,064 CUDA cores, 752 5th-gen Tensor Cores, and 188 4th-gen RT Cores. This update represents a substantial increase over prior generations like the L4 GPU, which has 7,680 CUDA cores and 240 Tensor Cores.
  • Multi-Instance GPU (MIG): this feature allows a single GPU to partition into up to four fully isolated GPU instances on a single VM instance. For more information about NVIDIA MIG, see NVIDIA Multi-Instance GPU in the NVIDIA documentation.

  • Peripheral Component Interconnect Express (PCIe) Gen 5 support: G4 instances support PCI Express Gen 5, which improves the data transfer speed from CPU memory to GPU compared to the PCIe Gen 3 used by G2 instances.

  • Titanium SSD and Hyperdisk support: G4 instances support attaching up to 12,000 GiB of Titanium SSD. Titanium SSD provides fast scratch disks or feeds data into the GPUs, which helps avoid I/O bottlenecks.

    For workloads that require durable block storage, G4 instances also support attaching up to 512 TiB of Hyperdisk. For more information about disk types, see Disk types.

  • GPU Peer-to-Peer (P2P) communication: G4 instances support GPU P2P communication, enabling direct data transfer between GPUs within the same instance. This can significantly improve performance for multi-GPU workloads by reducing data transfer latency and freeing up CPU resources. For more information, see G4 GPU peer-to-peer (P2P) communication.

G4 machine types

G4 accelerator-optimized machine types use NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs (nvidia-rtx-pro-6000) and are suitable for NVIDIA Omniverse simulation workloads, graphics-intensive applications, video transcoding, and virtual desktops. G4 machine types also provide a low-cost solution for performing single host inference and model tuning compared with A series machine types.

Important: For information on how to get started with G4 machine types, contact your Google account team.
Attached NVIDIA RTX PRO 6000 GPUs
| Machine type | vCPU count1 | Instance memory (GB) | Maximum Titanium SSD supported (GiB)2 | Physical NIC count | Maximum network bandwidth (Gbps)3 | GPU count | GPU memory4 (GB GDDR7) |
|---|---|---|---|---|---|---|---|
| g4-standard-48 | 48 | 180 | 1,500 | 1 | 50 | 1 | 96 |
| g4-standard-96 | 96 | 360 | 3,000 | 1 | 100 | 2 | 192 |
| g4-standard-192 | 192 | 720 | 6,000 | 1 | 200 | 4 | 384 |
| g4-standard-384 | 384 | 1,440 | 12,000 | 2 | 400 | 8 | 768 |

1 A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
2 You can add Titanium SSD disks when creating a G4 instance. For the number of disks you can attach, see Machine types that require you to choose a number of Local SSD disks.
3 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

G4 limitations

Supported disk types for G4 instances

G4 instances can use the following block storage types:

  • Hyperdisk Balanced (hyperdisk-balanced): this is the only disk type that is supported for the boot disk
  • Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
  • Hyperdisk Extreme (hyperdisk-extreme)
  • Hyperdisk Throughput (hyperdisk-throughput)
  • Titanium SSD: you can add Titanium SSD to instances created by using the G4 machine types.

Maximum number of disks per instance1

| Machine types | All Hyperdisk | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Extreme | Hyperdisk Throughput | Titanium SSD |
|---|---|---|---|---|---|---|
| g4-standard-48 | 32 | 32 | 32 | 0 | 32 | 4 |
| g4-standard-96 | 32 | 32 | 32 | 8 | 32 | 8 |
| g4-standard-192 | 64 | 64 | 64 | 8 | 64 | 16 |
| g4-standard-384 | 128 | 128 | 128 | 8 | 128 | 32 |

1 Hyperdisk usage is charged separately from machine type pricing. For disk pricing, see Hyperdisk pricing.

You can attach a mixture of different Hyperdisk types to an instance, but the maximum total disk capacity (in TiB) across all disk types can't exceed 512 TiB.

For details about the capacity limits, see Hyperdisk size and attachment limits.

G4 peer-to-peer (P2P) communication

G4 instances enhance multi-GPU workload performance by using direct GPU peer-to-peer (P2P) communication. This capability allows GPUs that are attached to the same G4 instance to exchange data directly over the PCIe bus, bypassing the need to transfer data through the CPU's main memory. This direct path reduces latency, lowers CPU utilization, and increases the effective bandwidth between GPUs. P2P communication significantly accelerates multi-GPU applications such as machine learning (ML) training and high performance computing (HPC).

This feature typically requires no modifications to your application code. You only need to configure NCCL to use P2P. To configure NCCL, before you run your workloads, set the NCCL_P2P_LEVEL environment variable on your G4 instance based on the machine type:

  • For G4 instances with 2 or 4 GPUs (g4-standard-96, g4-standard-192): set NCCL_P2P_LEVEL=PHB
  • For G4 instances with 8 GPUs (g4-standard-384): set NCCL_P2P_LEVEL=SYS

Set the environment variable using one of the following options:

  • On the command line, run the appropriate export command (for example, export NCCL_P2P_LEVEL=SYS) in the shell session where you plan to run your application. To make this setting persistent, add this command to your shell's startup script (for example, ~/.bashrc).
  • Add the appropriate setting (for example, NCCL_P2P_LEVEL=SYS) to the NCCL configuration file located at /etc/nccl.conf.
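The machine-type-to-setting mapping above can be scripted so that the same startup script works across G4 shapes. This is a minimal sketch; it assumes the machine type is already known (for example, read from the instance's metadata server), and the function name is hypothetical:

```shell
# Illustrative sketch: pick the NCCL_P2P_LEVEL value for a G4 machine type,
# following the guidance above.
nccl_p2p_level_for() {
  case "$1" in
    g4-standard-96|g4-standard-192) echo PHB ;;  # 2 or 4 GPUs on one PCIe host bridge
    g4-standard-384)                echo SYS ;;  # 8 GPUs spread across two NUMA nodes
    *) echo "no recommended P2P setting for: $1" >&2; return 1 ;;
  esac
}

export NCCL_P2P_LEVEL="$(nccl_p2p_level_for g4-standard-384)"
echo "$NCCL_P2P_LEVEL"   # prints SYS
```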

Key benefits and performance

  • Accelerates multi-GPU workloads on G4 instances with two or more GPUs: provides faster runtimes for applications running on g4-standard-96, g4-standard-192, and g4-standard-384 machine types.
  • Provides high-bandwidth communication: enables high data transfer speeds between GPUs.
  • Improves NCCL performance: provides significant performance improvements for applications that use the NVIDIA Collective Communication Library (NCCL) when compared to communication that doesn't use P2P. Google's hypervisor securely isolates this P2P communication within your instances.

    • On four GPU instances (g4-standard-192), all GPUs are on a single NUMA node, allowing for the most efficient P2P communication. This can lead to performance improvements of up to 2.04x for collectives such as Allgather, Allreduce, and ReduceScatter.
    • On eight GPU instances (g4-standard-384), GPUs are distributed across two NUMA nodes. P2P communication is accelerated for traffic both within and between these nodes, with performance improvements of up to 2.19x for the same collectives.

The G2 machine series

The G2 machine series is available in standard machine types that have 4 to 96 vCPUs, and up to 432 GB of memory. This machine series is optimized for inference and graphics workloads. The G2 machine series is available in a single standard machine type with multiple configurations.

Instances created by using the G2 machine types provide the following features:

  • GPU acceleration: each G2 machine type has NVIDIA L4 GPUs.

  • Improved inference rates: the G2 machine type provides support for the FP8 (8-bit floating point) data type, which speeds up ML inference rates and reduces memory requirements.

  • Next generation graphics performance: NVIDIA L4 GPUs provide up to 3X improvement in graphics performance by using third-generation RT cores and NVIDIA DLSS 3 (Deep Learning Super Sampling) technology.

  • High performance network bandwidth: with the G2 machine types, you can get up to 100 Gbps network bandwidth.

  • Local SSD, Persistent Disk, and Hyperdisk support: you can add up to 3,000 GiB of Local SSD to G2 instances. This can be used for fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks.

    You can also attach Hyperdisk and Persistent Disk volumes to G2 instances, for applications that require more persistent storage. The maximum storage capacity depends on the number of vCPUs the instance has. For details, see Supported disk types.

  • Compact placement policy support: provides you with more control over the physical placement of your instances within data centers. This enables lower latency and higher bandwidth for instances that are located within a single availability zone. For more information, see Reduce latency by using compact placement policies.

G2 machine types

G2 accelerator-optimized machine types have NVIDIA L4 GPUs attached and are ideal for cost-optimized inference, graphics-intensive, and high performance computing workloads.

Each G2 machine type also has a default memory and a custom memory range. The custom memory range defines the amount of memory that you can allocate to your instance for each machine type. You can also add Local SSD disks when creating a G2 instance. For the number of disks you can attach, see Machine types that require you to choose a number of Local SSD disks.

Attached NVIDIA L4 GPUs
| Machine type | vCPU count1 | Default instance memory (GB) | Custom instance memory range (GB) | Max Local SSD supported (GiB) | Maximum network bandwidth (Gbps)2 | GPU count | GPU memory3 (GB GDDR6) |
|---|---|---|---|---|---|---|---|
| g2-standard-4 | 4 | 16 | 16 to 32 | 375 | 10 | 1 | 24 |
| g2-standard-8 | 8 | 32 | 32 to 54 | 375 | 16 | 1 | 24 |
| g2-standard-12 | 12 | 48 | 48 to 54 | 375 | 16 | 1 | 24 |
| g2-standard-16 | 16 | 64 | 54 to 64 | 375 | 32 | 1 | 24 |
| g2-standard-24 | 24 | 96 | 96 to 108 | 750 | 32 | 2 | 48 |
| g2-standard-32 | 32 | 128 | 96 to 128 | 375 | 32 | 1 | 24 |
| g2-standard-48 | 48 | 192 | 192 to 216 | 1,500 | 50 | 4 | 96 |
| g2-standard-96 | 96 | 384 | 384 to 432 | 3,000 | 100 | 8 | 192 |

1 A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
3 GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.
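Because each G2 machine type accepts custom memory only within its listed range, it can help to validate a requested size before creating the instance. The following is a hypothetical sketch built from the table above; the function name is illustrative, not a Google Cloud tool:

```shell
# Illustrative sketch: check a requested custom memory size (in GB) against
# the custom instance memory range for a G2 machine type, per the table above.
g2_memory_in_range() {
  mtype=$1
  mem_gb=$2
  case "$mtype" in
    g2-standard-4)  lo=16;  hi=32  ;;
    g2-standard-8)  lo=32;  hi=54  ;;
    g2-standard-12) lo=48;  hi=54  ;;
    g2-standard-16) lo=54;  hi=64  ;;
    g2-standard-24) lo=96;  hi=108 ;;
    g2-standard-32) lo=96;  hi=128 ;;
    g2-standard-48) lo=192; hi=216 ;;
    g2-standard-96) lo=384; hi=432 ;;
    *) echo "unknown G2 machine type: $mtype" >&2; return 1 ;;
  esac
  [ "$mem_gb" -ge "$lo" ] && [ "$mem_gb" -le "$hi" ]
}

g2_memory_in_range g2-standard-8 48 && echo "48 GB is valid for g2-standard-8"
```

Passing a value outside the range (for example, 60 GB for g2-standard-8) makes the function return a nonzero status.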

G2 limitations

  • You can only request capacity by using the supported consumption options for a G2 machine type.
  • You don't receive sustained use discounts and flexible committed use discounts for instances that use a G2 machine type.
  • You can only use a G2 machine type in certain regions and zones.
  • The G2 machine type is only available on the Cascade Lake platform.
  • Standard Persistent Disk (pd-standard) isn't supported on instances that use the G2 machine type. For supported disk types, see Supported disk types for G2.
  • You can't create Multi-Instance GPUs on an instance that uses a G2 machine type.
  • If you need to change the machine type of a G2 instance, review Modify accelerator-optimized instances.
  • You can't use Deep Learning VM Images as boot disks for instances that use the G2 machine type.
  • The current default driver for Container-Optimized OS doesn't support L4 GPUs running on G2 machine types. Also, Container-Optimized OS only supports a select set of drivers. If you want to use Container-Optimized OS on G2 machine types, review the following notes:
    • Use a Container-Optimized OS version that supports the minimum recommended NVIDIA driver version 525.60.13 or later. For more information, review the Container-Optimized OS release notes.
    • When you install the driver, specify the latest available version that works for the L4 GPUs. For example, sudo cos-extensions install gpu -- -version=525.60.13.
  • You must use the Google Cloud CLI or REST to create G2 instances for the following scenarios:
    • You want to specify custom memory values.
    • You want to customize the number of visible CPU cores.

Supported disk types for G2 instances

G2 instances can use the following block storage types:

  • Balanced Persistent Disk (pd-balanced)
  • SSD (performance) Persistent Disk (pd-ssd)
  • Hyperdisk ML (hyperdisk-ml)
  • Hyperdisk Throughput (hyperdisk-throughput)
  • Local SSD: you can add Local SSD to instances created by using the G2 machine types.

Maximum number of disks per instance1

| Machine types | All disks2 | Hyperdisk ML | Hyperdisk Throughput | Local SSD |
|---|---|---|---|---|
| g2-standard-4 | 128 | 24 | 24 | 1 |
| g2-standard-8 | 128 | 32 | 32 | 1 |
| g2-standard-12 | 128 | 32 | 32 | 1 |
| g2-standard-16 | 128 | 48 | 48 | 1 |
| g2-standard-24 | 128 | 48 | 48 | 2 |
| g2-standard-32 | 128 | 64 | 64 | 1 |
| g2-standard-48 | 128 | 64 | 64 | 4 |
| g2-standard-96 | 128 | 64 | 64 | 8 |

1 Hyperdisk and Persistent Disk usage are charged separately from machine type pricing. For disk pricing, see Persistent Disk and Hyperdisk pricing.
2 This limit applies to Persistent Disk and Hyperdisk, but doesn't include Local SSD disks.

If supported by the machine type, you can attach a mixture of Hyperdisk and Persistent Disk volumes to an instance, but the following restrictions apply:

  • The combined number of both Hyperdisk and Persistent Disk volumes can't exceed 128 per instance.
  • The maximum total disk capacity (in TiB) across all disk types can't exceed:

    • For machine types with fewer than 32 vCPUs:

      • 257 TiB for all Hyperdisk or all Persistent Disk
      • 257 TiB for a mixture of Hyperdisk and Persistent Disk
    • For machine types with 32 or more vCPUs:

      • 512 TiB for all Hyperdisk
      • 512 TiB for a mixture of Hyperdisk and Persistent Disk
      • 257 TiB for all Persistent Disk

For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-17 UTC.