Networking and GPU machines

Higher network bandwidths can improve the performance of your GPU instances to support distributed workloads that are running on Compute Engine.

The maximum network bandwidth that is available for instances with attached GPUs on Compute Engine is as follows:

  • For A4X accelerator-optimized instances, you can get a maximum network bandwidth of up to 2,000 Gbps, based on the machine type.
  • For A4 and A3 accelerator-optimized instances, you can get a maximum network bandwidth of up to 3,600 Gbps, based on the machine type.
  • For G4 accelerator-optimized instances, you can get a maximum network bandwidth of up to 400 Gbps, based on the machine type.
  • For A2 and G2 accelerator-optimized instances, you can get a maximum network bandwidth of up to 100 Gbps, based on the machine type.
  • For N1 general-purpose instances that have P100 and P4 GPUs attached, a maximum network bandwidth of 32 Gbps is available. This is similar to the maximum rate available to N1 instances that don't have GPUs attached. For more information about network bandwidths, see maximum egress data rate.
  • For N1 general-purpose instances that have T4 and V100 GPUs attached, you can get a maximum network bandwidth of up to 100 Gbps, based on the combination of GPU and vCPU count.

Review network bandwidth and NIC arrangement

Use the following sections to review the NIC arrangement and network bandwidth for each GPU machine type.

A4X machine types

The A4X machine types have NVIDIA GB200 Superchips attached. These Superchips have NVIDIA B200 GPUs.

This machine type has four NVIDIA ConnectX-7 (CX-7) network interface cards (NICs) and two Titanium NICs. The four CX-7 NICs deliver a total network bandwidth of 1,600 Gbps. These CX-7 NICs are dedicated to high-bandwidth GPU-to-GPU communication and can't be used for other networking needs such as public internet access. The two Titanium NICs are smart NICs that provide an additional 400 Gbps of network bandwidth for general-purpose networking requirements. Combined, the network interface cards provide a total maximum network bandwidth of 2,000 Gbps for these machines.

A4X is an exascale platform based on the NVIDIA GB200 NVL72 rack-scale architecture and introduces the NVIDIA Grace Blackwell Superchip architecture, which delivers NVIDIA Blackwell GPUs and NVIDIA Grace CPUs connected with the high-bandwidth NVIDIA NVLink Chip-to-Chip (C2C) interconnect.

The A4X networking architecture uses a rail-aligned design, a topology where the corresponding network card of one Compute Engine instance is connected to the network card of another. The four CX-7 NICs on each instance are physically isolated on a 4-way rail-aligned network topology, which allows A4X to scale out in groups of 72 GPUs to thousands of GPUs in a single non-blocking cluster. This hardware-integrated approach provides the predictable, low-latency performance essential for large-scale, distributed workloads.

Figure 1. Network architecture for A4X: four CX-7 NICs for GPU communication and two Titanium NICs for general networking.

To use these multiple NICs, you need to create three Virtual Private Cloud (VPC) networks as follows:

  • 2 VPC networks: each gVNIC must attach to a different VPC network
  • 1 VPC network with the RDMA network profile: all four CX-7 NICs share the same VPC network

To set up these networks, see Create VPC networks in the AI Hypercomputer documentation.

Tip: When provisioning A4X instances, you must reserve capacity to create instances and clusters. You can then create instances that use the features and services available from AI Hypercomputer. For more information, see Deployment options overview in the AI Hypercomputer documentation.
Attached NVIDIA GB200 Grace Blackwell Superchips
Machine type | vCPU count (1) | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps) (2) | GPU count | GPU memory (GB HBM3e) (3)
a4x-highgpu-4g | 140 | 884 | 12,000 | 6 | 2,000 | 4 | 744

(1) A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
(2) Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
(3) GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A4 and A3 Ultra machine types

The A4 machine types have NVIDIA B200 GPUs attached, and A3 Ultra machine types have NVIDIA H200 GPUs attached.

These machine types provide eight NVIDIA ConnectX-7 (CX-7) network interface cards (NICs) and two Google virtual NICs (gVNIC). The eight CX-7 NICs deliver a total network bandwidth of 3,200 Gbps. These NICs are dedicated to high-bandwidth GPU-to-GPU communication and can't be used for other networking needs such as public internet access. As outlined in the following diagram, each CX-7 NIC is aligned with one GPU to optimize non-uniform memory access (NUMA). All eight GPUs can rapidly communicate with each other by using the all-to-all NVLink bridge that connects them. The two gVNIC network interface cards are smart NICs that provide an additional 400 Gbps of network bandwidth for general-purpose networking requirements. Combined, the network interface cards provide a total maximum network bandwidth of 3,600 Gbps for these machines.
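The aggregate figure follows directly from the per-NIC numbers. A minimal sketch of the arithmetic, where the per-NIC values are derived from the totals stated above (eight CX-7 NICs totaling 3,200 Gbps, plus 400 Gbps of gVNIC bandwidth) and the function name is illustrative, not a Google API:

```python
def total_bandwidth_gbps(cx7_nics: int, cx7_gbps_each: int, gvnic_gbps_total: int) -> int:
    """Sum GPU-network and general-purpose bandwidth for one instance."""
    return cx7_nics * cx7_gbps_each + gvnic_gbps_total

# a4-highgpu-8g / a3-ultragpu-8g: 8 x 400 Gbps CX-7, plus 400 Gbps of gVNIC
print(total_bandwidth_gbps(8, 400, 400))  # 3600
```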

Figure 2. Network architecture for A4 and A3 Ultra: eight CX-7 NICs for GPU communication and two gVNICs for general networking.

To use these multiple NICs, you need to create three Virtual Private Cloud networks as follows:

  • 2 regular VPC networks: each gVNIC must attach to a different VPC network
  • 1 RoCE VPC network: all eight CX-7 NICs share the same RoCE VPC network

To set up these networks, see Create VPC networks in the AI Hypercomputer documentation.

A4 VMs

Tip: When provisioning A4 machine types, you must reserve capacity to create instances or clusters, use Spot VMs, use Flex-start VMs, or create a resize request in a MIG. For instructions on how to create A4 instances, see Create an A3 Ultra or A4 instance.

Attached NVIDIA B200 Blackwell GPUs
Machine type | vCPU count (1) | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps) (2) | GPU count | GPU memory (GB HBM3e) (3)
a4-highgpu-8g | 224 | 3,968 | 12,000 | 10 | 3,600 | 8 | 1,440

(1) A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
(2) Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
(3) GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A3 Ultra VMs

Tip: When provisioning A3 Ultra machine types, you must reserve capacity to create instances or clusters, use Spot VMs, use Flex-start VMs, or create a resize request in a MIG. For more information about the parameters to set when creating an A3 Ultra instance, see Create an A3 Ultra or A4 instance.

Attached NVIDIA H200 GPUs
Machine type | vCPU count (1) | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps) (2) | GPU count | GPU memory (GB HBM3e) (3)
a3-ultragpu-8g | 224 | 2,952 | 12,000 | 10 | 3,600 | 8 | 1,128

(1) A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
(2) Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
(3) GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A3 Mega, High, and Edge machine types

These machine types have H100 GPUs attached. Each of these machine types has a fixed GPU count, vCPU count, and memory size.

  • Single NIC A3 VMs: For A3 VMs with 1 to 4 GPUs attached, only a single physical network interface card (NIC) is available.
  • Multi-NIC A3 VMs: For A3 VMs with 8 GPUs attached, multiple physical NICs are available. For these A3 machine types, the NICs are arranged as follows on a Peripheral Component Interconnect Express (PCIe) bus:
    • For the A3 Mega machine type: a NIC arrangement of 8+1 is available. With this arrangement, 8 NICs share the same PCIe bus, and 1 NIC resides on a separate PCIe bus.
    • For the A3 High machine type: a NIC arrangement of 4+1 is available. With this arrangement, 4 NICs share the same PCIe bus, and 1 NIC resides on a separate PCIe bus.
    • For the A3 Edge machine type: a NIC arrangement of 4+1 is available. With this arrangement, 4 NICs share the same PCIe bus, and 1 NIC resides on a separate PCIe bus. These 5 NICs provide a total network bandwidth of 400 Gbps for each VM.

    NICs that share the same PCIe bus have a non-uniform memory access (NUMA) alignment of one NIC per two NVIDIA H100 GPUs. These NICs are ideal for dedicated high-bandwidth GPU-to-GPU communication. The physical NIC that resides on a separate PCIe bus is ideal for other networking needs. For instructions on how to set up networking for A3 High and A3 Edge VMs, see set up jumbo frame MTU networks.
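The one-NIC-per-two-GPUs alignment above can be sketched as a simple index mapping. This assumes GPUs are paired with shared-bus NICs in index order (GPUs 0-1 on NIC 0, GPUs 2-3 on NIC 1, and so on); the function name is illustrative, not part of any Google API:

```python
def gpu_to_nic(gpu_index: int) -> int:
    """Map one of 8 H100 GPUs to the shared-PCIe-bus NIC aligned with it."""
    if not 0 <= gpu_index < 8:
        raise ValueError("A3 High and A3 Edge 8-GPU VMs expose GPUs 0-7")
    return gpu_index // 2  # one NIC per two GPUs

# The fifth NIC on its separate PCIe bus handles general networking instead.
mapping = {gpu: gpu_to_nic(gpu) for gpu in range(8)}
print(mapping)  # {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}
```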

A3 Mega

A3 High

Tip: When provisioning a3-highgpu-1g, a3-highgpu-2g, or a3-highgpu-4g machine types, you must create instances by using Spot VMs or Flex-start VMs. For detailed instructions on these options, review the Spot VMs and Flex-start VMs documentation.
Attached NVIDIA H100 GPUs
Machine type | vCPU count (1) | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps) (2) | GPU count | GPU memory (GB HBM3) (3)
a3-highgpu-1g | 26 | 234 | 750 | 1 | 25 | 1 | 80
a3-highgpu-2g | 52 | 468 | 1,500 | 1 | 50 | 2 | 160
a3-highgpu-4g | 104 | 936 | 3,000 | 1 | 100 | 4 | 320
a3-highgpu-8g | 208 | 1,872 | 6,000 | 5 | 1,000 | 8 | 640

A3 Edge

Tip: To get started with A3 Edge instances, see Create an A3 VM with GPUDirect-TCPX enabled.
Attached NVIDIA H100 GPUs
Machine type | vCPU count (1) | Instance memory (GB) | Attached Local SSD (GiB) | Physical NIC count | Maximum network bandwidth (Gbps) (2) | GPU count | GPU memory (GB HBM3) (3)
a3-edgegpu-8g | 208 | 1,872 | 6,000 | 5 | 800 for asia-south1 and northamerica-northeast2; 400 for all other A3 Edge regions | 8 | 640

(1) A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
(2) Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
(3) GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

A2 machine types

Each A2 machine type has a fixed number of NVIDIA A100 40GB or NVIDIA A100 80GB GPUs attached. Each machine type also has a fixed vCPU count and memory size.

The A2 machine series is available in two types:

  • A2 Ultra: these machine types have A100 80GB GPUs and Local SSD disks attached.
  • A2 Standard: these machine types have A100 40GB GPUs attached.

A2 Ultra

Attached NVIDIA A100 80GB GPUs
Machine type | vCPU count (1) | Instance memory (GB) | Attached Local SSD (GiB) | Maximum network bandwidth (Gbps) (2) | GPU count | GPU memory (GB HBM2e) (3)
a2-ultragpu-1g | 12 | 170 | 375 | 24 | 1 | 80
a2-ultragpu-2g | 24 | 340 | 750 | 32 | 2 | 160
a2-ultragpu-4g | 48 | 680 | 1,500 | 50 | 4 | 320
a2-ultragpu-8g | 96 | 1,360 | 3,000 | 100 | 8 | 640

A2 Standard

Attached NVIDIA A100 40GB GPUs
Machine type | vCPU count (1) | Instance memory (GB) | Local SSD supported | Maximum network bandwidth (Gbps) (2) | GPU count | GPU memory (GB HBM2) (3)
a2-highgpu-1g | 12 | 85 | Yes | 24 | 1 | 40
a2-highgpu-2g | 24 | 170 | Yes | 32 | 2 | 80
a2-highgpu-4g | 48 | 340 | Yes | 50 | 4 | 160
a2-highgpu-8g | 96 | 680 | Yes | 100 | 8 | 320
a2-megagpu-16g | 96 | 1,360 | Yes | 100 | 16 | 640

(1) A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
(2) Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
(3) GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

G4 machine types

Preview

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

G4 accelerator-optimized machine types use NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs (nvidia-rtx-pro-6000) and are suitable for NVIDIA Omniverse simulation workloads, graphics-intensive applications, video transcoding, and virtual desktops. G4 machine types also provide a low-cost solution for performing single host inference and model tuning compared with A series machine types.

Important: For information on how to get started with G4 machine types, contact your Google account team.
Attached NVIDIA RTX PRO 6000 GPUs
Machine type | vCPU count (1) | Instance memory (GB) | Maximum Titanium SSD supported (GiB) (2) | Physical NIC count | Maximum network bandwidth (Gbps) (3) | GPU count | GPU memory (GB GDDR7) (4)
g4-standard-48 | 48 | 180 | 1,500 | 1 | 50 | 1 | 96
g4-standard-96 | 96 | 360 | 3,000 | 1 | 100 | 2 | 192
g4-standard-192 | 192 | 720 | 6,000 | 1 | 200 | 4 | 384
g4-standard-384 | 384 | 1,440 | 12,000 | 2 | 400 | 8 | 768

(1) A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
(2) You can add Titanium SSD disks when creating a G4 instance. For the number of disks you can attach, see Machine types that require you to choose a number of Local SSD disks.
(3) Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
(4) GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

G2 machine types

G2 accelerator-optimized machine types have NVIDIA L4 GPUs attached and are ideal for cost-optimized inference, graphics-intensive workloads, and high performance computing workloads.

Each G2 machine type also has a default memory size and a custom memory range. The custom memory range defines the amount of memory that you can allocate to your instance for each machine type. You can also add Local SSD disks when creating a G2 instance. For the number of disks you can attach, see Machine types that require you to choose a number of Local SSD disks.

To get the higher network bandwidth rates (50 Gbps or higher) applied to most GPU instances, it is recommended that you use Google Virtual NIC (gVNIC). For more information about creating GPU instances that use gVNIC, see Creating GPU instances that use higher bandwidths.

Attached NVIDIA L4 GPUs
Machine type | vCPU count (1) | Default instance memory (GB) | Custom instance memory range (GB) | Max Local SSD supported (GiB) | Maximum network bandwidth (Gbps) (2) | GPU count | GPU memory (GB GDDR6) (3)
g2-standard-4 | 4 | 16 | 16 to 32 | 375 | 10 | 1 | 24
g2-standard-8 | 8 | 32 | 32 to 54 | 375 | 16 | 1 | 24
g2-standard-12 | 12 | 48 | 48 to 54 | 375 | 16 | 1 | 24
g2-standard-16 | 16 | 64 | 54 to 64 | 375 | 32 | 1 | 24
g2-standard-24 | 24 | 96 | 96 to 108 | 750 | 32 | 2 | 48
g2-standard-32 | 32 | 128 | 96 to 128 | 375 | 32 | 1 | 24
g2-standard-48 | 48 | 192 | 192 to 216 | 1,500 | 50 | 4 | 96
g2-standard-96 | 96 | 384 | 384 to 432 | 3,000 | 100 | 8 | 192

(1) A vCPU is implemented as a single hardware hyper-thread on one of the available CPU platforms.
(2) Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. For more information about network bandwidth, see Network bandwidth.
(3) GPU memory is the memory on a GPU device that can be used for temporary storage of data. It is separate from the instance's memory and is specifically designed to handle the higher bandwidth demands of your graphics-intensive workloads.

N1 + GPU machine types

For N1 general-purpose instances that have T4 and V100 GPUs attached, you can get a maximum network bandwidth of up to 100 Gbps, based on the combination of GPU and vCPU count. For all other N1 GPU instances, see Overview.

Review the following sections to calculate the maximum network bandwidth that is available for your T4 and V100 instances, based on the GPU model, vCPU count, and GPU count.

5 or fewer vCPUs

For T4 and V100 instances that have 5 or fewer vCPUs, a maximum network bandwidth of 10 Gbps is available.

More than 5 vCPUs

For T4 and V100 instances that have more than 5 vCPUs, maximum network bandwidth is calculated based on the number of vCPUs and GPUs for that VM.

To get the higher network bandwidth rates (50 Gbps or higher) applied to most GPU instances, it is recommended that you use Google Virtual NIC (gVNIC). For more information about creating GPU instances that use gVNIC, see Creating GPU instances that use higher bandwidths.

GPU model | Number of GPUs | Maximum network bandwidth calculation
NVIDIA V100 | 1 | min(vcpu_count * 2, 32)
NVIDIA V100 | 2 | min(vcpu_count * 2, 32)
NVIDIA V100 | 4 | min(vcpu_count * 2, 50)
NVIDIA V100 | 8 | min(vcpu_count * 2, 100)
NVIDIA T4 | 1 | min(vcpu_count * 2, 32)
NVIDIA T4 | 2 | min(vcpu_count * 2, 50)
NVIDIA T4 | 4 | min(vcpu_count * 2, 100)
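The rules above can be sketched as a small lookup. This is illustrative only: the caps come from the table, and the flat 10 Gbps rate applies to T4 and V100 instances with 5 or fewer vCPUs, as described earlier. The function name is not part of any Google API.

```python
# Per-configuration bandwidth caps (Gbps) from the table above.
CAPS = {
    ("V100", 1): 32, ("V100", 2): 32, ("V100", 4): 50, ("V100", 8): 100,
    ("T4", 1): 32, ("T4", 2): 50, ("T4", 4): 100,
}

def max_bandwidth_gbps(gpu_model: str, gpu_count: int, vcpu_count: int) -> int:
    """Return the maximum network bandwidth (Gbps) for an N1 T4/V100 instance."""
    if vcpu_count <= 5:
        return 10  # flat rate for instances with 5 or fewer vCPUs
    return min(vcpu_count * 2, CAPS[(gpu_model, gpu_count)])

print(max_bandwidth_gbps("V100", 8, 96))  # min(192, 100) -> 100
print(max_bandwidth_gbps("T4", 2, 16))    # min(32, 50) -> 32
```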

MTU settings and GPU machine types

To maximize network bandwidth, set a higher maximum transmission unit (MTU) value for your VPC networks. Higher MTU values increase the packet size and reduce the packet-header overhead, which in turn increases payload data throughput.

For GPU machine types, we recommend the following MTU settings for your VPC networks.

GPU machine type | Recommended MTU for VPC networks (bytes) | Recommended MTU for VPC networks with RDMA profiles (bytes)
A4X, A4, A3 Ultra | 8896 | 8896
A3 Mega, A3 High, A3 Edge | 8244 | N/A
A2 Standard, A2 Ultra, G4, G2, N1 machine types that support GPUs | 8896 | N/A

When setting the MTU value, note the following:

  • 8192 is two 4 KB pages.
  • 8244 is recommended for A3 Mega, A3 High, and A3 Edge VMs for GPU NICs that have header split enabled.
  • Use a value of 8896 unless otherwise indicated in the table.
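The recommendations above can be encoded as a small lookup, which is handy when generating network configuration for mixed fleets. This is a minimal sketch of the table only; the family keys are illustrative shorthand, not Compute Engine identifiers:

```python
# (standard VPC network MTU, VPC-with-RDMA-profile MTU); None means N/A.
RECOMMENDED_MTU = {
    "A4X": (8896, 8896), "A4": (8896, 8896), "A3 Ultra": (8896, 8896),
    "A3 Mega": (8244, None), "A3 High": (8244, None), "A3 Edge": (8244, None),
    "A2": (8896, None), "G4": (8896, None), "G2": (8896, None), "N1": (8896, None),
}

def recommended_mtu(family: str, rdma: bool = False) -> int:
    """Return the recommended MTU in bytes for a GPU machine family."""
    standard, rdma_mtu = RECOMMENDED_MTU[family]
    if rdma:
        if rdma_mtu is None:
            raise ValueError(f"no RDMA network profile recommendation for {family}")
        return rdma_mtu
    return standard

print(recommended_mtu("A3 High"))         # 8244
print(recommended_mtu("A4X", rdma=True))  # 8896
```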

Create high bandwidth GPU machines

To create GPU instances that use higher network bandwidths, use one of the following methods based on the machine type:

What's next?


Last updated 2025-12-15 UTC.