Machine families resource and comparison guide

This document describes the machine families, machine series, and machine types that you can choose from to create a virtual machine (VM) instance or bare metal instance with the resources that you need. When you create a compute instance, you select a machine type from a machine family that determines the resources available to that instance.

There are several machine families you can choose from. Each machine family is further organized into machine series and predefined machine types within each series. For example, within the N2 machine series in the general-purpose machine family, you can select the n2-standard-4 machine type.

For information about machine series that support Spot VMs (and preemptible VMs), see Compute Engine instances provisioning models.

Note: This is a list of Compute Engine machine families. For a detailed explanation of each machine family, see the following pages:
  • General-purpose—best price-performance ratio for a variety of workloads.
  • Storage-optimized—best for workloads that are low in core usage and high in storage density.
  • Compute-optimized—designed for high performance computing (HPC) solutions and compute-intensive workloads; offers high performance per core on Compute Engine.
  • Memory-optimized—ideal for memory-intensive workloads, offering more memory per core than other machine families, with up to 32 TB of memory.
  • Accelerator-optimized—ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning (ML) and high performance computing (HPC). This family is the best option for workloads that require GPUs.

Compute Engine terminology

This documentation uses the following terms:

  • Machine family: A curated set of processor and hardware configurations optimized for specific workloads, for example, General-purpose, Accelerator-optimized, or Memory-optimized.

  • Machine series: Machine families are further classified by series, generation, and processor type.

    • Each series focuses on a different aspect of computing power or performance. For example, the E series offers efficient VMs at a low cost, while the C series offers better performance.

    • The generation is denoted by an ascending number. For example, the N1 series within the general-purpose machine family is the older version of the N2 series. A higher generation or series number usually indicates newer underlying CPU platforms or technologies. For example, the M3 series, which runs on 3rd generation Intel Xeon Scalable processors (Ice Lake), is a newer generation than the M2 series, which runs on 2nd generation Intel Xeon Scalable processors (Cascade Lake).

    Generation | Intel | AMD | Arm
    4th generation machine series | N4, C4, X4, M4, A4 | C4D, G4, N4D, H4D | N4A (Preview), C4A, A4X
    3rd generation machine series | C3, H3, Z3, M3, A3 | C3D | N/A
    2nd generation machine series | N2, E2, C2, M2, A2, G2 | N2D, C2D, T2D, E2 | T2A

  • Machine type: Every machine series offers at least one machine type. Each machine type provides a set of resources for your compute instance, such as vCPUs, memory, disks, and GPUs. If a predefined machine type does not meet your needs, you can also create a custom machine type for some machine series.

The following sections describe the different machine types.

Predefined machine types

Predefined machine types come with a non-configurable amount of memory and vCPUs. Predefined machine types use a variety of vCPU-to-memory ratios:

  • highcpu — from 1 to 3 GB memory per vCPU; typically, 2 GB memory per vCPU.
  • standard — from 3 to 7 GB memory per vCPU; typically, 4 GB memory per vCPU.
  • highmem — from 7 to 12 GB memory per vCPU; typically, 8 GB memory per vCPU.
  • megamem — from 12 to 15 GB memory per vCPU; typically, 14 GB memory per vCPU.
  • hypermem — from 15 to 24 GB memory per vCPU; typically, 16 GB memory per vCPU.
  • ultramem — from 24 to 31 GB memory per vCPU.

For example, a c3-standard-22 machine type has 22 vCPUs, and as a standard machine type, it also has 88 GB of memory.
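As an illustration of this naming convention, the ratio class in a machine type name can be decoded programmatically. The helper below is hypothetical, not a Compute Engine API; it uses the typical ratios listed above, and individual series deviate from them (for example, C4 standard uses 3.75 GB per vCPU):

```python
# Hypothetical helper: estimate memory for a predefined machine type name
# using the *typical* GB-per-vCPU ratios listed above. Real series can
# deviate from these ratios, so treat the result as an approximation.
TYPICAL_GB_PER_VCPU = {
    "highcpu": 2,
    "standard": 4,
    "highmem": 8,
    "megamem": 14,
    "hypermem": 16,
}

def estimate_memory_gb(machine_type: str) -> int:
    # Names look like "<series>-<class>-<vCPUs>", for example "c3-standard-22".
    parts = machine_type.split("-")
    ratio_class, vcpus = parts[1], int(parts[2])
    return TYPICAL_GB_PER_VCPU[ratio_class] * vcpus

print(estimate_memory_gb("c3-standard-22"))  # 4 GB/vCPU x 22 vCPUs -> 88
```

This matches the c3-standard-22 example above: 22 vCPUs at the standard ratio of 4 GB per vCPU gives 88 GB of memory.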

Local SSD machine types

Local SSD machine types are special predefined machine types. The machine type names include lssd. When you create a compute instance using one of the following machine types, Titanium SSD or Local SSD disks are automatically attached to the instance:

  • -lssd: Available with the C4, C4A, C4D, C3, C3D, and H4D machine series, these machine types attach a predetermined number of 375 GiB Titanium SSD or Local SSD disks to the instance. Examples of this machine type include c4a-standard-4-lssd, c3-standard-88-lssd, and c3d-highmem-360-lssd.
  • -standardlssd: Available with the storage-optimized Z3 machine series, these machine types provide up to 350 GiB of Titanium SSD disk capacity per vCPU. These machine types are recommended for high performance search and data analysis for medium-sized data sets. An example of this machine type is z3-highmem-22-standardlssd.
  • -highlssd: Available with the Z3 machine series, these machine types provide between 350 GiB and 600 GiB of Titanium SSD disk capacity per vCPU. These machine types offer high performance and are recommended for storage-intensive streaming and data analysis for large data sets. An example of this machine type is z3-highmem-88-highlssd.

Other machine series also support Local SSD disks but don't use a machine type name that includes lssd. For a list of all the machine types that you can use with Titanium SSD or Local SSD disks, see Choose a valid number of Local SSD disks.
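Because -lssd machine types attach disks in fixed 375 GiB units, the disk count for a given total capacity follows by division. The sketch below illustrates that arithmetic; the capacities passed in are illustrative values, not an authoritative per-machine-type mapping:

```python
# The -lssd machine types attach Titanium SSD / Local SSD in fixed
# 375 GiB units, so the disk count can be derived from total capacity.
# Capacities used below are illustrative, not an official mapping.
DISK_SIZE_GIB = 375

def lssd_disk_count(total_gib: int) -> int:
    if total_gib % DISK_SIZE_GIB != 0:
        raise ValueError("capacity must be a multiple of 375 GiB")
    return total_gib // DISK_SIZE_GIB

print(lssd_disk_count(3000))   # 3,000 GiB -> 8 disks
print(lssd_disk_count(12000))  # 12,000 GiB -> 32 disks
```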

Bare metal machine types

Bare metal machine types are a special predefined machine type. The machine type name includes -metal. When you create a compute instance using one of these machine types, no hypervisor is installed on the instance. You can attach disks to a bare metal instance, just as you would with a VM instance. Bare metal instances can be used in VPC networks and subnetworks in the same way as VM instances.

Note: Compute Engine bare metal instances aren't related to Bare Metal Solution.

For more information, see Bare metal instances on Compute Engine.

Custom machine types

If none of the predefined machine types match your workload needs, you can create a VM instance with a custom machine type for the N and E machine series in the general-purpose machine family.

Custom machine types cost slightly more to use compared to an equivalent predefined machine type, and there are limitations on the amount of memory and vCPUs that you can select for a custom machine type. The on-demand prices for custom machine types include a 5% premium over the on-demand and commitment prices for predefined machine types.

When creating a custom machine type, you can use the extended memory feature. Instead of using the default memory size based on the number of vCPUs you select, you can specify an amount of memory, up to the limit for the machine series.

For more information, see Create a VM with a custom machine type.
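To make the mechanics concrete, the sketch below builds a custom machine type name in the "series-custom-vCPUs-memoryMB" pattern that Compute Engine uses (for example, n2-custom-4-5120) and applies the 5% on-demand premium described above. The helper functions and the example price are hypothetical:

```python
# Sketch of the custom machine type naming pattern and the 5% on-demand
# premium described in this guide. The price figure below is a placeholder
# for illustration, not a real Compute Engine price.
def custom_machine_type(series: str, vcpus: int, memory_mb: int) -> str:
    # Custom memory is specified in MB, in multiples of 256.
    if memory_mb % 256 != 0:
        raise ValueError("memory must be a multiple of 256 MB")
    return f"{series}-custom-{vcpus}-{memory_mb}"

def custom_on_demand_price(predefined_equivalent_price: float) -> float:
    # Custom machine types carry a 5% premium over predefined pricing.
    return round(predefined_equivalent_price * 1.05, 4)

print(custom_machine_type("n2", 4, 5120))  # n2-custom-4-5120
print(custom_on_demand_price(100.0))       # 105.0
```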

Shared-core machine types

The E2 and N1 series contain shared-core machine types. These machine types timeshare a physical core, which can be a cost-effective method for running small, non-resource-intensive apps.

  • E2: offers e2-micro, e2-small, and e2-medium shared-core machine types with 2 vCPUs for short periods of bursting.

  • N1: offers f1-micro and g1-small shared-core machine types, which have up to 1 vCPU available for short periods of bursting.

For more information, see CPU bursting.

Machine family and series recommendations

The following tables provide recommendations for different workloads.

General-purpose workloads
N4, N4A (Preview), N4D, N2, N2D, N1: Balanced price/performance across a wide range of machine types
  • Medium traffic web and app servers
  • Containerized microservices
  • Business intelligence apps
  • Virtual desktops
  • CRM applications
  • Development and test environments
  • Batch processing
  • Storage and archive

C4, C4A, C4D, C3, C3D: Consistently high performance for a variety of workloads
  • High traffic web and app servers
  • Databases
  • In-memory caches
  • Ad servers
  • Game servers
  • Data analytics
  • Media streaming and transcoding
  • CPU-based ML training and inference

E2: Day-to-day computing at a lower cost
  • Low-traffic web servers
  • Back office apps
  • Containerized microservices
  • Microservices
  • Virtual desktops
  • Development and test environments

Tau T2D, Tau T2A: Best per-core performance/cost for scale-out workloads
  • Scale-out workloads
  • Web serving
  • Containerized microservices
  • Media transcoding
  • Large-scale Java applications

Optimized workloads
Storage-optimized (Z3): Highest block storage to compute ratios for storage-intensive workloads
  • SQL, NoSQL, and vector databases
  • Data analytics and data warehouses
  • Search
  • Media streaming
  • Large distributed parallel file systems

Compute-optimized (H4D, H3, C2, and C2D): Highest performance and lower cost for high performance computing (HPC), multi-node, and compute-bound workloads
  • Manufacturing, weather forecasting, electronic design automation (EDA), high-performance web servers
  • Healthcare and life sciences, scientific computing
  • Seismic processing and structural mechanics applications
  • Modeling and simulation workloads, AI/ML
  • High-performance web servers, game servers

Memory-optimized (X4, M4, M3, M2, M1): Highest memory to compute ratios for memory-intensive workloads
  • Small to extra-large SAP HANA in-memory databases
  • In-memory data stores, such as Redis
  • Simulation
  • High-performance databases such as Microsoft SQL Server and MySQL
  • Electronic design automation

Accelerator-optimized (A4X, A4, A3, A2, G4, G2): Optimized for accelerated high performance computing workloads
  • Generative AI models such as the following:
    • Large language models (LLM)
    • Diffusion models
    • Generative adversarial networks (GAN)
  • CUDA-enabled ML training and inference
  • High-performance computing (HPC)
  • Massively parallelized computation
  • BERT natural language processing
  • Deep learning recommendation model (DLRM)
  • Video transcoding
  • Remote visualization workstation

After you create a compute instance, you can use rightsizing recommendations to optimize resource utilization based on your workload. For more information, see Applying machine type recommendations for VMs.

General-purpose machine family guide

The general-purpose machine family offers several machine series with the best price-performance ratio for a variety of workloads.

Compute Engine offers general-purpose machine series that run on either x86 or Arm architecture.

x86

  • The C4 machine series is available on the Intel Granite Rapids and Emerald Rapids CPU platforms and powered by Titanium. C4 machine types are optimized to deliver consistently high performance and scale up to 288 vCPUs, 2.2 TB of DDR5 memory, and 18 TiB of Local SSD. C4 is available in highcpu (2 GB memory per vCPU), standard (3.75 GB memory per vCPU), and highmem (7.75 GB memory per vCPU) configurations. C4 instances are aligned with the underlying non-uniform memory access (NUMA) architecture to offer optimal, reliable, and consistent performance.
  • The C4D machine series is available on the AMD EPYC Turin CPU platform and powered by Titanium. C4D has a greater max boost frequency as compared with C3D, with improved instructions per clock (IPC) for faster database transactions. By leveraging Hyperdisk storage and Titanium networking, C4D demonstrates up to 55% higher queries per second on Cloud SQL for MySQL and 35% better performance on Memorystore for Redis workloads as compared to C3D. C4D instances are available with up to 384 vCPUs, 3 TB of DDR5 memory, and 12 TiB of Local SSD. C4D is available in highcpu (1.875 GB memory per vCPU), standard (3.875 GB memory per vCPU), and highmem (7.875 GB memory per vCPU) configurations. C4D instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
  • The N4 machine series is available on the Intel Emerald Rapids CPU platform and powered by Titanium. N4 machine types are optimized for flexibility and cost with both predefined and custom shapes and can scale up to 80 vCPUs and 640 GB of DDR5 memory. N4 is available in highcpu (2 GB per vCPU), standard (4 GB per vCPU), and highmem (8 GB per vCPU) configurations.
  • The N4D machine series is available on the AMD EPYC Turin CPU platform and powered by Titanium. N4D machine types are built for flexibility and cost optimization through an efficient architecture and next generation dynamic resource management, making better use of resources on host machines. You can create N4D VMs using predefined machine types with up to 96 vCPUs and 768 GB of DDR5 memory, or you can create N4D VMs using custom machine types that let you choose varied combinations of compute and memory to optimize costs and reduce resource waste. N4D is available in highcpu (2 GB per vCPU), standard (4 GB per vCPU), and highmem (8 GB per vCPU) configurations.
  • The N2 machine series has up to 128 vCPUs, 8 GB of memory per vCPU, and is available on the Intel Ice Lake and Intel Cascade Lake CPU platforms.
  • The N2D machine series has up to 224 vCPUs, 8 GB of memory per vCPU, and is available on the third generation AMD EPYC Milan platform.
  • The C3 machine series offers up to 176 vCPUs and 2, 4, or 8 GB of memory per vCPU on the Intel Sapphire Rapids CPU platform and Titanium. C3 instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
  • The C3D machine series offers up to 360 vCPUs and 2, 4, or 8 GB of memory per vCPU on the AMD EPYC Genoa CPU platform and Titanium. C3D instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
  • The E2 machine series has up to 32 virtual cores (vCPUs) with up to 128 GB of memory with a maximum of 8 GB per vCPU, and the lowest cost of all machine series. The E2 machine series has a predefined CPU platform, running either an Intel processor or an AMD processor. The processor is selected for you when you create the instance. This machine series provides a variety of compute resources for the lowest price on Compute Engine, especially when paired with committed use discounts.
  • The Tau T2D machine series provides an optimized feature set for scaling out. Each VM instance can have up to 60 vCPUs, 4 GB of memory per vCPU, and is available on third generation AMD EPYC Milan processors. The Tau T2D machine series doesn't use cluster-threading, so a vCPU is equivalent to an entire core.
  • The N1 machine series VMs can have up to 96 vCPUs, up to 6.5 GB of memory per vCPU, and are available on Intel Sandy Bridge, Ivy Bridge, Haswell, Broadwell, and Skylake CPU platforms.

Arm

  • N4A VMs (Preview) are powered by Google's custom-designed Axion processor, built on the Arm Neoverse N3 compute core and powered by Titanium IPU. They are engineered to be our most efficient and flexible Arm-based VMs, delivering exceptional price-performance for a wide range of general-purpose and scale-out workloads.

    Ideal use cases include web and application servers, microservices, containerized applications using Google Kubernetes Engine (GKE), open-source databases, and development and testing environments.

  • The C4A machine series is powered by Google Axion and built on the Arm Neoverse V2 compute core, which supports the Armv9 architecture. C4A instances are powered by Titanium IPU with disk and network offloads, which improve instance performance by reducing on-host processing.

    C4A instances provide up to 72 vCPUs with up to 8 GB of memory per vCPU in a single Uniform Memory Access (UMA) domain. C4A offers -lssd machine types that come with up to 6 TiB of Titanium SSD capacity. C4A instances don't use simultaneous multithreading (SMT). A vCPU in a C4A instance is equivalent to an entire physical core.

  • The Tau T2A machine series is the first machine series in Google Cloud built on the Arm Neoverse N1 compute core. Tau T2A machines are optimized to deliver compelling price for performance. Each VM can have up to 48 vCPUs with 4 GB of memory per vCPU. The Tau T2A machine series runs on a 64-core Ampere Altra processor with an Arm instruction set and an all-core frequency of 3 GHz. Tau T2A machine types support a single NUMA node, and a vCPU is equivalent to an entire core.

Storage-optimized machine family guide

The storage-optimized machine family is best suited for high-performance and flash-optimized workloads such as SQL, NoSQL, and vector databases; scale-out data analytics; data warehouses and search; and distributed file systems that need fast access to large amounts of data stored in local storage. The storage-optimized machine family is designed to provide high local storage throughput and IOPS at sub-millisecond latency.

  • Z3 standardlssd instances can have up to 176 vCPUs, 1,408 GB of memory, and 36 TiB of Titanium SSD.
  • Z3 highlssd instances can have up to 88 vCPUs, 704 GB of memory, and 36 TiB of Titanium SSD.
  • Z3 bare metal instances have 192 vCPUs, 1,536 GB of memory, and 72 TiB of local Titanium SSD.

Z3 runs on the Intel Xeon Scalable processor (code name Sapphire Rapids) with DDR5 memory and Titanium offload processors. Z3 brings together compute, networking, and storage innovations into one platform. Z3 instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.

Compute-optimized machine family guide

The compute-optimized machine family is optimized for running high performance computing (HPC), multi-node, and compute-bound applications by providing high performance per core.

  • H4D instances offer 192 vCPUs and 720 GB of DDR5 memory. H4D instances run on the AMD EPYC Turin CPU platform, with Titanium offload and Cloud RDMA support. H4D instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance. H4D delivers improved scalability for multi-node workloads and HPC workloads. Cloud RDMA is a networking infrastructure component that lets you build a true cloud HPC platform that can run scientific computations and ML/AI workloads. Cloud RDMA delivers price-performance ratios comparable to on-premises infrastructure.
  • H3 instances offer 88 vCPUs and 352 GB of DDR5 memory. H3 instances run on the Intel Sapphire Rapids CPU platform and Titanium offload processors. H3 instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance. H3 delivers performance improvements for a wide variety of HPC workloads such as molecular dynamics, computational geoscience, financial risk analysis, weather modeling, frontend and backend EDA, and computational fluid dynamics.
  • C2 instances offer up to 60 vCPUs, 4 GB of memory per vCPU, and are available on the Intel Cascade Lake CPU platform. C2 instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
  • C2D instances offer up to 112 vCPUs, up to 8 GB of memory per vCPU, and are available on the third generation AMD EPYC Milan platform. C2D instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.

Memory-optimized machine family guide

The memory-optimized machine family has machine series that are ideal for OLAP and OLTP SAP workloads, genomic modeling, electronic design automation, and memory-intensive HPC workloads. This family offers more memory per core than any other machine family, with up to 32 TB of memory.

  • X4 bare metal instances offer up to 1,920 vCPUs, with either 12.8 or 17 GB of memory per vCPU. X4 has machine types with 6, 8, 12, 16, 24, and 32 TB of memory, and is available on the Intel Sapphire Rapids CPU platform.
  • M4 instances offer up to 224 vCPUs, with up to 26.5 GB of memory per vCPU, and are available on the Intel Emerald Rapids CPU platform.
  • M3 instances offer up to 128 vCPUs, with up to 30.5 GB of memory per vCPU, and are available on the Intel Ice Lake CPU platform.
  • M2 instances are available as 6 TB, 9 TB, and 12 TB machine types, and are available on the Intel Cascade Lake CPU platform.
  • M1 instances offer up to 160 vCPUs, 14.9 GB to 24 GB of memory per vCPU, and are available on the Intel Skylake and Broadwell CPU platforms.

Accelerator-optimized machine family guide

The accelerator-optimized machine family is ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning (ML) and high performance computing (HPC). This machine family is the optimal choice for workloads that require GPUs.

Google also offers AI Hypercomputer for creating clusters of accelerator-optimized VMs with inter-GPU communication, which are designed for running very intensive AI and ML workloads. For more information, see AI Hypercomputer overview.

Arm

  • A4X instances offer up to 140 vCPUs and up to 884 GB of memory. Each A4X machine type has 4 NVIDIA B200 GPUs attached to 2 NVIDIA Grace CPUs. A4X instances have a maximum network bandwidth of up to 2,000 Gbps.

    Important: The Compute Engine Service Level Agreement (SLA) doesn't apply to the A4X machine series.

x86

  • A4 instances offer up to 224 vCPUs and up to 3,968 GB of memory. Each A4 machine type has 8 NVIDIA B200 GPUs attached. A4 instances have a maximum network bandwidth of up to 3,600 Gbps and are available on the Intel Emerald Rapids CPU platform.
  • A3 instances offer up to 224 vCPUs and up to 2,952 GB of memory. Each A3 machine type has either 1, 2, 4, or 8 NVIDIA H100 or 8 H200 GPUs attached. A3 instances have a maximum network bandwidth of up to 3,200 Gbps and are available on the following CPU platforms:
    • Intel Emerald Rapids - A3 Ultra
    • Intel Sapphire Rapids - A3 Mega, High, and Edge
  • A3 instances are available with the A3 Edge machine type (a3-edgegpu-8g-nolssd), which offers 208 vCPUs, 1,872 GB of memory, and 8 NVIDIA H100 GPUs, on the Intel Sapphire Rapids CPU platform and Titanium.
  • A2 instances offer 12 to 96 vCPUs, and up to 1,360 GB of memory. Each A2 machine type has either 1, 2, 4, 8, or 16 NVIDIA A100 GPUs attached. A2 instances have a maximum network bandwidth of up to 100 Gbps and are available on the Intel Cascade Lake CPU platform.
  • G4 instances offer 48 to 384 vCPUs and up to 1,440 GB of memory. Each G4 instance has either 1, 2, 4, or 8 NVIDIA RTX PRO 6000 GPUs attached. G4 instances have a maximum network bandwidth of up to 400 Gbps and are available on the AMD EPYC Turin CPU platform.
  • G2 instances offer 4 to 96 vCPUs and up to 432 GB of memory. Each G2 machine type has either 1, 2, 4, or 8 NVIDIA L4 GPUs attached. G2 instances have a maximum network bandwidth of up to 100 Gbps and are available on the Intel Cascade Lake CPU platform.

Machine series comparison

Use the following table to compare each machine family and determine which one is appropriate for your workload. If, after reviewing this section, you are still unsure which family is best for your workload, start with the general-purpose machine family. For details about all supported processors, see CPU platforms.

To learn how your selection affects the performance of disk volumes attached to your compute instances, see the documentation for the disk types that you plan to use.

Compare the characteristics of different machine series, from C4 to G2. The original page presents this comparison as an interactive table; the rows are reproduced here with one property per line. Values run left to right across the machine series in this order (column order inferred from the per-series details in this guide): C4, C4A, C4D, C3, C3D, N4, N4A, N4D, N2, N2D, N1, Tau T2D, Tau T2A, E2, Z3, H4D, H3, C2, C2D, X4, M4, M3, M2, M1, N1+GPU, A4X, A4, A3 Ultra, A3 (Mega, High, and Edge), A2, G4, G2.

Machine family: General-purpose | General-purpose | General-purpose | General-purpose | General-purpose | General-purpose | General-purpose | General-purpose | General-purpose | General-purpose | General-purpose | General-purpose | General-purpose | Cost optimized | Storage optimized | Compute optimized | Compute optimized | Compute optimized | Compute optimized | Memory optimized | Memory optimized | Memory optimized | Memory optimized | Memory optimized | Accelerator optimized | Accelerator optimized | Accelerator optimized | Accelerator optimized | Accelerator optimized | Accelerator optimized | Accelerator optimized | Accelerator optimized

Deployment: VM and bare metal | VM | VM and bare metal | VM and bare metal | VM | VM | VM | VM | VM | VM | VM | VM | VM | VM | VM and bare metal | VM | VM | VM | VM | Bare metal | VM | VM | VM | VM | VM | VM | VM | VM | VM | VM | VM | VM

CPUs: Intel Emerald Rapids and Granite Rapids | Google Axion | AMD EPYC Turin | Intel Sapphire Rapids | AMD EPYC Genoa | Intel Emerald Rapids | Google Axion | AMD EPYC Turin | Intel Cascade Lake and Ice Lake | AMD EPYC Rome and EPYC Milan | Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge | AMD EPYC Milan | Ampere Altra | Intel Skylake, Broadwell, and Haswell, AMD EPYC Rome and EPYC Milan | Intel Sapphire Rapids | AMD EPYC Turin | Intel Sapphire Rapids | Intel Cascade Lake | AMD EPYC Milan | Intel Sapphire Rapids | Intel Emerald Rapids | Intel Ice Lake | Intel Cascade Lake | Intel Skylake and Broadwell | Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge | NVIDIA Grace | Intel Emerald Rapids | Intel Emerald Rapids | Intel Sapphire Rapids | Intel Cascade Lake | AMD EPYC Turin | Intel Cascade Lake

CPU architecture: x86 | Arm | x86 | x86 | x86 | x86 | Arm | x86 | x86 | x86 | x86 | x86 | Arm | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | x86 | Arm | x86 | x86 | x86 | x86 | x86 | x86

vCPUs: 2 to 288 | 1 to 72 | 2 to 384 | 4 to 176 | 4 to 360 | 2 to 80 | 1 to 64 | 2 to 96 | 2 to 128 | 2 to 224 | 1 to 96 | 1 to 60 | 1 to 48 | 0.25 to 32 | 8 to 192 | 192 | 88 | 4 to 60 | 2 to 112 | 480 to 1,920 | 16 to 224 | 32 to 128 | 208 to 416 | 40 to 160 | 1 to 96 | 140 | 224 | 224 | 208 | 12 to 96 | 48 to 384 | 4 to 96

vCPU definition: Thread | Core | Thread | Thread | Thread | Thread | Core | Thread | Thread | Thread | Thread | Core | Core | Thread | Thread | Core | Core | Thread | Thread | Thread | Thread | Thread | Thread | Thread | Thread | Core | Thread | Thread | Thread | Thread | Thread | Thread

Memory: 2 to 2,232 GB | 2 to 576 GB | 3 to 3,072 GB | 8 to 1,408 GB | 8 to 2,880 GB | 2 to 640 GB | 2 to 512 GB | 2 to 768 GB | 2 to 864 GB | 2 to 896 GB | 1.8 to 624 GB | 4 to 240 GB | 4 to 192 GB | 1 to 128 GB | 64 to 1,536 GB | 720 to 1,488 GB | 352 GB | 16 to 240 GB | 4 to 896 GB | 6,144 to 32,768 GB | 248 to 5,952 GB | 976 to 3,904 GB | 5,888 to 11,776 GB | 961 to 3,844 GB | 3.75 to 624 GB | 884 GB | 3,968 GB | 2,952 GB | 1,872 GB | 85 to 1,360 GB | 180 to 1,440 GB | 16 to 432 GB

Confidential Computing options (only the values for supported series survived extraction): Intel TDX; AMD SEV-SNP; Intel TDX, NVIDIA Confidential Computing

Disk interface type: NVMe | NVMe | NVMe | NVMe | NVMe | NVMe | NVMe | NVMe | SCSI (PD and Local SSD), NVMe (Local SSD) | SCSI (PD and Local SSD), NVMe (Local SSD) | SCSI (PD and Local SSD), NVMe (Local SSD) | SCSI (PD and Local SSD), NVMe (Local SSD) | NVMe | SCSI | NVMe | NVMe | NVMe | SCSI (PD and Local SSD), NVMe (Local SSD) | SCSI (PD and Local SSD), NVMe (Local SSD) | NVMe | NVMe | NVMe | SCSI | SCSI (PD and Local SSD), NVMe (Local SSD) | SCSI (PD and Local SSD), NVMe (Local SSD) | NVMe | NVMe | NVMe | NVMe | SCSI (PD and Local SSD), NVMe (Local SSD) | NVMe | NVMe

Maximum Local SSD capacity: 18 TiB | 6 TiB | 12 TiB | 12 TiB | 12 TiB | 0 | 0 | 0 | 9 TiB | 9 TiB | 9 TiB | 0 | 0 | 0 | 36 TiB (VM), 72 TiB (Metal) | 3 TiB | 0 | 3 TiB | 3 TiB | 0 | 0 | 3 TiB | 0 | 3 TiB | 9 TiB | 12 TiB | 12 TiB | 12 TiB | 6 TiB | 3 TiB | 12 TiB | 3 TiB

Disk support, zonal and regional (three original rows whose labels and some cells didn't survive extraction):
Zonal and Regional | Zonal and Regional | Zonal and Regional | Zonal | Zonal | Zonal and Regional | Zonal | Zonal | Zonal | Zonal | Zonal and Regional | Zonal
Zonal | Zonal | Zonal and Regional | Zonal and Regional | Zonal and Regional | Zonal | Zonal | Zonal and Regional | Zonal | Zonal | Zonal | Zonal | Zonal | Zonal | Zonal | Zonal and Regional | Zonal | Zonal | Zonal
Zonal | Zonal | Zonal and Regional | Zonal and Regional | Zonal and Regional | Zonal | Zonal | Zonal and Regional | Zonal | Zonal | Zonal | Zonal | Zonal | Zonal | Zonal and Regional | Zonal | Zonal | Zonal

Network interface types: gVNIC and IDPF | gVNIC | gVNIC and IDPF | gVNIC and IDPF | gVNIC | gVNIC | gVNIC | gVNIC | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | gVNIC | gVNIC and VirtIO-Net | gVNIC and IDPF | gVNIC, IRDMA | gVNIC | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | IDPF | gVNIC | gVNIC | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | gVNIC and VirtIO-Net | gVNIC and MRDMA | gVNIC and MRDMA | gVNIC and MRDMA | gVNIC | gVNIC and VirtIO-Net | gVNIC | gVNIC and VirtIO-Net

Network bandwidth: 10 to 100 Gbps | 10 to 50 Gbps | 10 to 100 Gbps | 23 to 100 Gbps | 20 to 100 Gbps | 10 to 50 Gbps | Up to 50 Gbps | 10 to 50 Gbps | 10 to 32 Gbps | 10 to 32 Gbps | 2 to 32 Gbps | 10 to 32 Gbps | 10 to 32 Gbps | 1 to 16 Gbps | 23 to 100 Gbps | up to 200 Gbps | up to 200 Gbps | 10 to 32 Gbps | 10 to 32 Gbps | up to 100 Gbps | 16 to 100 Gbps | up to 32 Gbps | up to 32 Gbps | up to 32 Gbps | 2 to 32 Gbps | up to 2,000 Gbps | up to 3,600 Gbps | up to 3,200 Gbps | up to 1,800 Gbps | 24 to 100 Gbps | 50 to 400 Gbps | 10 to 100 Gbps

Per VM Tier_1 networking bandwidth (supported series only): 50 to 200 Gbps | 50 to 100 Gbps | 50 to 200 Gbps | 50 to 200 Gbps | 50 to 200 Gbps | 50 to 100 Gbps | 50 to 100 Gbps | 50 to 200 Gbps | 50 to 100 Gbps | 50 to 100 Gbps | 50 to 100 Gbps | 50 to 100 Gbps

Maximum number of GPUs: 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 | 4 | 8 | 8 | 8 | 16 | 8 | 8

Discounts: sustained use and committed use discounts apply to most series; the original table marks some cells "Only at GA" and others "Only for the new CUD model".

GPUs and compute instances

GPUs are used to accelerate workloads, and are supported for A4X, A4, A3, A2, G4, G2, and N1 instances. For instances that use A4X, A4, A3, A2, G4, or G2 machine types, the GPUs are automatically attached when you create the instance. For instances that use N1 machine types, you can attach GPUs to the instance during or after instance creation. GPUs can't be used with any other machine series.

Accelerator-optimized instances have a fixed number of GPUs, vCPUs, and memory per machine type, with the exception of G2 machines, which offer a custom memory range. N1 instances with fewer GPUs attached are limited to a maximum number of vCPUs. In general, a higher number of GPUs lets you create instances with a higher number of vCPUs and memory.
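The attachment rules above can be summarized in a small lookup. This is an illustrative sketch of the rules stated in this section, not a Compute Engine API:

```python
# Sketch of the GPU-support rules described above: accelerator-optimized
# series get GPUs attached automatically at creation, N1 lets you attach
# GPUs during or after creation, and other series don't support GPUs.
AUTO_ATTACH_SERIES = {"A4X", "A4", "A3", "A2", "G4", "G2"}

def gpu_support(series: str) -> str:
    series = series.upper()
    if series in AUTO_ATTACH_SERIES:
        return "GPUs attached automatically at instance creation"
    if series == "N1":
        return "GPUs can be attached during or after instance creation"
    return "GPUs not supported"

print(gpu_support("g2"))  # GPUs attached automatically at instance creation
print(gpu_support("n1"))  # GPUs can be attached during or after instance creation
print(gpu_support("e2"))  # GPUs not supported
```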

For more information, see GPUs on Compute Engine.


Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-15 UTC.