Compute-optimized machine family for Compute Engine
Compute-optimized instances are ideal for compute-intensive and high-performance computing (HPC) workloads. Compute-optimized instances offer the highest performance per core and are built on architecture that uses features like non-uniform memory access (NUMA) for optimal, reliable, and uniform performance.
Note: For the C3, C3D, C4, C4D, or C4A machine series, see General-purpose machine family.

| Machine | Workloads |
|---|---|
| H4D machine series | Tightly-coupled HPC workloads that scale across multiple nodes |
| H3 machine series | Compute-intensive HPC workloads such as modeling and simulation |
| C2D machine series | Compute-bound workloads such as high-performance web serving, media transcoding, gaming, HPC, and EDA |
| C2 machine series | Compute-intensive workloads that need the highest performance per core |
The following machine series are available in this machine family:
- H4D instances are powered by Titanium and fifth generation AMD EPYC Turin processors, which have a base frequency of 2.7 GHz and a maximum frequency of 4.1 GHz. H4D instances have 192 cores (vCPUs) and up to 1,488 GB of memory. H4D instances can be used with Local SSD storage and Cloud RDMA networking.
- H3 instances are powered by Titanium and two fourth generation Intel Xeon Scalable processors (code-named Sapphire Rapids), which have an all-core frequency of 3.0 GHz. H3 instances have 88 vCPUs and 352 GB of DDR5 memory.
- C2D instances run on the third generation AMD EPYC Milan processor and offer up to 3.5 GHz max boost frequency. C2D instances have flexible sizing between 2 and 112 vCPUs and 2 to 8 GB of memory per vCPU.
- C2 instances run on the second generation Intel Xeon Scalable processor (Cascade Lake), which offers up to 3.9 GHz sustained single-core max turbo frequency. C2 offers instances with 4 to 60 vCPUs and 4 GB of memory per vCPU.
H4D machine series
H4D instances are powered by fifth generation AMD EPYC Turin processors and Titanium offload processors.
H4D instances deliver high performance, low cost, and scalability for multi-node workloads. H4D instances are single-threaded and are optimized for tightly-coupled applications that scale across multiple nodes. Leveraging technologies like Titanium SSD, RDMA-enabled 200 Gbps networking, and cluster management capabilities, these instances prioritize performance and workload-specific optimizations. Additionally, you can use Dynamic Workload Scheduler for scheduled or immediate cluster deployment, making H4D ideal for bursty HPC workload needs.
An H4D instance uses all the vCPUs on an entire host server. H4D instances can use the entire host network bandwidth and come with a default network bandwidth rate of up to 200 Gbps. However, the bandwidth from the instance to the internet is limited to 1 Gbps.
Simultaneous multithreading (SMT) is disabled for H4D instances and can't be enabled. Compute Engine also doesn't overcommit H4D instances, which ensures consistent performance.
H4D instances are available on-demand, or with one- and three-year committed use discounts (CUDs). To compare these methods, see Compute Engine instances provisioning models.
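As a starting point, you can create an on-demand H4D instance with the gcloud CLI. This is a minimal sketch: the instance name, zone, and image are placeholders, so substitute a zone where H4D is available and an OS image that is fully supported for H4D, such as an HPC VM image.

```shell
# Create an on-demand H4D instance (names and zone are placeholders).
gcloud compute instances create my-h4d-instance \
    --zone=us-central1-a \
    --machine-type=h4d-highmem-192 \
    --image-family=hpc-rocky-linux-8 \
    --image-project=cloud-hpc-image-public
```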
H4D Limitations
The H4D machine series has the following restrictions:
- The H4D machine series is available only as predefined machine types. Custom machine types aren't available.
- You can't use GPUs with H4D instances.
- Outbound data transfer is limited to 1 Gbps.
- You can't create machine images from H4D instances.
- H4D machine images can't be used to create disks.
- You can't share disks between instances, either in multi-writer mode or read-only mode.
- Hyperdisk Balanced performance is capped at 15,000 IOPS and 240 MBps throughput.
- Live migration isn't supported for H4D instances.
H4D machine types
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 |
|---|---|---|---|---|
| h4d-standard-192 | 192 | 720 | Not supported | Up to 200 Gbps |
| h4d-highmem-192 | 192 | 1,488 | Not supported | Up to 200 Gbps |
| h4d-highmem-192-lssd | 192 | 1,488 | 10 x 375 GiB (3,750 GiB) | Up to 200 Gbps |
1 A vCPU represents an entire core—no simultaneous multithreading (SMT).
2 Default egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
Supported disk types for H4D
H4D instances can use the following block storage types:
- Hyperdisk Balanced (`hyperdisk-balanced`)
- Local Titanium SSD
Disk and capacity limits
The following restrictions apply:
- The number of Hyperdisk volumes can't exceed 64 per VM.
- The maximum total disk capacity across all disks can't exceed 512 TiB.

For details about the capacity limits, see Hyperdisk capacity limits per VM.
H4D storage limits are described in the following table:
Maximum number of disks per instance:

| Machine types | All Hyperdisk types | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk Extreme |
|---|---|---|---|---|
| h4d-standard-192 | 64 | 8 | 0 | 0 |
| h4d-highmem-192 | 64 | 8 | 0 | 0 |
| h4d-highmem-192-lssd | 64 | 8 | 0 | 0 |
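To stay within the H4D performance caps for Hyperdisk Balanced (15,000 IOPS and 240 MBps throughput), you can provision a volume at or below those values and then attach it. This is a sketch; the disk name, instance name, size, and zone are placeholders.

```shell
# Create a Hyperdisk Balanced volume provisioned within the H4D caps
# (at most 15,000 IOPS and 240 MBps), then attach it to an instance.
gcloud compute disks create my-h4d-data-disk \
    --zone=us-central1-a \
    --type=hyperdisk-balanced \
    --size=1000GB \
    --provisioned-iops=15000 \
    --provisioned-throughput=240

gcloud compute instances attach-disk my-h4d-instance \
    --zone=us-central1-a \
    --disk=my-h4d-data-disk
```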
Network support for H4D instances
H4D instances require gVNIC network interfaces. H4D supports up to 200 Gbps network bandwidth for standard networking. Instance to internet egress bandwidth is limited to 1 Gbps.
If using Cloud RDMA, you must configure at least two network interfaces (vNICs) when you create each instance:
- gVNIC: This vNIC uses the gVNIC driver and is used for normal network communication. It is fully connected to the Google network and can connect to the internet.
- IRDMA: The other vNIC uses an Intel iDPF/iRDMA driver and is used only for Cloud RDMA communication. This network interface doesn't connect to the internet.
Before migrating to H4D or creating H4D instances, make sure that the operating system image that you use is fully supported for H4D. Fully supported images include support for 200 Gbps network bandwidth. If you are using Cloud RDMA, then the OS image must also support the IRDMA network interface type. If your H4D instance uses an operating system that is not fully supported or that has earlier versions of the network drivers, then your instance might not be able to achieve the maximum network bandwidth for H4D instances.
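The two-vNIC layout for Cloud RDMA might look like the following sketch. The network, subnet, instance name, and zone are placeholders, and this assumes the IRDMA interface attaches to a VPC network that was created for RDMA traffic; check the Cloud RDMA setup guide for the exact prerequisites.

```shell
# Create an H4D instance with a gVNIC interface for regular traffic and
# an IRDMA interface used only for Cloud RDMA (placeholder names).
gcloud compute instances create my-h4d-rdma-instance \
    --zone=us-central1-a \
    --machine-type=h4d-highmem-192 \
    --network-interface=nic-type=GVNIC,network=default,subnet=default \
    --network-interface=nic-type=IRDMA,network=my-rdma-network,subnet=my-rdma-subnet,no-address
```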
Maintenance experience for H4D instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure, or in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The H4D machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance | Simulate maintenance |
|---|---|---|---|---|---|
| h4d-standard-192 | Minimum of 30 days | Terminate | 7 days | Yes | No |
| h4d-highmem-192 | Minimum of 30 days | Terminate | 7 days | Yes | No |
| h4d-highmem-192-lssd | Minimum of 30 days | Terminates with Local SSD data persistence | 7 days | Yes | No |
The maintenance frequencies shown in the previous table are approximations, not guarantees. Compute Engine might occasionally perform maintenance more frequently.
H3 machine series
H3 instances are powered by the fourth generation Intel Xeon Scalable processors (code-named Sapphire Rapids), DDR5 memory, and Titanium offload processors.
H3 instances offer the best price performance for compute-intensive high performance computing (HPC) workloads in Compute Engine. H3 instances are single-threaded and are ideal for a variety of modeling and simulation workloads including computational fluid dynamics, crash safety, genomics, financial modeling, and general scientific and engineering computing. H3 instances support compact placement, which is optimized for tightly-coupled applications that scale across multiple nodes.
The H3 series is available in one size, comprising an entire host server. To save on licensing costs, you can customize the number of visible cores, but you are charged the same price for the instance. H3 instances can use the entire host network bandwidth and come with a default network bandwidth rate of up to 200 Gbps. However, the bandwidth from the instance to the internet is limited to 1 Gbps.
Simultaneous multithreading (SMT) is disabled for H3 instances and can't be enabled. Compute Engine also doesn't overcommit H3 instances, which ensures consistent performance.
H3 instances are available on-demand, or with one- and three-year committed use discounts (CUDs). H3 instances can be used with Google Kubernetes Engine.
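Creating an H3 instance might look like the following sketch. The instance name and zone are placeholders; the optional visible-core count (44 here is illustrative) is the mechanism mentioned above for reducing per-core licensing costs without changing the instance price.

```shell
# Create an H3 instance, optionally exposing fewer visible cores to the
# guest OS to save on per-core software licensing (placeholder values).
gcloud compute instances create my-h3-instance \
    --zone=us-central1-a \
    --machine-type=h3-standard-88 \
    --visible-core-count=44
```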
H3 Limitations
The H3 machine series has the following restrictions:
- The H3 machine series is available only as a predefined machine type. Custom machine shapes aren't available.
- You can't use GPUs with H3 instances.
- Outbound data transfer is limited to 1 Gbps.
- Persistent Disk and Google Cloud Hyperdisk performance is capped at 15,000 IOPS and 240 MBps throughput.
- H3 instances don't support machine images.
- H3 instances support only the NVMe storage interface.
- H3 instance images can't be used to create disks.
- H3 instances don't support sharing disks between instances, either in multi-writer mode or read-only mode.
H3 machine types
H3 instances are available as a predefined configuration with 88 vCPUs and 352 GB of memory.
| Machine types | vCPUs1 | Memory (GB) | Local SSD | Default egress bandwidth (Gbps)2 |
|---|---|---|---|---|
| h3-standard-88 | 88 | 352 | Not supported | Up to 200 Gbps |
1 A vCPU represents an entire core—no simultaneous multithreading (SMT).
2 Default egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
Supported disk types for H3
H3 instances can use the following block storage types:
- Balanced Persistent Disk (`pd-balanced`)
- Hyperdisk Balanced (`hyperdisk-balanced`)
- Hyperdisk Throughput (`hyperdisk-throughput`)
Disk and capacity limits
If supported by the machine type, you can attach a mixture of Hyperdisk and Persistent Disk volumes to an instance, but the following restrictions apply:
- The combined number of both Hyperdisk and Persistent Disk volumes can't exceed 128 per instance.
- The maximum total disk capacity (in TiB) across all disk types can't exceed:
  - 512 TiB for all Hyperdisk
  - 512 TiB for a mixture of Hyperdisk and Persistent Disk
  - 257 TiB for all Persistent Disk

For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.
H3 storage limits are described in the following table:
Maximum number of disks per instance:

| Machine types | All disk types1 | All Hyperdisk types | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk Extreme |
|---|---|---|---|---|---|
| h3-standard-88 | 128 | 64 | 8 | 64 | 0 |

1 This limit applies to Persistent Disk and Hyperdisk, but doesn't include Local SSD disks.
Network support for H3 instances
H3 instances require gVNIC network interfaces. H3 supports up to 200 Gbps network bandwidth for standard networking.
Before migrating to H3 or creating H3 instances, make sure that the operating system image that you use supports the gVNIC driver. To get the best possible performance on H3 instances, on the Networking features tab of the OS details table, choose an OS image that supports both "Tier_1 Networking" and "200 Gbps network bandwidth". These images include an updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0. If your H3 instance uses an operating system with an older version of the gVNIC driver, it is still supported, but the instance might experience suboptimal performance, such as lower network bandwidth or higher latency.
If you use a custom OS image with the H3 machine series, you can manually install the most recent gVNIC driver. Version v1.4.2 or later is recommended for use with H3 instances. Google recommends using the latest gVNIC driver version to benefit from additional features and bug fixes.
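To confirm which network driver the guest is actually using, you can inspect the interface from inside the instance. The interface name varies by image (`ens4` here is only an example), and the reported version may not reflect backported fixes, as noted above.

```shell
# Inside the guest OS: show the driver name and version for a NIC.
# Look for "driver: gve" in the output; the interface name varies.
ethtool -i ens4

# Alternatively, list interface names first if you are unsure:
ip -brief link
```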
Maintenance experience for H3 instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure, or in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The H3 machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance | Simulate maintenance |
|---|---|---|---|---|---|
| h3-standard-88 | Minimum of 30 days | Live migrate | 7 days | Yes | Yes |
The maintenance frequencies shown in the previous table are approximations, not guarantees. Compute Engine might occasionally perform maintenance more frequently.
C2D machine series
The C2D machine series provides the largest instance sizes and is best suited for high performance computing (HPC). The C2D series also has the largest available last-level cache (LLC) per core.
The C2D machine series comes in different machine types ranging from 2 to 112 vCPUs, and offers up to 896 GB of memory. You can attach up to 3 TiB of Local SSD storage to these machine types for applications that require higher storage performance.
- C2D standard and C2D high-cpu machines serve existing compute-bound workloads including high-performance web servers, media transcoding, and gaming.
- C2D high-memory machines serve specialized workloads such as HPC and EDA, which need more memory.
The C2D series supports these compute-bound workloads by using the third generation AMD EPYC Milan platform.
The C2D series supports Confidential VM.
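Enabling Confidential VM on a C2D instance might look like the following sketch. The instance name, zone, image, and machine size are placeholders; this assumes AMD SEV as the confidential computing type and an OS image that supports it, and it sets a maintenance policy consistent with the restart-in-place behavior described in the maintenance table below.

```shell
# Create a C2D instance with Confidential VM (AMD SEV) enabled
# (placeholder names; pick an SEV-capable OS image).
gcloud compute instances create my-c2d-cvm \
    --zone=us-central1-a \
    --machine-type=c2d-standard-8 \
    --confidential-compute-type=SEV \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-2204-lts \
    --image-project=ubuntu-os-cloud
```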
C2D Limitations
The C2D machine series has the following restrictions:
- You can't attach regional persistent disks to a C2D instance.
- The C2D machine series is subject to different disk performance limits than the general-purpose and memory-optimized machine families.
- The C2D machine series is available only in select zones and regions, on specific CPU processors.
- The C2D machine series doesn't support GPUs.
- The C2D machine series doesn't support sole-tenant nodes.
C2D machine types
C2D instances are available as predefined configurations in sizes ranging from 2 vCPUs to 112 vCPUs and up to 896 GB of memory:
- `standard`: 4 GB of memory per vCPU
- `highcpu`: 2 GB of memory per vCPU
- `highmem`: 8 GB of memory per vCPU
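The per-vCPU ratios above fully determine each C2D machine type's memory, so the tables that follow can be cross-checked with a few lines of code. This is an illustrative sketch; the function name is made up for the example.

```python
# Sketch: derive a C2D machine type's memory from its name, using the
# per-vCPU ratios stated above (standard: 4 GB, highcpu: 2 GB, highmem: 8 GB).
GB_PER_VCPU = {"standard": 4, "highcpu": 2, "highmem": 8}

def c2d_memory_gb(machine_type: str) -> int:
    """Return memory in GB for a C2D machine type such as 'c2d-standard-8'."""
    _, family, vcpus = machine_type.split("-")
    return GB_PER_VCPU[family] * int(vcpus)

print(c2d_memory_gb("c2d-standard-112"))  # 448
print(c2d_memory_gb("c2d-highmem-56"))    # 448
print(c2d_memory_gb("c2d-highcpu-16"))    # 32
```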
C2D standard
| Machine types | vCPUs1 | Memory (GB) | Local SSD2 | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps)4 |
|---|---|---|---|---|---|
| c2d-standard-2 | 2 | 8 | 1, 2, 4, or 8 | Up to 10 | N/A |
| c2d-standard-4 | 4 | 16 | 1, 2, 4, or 8 | Up to 10 | N/A |
| c2d-standard-8 | 8 | 32 | 1, 2, 4, or 8 | Up to 16 | N/A |
| c2d-standard-16 | 16 | 64 | 1, 2, 4, or 8 | Up to 32 | N/A |
| c2d-standard-32 | 32 | 128 | 2, 4, or 8 | Up to 32 | Up to 50 |
| c2d-standard-56 | 56 | 224 | 4 or 8 | Up to 32 | Up to 50 |
| c2d-standard-112 | 112 | 448 | 8 | Up to 32 | Up to 100 |
1 A vCPU represents a single logical CPU thread. See CPU platforms.
2 Number of 375 GiB Local SSD disks that you can choose to add when creating the instance.
3 Default egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 Supports high-bandwidth networking for larger machine types. For Windows OS images, the maximum network bandwidth is limited to 50 Gbps.
C2D high-cpu
| Machine types | vCPUs1 | Memory (GB) | Local SSD2 | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps)4 |
|---|---|---|---|---|---|
| c2d-highcpu-2 | 2 | 4 | 1, 2, 4, or 8 | Up to 10 | N/A |
| c2d-highcpu-4 | 4 | 8 | 1, 2, 4, or 8 | Up to 10 | N/A |
| c2d-highcpu-8 | 8 | 16 | 1, 2, 4, or 8 | Up to 16 | N/A |
| c2d-highcpu-16 | 16 | 32 | 1, 2, 4, or 8 | Up to 32 | N/A |
| c2d-highcpu-32 | 32 | 64 | 2, 4, or 8 | Up to 32 | Up to 50 |
| c2d-highcpu-56 | 56 | 112 | 4 or 8 | Up to 32 | Up to 50 |
| c2d-highcpu-112 | 112 | 224 | 8 | Up to 32 | Up to 100 |
1 A vCPU represents a single logical CPU thread. See CPU platforms.
2 Number of 375 GiB Local SSD disks that you can choose to add when creating the instance.
3 Default egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 Supports high-bandwidth networking for larger machine types. For Windows OS images, the maximum network bandwidth is limited to 50 Gbps.
C2D high-mem
| Machine types | vCPUs1 | Memory (GB) | Local SSD2 | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps)4 |
|---|---|---|---|---|---|
| c2d-highmem-2 | 2 | 16 | 1, 2, 4, or 8 | Up to 10 | N/A |
| c2d-highmem-4 | 4 | 32 | 1, 2, 4, or 8 | Up to 10 | N/A |
| c2d-highmem-8 | 8 | 64 | 1, 2, 4, or 8 | Up to 16 | N/A |
| c2d-highmem-16 | 16 | 128 | 1, 2, 4, or 8 | Up to 32 | N/A |
| c2d-highmem-32 | 32 | 256 | 2, 4, or 8 | Up to 32 | Up to 50 |
| c2d-highmem-56 | 56 | 448 | 4 or 8 | Up to 32 | Up to 50 |
| c2d-highmem-112 | 112 | 896 | 8 | Up to 32 | Up to 100 |
1 A vCPU represents a single logical CPU thread. See CPU platforms.
2 Number of 375 GiB Local SSD disks that you can choose to add when creating the instance.
3 Default egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 Supports high-bandwidth networking for larger machine types. For Windows OS images, the maximum network bandwidth is limited to 50 Gbps.
Supported disk types for C2D
C2D instances can use the following block storage types:
- Standard Persistent Disk (`pd-standard`)
- Balanced Persistent Disk (`pd-balanced`)
- SSD (performance) Persistent Disk (`pd-ssd`)
Each C2D instance can have a maximum of 128 Persistent Disk volumes (including the boot disk) attached to the instance, and a total disk capacity of 257 TiB.
C2D instances with Confidential Computing running Microsoft Windows with the NVMe disk interface have a disk attachment limit of 16 disks. See Known issues for details.
Note: Persistent Disk usage is charged separately from machine type pricing.

Network support for C2D instances
The C2D machine types support either the VirtIO or gVNIC network driver. C2D instances with 32 or more vCPUs support higher network bandwidths of 50 Gbps and 100 Gbps with gVNIC and per VM Tier_1 networking performance.
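Opting in to Tier_1 networking might look like the following sketch. The instance name and zone are placeholders; this assumes a C2D shape with 32 or more vCPUs and a gVNIC interface, which the paragraph above lists as prerequisites.

```shell
# Create a C2D instance with gVNIC and per-VM Tier_1 networking
# (placeholder names; requires 32+ vCPUs).
gcloud compute instances create my-c2d-tier1 \
    --zone=us-central1-a \
    --machine-type=c2d-standard-112 \
    --network-interface=nic-type=GVNIC \
    --network-performance-configs=total-egress-bandwidth-tier=TIER_1
```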
Maintenance experience for C2D instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure, or in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The C2D machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance | Simulate maintenance |
|---|---|---|---|---|---|
| All machine types | Minimum of 30 days | Live migrate | 60 seconds | No | Yes |
| Confidential VM | Minimum of 30 days | Restart in place | 60 seconds | No | Yes |
The maintenance frequencies shown in the previous table are approximations, not guarantees. Compute Engine might occasionally perform maintenance more frequently.
C2 machine series
The C2 machine series provides full transparency into the architecture of the underlying server platforms, letting you fine-tune the performance. Machine types in this series offer much more computing power, and are generally more robust for compute-intensive workloads, than the N1 high-CPU machine types.
The C2 series comes in different machine types ranging from 4 to 60 vCPUs, and offers up to 240 GB of memory. You can attach up to 3 TiB of Local SSD storage to these instances for applications that require higher storage performance.
This series also delivers a greater than 40% performance improvement compared to the previous generation N1 machine types, and offers higher performance per thread and isolation for latency-sensitive workloads.
The C2 series enables the highest performance per core and the highest frequency for compute-bound workloads by using Intel 3.9 GHz Cascade Lake processors. If you are looking to optimize workloads for single-threaded performance, particularly with respect to floating point, choose a machine type in this series to take advantage of AVX-512 capabilities available only on Intel.
C2 Limitations
The C2 machine series has the following restrictions:
- You can't use regional persistent disks.
- The C2 machine series is subject to different disk limits than the general-purpose and memory-optimized machine families.
- The C2 machine series is available only in select zones and regions, on specific CPU processors.
- The C2 machine series doesn't support GPUs.
C2 machine types
C2 instances are available as predefined configurations with 4 to 60 vCPUs and 4 GB of memory per vCPU.
| Machine types | vCPUs1 | Memory (GB) | Local SSD2 | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps)4 |
|---|---|---|---|---|---|
| c2-standard-4 | 4 | 16 | 1, 2, 4, or 8 | Up to 10 | N/A |
| c2-standard-8 | 8 | 32 | 1, 2, 4, or 8 | Up to 16 | N/A |
| c2-standard-16 | 16 | 64 | 2, 4, or 8 | Up to 32 | N/A |
| c2-standard-30 | 30 | 120 | 4 or 8 | Up to 32 | Up to 50 |
| c2-standard-60 | 60 | 240 | 8 | Up to 32 | Up to 100 |
1 A vCPU represents a single logical CPU thread. See CPU platforms.
2 Number of 375 GiB Local SSD disks that you can choose to add when creating the instance.
3 Default egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 Supports high-bandwidth networking for larger machine types. For Windows OS images, the maximum network bandwidth is limited to 50 Gbps.
Supported disk types for C2
C2 instances can use the following block storage types:
- Standard Persistent Disk (`pd-standard`)
- Balanced Persistent Disk (`pd-balanced`)
- SSD (performance) Persistent Disk (`pd-ssd`)
Each C2 instance can have a maximum of 128 Persistent Disk volumes (including the boot disk) attached to the instance, and a total disk capacity of 257 TiB.
Note: Persistent Disk usage is charged separately from machine type pricing.

Network support for C2 instances
The C2 machine types support either the VirtIO or gVNIC network driver. C2 instances with 30 or more vCPUs support higher network bandwidths of 50 Gbps and 100 Gbps with gVNIC and per VM Tier_1 networking performance.
Maintenance experience for C2 instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure, or in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The C2 machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance | Simulate maintenance |
|---|---|---|---|---|---|
| All machine types | Minimum of 30 days | Live migrate | 60 seconds | No | Yes |
| Confidential VM | Minimum of 30 days | Restart in place | 60 seconds | No | Yes |
| Sole tenant node VMs | 4 to 6 weeks | Live migrate, restart in place, or migrate with a node group | none | No | Yes |
The maintenance frequencies shown in the previous table are approximations, not guarantees. Compute Engine might occasionally perform maintenance more frequently.
What's next
- Learn about the HPC VM image.
- Create an instance.
- Create an instance that uses Cloud RDMA.
- Review Compute Engine instance pricing.
- Configure an instance with a high-bandwidth network.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-15 UTC.