General-purpose machine family for Compute Engine
This document describes the features of the Compute Engine general-purpose machine family, which has the best price-performance with the most flexible vCPU to memory ratios, and provides features that target most standard and cloud-native workloads.
The general-purpose machine family has predefined and custom machine types to align with your workload, depending on your requirements.
C4D is powered by the fifth generation AMD EPYC Turin processor and Titanium. These machine types have up to 384 vCPUs and 3,024 GB of DDR5 memory, a max-boost frequency of 4.1 GHz, and up to 200 Gbps per VM Tier_1 networking performance. C4D also offers Local SSD (-lssd) machine types and bare metal (-metal) machine types.
C4A is powered by Google's Axion processor built on the Arm Neoverse V2 compute core. C4A provides machine types with up to 72 vCPUs, 576 GB of DDR5 memory, 6 TiB of local Titanium SSD, and up to 100 Gbps per VM Tier_1 networking performance. C4A also offers Local SSD (-lssd) machine types.
C4 is powered by the sixth generation (code-named Granite Rapids) and fifth generation (code-named Emerald Rapids) Intel Xeon Scalable processors. C4 instances running on Granite Rapids offer a sustained, all-core turbo frequency of 3.9 GHz and a max turbo frequency of 4.2 GHz, 2.2 TB of DDR5 memory, 18 TiB of local SSD, and support up to 200 Gbps of per VM Tier_1 networking performance. C4 also offers Local SSD (-lssd) machine types and bare metal (-metal) machine types.
N4D is powered by the fifth generation AMD EPYC Turin processor and Titanium. These machine types have up to 96 vCPUs and 768 GB of DDR5 memory, and a max-boost frequency of 4.1 GHz. N4D offers 50 Gbps of standard network bandwidth.
N4A (Preview) is powered by Google's Axion processor built on the Arm Neoverse N3 compute core. N4A provides machine types with up to 64 vCPUs and 512 GB of DDR5 memory. N4A is available in standard, highmem, highcpu, and custom machine types with extended memory, and offers up to 50 Gbps of standard networking.
N4 is powered by the fifth generation Intel Xeon Scalable processor (code-named Emerald Rapids). N4 offers a sustained, all-core turbo frequency of 2.9 GHz, 640 GB of DDR5 memory, and up to 50 Gbps of standard network bandwidth.
C3 is powered by fourth generation Intel Xeon Scalable processors and offers a sustained, all-core turbo frequency of 3.0 GHz, 8 channels of DDR5 memory, and up to 200 Gbps per VM Tier_1 networking performance.
C3D is powered by fourth generation AMD EPYC Genoa processors and offers a sustained, all-core turbo frequency of 3.3 GHz, 2,880 GB of DDR5 memory, and up to 200 Gbps per VM Tier_1 networking performance.
For bare metal machine types, choose the C4, C4D, or C3 machine series.
All third and fourth generation general-purpose VMs support Titanium.
E2, E2 shared-core, N2, N2D, Tau T2A, and Tau T2D are second generation machine series in this family; N1 and its related shared-core machine types are the first generation machine series.
| Machine series | Workloads |
|---|---|
| N4, N4A (Preview), N4D, N2, N2D, N1 | Balanced price-performance for workloads such as medium-traffic web and app servers, containerized microservices, virtual desktops, batch processing, and data pipelines |
| C4A, C4, C4D, C3, C3D | Consistently high performance for workloads such as high-traffic web, app, and game servers, databases, in-memory caches, media streaming, data analytics, and CPU-based ML inference |
| E2 | Day-to-day computing at a lower cost, such as low-traffic web servers, back-office applications, and development and test environments |
| Tau T2A, Tau T2D | Cost-effective performance for scale-out workloads such as web serving, containerized microservices, and media transcoding |
To learn how your selection affects the performance of Persistent Disk volumes attached to your VMs, see Configure your Persistent Disk and VMs.
C4D machine series
C4D VMs are powered by the fifth generation AMD EPYC Turin processor and Titanium. C4D delivers a 30% performance boost over C3D on the estimated SPECrate®2017_int_base benchmark, which lets you scale performance with fewer resources, thereby optimizing your costs.
C4D is designed to run workloads including web, app, and game servers, AI inference, video streaming, and data-centric applications like analytics, as well as relational and in-memory databases.
For databases, C4D delivers 55% more queries per second for MySQL and 35% higher operations per second for Memorystore for Redis workloads compared to C3D, due to its higher core frequency (up to 4.1 GHz) and improved instructions per clock (IPC).
Note: C4D doesn't support the All Core Turbo Mode setting. C4D instances always run without frequency restrictions.

For web-serving workloads, AMD EPYC Turin's advancements in L3 cache efficiency and branch prediction enable up to 80% higher throughput per vCPU with C4D.
In summary, the C4D machine series has the following features:
- Powered by the AMD EPYC Turin CPU and Titanium
- Supports up to 384 vCPUs and 3,024 GB of DDR5 memory
- Supports up to 12 TiB of local Titanium SSD disks
- Offers predefined machine types that range in size from 2 to 384 vCPUs
- Supports up to 3,024 GB of DDR5 memory for VM instances and up to 3,072 GB of memory for bare metal instances
- Supports consumption options like on-demand, Spot VMs, and future reservations
- Supports standard network configuration with up to 100 Gbps bandwidth
- Supports per VM Tier_1 networking performance with up to 200 Gbps bandwidth
- Supports only Hyperdisk volumes
- Supports Confidential VM with AMD SEV
- Supports resource-based and flexible committed use discounts (CUDs)
- Supports compact and spread placement policies
C4D machine types
C4D VMs are available as predefined configurations in standard, highcpu, and highmem sizes ranging from 2 vCPUs to 384 vCPUs and up to 3,024 GB of memory.
To use Titanium SSD with C4D, create your instance using the -lssd variant of the C4D machine types. Selecting this machine type creates an instance of the specified size with Titanium SSD partitions attached. You can't attach Titanium SSD volumes separately.
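For example, a minimal sketch of creating a C4D instance with attached Titanium SSD using the gcloud CLI; the instance name, zone, and image family here are illustrative placeholders:

```shell
# Create a C4D instance with the -lssd machine type variant.
# Titanium SSD partitions are attached automatically; you can't add them later.
gcloud compute instances create example-c4d-lssd \
    --zone=us-central1-a \
    --machine-type=c4d-standard-16-lssd \
    --image-family=debian-12 \
    --image-project=debian-cloud
```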
To create a bare metal instance with C4D, use one of the following machine types:
- c4d-standard-384-metal
- c4d-highcpu-384-metal
- c4d-highmem-384-metal
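As a sketch, you might create a C4D bare metal instance like this with the gcloud CLI; the instance name and zone are placeholders, and the OS image must include the IDPF driver:

```shell
# Bare metal machine types use the -metal suffix and require an
# IDPF-capable operating system image.
gcloud compute instances create example-c4d-metal \
    --zone=us-central1-a \
    --machine-type=c4d-standard-384-metal
```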
C4D standard
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps)4 |
|---|---|---|---|---|---|
| c4d-standard-2 | 2 | 7 | No | Up to 10 | N/A |
| c4d-standard-4 | 4 | 15 | No | Up to 20 | N/A |
| c4d-standard-8 | 8 | 31 | No | Up to 20 | N/A |
| c4d-standard-16 | 16 | 62 | No | Up to 20 | N/A |
| c4d-standard-32 | 32 | 124 | No | Up to 23 | N/A |
| c4d-standard-48 | 48 | 186 | No | Up to 34 | Up to 50 |
| c4d-standard-64 | 64 | 248 | No | Up to 45 | Up to 75 |
| c4d-standard-96 | 96 | 372 | No | Up to 67 | Up to 100 |
| c4d-standard-192 | 192 | 744 | No | Up to 100 | Up to 150 |
| c4d-standard-384 | 384 | 1,488 | No | Up to 100 | Up to 200 |
| c4d-standard-384-metal2 | 384 | 1,536 | No | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 For bare metal instances, the number of vCPUs is equivalent to the number of hardware threads on the host server.
3 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 Supports high-bandwidth networking for larger machine types.
C4D highcpu
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps)4 |
|---|---|---|---|---|---|
| c4d-highcpu-2 | 2 | 3 | No | Up to 10 | N/A |
| c4d-highcpu-4 | 4 | 7 | No | Up to 20 | N/A |
| c4d-highcpu-8 | 8 | 15 | No | Up to 20 | N/A |
| c4d-highcpu-16 | 16 | 30 | No | Up to 20 | N/A |
| c4d-highcpu-32 | 32 | 60 | No | Up to 23 | N/A |
| c4d-highcpu-48 | 48 | 90 | No | Up to 34 | Up to 50 |
| c4d-highcpu-64 | 64 | 120 | No | Up to 45 | Up to 75 |
| c4d-highcpu-96 | 96 | 180 | No | Up to 67 | Up to 100 |
| c4d-highcpu-192 | 192 | 360 | No | Up to 100 | Up to 150 |
| c4d-highcpu-384 | 384 | 720 | No | Up to 100 | Up to 200 |
| c4d-highcpu-384-metal2 | 384 | 768 | No | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 For bare metal instances, the number of vCPUs is equivalent to the number of hardware threads on the host server.
3 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 Supports high-bandwidth networking for larger machine types.
C4D highmem
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps)4 |
|---|---|---|---|---|---|
| c4d-highmem-2 | 2 | 15 | No | Up to 10 | N/A |
| c4d-highmem-4 | 4 | 31 | No | Up to 20 | N/A |
| c4d-highmem-8 | 8 | 63 | No | Up to 20 | N/A |
| c4d-highmem-16 | 16 | 126 | No | Up to 20 | N/A |
| c4d-highmem-32 | 32 | 252 | No | Up to 23 | N/A |
| c4d-highmem-48 | 48 | 378 | No | Up to 34 | Up to 50 |
| c4d-highmem-64 | 64 | 504 | No | Up to 45 | Up to 75 |
| c4d-highmem-96 | 96 | 756 | No | Up to 67 | Up to 100 |
| c4d-highmem-192 | 192 | 1,512 | No | Up to 100 | Up to 150 |
| c4d-highmem-384 | 384 | 3,024 | No | Up to 100 | Up to 200 |
| c4d-highmem-384-metal2 | 384 | 3,072 | No | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 For bare metal instances, the number of vCPUs is equivalent to the number of hardware threads on the host server.
3 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 Supports high-bandwidth networking for larger machine types.
C4D standard with Local SSD
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps)4 |
|---|---|---|---|---|---|
| c4d-standard-8-lssd | 8 | 31 | (1 x 375 GiB) 375 GiB | Up to 20 | N/A |
| c4d-standard-16-lssd | 16 | 62 | (1 x 375 GiB) 375 GiB | Up to 20 | N/A |
| c4d-standard-32-lssd | 32 | 124 | (2 x 375 GiB) 750 GiB | Up to 23 | N/A |
| c4d-standard-48-lssd | 48 | 186 | (4 x 375 GiB) 1,500 GiB | Up to 34 | Up to 50 |
| c4d-standard-64-lssd | 64 | 248 | (6 x 375 GiB) 2,250 GiB | Up to 45 | Up to 75 |
| c4d-standard-96-lssd | 96 | 372 | (8 x 375 GiB) 3,000 GiB | Up to 67 | Up to 100 |
| c4d-standard-192-lssd | 192 | 744 | (16 x 375 GiB) 6,000 GiB | Up to 100 | Up to 150 |
| c4d-standard-384-lssd | 384 | 1,488 | (32 x 375 GiB) 12,000 GiB | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
3 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 Supports high-bandwidth networking for larger machine types.
C4D highmem with Local SSD
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps)4 |
|---|---|---|---|---|---|
| c4d-highmem-8-lssd | 8 | 63 | (1 x 375 GiB) 375 GiB | Up to 20 | N/A |
| c4d-highmem-16-lssd | 16 | 126 | (1 x 375 GiB) 375 GiB | Up to 20 | N/A |
| c4d-highmem-32-lssd | 32 | 252 | (2 x 375 GiB) 750 GiB | Up to 23 | N/A |
| c4d-highmem-48-lssd | 48 | 378 | (4 x 375 GiB) 1,500 GiB | Up to 34 | Up to 50 |
| c4d-highmem-64-lssd | 64 | 504 | (6 x 375 GiB) 2,250 GiB | Up to 45 | Up to 75 |
| c4d-highmem-96-lssd | 96 | 756 | (8 x 375 GiB) 3,000 GiB | Up to 67 | Up to 100 |
| c4d-highmem-192-lssd | 192 | 1,512 | (16 x 375 GiB) 6,000 GiB | Up to 100 | Up to 150 |
| c4d-highmem-384-lssd | 384 | 3,024 | (32 x 375 GiB) 12,000 GiB | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
3 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 Supports high-bandwidth networking for larger machine types.
C4D doesn't support custom machine types.
Regional availability for C4D instances
For C4D VMs, you can view the available regions and zones in the Available regions and zones table, as follows:
- To view all the zones where you can create a C4D VM, in the Select a machine series menu, select C4D.
- You can also use the Select a location menu to limit the results to a geographical area.
For regional availability of C4D bare metal instances, see Bare metal instances on Compute Engine.
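You can also check availability from the command line by querying the machine types API with the gcloud CLI; the filter below is an illustrative example:

```shell
# List the zones that offer C4D machine types.
gcloud compute machine-types list \
    --filter="name~'^c4d-'" \
    --format="value(zone)" | sort -u
```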
Supported disk types for C4D
C4D VMs support only the NVMe disk interface and can use the following Hyperdisk block storage:
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Extreme (hyperdisk-extreme)
- Local Titanium SSD (added automatically with -lssd machine types)
C4D doesn't support Persistent Disk.
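Because Persistent Disk isn't supported, specify a Hyperdisk boot disk when creating an instance. A minimal sketch, with placeholder instance name and zone:

```shell
# Boot a C4D instance from Hyperdisk Balanced block storage.
gcloud compute instances create example-c4d \
    --zone=us-central1-a \
    --machine-type=c4d-standard-8 \
    --boot-disk-type=hyperdisk-balanced
```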
Disk and capacity limits
You can attach a mixture of different Hyperdisk types to an instance, but the maximum total disk capacity (in TiB) across all disk types can't exceed:
- For machine types with fewer than 32 vCPUs: 257 TiB for all Hyperdisk
- For machine types with 32 or more vCPUs: 512 TiB for all Hyperdisk

For details about the capacity limits, see Hyperdisk size and attachment limits.
C4D storage limits are described in the following table:
C4D standard
Maximum number of disks per instance:

| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme |
|---|---|---|---|---|---|
| c4d-standard-2 | 4 | 4 | 0 | 0 | 0 |
| c4d-standard-4 | 8 | 8 | 0 | 0 | 0 |
| c4d-standard-8 | 16 | 16 | 0 | 0 | 0 |
| c4d-standard-16 | 32 | 32 | 0 | 0 | 0 |
| c4d-standard-32 | 32 | 32 | 0 | 0 | 0 |
| c4d-standard-48 | 32 | 32 | 0 | 0 | 0 |
| c4d-standard-64 | 32 | 32 | 0 | 0 | 8 |
| c4d-standard-96 | 32 | 32 | 0 | 0 | 8 |
| c4d-standard-192 | 64 | 64 | 0 | 0 | 8 |
| c4d-standard-384 | 128 | 128 | 0 | 0 | 8 |
| c4d-standard-384-metal | 128 | 128 | 0 | 0 | 8 |
C4D highcpu
Maximum number of disks per instance:

| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme |
|---|---|---|---|---|---|
| c4d-highcpu-2 | 4 | 4 | 0 | 0 | 0 |
| c4d-highcpu-4 | 8 | 8 | 0 | 0 | 0 |
| c4d-highcpu-8 | 16 | 16 | 0 | 0 | 0 |
| c4d-highcpu-16 | 32 | 32 | 0 | 0 | 0 |
| c4d-highcpu-32 | 32 | 32 | 0 | 0 | 0 |
| c4d-highcpu-48 | 32 | 32 | 0 | 0 | 0 |
| c4d-highcpu-64 | 32 | 32 | 0 | 0 | 8 |
| c4d-highcpu-96 | 32 | 32 | 0 | 0 | 8 |
| c4d-highcpu-192 | 64 | 64 | 0 | 0 | 8 |
| c4d-highcpu-384 | 128 | 128 | 0 | 0 | 8 |
| c4d-highcpu-384-metal | 128 | 128 | 0 | 0 | 8 |
C4D highmem
Maximum number of disks per instance:

| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme |
|---|---|---|---|---|---|
| c4d-highmem-2 | 4 | 4 | 0 | 0 | 0 |
| c4d-highmem-4 | 8 | 8 | 0 | 0 | 0 |
| c4d-highmem-8 | 16 | 16 | 0 | 0 | 0 |
| c4d-highmem-16 | 32 | 32 | 0 | 0 | 0 |
| c4d-highmem-32 | 32 | 32 | 0 | 0 | 0 |
| c4d-highmem-48 | 32 | 32 | 0 | 0 | 0 |
| c4d-highmem-64 | 32 | 32 | 0 | 0 | 8 |
| c4d-highmem-96 | 32 | 32 | 0 | 0 | 8 |
| c4d-highmem-192 | 64 | 64 | 0 | 0 | 8 |
| c4d-highmem-384 | 128 | 128 | 0 | 0 | 8 |
| c4d-highmem-384-metal | 128 | 128 | 0 | 0 | 8 |
C4D standard with Local SSD
Maximum number of disks per instance:

| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme |
|---|---|---|---|---|---|
| c4d-standard-8-lssd | 16 | 16 | 0 | 0 | 0 |
| c4d-standard-16-lssd | 32 | 32 | 0 | 0 | 0 |
| c4d-standard-32-lssd | 32 | 32 | 0 | 0 | 0 |
| c4d-standard-48-lssd | 32 | 32 | 0 | 0 | 0 |
| c4d-standard-64-lssd | 32 | 32 | 0 | 0 | 8 |
| c4d-standard-96-lssd | 32 | 32 | 0 | 0 | 8 |
| c4d-standard-192-lssd | 64 | 64 | 0 | 0 | 8 |
| c4d-standard-384-lssd | 128 | 128 | 0 | 0 | 8 |
C4D highmem with Local SSD
Maximum number of disks per instance:

| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme |
|---|---|---|---|---|---|
| c4d-highmem-8-lssd | 16 | 16 | 0 | 0 | 0 |
| c4d-highmem-16-lssd | 32 | 32 | 0 | 0 | 0 |
| c4d-highmem-32-lssd | 32 | 32 | 0 | 0 | 0 |
| c4d-highmem-48-lssd | 32 | 32 | 0 | 0 | 0 |
| c4d-highmem-64-lssd | 32 | 32 | 0 | 0 | 8 |
| c4d-highmem-96-lssd | 32 | 32 | 0 | 0 | 8 |
| c4d-highmem-192-lssd | 64 | 64 | 0 | 0 | 8 |
| c4d-highmem-384-lssd | 128 | 128 | 0 | 0 | 8 |
Network support for C4D instances
The following network interface drivers are required:
- C4D instances require gVNIC network interfaces.
- C4D bare metal instances require the Intel IDPF LAN PF device driver.

C4D supports up to 100 Gbps network bandwidth for standard networking and up to 200 Gbps with per VM Tier_1 networking performance for VM and bare metal instances.
Before migrating to C4D or creating C4D VMs or bare metal instances, make sure that the operating system image that you use supports the IDPF network driver for bare metal instances or the gVNIC driver for VM instances. To get the best possible performance on C4D VMs, choose an OS image that supports both "Tier_1 Networking" and "200 Gbps network bandwidth". These images include an updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0. If your C4D VM uses an operating system with an older version of the gVNIC driver, the VM is still supported but might experience suboptimal performance, such as reduced network bandwidth or higher latency.
If you use a custom OS image to create a C4D VM, you can manually install the most recent gVNIC driver. gVNIC driver version v1.4.2 or later is recommended for use with C4D VMs; Google recommends using the latest gVNIC driver version to benefit from additional features and bug fixes.
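To confirm which gVNIC driver version the guest is actually running, you can inspect the network interface from inside the VM; the interface name `ens4` is an assumption and varies by image:

```shell
# Show the driver name (gve) and version for the primary interface.
ethtool -i ens4 | grep -E '^(driver|version)'
```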
Maintenance experience for C4D instances
During the lifespan of a virtual machine (VM) instance, the host machine that your instance runs on undergoes multiple host events. A host event can include regular maintenance of Compute Engine infrastructure or, in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades to the hypervisor and network in the background.
The C4D machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance | Simulate maintenance |
|---|---|---|---|---|---|
| c4d-*-lssd | Minimum of 30 days | Live migrate | 7 days | Yes | Yes |
| c4d-*-384 | Minimum of 30 days | Live migrate | 7 days | Yes | Yes |
| All others | Minimum of 30 days | Live migrate | 7 days | No | Yes |
The maintenance frequencies shown in the previous table are approximations,not guarantees. Compute Engine might occasionally perform maintenancemore frequently.
C4A machine series
C4A VMs are powered by Google's first Arm-based Axion™ processor. C4A provides machine types with up to 72 vCPUs, 576 GB of DDR5 memory, and 6 TiB of local Titanium SSD. C4A is available in standard, highmem, and highcpu machine types, and also offers -lssd variants for Titanium SSD. C4A uses Google Cloud's latest generation of storage options, including Hyperdisk Balanced, Hyperdisk Extreme, and Titanium SSD. C4A offers up to 50 Gbps of standard network performance, and up to 100 Gbps per VM Tier_1 networking performance for your instances.
C4A VMs are placed within a single node with Uniform Memory Access (UMA) and also support sole-tenant nodes to deliver consistent performance.
In summary, the C4A machine series has the following features:
- Is powered by the Google Axion CPU and Titanium.
- Supports up to 72 vCPUs and 576 GB of DDR5 memory.
- Supports up to 6 TiB of local Titanium SSD disks.
- Offers multiple predefined machine types.
- Supports standard network configuration with up to 50 Gbps bandwidth.
- Supports per VM Tier_1 networking performance with up to 100 Gbps bandwidth.
- Supports Hyperdisk only.
- Supports multiple discount and consumption options.
- Supports the performance monitoring unit (PMU).
- Doesn't support compact placement policies.
- Doesn't support suspend for C4A instances that have attached Titanium SSD disks.
For information about migrating to Arm VMs, read the Arm on Compute document.
C4A machine types
Note: Community-supported Arm OSes might work with C4A. If the OS isn't listed on the Operating system details page, test the OS to learn whether it is supported.

C4A VMs are available as predefined configurations in sizes ranging from 1 vCPU to 72 vCPUs and up to 576 GB of memory.
- standard: 4 GB of memory per vCPU
- highcpu: 2 GB of memory per vCPU
- highmem: 8 GB of memory per vCPU

To use Titanium SSD with C4A, create your VM using the -lssd variant of the C4A machine types. Selecting this machine type creates a VM of the specified size with Titanium SSD partitions attached. You can't attach Titanium SSD volumes separately.
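Because C4A is Arm-based, pick an arm64 OS image when creating an instance. A sketch with placeholder names and zone:

```shell
# Create a C4A instance with Titanium SSD, using an arm64 image.
gcloud compute instances create example-c4a-lssd \
    --zone=us-central1-a \
    --machine-type=c4a-standard-8-lssd \
    --image-family=debian-12-arm64 \
    --image-project=debian-cloud
```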
C4A standard
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|
| c4a-standard-1 | 1 | 4 | No | Up to 10 | N/A |
| c4a-standard-2 | 2 | 8 | No | Up to 10 | N/A |
| c4a-standard-4 | 4 | 16 | No | Up to 23 | N/A |
| c4a-standard-8 | 8 | 32 | No | Up to 23 | N/A |
| c4a-standard-16 | 16 | 64 | No | Up to 23 | N/A |
| c4a-standard-32 | 32 | 128 | No | Up to 23 | Up to 50 |
| c4a-standard-48 | 48 | 192 | No | Up to 34 | Up to 50 |
| c4a-standard-64 | 64 | 256 | No | Up to 45 | Up to 75 |
| c4a-standard-72 | 72 | 288 | No | Up to 50 | Up to 100 |
1 SMT is not supported. Each vCPU is equivalent to an entire core. See CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C4A highcpu
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|
| c4a-highcpu-1 | 1 | 2 | No | Up to 10 | N/A |
| c4a-highcpu-2 | 2 | 4 | No | Up to 10 | N/A |
| c4a-highcpu-4 | 4 | 8 | No | Up to 23 | N/A |
| c4a-highcpu-8 | 8 | 16 | No | Up to 23 | N/A |
| c4a-highcpu-16 | 16 | 32 | No | Up to 23 | N/A |
| c4a-highcpu-32 | 32 | 64 | No | Up to 23 | Up to 50 |
| c4a-highcpu-48 | 48 | 96 | No | Up to 34 | Up to 50 |
| c4a-highcpu-64 | 64 | 128 | No | Up to 45 | Up to 75 |
| c4a-highcpu-72 | 72 | 144 | No | Up to 50 | Up to 100 |
1 SMT is not supported. Each vCPU is equivalent to an entire core. See CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C4A highmem
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|
| c4a-highmem-1 | 1 | 8 | No | Up to 10 | N/A |
| c4a-highmem-2 | 2 | 16 | No | Up to 10 | N/A |
| c4a-highmem-4 | 4 | 32 | No | Up to 23 | N/A |
| c4a-highmem-8 | 8 | 64 | No | Up to 23 | N/A |
| c4a-highmem-16 | 16 | 128 | No | Up to 23 | N/A |
| c4a-highmem-32 | 32 | 256 | No | Up to 23 | Up to 50 |
| c4a-highmem-48 | 48 | 384 | No | Up to 34 | Up to 50 |
| c4a-highmem-64 | 64 | 512 | No | Up to 45 | Up to 75 |
| c4a-highmem-72 | 72 | 576 | No | Up to 50 | Up to 100 |
1 SMT is not supported. Each vCPU is equivalent to an entire core. See CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C4A standard with Local SSD
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|
| c4a-standard-4-lssd | 4 | 16 | (1 x 375 GiB) 375 GiB | Up to 23 | N/A |
| c4a-standard-8-lssd | 8 | 32 | (2 x 375 GiB) 750 GiB | Up to 23 | N/A |
| c4a-standard-16-lssd | 16 | 64 | (4 x 375 GiB) 1,500 GiB | Up to 23 | N/A |
| c4a-standard-32-lssd | 32 | 128 | (6 x 375 GiB) 2,250 GiB | Up to 23 | Up to 50 |
| c4a-standard-48-lssd | 48 | 192 | (10 x 375 GiB) 3,750 GiB | Up to 34 | Up to 50 |
| c4a-standard-64-lssd | 64 | 256 | (14 x 375 GiB) 5,250 GiB | Up to 45 | Up to 75 |
| c4a-standard-72-lssd | 72 | 288 | (16 x 375 GiB) 6,000 GiB | Up to 50 | Up to 100 |
1 SMT is not supported. Each vCPU is equivalent to an entire core. See CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C4A highmem with Local SSD
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|
| c4a-highmem-4-lssd | 4 | 32 | (1 x 375 GiB) 375 GiB | Up to 23 | N/A |
| c4a-highmem-8-lssd | 8 | 64 | (2 x 375 GiB) 750 GiB | Up to 23 | N/A |
| c4a-highmem-16-lssd | 16 | 128 | (4 x 375 GiB) 1,500 GiB | Up to 23 | N/A |
| c4a-highmem-32-lssd | 32 | 256 | (6 x 375 GiB) 2,250 GiB | Up to 23 | Up to 50 |
| c4a-highmem-48-lssd | 48 | 384 | (10 x 375 GiB) 3,750 GiB | Up to 34 | Up to 50 |
| c4a-highmem-64-lssd | 64 | 512 | (14 x 375 GiB) 5,250 GiB | Up to 45 | Up to 75 |
| c4a-highmem-72-lssd | 72 | 576 | (16 x 375 GiB) 6,000 GiB | Up to 50 | Up to 100 |
1 SMT is not supported. Each vCPU is equivalent to an entire core. See CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C4A doesn't support custom machine types.
Supported disk types for C4A
C4A VMs support only the NVMe disk interface and can use the following Hyperdisk block storage:
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
- Hyperdisk Throughput (hyperdisk-throughput)
- Hyperdisk Extreme (hyperdisk-extreme)
- Local Titanium SSD (added automatically with -lssd machine types)
C4A doesn't support Persistent Disk.
Disk and capacity limits
You can attach a mixture of different Hyperdisk types to an instance, but the maximum total disk capacity (in TiB) across all disk types can't exceed:
- For machine types with fewer than 32 vCPUs: 257 TiB for all Hyperdisk
- For machine types with 32 or more vCPUs: 512 TiB for all Hyperdisk

For details about the capacity limits, see Hyperdisk size and attachment limits.
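For example, a hedged sketch of creating a Hyperdisk Throughput volume and attaching it to an existing C4A instance; the disk size, provisioned throughput, names, and zone are illustrative:

```shell
# Create a Hyperdisk Throughput volume with provisioned throughput (MiBps).
gcloud compute disks create example-data-disk \
    --zone=us-central1-a \
    --type=hyperdisk-throughput \
    --size=2TB \
    --provisioned-throughput=250

# Attach it to an existing C4A instance.
gcloud compute instances attach-disk example-c4a \
    --zone=us-central1-a \
    --disk=example-data-disk
```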
C4A standard
Maximum number of disks per instance:

| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk Extreme |
|---|---|---|---|---|
| c4a-standard-1 | 16 | 16 | 16 | 0 |
| c4a-standard-2 | 16 | 16 | 16 | 0 |
| c4a-standard-4 | 16 | 16 | 16 | 0 |
| c4a-standard-8 | 16 | 16 | 16 | 0 |
| c4a-standard-16 | 32 | 32 | 32 | 0 |
| c4a-standard-32 | 32 | 32 | 32 | 0 |
| c4a-standard-48 | 32 | 32 | 32 | 0 |
| c4a-standard-64 | 64 | 64 | 64 | 8 |
| c4a-standard-72 | 64 | 64 | 64 | 8 |
C4A highcpu
Maximum number of disks per instance:

| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk Extreme |
|---|---|---|---|---|
| c4a-highcpu-1 | 16 | 8 | 16 | 0 |
| c4a-highcpu-2 | 16 | 8 | 16 | 0 |
| c4a-highcpu-4 | 16 | 16 | 16 | 0 |
| c4a-highcpu-8 | 16 | 16 | 16 | 0 |
| c4a-highcpu-16 | 32 | 32 | 32 | 0 |
| c4a-highcpu-32 | 32 | 32 | 32 | 0 |
| c4a-highcpu-48 | 32 | 32 | 32 | 0 |
| c4a-highcpu-64 | 64 | 64 | 64 | 8 |
| c4a-highcpu-72 | 64 | 64 | 64 | 8 |
C4A highmem
Maximum number of disks per instance:

| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk Extreme |
|---|---|---|---|---|
| c4a-highmem-1 | 16 | 8 | 16 | 0 |
| c4a-highmem-2 | 16 | 8 | 16 | 0 |
| c4a-highmem-4 | 16 | 16 | 16 | 0 |
| c4a-highmem-8 | 16 | 16 | 16 | 0 |
| c4a-highmem-16 | 32 | 32 | 32 | 0 |
| c4a-highmem-32 | 32 | 32 | 32 | 0 |
| c4a-highmem-48 | 32 | 32 | 32 | 0 |
| c4a-highmem-64 | 64 | 64 | 64 | 8 |
| c4a-highmem-72 | 64 | 64 | 64 | 8 |
C4A standard with Local SSD
Maximum number of disks per instance:

| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk Extreme |
|---|---|---|---|---|
| c4a-standard-4-lssd | 16 | 16 | 16 | 0 |
| c4a-standard-8-lssd | 16 | 16 | 16 | 0 |
| c4a-standard-16-lssd | 32 | 32 | 32 | 0 |
| c4a-standard-32-lssd | 32 | 32 | 32 | 0 |
| c4a-standard-48-lssd | 32 | 32 | 32 | 0 |
| c4a-standard-64-lssd | 64 | 64 | 64 | 8 |
| c4a-standard-72-lssd | 64 | 64 | 64 | 8 |
C4A highmem with Local SSD
Maximum number of disks per instance:

| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk Extreme |
|---|---|---|---|---|
| c4a-highmem-4-lssd | 16 | 16 | 16 | 0 |
| c4a-highmem-8-lssd | 16 | 16 | 16 | 0 |
| c4a-highmem-16-lssd | 32 | 32 | 32 | 0 |
| c4a-highmem-32-lssd | 32 | 32 | 32 | 0 |
| c4a-highmem-48-lssd | 32 | 32 | 32 | 0 |
| c4a-highmem-64-lssd | 64 | 64 | 64 | 8 |
| c4a-highmem-72-lssd | 64 | 64 | 64 | 8 |
Network support for C4A instances
C4A instances require gVNIC network interfaces. C4A instances support up to 50 Gbps network bandwidth for standard networking and up to 100 Gbps with per VM Tier_1 networking performance.
Before migrating to C4A or creating C4A instances, make sure that the operating system image that you use supports the gVNIC driver. To get the best possible performance on C4A instances, on the Networking features tab of the OS details table, choose an OS image that supports both "Tier_1 Networking" and "200 Gbps network bandwidth". These images include an updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0. If your C4A instance uses an operating system with an older version of the gVNIC driver, the instance is still supported but might experience suboptimal performance, such as reduced network bandwidth or higher latency.
If you use a custom OS image with the C4A machine series, you can manually install the most recent gVNIC driver. gVNIC driver version v1.4.2 or later is recommended for use with C4A instances; Google recommends using the latest gVNIC driver version to benefit from additional features and bug fixes.
Maintenance experience for C4A instances
During the lifespan of a virtual machine (VM) instance, the host machine that your instance runs on undergoes multiple host events. A host event can include regular maintenance of Compute Engine infrastructure or, in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades to the hypervisor and network in the background.
The C4A machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance |
|---|---|---|---|---|
| c4a-*-lssd | Minimum of 30 days | Live migrate | 7 days | Yes |
| All others | Minimum of 30 days | Live migrate | 7 days | No |
The maintenance frequencies shown in the previous table are approximations,not guarantees. Compute Engine might occasionally perform maintenancemore frequently.
C4 machine series
C4 VMs are powered by 6th generation (code-named Granite Rapids) or 5th generation (code-named Emerald Rapids) Intel Xeon Scalable processors and Titanium. C4 Local SSD (-lssd) and bare metal (-metal) instances, as well as instances with 144 or 288 vCPUs, use the 6th generation Intel Granite Rapids processor. All other instances use the 5th generation Intel Emerald Rapids processor.
The C4 machine series is designed to deliver price-performance and enterprise-grade reliability along with a maintenance experience for your most demanding workloads. C4 instances are ideal for web and app serving, game servers, databases and caches, video streaming, data analytics, network appliances, and CPU-based ML inference.
C4 VMs are designed to achieve maximum performance from single-core turbo boosting. For more consistent vCPU performance, disable vCPU boosting and limit the vCPUs to the sustainable all-core turbo frequency. You can do this by setting turboMode=ALL_CORE_MAX in the AdvancedMachineFeatures settings.
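As a sketch, you can configure this at creation time with the gcloud CLI's turbo-mode flag (available in recent gcloud versions); the instance name and zone are placeholders:

```shell
# Cap vCPUs at the sustained all-core turbo frequency for more
# consistent per-vCPU performance.
gcloud compute instances create example-c4 \
    --zone=us-central1-a \
    --machine-type=c4-standard-16 \
    --turbo-mode=ALL_CORE_MAX
```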
In summary, the C4 machine series:
- Is powered by the 6th generation Intel Granite Rapids or 5th generation Intel Emerald Rapids processor and Titanium IPU.
- Lets you switch between core-boosting performance and steady all-core turbo performance for your vCPUs.
- Supports up to 288 vCPUs and 2.2 TB of DDR5 memory.
- Supports up to 18 TiB of local Titanium SSD disks.
- Supports compact and spread placement policies.
- Offers multiple predefined machine types.
- Supports standard network configuration with up to 100 Gbps bandwidth.
- Supports per VM Tier_1 networking performance with up to 200 Gbps bandwidth.
- Supports Intel Advanced Matrix Extensions (AMX), a built-in accelerator that significantly improves the performance of deep-learning training and inference on the CPU.
- Supports discount and consumption options such as committed use discounts (CUDs), on-demand, and Spot VMs.
- Supports theperformance monitoring unit (PMU).
C4 limitations
- You can't dynamically add or remove a disk when using Windows Server 2025.
- You can't dynamically add or remove multiple disks when using Windows Server 2025 or Windows 11.
- C4 VM shapes powered by Granite Rapids might experience lower networking performance on Windows 11 and Debian 11 OS images.
C4 machine types
C4 VMs are available as predefined configurations in sizes ranging from 2 vCPUs to 288 vCPUs and up to 2,232 GB of memory.
- standard: 3.75 GB memory per vCPU
- highcpu: 2 GB memory per vCPU
- highmem: 7.75 GB memory per vCPU
To use Titanium SSD with C4, create your instance using the -lssd variant of the C4 machine types. Selecting this machine type creates an instance of the specified size with Titanium SSD partitions attached. You can't attach Titanium SSD volumes separately.
To create a bare metal instance with C4, use one of the following machine types:
- c4-standard-288-metal
- c4-standard-288-lssd-metal
- c4-highmem-288-metal
- c4-highmem-288-lssd-metal
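The -lssd and -metal variants are selected like any other machine type at creation time. A hedged gcloud sketch (instance names and zone are placeholders):

```shell
# C4 instance with Titanium SSD partitions attached (-lssd variant).
gcloud compute instances create example-c4-lssd \
    --zone=us-central1-a \
    --machine-type=c4-standard-16-lssd

# C4 bare metal instance; requires an OS image with the IDPF driver.
gcloud compute instances create example-c4-metal \
    --zone=us-central1-a \
    --machine-type=c4-standard-288-metal
```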
C4 standard
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|
c4-standard-2 | 2 | 7 | No | Up to 10 | N/A |
c4-standard-4 | 4 | 15 | No | Up to 23 | N/A |
c4-standard-8 | 8 | 30 | No | Up to 23 | N/A |
c4-standard-16 | 16 | 60 | No | Up to 23 | N/A |
c4-standard-24 | 24 | 90 | No | Up to 23 | N/A |
c4-standard-32 | 32 | 120 | No | Up to 23 | N/A |
c4-standard-48 | 48 | 180 | No | Up to 34 | Up to 50 |
c4-standard-96 | 96 | 360 | No | Up to 67 | Up to 100 |
c4-standard-144 | 144 | 540 | No | Up to 100 | Up to 150 |
c4-standard-192 | 192 | 720 | No | Up to 100 | Up to 200 |
c4-standard-288 | 288 | 1,080 | No | Up to 100 | Up to 200 |
c4-standard-288-metal | 288 | 1,080 | No | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C4 highcpu
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|
c4-highcpu-2 | 2 | 4 | No | Up to 10 | N/A |
c4-highcpu-4 | 4 | 8 | No | Up to 23 | N/A |
c4-highcpu-8 | 8 | 16 | No | Up to 23 | N/A |
c4-highcpu-16 | 16 | 32 | No | Up to 23 | N/A |
c4-highcpu-24 | 24 | 48 | No | Up to 23 | N/A |
c4-highcpu-32 | 32 | 64 | No | Up to 23 | N/A |
c4-highcpu-48 | 48 | 96 | No | Up to 34 | Up to 50 |
c4-highcpu-96 | 96 | 192 | No | Up to 67 | Up to 100 |
c4-highcpu-144 | 144 | 288 | No | Up to 100 | Up to 150 |
c4-highcpu-192 | 192 | 384 | No | Up to 100 | Up to 200 |
c4-highcpu-288 | 288 | 576 | No | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C4 highmem
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|
c4-highmem-2 | 2 | 15 | No | Up to 10 | N/A |
c4-highmem-4 | 4 | 31 | No | Up to 23 | N/A |
c4-highmem-8 | 8 | 62 | No | Up to 23 | N/A |
c4-highmem-16 | 16 | 124 | No | Up to 23 | N/A |
c4-highmem-24 | 24 | 186 | No | Up to 23 | N/A |
c4-highmem-32 | 32 | 248 | No | Up to 23 | N/A |
c4-highmem-48 | 48 | 372 | No | Up to 34 | Up to 50 |
c4-highmem-96 | 96 | 744 | No | Up to 67 | Up to 100 |
c4-highmem-144 | 144 | 1,116 | No | Up to 100 | Up to 150 |
c4-highmem-192 | 192 | 1,488 | No | Up to 100 | Up to 200 |
c4-highmem-288 | 288 | 2,232 | No | Up to 100 | Up to 200 |
c4-highmem-288-metal | 288 | 2,232 | No | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C4 standard with Local SSD
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|
c4-standard-4-lssd | 4 | 15 | (1 x 375 GiB) 375 GiB | Up to 23 | N/A |
c4-standard-8-lssd | 8 | 30 | (1 x 375 GiB) 375 GiB | Up to 23 | N/A |
c4-standard-16-lssd | 16 | 60 | (2 x 375 GiB) 750 GiB | Up to 23 | N/A |
c4-standard-24-lssd | 24 | 90 | (4 x 375 GiB) 1,500 GiB | Up to 23 | N/A |
c4-standard-32-lssd | 32 | 120 | (5 x 375 GiB) 1,875 GiB | Up to 23 | N/A |
c4-standard-48-lssd | 48 | 180 | (8 x 375 GiB) 3,000 GiB | Up to 34 | N/A |
c4-standard-96-lssd | 96 | 360 | (16 x 375 GiB) 6,000 GiB | Up to 67 | N/A |
c4-standard-144-lssd | 144 | 540 | (24 x 375 GiB) 9,000 GiB | Up to 100 | N/A |
c4-standard-192-lssd | 192 | 720 | (32 x 375 GiB) 12,000 GiB | Up to 100 | N/A |
c4-standard-288-lssd | 288 | 1,080 | (48 x 375 GiB) 18,000 GiB | Up to 100 | Up to 200 |
c4-standard-288-lssd-metal | 288 | 1,080 | (48 x 375 GiB) 18,000 GiB | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C4 highmem with Local SSD
| Machine types | vCPUs1 | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|
c4-highmem-4-lssd | 4 | 31 | (1 x 375 GiB) 375 GiB | Up to 23 | N/A |
c4-highmem-8-lssd | 8 | 62 | (1 x 375 GiB) 375 GiB | Up to 23 | N/A |
c4-highmem-16-lssd | 16 | 124 | (2 x 375 GiB) 750 GiB | Up to 23 | N/A |
c4-highmem-24-lssd | 24 | 186 | (4 x 375 GiB) 1,500 GiB | Up to 23 | N/A |
c4-highmem-32-lssd | 32 | 248 | (5 x 375 GiB) 1,875 GiB | Up to 23 | N/A |
c4-highmem-48-lssd | 48 | 372 | (8 x 375 GiB) 3,000 GiB | Up to 34 | N/A |
c4-highmem-96-lssd | 96 | 744 | (16 x 375 GiB) 6,000 GiB | Up to 67 | N/A |
c4-highmem-144-lssd | 144 | 1,116 | (24 x 375 GiB) 9,000 GiB | Up to 100 | N/A |
c4-highmem-192-lssd | 192 | 1,488 | (32 x 375 GiB) 12,000 GiB | Up to 100 | N/A |
c4-highmem-288-lssd | 288 | 2,232 | (48 x 375 GiB) 18,000 GiB | Up to 100 | Up to 200 |
c4-highmem-288-lssd-metal | 288 | 2,232 | (48 x 375 GiB) 18,000 GiB | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C4 doesn't support custom machine types.
Supported disk types for C4
C4 VMs support only the NVMe disk interface and can use the following Hyperdisk block storage:
VM instances
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
- Hyperdisk Throughput (hyperdisk-throughput)
- Hyperdisk Extreme (hyperdisk-extreme)
- Local SSD (only available with -lssd machine types)
Bare metal instances
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Extreme (hyperdisk-extreme)
- Local SSD (only available with -lssd-metal machine types)
C4 doesn't support Persistent Disk. When upgrading to a newer machine series, to migrate your Persistent Disk resources to Hyperdisk, see Move your workload from an existing VM to a new VM.
Disk and capacity limits
You can attach a mixture of different Hyperdisk types to an instance, but the maximum total disk capacity (in TiB) across all disk types can't exceed:
- For machine types with fewer than 32 vCPUs: 257 TiB for all Hyperdisk
- For machine types with 32 or more vCPUs: 512 TiB for all Hyperdisk
For details about the capacity limits, see Hyperdisk size and attachment limits.
C4 standard
| Maximum number of disks | |||||
|---|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk Extreme |
c4-standard-2 | 8 | 8 | 8 | 8 | 0 |
c4-standard-4 | 16 | 16 | 16 | 16 | 0 |
c4-standard-8 | 32 | 32 | 32 | 32 | 0 |
c4-standard-16 | 32 | 32 | 32 | 32 | 0 |
c4-standard-24 | 32 | 32 | 32 | 32 | 0 |
c4-standard-32 | 64 | 64 | 32 | 64 | 0 |
c4-standard-48 | 64 | 64 | 32 | 64 | 0 |
c4-standard-96 | 128 | 128 | 64 | 128 | 8 |
c4-standard-144 | 128 | 128 | 64 | 128 | 8 |
c4-standard-192 | 128 | 128 | 128 | 128 | 8 |
c4-standard-288 | 128 | 128 | 128 | 128 | 8 |
c4-standard-288-metal | 128 | 128 | Not supported | Not supported | 8 |
C4 highcpu
| Maximum number of disks | |||||
|---|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk Extreme |
c4-highcpu-2 | 8 | 8 | 8 | 8 | 0 |
c4-highcpu-4 | 16 | 16 | 16 | 16 | 0 |
c4-highcpu-8 | 32 | 32 | 32 | 32 | 0 |
c4-highcpu-16 | 32 | 32 | 32 | 32 | 0 |
c4-highcpu-24 | 32 | 32 | 32 | 32 | 0 |
c4-highcpu-32 | 64 | 64 | 32 | 64 | 0 |
c4-highcpu-48 | 64 | 64 | 32 | 64 | 0 |
c4-highcpu-96 | 128 | 128 | 64 | 128 | 8 |
c4-highcpu-144 | 128 | 128 | 64 | 128 | 8 |
c4-highcpu-192 | 128 | 128 | 128 | 128 | 8 |
c4-highcpu-288 | 128 | 128 | 128 | 128 | 8 |
C4 highmem
| Maximum number of disks | |||||
|---|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk Extreme |
c4-highmem-2 | 8 | 8 | 8 | 8 | 0 |
c4-highmem-4 | 16 | 16 | 16 | 16 | 0 |
c4-highmem-8 | 32 | 32 | 32 | 32 | 0 |
c4-highmem-16 | 32 | 32 | 32 | 32 | 0 |
c4-highmem-24 | 32 | 32 | 32 | 32 | 0 |
c4-highmem-32 | 64 | 64 | 32 | 64 | 0 |
c4-highmem-48 | 64 | 64 | 32 | 64 | 0 |
c4-highmem-96 | 128 | 128 | 64 | 128 | 8 |
c4-highmem-144 | 128 | 128 | 64 | 128 | 8 |
c4-highmem-192 | 128 | 128 | 128 | 128 | 8 |
c4-highmem-288 | 128 | 128 | 128 | 128 | 8 |
c4-highmem-288-metal | 128 | 128 | Not supported | Not supported | 8 |
C4 standard with Local SSD
| Maximum number of disks | |||||
|---|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk Extreme |
c4-standard-4-lssd | 16 | 16 | 16 | 16 | 0 |
c4-standard-8-lssd | 32 | 32 | 32 | 32 | 0 |
c4-standard-16-lssd | 32 | 32 | 32 | 32 | 0 |
c4-standard-24-lssd | 32 | 32 | 32 | 32 | 0 |
c4-standard-32-lssd | 32 | 32 | 32 | 32 | 0 |
c4-standard-48-lssd | 32 | 32 | 32 | 32 | 0 |
c4-standard-96-lssd | 64 | 64 | 64 | 64 | 8 |
c4-standard-144-lssd | 64 | 64 | 64 | 64 | 8 |
c4-standard-192-lssd | 128 | 128 | 128 | 128 | 8 |
c4-standard-288-lssd | 128 | 128 | 128 | 128 | 8 |
c4-standard-288-lssd-metal | 128 | 128 | Not supported | Not supported | 8 |
C4 highmem with Local SSD
| Maximum number of disks | |||||
|---|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Hyperdisk Extreme |
c4-highmem-4-lssd | 16 | 16 | 16 | 16 | 0 |
c4-highmem-8-lssd | 32 | 32 | 32 | 32 | 0 |
c4-highmem-16-lssd | 32 | 32 | 32 | 32 | 0 |
c4-highmem-24-lssd | 32 | 32 | 32 | 32 | 0 |
c4-highmem-32-lssd | 32 | 32 | 32 | 32 | 0 |
c4-highmem-48-lssd | 32 | 32 | 32 | 32 | 0 |
c4-highmem-96-lssd | 64 | 64 | 64 | 64 | 8 |
c4-highmem-144-lssd | 64 | 64 | 64 | 64 | 8 |
c4-highmem-192-lssd | 128 | 128 | 128 | 128 | 8 |
c4-highmem-288-lssd | 128 | 128 | 128 | 128 | 8 |
c4-highmem-288-lssd-metal | 128 | 128 | Not supported | Not supported | 8 |
Network support for C4 VMs
The following network interface drivers are required:
- C4 instances requiregVNIC network interfaces.
- C4 bare metal instances require theIntel IDPF LAN PF device driver.
C4 supports up to 100 Gbps network bandwidth for standard networking and up to 200 Gbps with per VM Tier_1 networking performance for VM and bare metal instances.
Before migrating to C4 or creating C4 VMs or bare metal instances, make sure that the operating system image that you use supports the IDPF network driver for bare metal instances or the gVNIC driver for VM instances. To get the best possible performance on C4 VMs, choose an OS image that supports both "Tier_1 Networking" and "200 Gbps network bandwidth". These images include an updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0. If your C4 VM is using an operating system with an older version of the gVNIC driver, this is still supported, but the VM might experience suboptimal performance, such as less network bandwidth or higher latency.
If you use a custom OS image to create a C4 VM, you can manually install the most recent gVNIC driver. The gVNIC driver version v1.4.2 or later is recommended for use with C4 VMs. Google recommends using the latest gVNIC driver version to benefit from additional features and bug fixes.
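One way to confirm that an OS image declares gVNIC support before you create instances is to inspect its guest OS features. A sketch using a public image family (the family and project shown are illustrative):

```shell
# Show the guest OS features of the latest image in a family;
# look for GVNIC (and IDPF for bare metal) in the output.
gcloud compute images describe-from-family debian-12 \
    --project=debian-cloud \
    --format="flattened(guestOsFeatures)"
```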
Maintenance experience for C4 instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure or, in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The C4 machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance |
|---|---|---|---|---|
| c4-*-192 and c4-*-288 | Minimum of 30 days | Live migrate | 7 days | Yes |
| c4-*-lssd | Minimum of 30 days | Live migrate | 7 days | Yes |
| c4-*-288-metal | Minimum of 30 days | Terminate | 7 days | Yes |
| c4-*-288-lssd-metal | Minimum of 30 days | Terminate | 7 days | Yes |
| All others | Minimum of 30 days | Live migrate | 7 days | No |
The maintenance frequencies shown in the previous table are approximations, not guarantees. Compute Engine might occasionally perform maintenance more frequently.
N4D machine series
N4D VMs are powered by the fifth generation AMD EPYC processors (code-named Turin) and Titanium. N4D VMs are engineered for flexibility, cost optimization, and enhanced price-performance through their efficient architecture. N4D supports next generation dynamic resource management, making better use of resources on host machines.
In summary, the N4D machine series:
- Is powered by the AMD EPYC Turin CPU and Titanium.
- Supports up to 96 vCPUs and 768 GB of DDR5 memory.
- Offers predefined machine types that range in size from 2 to 96 vCPUs.
- Supports custom machine types and extended memory.
- Supports consumption options like on-demand, Spot VMs, and future reservations.
- Supports standard network configuration with up to 50 Gbps bandwidth.
- Supports only Hyperdisk volumes.
- Supports resource-based and flexible committed use discounts (CUDs).
- Supports spread placement policies.
- Doesn't support Local SSD or per VM Tier_1 networking performance.
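Custom N4D shapes use Compute Engine's usual custom machine type naming, where memory is given in MB. A sketch (the instance name, zone, and vCPU/memory combination are placeholder choices and must satisfy the series' ratio rules):

```shell
# N4D custom machine type: 8 vCPUs with 24 GB (24576 MB) of memory.
gcloud compute instances create example-n4d-custom \
    --zone=us-central1-a \
    --machine-type=n4d-custom-8-24576
```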
N4D machine types
N4D VMs are available as predefined configurations in sizes ranging from 2 vCPUs to 96 vCPUs and up to 768 GB of memory.
- standard: 4 GB memory per vCPU
- highcpu: 2 GB memory per vCPU
- highmem: 8 GB memory per vCPU
N4D standard
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps) | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
n4d-standard-2 | 2 | 8 | Up to 10 | N/A |
n4d-standard-4 | 4 | 16 | Up to 10 | N/A |
n4d-standard-8 | 8 | 32 | Up to 16 | N/A |
n4d-standard-16 | 16 | 64 | Up to 32 | N/A |
n4d-standard-32 | 32 | 128 | Up to 32 | N/A |
n4d-standard-48 | 48 | 192 | Up to 32 | N/A |
n4d-standard-64 | 64 | 256 | Up to 45 | N/A |
n4d-standard-80 | 80 | 320 | Up to 50 | N/A |
n4d-standard-96 | 96 | 384 | Up to 50 | N/A |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
N4D highcpu
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps) | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
n4d-highcpu-2 | 2 | 4 | Up to 10 | N/A |
n4d-highcpu-4 | 4 | 8 | Up to 10 | N/A |
n4d-highcpu-8 | 8 | 16 | Up to 16 | N/A |
n4d-highcpu-16 | 16 | 32 | Up to 32 | N/A |
n4d-highcpu-32 | 32 | 64 | Up to 32 | N/A |
n4d-highcpu-48 | 48 | 90 | Up to 32 | N/A |
n4d-highcpu-64 | 64 | 128 | Up to 45 | N/A |
n4d-highcpu-80 | 80 | 160 | Up to 50 | N/A |
n4d-highcpu-96 | 96 | 192 | Up to 50 | N/A |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
N4D highmem
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps) | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
n4d-highmem-2 | 2 | 16 | Up to 10 | N/A |
n4d-highmem-4 | 4 | 32 | Up to 10 | N/A |
n4d-highmem-8 | 8 | 64 | Up to 16 | N/A |
n4d-highmem-16 | 16 | 128 | Up to 32 | N/A |
n4d-highmem-32 | 32 | 256 | Up to 32 | N/A |
n4d-highmem-48 | 48 | 384 | Up to 32 | N/A |
n4d-highmem-64 | 64 | 512 | Up to 45 | N/A |
n4d-highmem-80 | 80 | 640 | Up to 50 | N/A |
n4d-highmem-96 | 96 | 768 | Up to 50 | N/A |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
Supported disk types for N4D
N4D VMs support only the NVMe disk interface and can use the following Hyperdisk block storage:
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
- Hyperdisk Throughput (hyperdisk-throughput)
N4D doesn't support Persistent Disk or Local SSD. Read Move your workload from an existing VM to a new VM to migrate your Persistent Disk resources to a newer machine series.
Disk and capacity limits
The number of Hyperdisk volumes of all types that you can attach to a VM can't exceed the limits stated in the Max number of Hyperdisk volumes. For details about these limits, see Hyperdisk capacity.
For instances running Microsoft Windows and using the NVMe disk interface, the combined number of both Hyperdisk and Persistent Disk attached volumes can't exceed a total of 16 disks. See Known issues. Local SSD volumes are excluded from this issue.
N4D storage limits are described in the following table:
N4D standard
| Maximum number of disks | |||||
|---|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Local SSD |
n4d-standard-2 | 4 | 16 | 16 | 16 | Not supported |
n4d-standard-4 | 8 | 16 | 16 | 16 | Not supported |
n4d-standard-8 | 16 | 16 | 16 | 16 | Not supported |
n4d-standard-16 | 32 | 32 | 32 | 32 | Not supported |
n4d-standard-32 | 64 | 32 | 32 | 32 | Not supported |
n4d-standard-48 | 64 | 32 | 32 | 32 | Not supported |
n4d-standard-64 | 64 | 32 | 32 | 32 | Not supported |
n4d-standard-80 | 64 | 32 | 32 | 32 | Not supported |
n4d-standard-96 | 64 | 32 | 32 | 32 | Not supported |
N4D highcpu
| Maximum number of disks | |||||
|---|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Local SSD |
n4d-highcpu-2 | 4 | 16 | 16 | 16 | Not supported |
n4d-highcpu-4 | 8 | 16 | 16 | 16 | Not supported |
n4d-highcpu-8 | 16 | 16 | 16 | 16 | Not supported |
n4d-highcpu-16 | 32 | 32 | 32 | 32 | Not supported |
n4d-highcpu-32 | 64 | 32 | 32 | 32 | Not supported |
n4d-highcpu-48 | 64 | 32 | 32 | 32 | Not supported |
n4d-highcpu-64 | 64 | 32 | 32 | 32 | Not supported |
n4d-highcpu-80 | 64 | 32 | 32 | 32 | Not supported |
n4d-highcpu-96 | 64 | 32 | 32 | 32 | Not supported |
N4D highmem
| Maximum number of disks | |||||
|---|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput | Local SSD |
n4d-highmem-2 | 4 | 16 | 16 | 16 | Not supported |
n4d-highmem-4 | 8 | 16 | 16 | 16 | Not supported |
n4d-highmem-8 | 16 | 16 | 16 | 16 | Not supported |
n4d-highmem-16 | 32 | 32 | 32 | 32 | Not supported |
n4d-highmem-32 | 64 | 32 | 32 | 32 | Not supported |
n4d-highmem-48 | 64 | 32 | 32 | 32 | Not supported |
n4d-highmem-64 | 64 | 32 | 32 | 32 | Not supported |
n4d-highmem-80 | 64 | 32 | 32 | 32 | Not supported |
n4d-highmem-96 | 64 | 32 | 32 | 32 | Not supported |
Network support for N4D VMs
N4D instances require gVNIC network interfaces. N4D instances support up to 50 Gbps network bandwidth for standard networking and don't support per VM Tier_1 networking performance.
Before migrating to N4D or creating N4D VM instances, make sure that the operating system image that you use supports the gVNIC driver. These images include an updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0. If your N4D VM is using an operating system with an older version of the gVNIC driver, this is still supported, but the VM might experience suboptimal performance, such as less network bandwidth or higher latency.
If you use a custom OS image to create an N4D VM, you can manually install the most recent gVNIC driver. The gVNIC driver version v1.4.2 or later is recommended for use with N4D VMs. Google recommends using the latest gVNIC driver version to benefit from additional features and bug fixes.
Maintenance experience for N4D instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure or, in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The N4D machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance |
|---|---|---|---|---|
| All N4D machine types | Variable | Live migrate | 60 seconds | No |
The maintenance frequencies shown in the previous table are approximations, not guarantees. Compute Engine might occasionally perform maintenance more frequently.
N4A machine series
Preview
This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
N4A VMs are the second family of VMs powered by Google's latest custom-designed Axion processor, built on the Arm Neoverse N3 compute core and paired with the Titanium IPU. N4A VMs are placed within a single node with Uniform Memory Access (UMA). They are engineered to be our most efficient and flexible Arm VMs, delivering exceptional price-performance for a wide range of general-purpose and scale-out workloads. N4A uses next generation dynamic resource management, which makes better use of resources on host machines.
Ideal use cases include web and application servers, microservices, containerized applications using Google Kubernetes Engine (GKE), open-source databases, and development and testing environments.
In summary, the N4A machine series:
- Is powered by the Google Axion Arm processor and Titanium IPU.
- Supports up to 64 vCPUs and 512 GB of DDR5 memory.
- Offers multiple predefined machine types and custom machine types with extended custom memory up to 512 GB.
- Supports standard network configuration with up to 50 Gbps of bandwidth.
- Supports Hyperdisk only.
- Supports discount and consumption options such as committed use discounts (CUDs), on-demand, and Spot VMs.
- Doesn't support Local SSD or per VM Tier_1 networking performance.
- Doesn't support Confidential VM on this CPU.
- Doesn't support 32-bit mode EL0 (guest userspace), due to a hardware limitation.
N4A machine types
N4A VMs are available as predefined configurations in sizes ranging from 1 vCPU to 64 vCPUs and up to 512 GB of memory.
- standard: 4 GB memory per vCPU
- highcpu: 2 GB memory per vCPU
- highmem: 8 GB memory per vCPU
For information about custom machine types, see Custom machine types.
N4A standard
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
n4a-standard-1 | 1 | 4 | Up to 10 | N/A |
n4a-standard-2 | 2 | 8 | Up to 10 | N/A |
n4a-standard-4 | 4 | 16 | Up to 10 | N/A |
n4a-standard-8 | 8 | 32 | Up to 16 | N/A |
n4a-standard-16 | 16 | 64 | Up to 32 | N/A |
n4a-standard-32 | 32 | 128 | Up to 32 | N/A |
n4a-standard-48 | 48 | 192 | Up to 32 | N/A |
n4a-standard-64 | 64 | 256 | Up to 50 | N/A |
1 SMT is not supported. Each vCPU is equivalent to an entire core. See CPU platforms.
2 Maximum egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
N4A highcpu
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
n4a-highcpu-1 | 1 | 2 | Up to 10 | N/A |
n4a-highcpu-2 | 2 | 4 | Up to 10 | N/A |
n4a-highcpu-4 | 4 | 8 | Up to 10 | N/A |
n4a-highcpu-8 | 8 | 16 | Up to 16 | N/A |
n4a-highcpu-16 | 16 | 32 | Up to 32 | N/A |
n4a-highcpu-32 | 32 | 64 | Up to 32 | N/A |
n4a-highcpu-48 | 48 | 96 | Up to 32 | N/A |
n4a-highcpu-64 | 64 | 128 | Up to 50 | N/A |
1 SMT is not supported. Each vCPU is equivalent to an entire core. See CPU platforms.
2 Maximum egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
N4A highmem
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
n4a-highmem-1 | 1 | 8 | Up to 10 | N/A |
n4a-highmem-2 | 2 | 16 | Up to 10 | N/A |
n4a-highmem-4 | 4 | 32 | Up to 10 | N/A |
n4a-highmem-8 | 8 | 64 | Up to 16 | N/A |
n4a-highmem-16 | 16 | 128 | Up to 32 | N/A |
n4a-highmem-32 | 32 | 256 | Up to 32 | N/A |
n4a-highmem-48 | 48 | 384 | Up to 32 | N/A |
n4a-highmem-64 | 64 | 512 | Up to 50 | N/A |
1 SMT is not supported. Each vCPU is equivalent to an entire core. See CPU platforms.
2 Maximum egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
Supported disk types for N4A
N4A VMs support only the NVMe disk interface and can use the following Hyperdisk block storage:
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
- Hyperdisk Throughput (hyperdisk-throughput)
N4A doesn't support Persistent Disk or Local SSD. Read Move your workload from an existing VM to a new VM to migrate your Persistent Disk resources to a newer machine series.
Disk and capacity limits
The number of Hyperdisk volumes of all types that you can attach to a VM can't exceed the limits stated in the Max number of Hyperdisk volumes. For details about these limits, see Hyperdisk capacity.
The combined total number of Hyperdisk Balanced volumes attached to a single VM depends on the number of vCPUs the VM has. N4A storage limits are described in the following tables:
N4A standard
| Maximum number of disks | ||||
|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput |
| n4a-standard-1 | 4 | 16 | 16 | 16 |
| n4a-standard-2 | 4 | 16 | 16 | 16 |
| n4a-standard-4 | 8 | 16 | 16 | 16 |
| n4a-standard-8 | 16 | 16 | 16 | 16 |
| n4a-standard-16 | 32 | 32 | 32 | 32 |
| n4a-standard-32 | 64 | 32 | 32 | 32 |
| n4a-standard-48 | 64 | 32 | 32 | 32 |
| n4a-standard-64 | 64 | 32 | 32 | 32 |
N4A highcpu
| Maximum number of disks | ||||
|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput |
| n4a-highcpu-1 | 4 | 16 | 16 | 16 |
| n4a-highcpu-2 | 4 | 16 | 16 | 16 |
| n4a-highcpu-4 | 8 | 16 | 16 | 16 |
| n4a-highcpu-8 | 16 | 16 | 16 | 16 |
| n4a-highcpu-16 | 32 | 32 | 32 | 32 |
| n4a-highcpu-32 | 32 | 32 | 32 | 32 |
| n4a-highcpu-48 | 64 | 32 | 32 | 32 |
| n4a-highcpu-64 | 64 | 32 | 32 | 32 |
N4A highmem
| Maximum number of disks | ||||
|---|---|---|---|---|
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Balanced High Availability | Hyperdisk Throughput |
| n4a-highmem-1 | 4 | 16 | 16 | 16 |
| n4a-highmem-2 | 4 | 16 | 16 | 16 |
| n4a-highmem-4 | 8 | 16 | 16 | 16 |
| n4a-highmem-8 | 16 | 16 | 16 | 16 |
| n4a-highmem-16 | 32 | 32 | 32 | 32 |
| n4a-highmem-32 | 32 | 32 | 32 | 32 |
| n4a-highmem-48 | 64 | 32 | 32 | 32 |
| n4a-highmem-64 | 64 | 32 | 32 | 32 |
Network support for N4A VMs
N4A instances require gVNIC network interfaces. N4A instances support up to 50 Gbps network bandwidth for standard networking and don't support per VM Tier_1 networking performance.
Before migrating to N4A or creating N4A VM instances, make sure that the operating system image that you use supports the gVNIC driver. These images include an updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0. If your N4A VM is using an operating system with an older version of the gVNIC driver, this is still supported, but the VM might experience suboptimal performance, such as less network bandwidth or higher latency.
If you use a custom OS image to create an N4A VM, you can manually install the most recent gVNIC driver. The gVNIC driver version v1.4.2 or later is recommended for use with N4A VMs. Google recommends using the latest gVNIC driver version to benefit from additional features and bug fixes.
Maintenance experience for N4A instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure or, in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The N4A machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance |
|---|---|---|---|---|
| All N4A machine types | Variable | Live migrate | 60 seconds | No |
N4 machine series
N4 VMs are powered by the 5th generation Intel Xeon Scalable processors (code-named Emerald Rapids) and Titanium. N4 machine types are built from the ground up for flexibility and cost optimization through an efficient architecture of streamlined features, shapes, and next generation dynamic resource management, which makes better use of resources on host machines. N4 offers flexible options like custom machine types that let you choose varied combinations of compute and memory to optimize costs and reduce resource waste. N4 is suited for a variety of general-purpose workloads that don't require peak processing power at all times.
In summary, the N4 machine series:
- Is powered by the 5th generation Intel Emerald Rapids processor and Titanium.
- Supports up to 80 vCPUs and 640 GB of DDR5 memory.
- Offers multiple predefined machine types and custom machine types with extended custom memory up to 640 GB.
- Supports standard network configuration with up to 50 Gbps bandwidth.
- Supports Intel Advanced Matrix Extensions (AMX), a built-in accelerator thatsignificantly improves the performance of deep-learning training and inference on the CPU.
- Supports the following discount and consumption options:
- Doesn't support Local SSD or per VM Tier_1 networking performance.
N4 machine types
N4 VMs are available as predefined configurations in sizes ranging from 2 vCPUs to 80 vCPUs and up to 640 GB of memory.
- standard: 4 GB of memory per vCPU
- highcpu: 2 GB of memory per vCPU
- highmem: 8 GB of memory per vCPU
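These ratios fully determine a predefined N4 machine type's memory from its name. The following sketch is illustrative only; the helper function and its lookup table are not part of any Google Cloud API:

```python
# Derive the memory (GB) of a predefined N4 machine type from its name,
# using the documented per-vCPU ratios. Illustrative helper only.
N4_GB_PER_VCPU = {"standard": 4, "highcpu": 2, "highmem": 8}

def n4_memory_gb(machine_type: str) -> int:
    # Machine type names follow the pattern n4-<config>-<vcpus>.
    _, config, vcpus = machine_type.split("-")
    return N4_GB_PER_VCPU[config] * int(vcpus)

print(n4_memory_gb("n4-standard-8"))  # matches the table below: 32
```

The results agree with the predefined machine type tables that follow, so the same ratios can be used to estimate memory for sizes not shown.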
N4 standard
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps) | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
| n4-standard-2 | 2 | 8 | Up to 10 | N/A |
| n4-standard-4 | 4 | 16 | Up to 10 | N/A |
| n4-standard-8 | 8 | 32 | Up to 16 | N/A |
| n4-standard-16 | 16 | 64 | Up to 32 | N/A |
| n4-standard-32 | 32 | 128 | Up to 32 | N/A |
| n4-standard-48 | 48 | 192 | Up to 32 | N/A |
| n4-standard-64 | 64 | 256 | Up to 45 | N/A |
| n4-standard-80 | 80 | 320 | Up to 50 | N/A |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
N4 highcpu
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps) | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
| n4-highcpu-2 | 2 | 4 | Up to 10 | N/A |
| n4-highcpu-4 | 4 | 8 | Up to 10 | N/A |
| n4-highcpu-8 | 8 | 16 | Up to 16 | N/A |
| n4-highcpu-16 | 16 | 32 | Up to 32 | N/A |
| n4-highcpu-32 | 32 | 64 | Up to 32 | N/A |
| n4-highcpu-48 | 48 | 96 | Up to 32 | N/A |
| n4-highcpu-64 | 64 | 128 | Up to 45 | N/A |
| n4-highcpu-80 | 80 | 160 | Up to 50 | N/A |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
N4 highmem
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps) | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
| n4-highmem-2 | 2 | 16 | Up to 10 | N/A |
| n4-highmem-4 | 4 | 32 | Up to 10 | N/A |
| n4-highmem-8 | 8 | 64 | Up to 16 | N/A |
| n4-highmem-16 | 16 | 128 | Up to 32 | N/A |
| n4-highmem-32 | 32 | 256 | Up to 32 | N/A |
| n4-highmem-48 | 48 | 384 | Up to 32 | N/A |
| n4-highmem-64 | 64 | 512 | Up to 45 | N/A |
| n4-highmem-80 | 80 | 640 | Up to 50 | N/A |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
Supported disk types for N4
N4 VMs support only the NVMe disk interface and can use the following Hyperdisk block storage types:
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
- Hyperdisk Throughput (hyperdisk-throughput)
N4 doesn't support Persistent Disk or Local SSD. Read Move your workload from an existing VM to a new VM to migrate your Persistent Disk resources to a newer machine series.
Disk and capacity limits
The number of Hyperdisk volumes of all types that you can attach to a VM can't exceed the limits stated in the Max number of Hyperdisk volumes. For details about these limits, see Hyperdisk capacity.
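When planning attachments, it can help to validate a design against the per-VM limits before provisioning. A minimal sketch, using the Hyperdisk Balanced per-VM limits from the N4 storage tables in this section; the helper and its lookup table are hypothetical, not a Google Cloud API:

```python
# Check a planned Hyperdisk Balanced volume count against the N4 per-VM
# limits (values from the N4 storage-limits tables in this section).
# Hypothetical helper, not a Google Cloud API.
N4_HD_BALANCED_LIMIT = {2: 16, 4: 16, 8: 16, 16: 32, 32: 32, 48: 32, 64: 32, 80: 32}

def fits_n4_hyperdisk_balanced(vcpus: int, planned_volumes: int) -> bool:
    # The limit depends only on the vCPU count of the machine type.
    return planned_volumes <= N4_HD_BALANCED_LIMIT[vcpus]

print(fits_n4_hyperdisk_balanced(16, 20))  # True: n4-*-16 allows up to 32
```

A similar table-driven check can be built for the other Hyperdisk columns.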
N4 storage limits are described in the following table:
N4 standard
Maximum number of disks:
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|
| n4-standard-2 | 20 | 16 | 16 | 0 | Not supported |
| n4-standard-4 | 24 | 16 | 16 | 0 | Not supported |
| n4-standard-8 | 32 | 16 | 16 | 0 | Not supported |
| n4-standard-16 | 48 | 32 | 32 | 0 | Not supported |
| n4-standard-32 | 64 | 32 | 32 | 0 | Not supported |
| n4-standard-48 | 64 | 32 | 32 | 0 | Not supported |
| n4-standard-64 | 64 | 32 | 32 | 0 | Not supported |
| n4-standard-80 | 64 | 32 | 32 | 0 | Not supported |
N4 highcpu
Maximum number of disks:
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|
| n4-highcpu-2 | 20 | 16 | 16 | 0 | Not supported |
| n4-highcpu-4 | 24 | 16 | 16 | 0 | Not supported |
| n4-highcpu-8 | 32 | 16 | 16 | 0 | Not supported |
| n4-highcpu-16 | 48 | 32 | 32 | 0 | Not supported |
| n4-highcpu-32 | 64 | 32 | 32 | 0 | Not supported |
| n4-highcpu-48 | 64 | 32 | 32 | 0 | Not supported |
| n4-highcpu-64 | 64 | 32 | 32 | 0 | Not supported |
| n4-highcpu-80 | 64 | 32 | 32 | 0 | Not supported |
N4 highmem
Maximum number of disks:
| Machine types | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|
| n4-highmem-2 | 20 | 16 | 16 | 0 | Not supported |
| n4-highmem-4 | 24 | 16 | 16 | 0 | Not supported |
| n4-highmem-8 | 32 | 16 | 16 | 0 | Not supported |
| n4-highmem-16 | 48 | 32 | 32 | 0 | Not supported |
| n4-highmem-32 | 64 | 32 | 32 | 0 | Not supported |
| n4-highmem-48 | 64 | 32 | 32 | 0 | Not supported |
| n4-highmem-64 | 64 | 32 | 32 | 0 | Not supported |
| n4-highmem-80 | 64 | 32 | 32 | 0 | Not supported |
Network support for N4 VMs
N4 instances require gVNIC network interfaces. N4 instances support up to 50 Gbps network bandwidth for standard networking and don't support per VM Tier_1 networking performance.
Before migrating to N4 or creating N4 VM instances, make sure that the operating system image that you use supports the gVNIC driver for VM instances. These images include an updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0. If your N4 VM uses an operating system with an older version of the gVNIC driver, that configuration is still supported, but the VM might experience suboptimal performance, such as lower network bandwidth or higher latency.
If you use a custom OS image to create an N4 VM, you can manually install the most recent gVNIC driver. gVNIC driver version v1.4.2 or later is recommended for use with N4 VMs. Google recommends using the latest gVNIC driver version to benefit from additional features and bug fixes.
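To confirm whether an installed driver meets the recommended minimum, you can compare version strings numerically rather than lexically. A small illustrative sketch; the interface name mentioned in the comment is an assumption:

```python
# Compare a gVNIC driver version string (for example, as reported by
# `ethtool -i ens4` inside the guest; the interface name varies) against
# the recommended minimum of v1.4.2. Illustrative sketch only.
def gvnic_at_least(version: str, minimum: str = "1.4.2") -> bool:
    def parse(v: str) -> tuple:
        # Accept both "v1.4.2" and "1.4.2" forms.
        return tuple(int(part) for part in v.lstrip("v").split("."))
    return parse(version) >= parse(minimum)

print(gvnic_at_least("v1.4.2"))  # True
print(gvnic_at_least("1.0.0"))   # False
```

Numeric comparison matters because a string comparison would rank "1.10.0" below "1.4.2".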
Maintenance experience for N4 instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure, or in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The N4 machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance |
|---|---|---|---|---|
| All N4 machine types | Variable | Live migrate | 60 seconds | No |
The maintenance frequencies shown in the previous table are approximations, not guarantees. Compute Engine might occasionally perform maintenance more frequently.
C3D machine series
C3D VMs are powered by the 4th generation AMD EPYC™ (Genoa) processor with a maximum frequency of 3.7 GHz. C3D machine types are optimized for the underlying hardware architecture to deliver optimal, reliable, and consistent performance.
C3D uses Titanium, which enables higher levels of networking performance, isolation, and security. The C3D machine series supports standard networking bandwidth of up to 100 Gbps and per VM Tier_1 networking bandwidth of up to 200 Gbps.
In summary, the C3D machine series:
- Is powered by 4th generation AMD EPYC™ processor and Titanium.
- Supports up to 360 vCPUs and 2,880 GB of DDR5 memory.
- Supports standard network configuration with up to 100 Gbps bandwidth and Tier_1 networking with up to 200 Gbps bandwidth.
- Supports the following discount and consumption options:
- Supports Confidential VM with AMD SEV.
- In the gcloud CLI and REST, the commitment type values use Compute-optimized as the machine family, even though C3 and C3D are part of the general-purpose machine family.
- In the Google Cloud console, the commitment type values use the correct machine series: General-Purpose.
C3D machine types
C3D VMs are available in standard, highcpu, highmem, and lssd configurations in sizes ranging from 4 to 360 vCPUs and up to 2,880 GB of memory. The highcpu configuration offers the lowest price per performance for compute-bound workloads that don't require large amounts of memory.
C3D standard
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
| c3d-standard-4 | 4 | 16 | Up to 20 | N/A |
| c3d-standard-8 | 8 | 32 | Up to 20 | N/A |
| c3d-standard-16 | 16 | 64 | Up to 20 | N/A |
| c3d-standard-30 | 30 | 120 | Up to 20 | Up to 50 |
| c3d-standard-60 | 60 | 240 | Up to 40 | Up to 75 |
| c3d-standard-90 | 90 | 360 | Up to 60 | Up to 100 |
| c3d-standard-180 | 180 | 720 | Up to 100 | Up to 150 |
| c3d-standard-360 | 360 | 1,440 | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C3D highcpu
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
| c3d-highcpu-4 | 4 | 8 | Up to 20 | N/A |
| c3d-highcpu-8 | 8 | 16 | Up to 20 | N/A |
| c3d-highcpu-16 | 16 | 32 | Up to 20 | N/A |
| c3d-highcpu-30 | 30 | 59 | Up to 20 | Up to 50 |
| c3d-highcpu-60 | 60 | 118 | Up to 40 | Up to 75 |
| c3d-highcpu-90 | 90 | 177 | Up to 60 | Up to 100 |
| c3d-highcpu-180 | 180 | 354 | Up to 100 | Up to 150 |
| c3d-highcpu-360 | 360 | 708 | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C3D highmem
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
| c3d-highmem-4 | 4 | 32 | Up to 20 | N/A |
| c3d-highmem-8 | 8 | 64 | Up to 20 | N/A |
| c3d-highmem-16 | 16 | 128 | Up to 20 | N/A |
| c3d-highmem-30 | 30 | 240 | Up to 20 | Up to 50 |
| c3d-highmem-60 | 60 | 480 | Up to 40 | Up to 75 |
| c3d-highmem-90 | 90 | 720 | Up to 60 | Up to 100 |
| c3d-highmem-180 | 180 | 1,440 | Up to 100 | Up to 150 |
| c3d-highmem-360 | 360 | 2,880 | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C3D standard with Local SSD
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
| c3d-standard-8-lssd | 8 | 32 | Up to 20 | N/A |
| c3d-standard-16-lssd | 16 | 64 | Up to 20 | N/A |
| c3d-standard-30-lssd | 30 | 120 | Up to 20 | Up to 50 |
| c3d-standard-60-lssd | 60 | 240 | Up to 40 | Up to 75 |
| c3d-standard-90-lssd | 90 | 360 | Up to 60 | Up to 100 |
| c3d-standard-180-lssd | 180 | 720 | Up to 100 | Up to 150 |
| c3d-standard-360-lssd | 360 | 1,440 | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C3D highmem with Local SSD
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
| c3d-highmem-8-lssd | 8 | 64 | Up to 20 | N/A |
| c3d-highmem-16-lssd | 16 | 128 | Up to 20 | N/A |
| c3d-highmem-30-lssd | 30 | 240 | Up to 20 | Up to 50 |
| c3d-highmem-60-lssd | 60 | 480 | Up to 40 | Up to 75 |
| c3d-highmem-90-lssd | 90 | 720 | Up to 60 | Up to 100 |
| c3d-highmem-180-lssd | 180 | 1,440 | Up to 100 | Up to 150 |
| c3d-highmem-360-lssd | 360 | 2,880 | Up to 100 | Up to 200 |
1 A CPU uses two threads per core, and a vCPU represents a single thread. See CPU platforms.
2 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types.
C3D doesn't support custom machine types.
Supported disk types for C3D
C3D VMs support only the NVMe disk interface and can use the following block storage types:
- Balanced Persistent Disk (pd-balanced)
- SSD (performance) Persistent Disk (pd-ssd)
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
- Hyperdisk ML (hyperdisk-ml)
- Hyperdisk Extreme (hyperdisk-extreme)
- Hyperdisk Throughput (hyperdisk-throughput)
- Local SSD (only available with -lssd machine types)
To use Local SSD with C3D, create your VM using the -lssd variant of the C3D machine types. Selecting this machine type creates a VM of the specified size with Local SSD partitions attached. You must use a machine type that ends in -lssd to use Local SSD with your C3D VM; you can't attach Local SSD volumes separately.
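Because Local SSD availability is encoded in the machine type name, a provisioning script can check the name up front instead of failing at attach time. A hypothetical helper, not a Google Cloud API:

```python
# Only C3D machine types ending in -lssd include Local SSD partitions.
# Quick name check before provisioning; hypothetical helper only.
def c3d_has_local_ssd(machine_type: str) -> bool:
    return machine_type.startswith("c3d-") and machine_type.endswith("-lssd")

print(c3d_has_local_ssd("c3d-standard-60-lssd"))  # True
print(c3d_has_local_ssd("c3d-standard-60"))       # False
```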
Disk and capacity limits
For instances running Microsoft Windows and using the NVMe disk interface, the combined number of both Hyperdisk and Persistent Disk attached volumes can't exceed a total of 16 disks. See Known issues. Local SSD volumes are excluded from this issue.
C3D storage limits are described in the following table:
C3D standard
Maximum number of disks:
| Machine types | Per VM | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|---|---|
| c3d-standard-4 | 128 | 24 | 16 | 24 | 24 | 0 | Not supported |
| c3d-standard-8 | 128 | 32 | 16 | 32 | 32 | 0 | Not supported |
| c3d-standard-16 | 128 | 48 | 16 | 48 | 48 | 0 | Not supported |
| c3d-standard-30 | 128 | 64 | 16 | 64 | 64 | 0 | Not supported |
| c3d-standard-60 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3d-standard-90 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3d-standard-180 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3d-standard-360 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
C3D highcpu
Maximum number of disks:
| Machine types | Per VM | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|---|---|
| c3d-highcpu-4 | 128 | 24 | 16 | 24 | 24 | 0 | Not supported |
| c3d-highcpu-8 | 128 | 32 | 16 | 32 | 32 | 0 | Not supported |
| c3d-highcpu-16 | 128 | 48 | 16 | 48 | 48 | 0 | Not supported |
| c3d-highcpu-30 | 128 | 64 | 16 | 64 | 64 | 0 | Not supported |
| c3d-highcpu-60 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3d-highcpu-90 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3d-highcpu-180 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3d-highcpu-360 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
C3D highmem
Maximum number of disks:
| Machine types | Per VM | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|---|---|
| c3d-highmem-4 | 128 | 24 | 16 | 24 | 24 | 0 | Not supported |
| c3d-highmem-8 | 128 | 32 | 16 | 32 | 32 | 0 | Not supported |
| c3d-highmem-16 | 128 | 48 | 16 | 48 | 48 | 0 | Not supported |
| c3d-highmem-30 | 128 | 64 | 16 | 64 | 64 | 0 | Not supported |
| c3d-highmem-60 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3d-highmem-90 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3d-highmem-180 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3d-highmem-360 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
C3D standard with Local SSD
Maximum number of disks:
| Machine types | Per VM | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|---|---|
| c3d-standard-8-lssd | 128 | 24 | 16 | 24 | 24 | 0 | 1 (375 GiB) |
| c3d-standard-16-lssd | 128 | 48 | 16 | 48 | 48 | 0 | 1 (375 GiB) |
| c3d-standard-30-lssd | 128 | 64 | 16 | 64 | 64 | 0 | 2 (750 GiB) |
| c3d-standard-60-lssd | 128 | 64 | 32 | 64 | 64 | 8 | 4 (1.5 TiB) |
| c3d-standard-90-lssd | 128 | 64 | 32 | 64 | 64 | 8 | 8 (3 TiB) |
| c3d-standard-180-lssd | 128 | 64 | 32 | 64 | 64 | 8 | 16 (6 TiB) |
| c3d-standard-360-lssd | 128 | 64 | 32 | 64 | 64 | 8 | 32 (12 TiB) |
C3D highmem with Local SSD
Maximum number of disks:
| Machine types | Per VM | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|---|---|
| c3d-highmem-8-lssd | 128 | 24 | 16 | 24 | 24 | 0 | 1 (375 GiB) |
| c3d-highmem-16-lssd | 128 | 48 | 16 | 48 | 48 | 0 | 1 (375 GiB) |
| c3d-highmem-30-lssd | 128 | 64 | 16 | 64 | 64 | 0 | 2 (750 GiB) |
| c3d-highmem-60-lssd | 128 | 64 | 32 | 64 | 64 | 8 | 4 (1.5 TiB) |
| c3d-highmem-90-lssd | 128 | 64 | 32 | 64 | 64 | 8 | 8 (3 TiB) |
| c3d-highmem-180-lssd | 128 | 64 | 32 | 64 | 64 | 8 | 16 (6 TiB) |
| c3d-highmem-360-lssd | 128 | 64 | 32 | 64 | 64 | 8 | 32 (12 TiB) |
Network support for C3D VMs
C3D instances require gVNIC network interfaces. C3D supports up to 100 Gbps network bandwidth for standard networking and up to 200 Gbps with per VM Tier_1 networking performance.
Before migrating to C3D or creating C3D instances, make sure that the operating system image that you use supports the gVNIC driver. To get the best possible performance on C3D instances, on the Networking features tab of the OS details table, choose an OS image that supports both "Tier_1 Networking" and "200 Gbps network bandwidth". These images include an updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0. If your C3D instance uses an operating system with an older version of the gVNIC driver, that configuration is still supported, but the instance might experience suboptimal performance, such as lower network bandwidth or higher latency.
If you use a custom OS image with the C3D machine series, you can manually install the most recent gVNIC driver. gVNIC driver version v1.4.2 or later is recommended for use with C3D instances. Google recommends using the latest gVNIC driver version to benefit from additional features and bug fixes.
Maintenance experience for C3D instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure, or in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The C3D machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance |
|---|---|---|---|---|
| C3D with Confidential VM | Minimum of 30 days | Terminate | 7 days | No |
| c3d-*-lssd | Minimum of 30 days | Live migrate | 7 days | Yes |
| c3d-*-360 | Minimum of 30 days | Live migrate | 7 days | Yes |
| All others | Minimum of 30 days | Live migrate | 7 days | No |
The maintenance frequencies shown in the previous table are approximations, not guarantees. Compute Engine might occasionally perform maintenance more frequently.
C3 machine series
C3 VMs are powered by the 4th generation Intel Xeon Scalable processors (code-named Sapphire Rapids), DDR5 memory, and Titanium. C3 machine types are optimized for the underlying NUMA architecture to deliver optimal, reliable, and consistent performance.
The new C3 machine series is a major leap in our purpose-built infrastructure offerings:
- Leveraging Titanium processors to offload networking from the CPUs
- Delivering high-performance block storage with Google Cloud Hyperdisk
- Speeding up ML training and inference with Intel AMX
C3 uses Titanium to enable higher levels of networking performance, isolation, and security. The C3 machine series supports a default network bandwidth of up to 100 Gbps and up to 200 Gbps with per VM Tier_1 networking performance. Titanium has been designed from the ground up to enable updates that don't impact running workloads.
The C3 machine series provides some of the largest general-purpose machine types, letting you create VM instances with up to 176 vCPUs and 1.4 TB of memory.
C3 has bare metal machine types, which let you access all the raw compute resources of the server. You can create bare metal instances with 192 vCPUs and up to 1,536 GB of memory. Bare metal instances also provide access to several onboard, function-specific accelerators and offloads:
- Intel QAT
- Intel DLB
- Intel DSA
- Intel IAA
If your organization uses a Shielded VM policy, then you must create a custom org policy that excludes bare metal shapes before you can create a bare metal instance.
In summary, the C3 machine series:
- Is powered by Intel 4th Generation Xeon processors and Titanium.
- Supports up to 176 vCPUs and 1.4 TB of DDR5 memory for VMs.
- Supports up to 192 vCPUs and 1,536 GB of memory for bare metal instances.
- Supports standard network configuration with up to 100 Gbps bandwidth and Tier_1 networking with up to 200 Gbps bandwidth.
- Supports Intel Advanced Matrix Extensions (AMX), a built-in accelerator that significantly improves the performance of deep-learning training and inference on the CPU.
- Supports the following discount and consumption options:
- Supports Confidential VM with Intel TDX.
- Doesn't offer sustained use discounts (SUDs).
- C3 bare metal instances don't support the following:
Caution: When you purchase resource-based commitments for C3 and C3D resources, the machine family that is specified by the commitment type changes depending on the interface:
Make sure to select the correct commitment type value that corresponds to the interface that you're using. For more information, see the resource-based CUDs documentation.
C3 machine types
C3 VMs are available in predefined machine types with sizes ranging from 4 to176 vCPUs and up to 1,408 GB of memory.
To use Local SSD with C3, create your VM using the -lssd variant of the C3 machine types. Selecting this machine type creates a VM of the specified size with Local SSD partitions attached. You must use a c3-standard-*-lssd machine type to use Local SSD with your VM; you can't attach Local SSD volumes separately.
To create a bare metal instance with C3, use one of the following machine types:
- c3-standard-192-metal
- c3-highcpu-192-metal
- c3-highmem-192-metal
C3 standard
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
| c3-standard-4 | 4 | 16 | Up to 23 | N/A |
| c3-standard-8 | 8 | 32 | Up to 23 | N/A |
| c3-standard-22 | 22 | 88 | Up to 23 | N/A |
| c3-standard-44 | 44 | 176 | Up to 32 | Up to 50 |
| c3-standard-88 | 88 | 352 | Up to 62 | Up to 100 |
| c3-standard-176 | 176 | 704 | Up to 100 | Up to 200 |
| c3-standard-192-metal | 192† | 768 | Up to 100 | Up to 200 |
1 A vCPU represents a single hardware thread, or logical core.
2 For bare metal instances, the number of vCPUs is equivalent to the number of hardware threads on the host server.
3 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
C3 highcpu
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
| c3-highcpu-4 | 4 | 8 | Up to 23 | N/A |
| c3-highcpu-8 | 8 | 16 | Up to 23 | N/A |
| c3-highcpu-22 | 22 | 44 | Up to 23 | N/A |
| c3-highcpu-44 | 44 | 88 | Up to 32 | Up to 50 |
| c3-highcpu-88 | 88 | 176 | Up to 62 | Up to 100 |
| c3-highcpu-176 | 176 | 352 | Up to 100 | Up to 200 |
| c3-highcpu-192-metal | 192† | 512 | Up to 100 | Up to 200 |
1 A vCPU represents a single hardware thread, or logical core.
2 For bare metal instances, the number of vCPUs is equivalent to the number of hardware threads on the host server.
3 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
C3 highmem
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
| c3-highmem-4 | 4 | 32 | Up to 23 | N/A |
| c3-highmem-8 | 8 | 64 | Up to 23 | N/A |
| c3-highmem-22 | 22 | 176 | Up to 23 | N/A |
| c3-highmem-44 | 44 | 352 | Up to 32 | Up to 50 |
| c3-highmem-88 | 88 | 704 | Up to 62 | Up to 100 |
| c3-highmem-176 | 176 | 1,408 | Up to 100 | Up to 200 |
| c3-highmem-192-metal | 192† | 1,536 | Up to 100 | Up to 200 |
1 A vCPU represents a single hardware thread, or logical core.
2 For bare metal instances, the number of vCPUs is equivalent to the number of hardware threads on the host server.
3 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
C3 with Local SSD
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
| c3-standard-4-lssd | 4 | 16 | Up to 23 | N/A |
| c3-standard-8-lssd | 8 | 32 | Up to 23 | N/A |
| c3-standard-22-lssd | 22 | 88 | Up to 23 | N/A |
| c3-standard-44-lssd | 44 | 176 | Up to 32 | Up to 50 |
| c3-standard-88-lssd | 88 | 352 | Up to 62 | Up to 100 |
| c3-standard-176-lssd | 176 | 704 | Up to 100 | Up to 200 |
1 A vCPU represents a single hardware thread, or logical core.
2 For bare metal instances, the number of vCPUs is equivalent to the number of hardware threads on the host server.
3 Default egress bandwidth can't exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
C3 doesn't support custom machine types.
C3 regional availability for bare metal instances
For C3 VMs, you can view the available regions and zones in the Available regions and zones table.
- Select C3 in the Select a machine type drop-down menu to see all the zones where you can create a C3 VM.
- You can also use the Select a location drop-down menu to limit the results to a geographical area.
C3 bare metal instances are available in the following regions and zones:
In the following table, ✓ indicates that the machine type is available in the zone, and — indicates that it isn't.
| Zone | High-CPU | Standard | High-mem |
|---|---|---|---|
| asia-southeast1-a | ✓ | ✓ | ✓ |
| asia-southeast1-c | — | — | ✓ |
| europe-west1-b | ✓ | ✓ | ✓ |
| europe-west1-c | ✓ | ✓ | ✓ |
| europe-west4-b | — | ✓ | ✓ |
| europe-west4-c | ✓ | ✓ | ✓ |
| us-central1-a | — | ✓ | ✓ |
| us-central1-c | — | — | ✓ |
| us-east1-c | ✓ | ✓ | ✓ |
| us-east1-d | ✓ | ✓ | ✓ |
| us-east4-a | ✓ | ✓ | ✓ |
| us-east4-c | — | ✓ | ✓ |
| us-east5-a | ✓ | ✓ | ✓ |
| us-east5-b | ✓ | ✓ | ✓ |
| us-west1-a | ✓ | ✓ | ✓ |
| us-west1-b | ✓ | ✓ | ✓ |
Supported disk types for C3
C3 VMs support only the NVMe disk interface and can use the following block storage types:
VM instances
- Zonal balanced Persistent Disk (pd-balanced)
- Zonal SSD (performance) Persistent Disk (pd-ssd)
- Hyperdisk Extreme (hyperdisk-extreme), which requires at least 64 vCPUs
- Hyperdisk ML (hyperdisk-ml)
- Hyperdisk Throughput (hyperdisk-throughput)
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
- Local SSD (only available with -lssd machine types)
Bare metal instances
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Extreme (hyperdisk-extreme)
A set number of Local SSD disks is added to the C3 VM when you use the -lssd machine type. This is the only way to include Local SSD storage with a C3 VM. You can't use Local SSD disks with bare metal instances.
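Since each -lssd machine type comes with a fixed number of 375 GiB Local SSD partitions, total raw capacity follows directly from the machine type size. An illustrative sketch using the disk counts from the C3 Local SSD limits table in this section; the helper is hypothetical, not a Google Cloud API:

```python
# Local SSD disks attached for each c3-standard-N-lssd size, keyed by
# vCPU count (from the "C3 with Local SSD" limits table). Each Local SSD
# disk is 375 GiB; the docs round the totals to TiB.
C3_LSSD_DISKS = {4: 1, 8: 2, 22: 4, 44: 8, 88: 16, 176: 32}

def c3_lssd_capacity_gib(vcpus: int) -> int:
    return C3_LSSD_DISKS[vcpus] * 375

print(c3_lssd_capacity_gib(88))  # 16 disks -> 6000 GiB (listed as 6 TiB)
```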
Disk and capacity limits
For instances running Microsoft Windows and using the NVMe disk interface, the combined number of both Hyperdisk and Persistent Disk attached volumes can't exceed a total of 16 disks. See Known issues. Local SSD volumes are excluded from this issue.
C3 storage limits are described in the following table:
C3 standard
Maximum number of disks:
| Machine types | Per instance | Hyperdisk per instance | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|---|---|
| c3-standard-4 | 128 | 24 | 16 | 24 | 24 | 0 | Not supported |
| c3-standard-8 | 128 | 32 | 16 | 32 | 32 | 0 | Not supported |
| c3-standard-22 | 128 | 48 | 32 | 48 | 48 | 0 | Not supported |
| c3-standard-44 | 128 | 64 | 32 | 64 | 64 | 0 | Not supported |
| c3-standard-88 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3-standard-176 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3-standard-192-metal | 16 (Hyperdisk only) | 16 | 16 | Not supported | Not supported | 16 | Not supported |
C3 highcpu
Maximum number of disks:
| Machine types | Per instance | Hyperdisk per instance | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|---|---|
| c3-highcpu-4 | 128 | 24 | 16 | 24 | 24 | 0 | Not supported |
| c3-highcpu-8 | 128 | 32 | 16 | 32 | 32 | 0 | Not supported |
| c3-highcpu-22 | 128 | 48 | 32 | 48 | 48 | 0 | Not supported |
| c3-highcpu-44 | 128 | 64 | 32 | 64 | 64 | 0 | Not supported |
| c3-highcpu-88 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3-highcpu-176 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3-highcpu-192-metal | 16 (Hyperdisk only) | 16 | 16 | Not supported | Not supported | 16 | Not supported |
C3 highmem
Maximum number of disks:
| Machine types | Per instance | Hyperdisk per instance | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|---|---|
| c3-highmem-4 | 128 | 24 | 16 | 24 | 24 | 0 | Not supported |
| c3-highmem-8 | 128 | 32 | 16 | 32 | 32 | 0 | Not supported |
| c3-highmem-22 | 128 | 48 | 32 | 48 | 48 | 0 | Not supported |
| c3-highmem-44 | 128 | 64 | 32 | 64 | 64 | 0 | Not supported |
| c3-highmem-88 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3-highmem-176 | 128 | 64 | 32 | 64 | 64 | 8 | Not supported |
| c3-highmem-192-metal | 16 (Hyperdisk only) | 16 | 16 | Not supported | Not supported | 16 | Not supported |
C3 with Local SSD
Maximum number of disks:
| Machine types | Per VM | Hyperdisk per VM | Hyperdisk Balanced | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Extreme | Local SSD |
|---|---|---|---|---|---|---|---|
| c3-standard-4-lssd | 128 | 24 | 16 | 24 | 24 | 0 | 1 (375 GiB) |
| c3-standard-8-lssd | 128 | 32 | 16 | 32 | 32 | 0 | 2 (750 GiB) |
| c3-standard-22-lssd | 128 | 48 | 32 | 48 | 48 | 0 | 4 (1.5 TiB) |
| c3-standard-44-lssd | 128 | 64 | 32 | 64 | 64 | 0 | 8 (3 TiB) |
| c3-standard-88-lssd | 128 | 64 | 32 | 64 | 64 | 8 | 16 (6 TiB) |
| c3-standard-176-lssd | 128 | 64 | 32 | 64 | 64 | 8 | 32 (12 TiB) |
Network support for C3 VMs
The following network interface drivers are required:
- C3 instances require gVNIC network interfaces.
- C3 bare metal instances require the Intel IDPF LAN PF device driver.
C3 supports up to 100 Gbps network bandwidth for standard networking and up to 200 Gbps with per VM Tier_1 networking performance for VM and bare metal instances.
Before migrating to C3 or creating C3 VMs or bare metal instances, make sure that the operating system image that you use supports the IDPF network driver for bare metal instances or the gVNIC driver for VM instances. To get the best possible performance on C3 VMs, choose an OS image that supports both "Tier_1 Networking" and "200 Gbps network bandwidth". These images include an updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0. If your C3 VM uses an operating system with an older version of the gVNIC driver, that configuration is still supported, but the VM might experience suboptimal performance, such as lower network bandwidth or higher latency.
If you use a custom OS image to create a C3 VM, you can manually install the most recent gVNIC driver. gVNIC driver version v1.4.2 or later is recommended for use with C3 VMs. Google recommends using the latest gVNIC driver version to benefit from additional features and bug fixes.
Maintenance experience for C3 instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure, or in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The C3 machine series offers the following features related to host maintenance:
| Machine type | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance |
|---|---|---|---|---|
| C3 with Confidential VM | Minimum of 30 days | Terminate | 7 days | No |
| c3-*-lssd | Minimum of 30 days | Live migrate | 7 days | Yes |
| c3-*-176 | Minimum of 30 days | Live migrate | 7 days | Yes |
| c3-*-192-metal | Minimum of 30 days | Terminate | 7 days | Yes |
| All others | Minimum of 30 days | Live migrate | 7 days | No |
The maintenance frequencies shown in the previous table are approximations, not guarantees. Compute Engine might occasionally perform maintenance more frequently.
N2D machine series
The N2D machine series can run on either AMD EPYC Milan or AMD EPYC Rome processors. The third generation AMD EPYC Milan processor is available only in specific regions and zones. To use AMD Milan as your minimum CPU platform, request it when you create your VM instance.
The N2D series provides some of the largest general-purpose machine types, with up to 224 vCPUs and 896 GB of memory, and vCPU to memory ratios of 1:1, 1:4, and 1:8. The AMD EPYC Rome processors in this series run with a base frequency of 2.25 GHz, an effective frequency of 2.7 GHz, and a max boost frequency of 3.3 GHz.
In summary, the N2D series:
- Supports up to 224 vCPUs and 896 GB of memory.
- Supports 50 Gbps and 100 Gbps high-bandwidth network configurations.
- Is available in predefined and custom VMs.
- Offers higher memory-to-core ratios for VMs created with the extended memory feature. Using the extended memory feature helps you avoid per-CPU software licensing costs while providing access to more than 8 GB of memory per vCPU.
- Is powered by third generation AMD EPYC Milan and second generation AMD EPYC Rome processors.
- Supports the following discount and consumption options:
- Doesn't support GPUs or nested virtualization.
- Supports Confidential VM with AMD SEV and AMD SEV-SNP.
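The extended memory feature mentioned in the summary applies when a custom machine type needs more memory than the standard per-vCPU ceiling. As a sketch (the 8 GB-per-vCPU threshold comes from the summary above; the helper name is illustrative):

```python
def needs_extended_memory(vcpus: int, memory_gb: float,
                          max_gb_per_vcpu: float = 8.0) -> bool:
    """Return True when a custom N2D configuration requests more memory
    than the standard 8 GB-per-vCPU ceiling, in which case the extended
    memory feature would be required."""
    return memory_gb > vcpus * max_gb_per_vcpu
```

For example, a 4-vCPU custom type with 32 GB fits the standard ratios, while 4 vCPUs with 40 GB would need extended memory.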
N2D machine types
The following table lists the features of the N2D machine series. For somemachine types, certain features are not applicable (N/A).
The amount of memory configured per vCPU differs depending on the machine type:
- standard: 4 GB of system memory per vCPU
- highmem: 8 GB of system memory per vCPU
- highcpu: 1 GB of system memory per vCPU
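Because every predefined N2D machine type encodes its family and vCPU count in its name, total memory follows directly from the ratios above. A small helper (names are illustrative) that reproduces the tables below:

```python
# GB of memory per vCPU for each N2D machine type family, per the list above.
N2D_GB_PER_VCPU = {"standard": 4, "highmem": 8, "highcpu": 1}

def n2d_memory_gb(machine_type: str) -> int:
    """Total memory of a predefined N2D machine type,
    e.g. 'n2d-standard-48' -> 48 vCPUs * 4 GB = 192 GB."""
    series, family, vcpus = machine_type.split("-")
    if series != "n2d":
        raise ValueError("expected an n2d machine type")
    return int(vcpus) * N2D_GB_PER_VCPU[family]
```

For example, `n2d_highmem-96` resolves to 96 × 8 = 768 GB, matching the N2D high-mem table.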
N2D standard
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
n2d-standard-2 | 2 | 8 | Up to 10 | N/A |
n2d-standard-4 | 4 | 16 | Up to 10 | N/A |
n2d-standard-8 | 8 | 32 | Up to 16 | N/A |
n2d-standard-16 | 16 | 64 | Up to 32 | N/A |
n2d-standard-32 | 32 | 128 | Up to 32 | N/A |
n2d-standard-48 | 48 | 192 | Up to 32 | Up to 50 |
n2d-standard-64 | 64 | 256 | Up to 32 | Up to 50 |
n2d-standard-80 | 80 | 320 | Up to 32 | Up to 50 |
n2d-standard-96 | 96 | 384 | Up to 32 | Up to 100 |
n2d-standard-128 | 128 | 512 | Up to 32 | Up to 100 |
n2d-standard-224 | 224 | 896 | Up to 32 | Up to 100 |
1 A vCPU is implemented as a single hardware thread, or logical core, on one of the available CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types. For Windows OS images, the maximum network bandwidth is limited to 50 Gbps.
N2D high-mem
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
n2d-highmem-2 | 2 | 16 | Up to 10 | N/A |
n2d-highmem-4 | 4 | 32 | Up to 10 | N/A |
n2d-highmem-8 | 8 | 64 | Up to 16 | N/A |
n2d-highmem-16 | 16 | 128 | Up to 32 | N/A |
n2d-highmem-32 | 32 | 256 | Up to 32 | N/A |
n2d-highmem-48 | 48 | 384 | Up to 32 | Up to 50 |
n2d-highmem-64 | 64 | 512 | Up to 32 | Up to 50 |
n2d-highmem-80 | 80 | 640 | Up to 32 | Up to 50 |
n2d-highmem-96 | 96 | 768 | Up to 32 | Up to 100 |
1 A vCPU is implemented as a single hardware thread, or logical core, on one of the available CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types. For Windows OS images, the maximum network bandwidth is limited to 50 Gbps.
N2D high-cpu
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
n2d-highcpu-2 | 2 | 2 | Up to 10 | N/A |
n2d-highcpu-4 | 4 | 4 | Up to 10 | N/A |
n2d-highcpu-8 | 8 | 8 | Up to 16 | N/A |
n2d-highcpu-16 | 16 | 16 | Up to 32 | N/A |
n2d-highcpu-32 | 32 | 32 | Up to 32 | N/A |
n2d-highcpu-48 | 48 | 48 | Up to 32 | Up to 50 |
n2d-highcpu-64 | 64 | 64 | Up to 32 | Up to 50 |
n2d-highcpu-80 | 80 | 80 | Up to 32 | Up to 50 |
n2d-highcpu-96 | 96 | 96 | Up to 32 | Up to 100 |
n2d-highcpu-128 | 128 | 128 | Up to 32 | Up to 100 |
n2d-highcpu-224 | 224 | 224 | Up to 32 | Up to 100 |
1 A vCPU is implemented as a single hardware thread, or logical core, on one of the available CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types. For Windows OS images, the maximum network bandwidth is limited to 50 Gbps.
For pricing information, see the following:
- For machine type pricing, see the VM pricing page.
- Disk usage and network usage are charged separately from machine type pricing. For details, see Disk and image pricing and Network pricing.
- For per VM Tier_1 network performance billing rates, see Tier_1 higher bandwidth network pricing.
Supported disk types for N2D
N2D VMs can use the following block storage types:
- Zonal and regional standard Persistent Disk (pd-standard)
- Zonal and regional balanced Persistent Disk (pd-balanced)
- Zonal and regional SSD Persistent Disk (pd-ssd)
- Hyperdisk Throughput (hyperdisk-throughput)
- Local SSD
N2D standard
| Machine types | Max number of disks per VM, across all disks1 | Max number of Hyperdisk volumes per VM2 | Max total disk size (TiB) across all disks3 | Local SSD |
|---|---|---|---|---|
n2d-standard-2 | 128 | 20 | 257 | Yes |
n2d-standard-4 | 128 | 24 | 257 | Yes |
n2d-standard-8 | 128 | 32 | 257 | Yes |
n2d-standard-16 | 128 | 48 | 257 | Yes |
n2d-standard-32 | 128 | 64 | 512 | Yes |
n2d-standard-48 | 128 | 64 | 512 | Yes |
n2d-standard-64 | 128 | 64 | 512 | Yes |
n2d-standard-80 | 128 | 64 | 512 | Yes |
n2d-standard-96 | 128 | 64 | 512 | Yes |
n2d-standard-128 | 128 | 64 | 512 | Yes |
n2d-standard-224 | 128 | 64 | 512 | Yes |
1 The maximum size per Persistent Disk volume is 64 TiB.
2 The maximum size per Hyperdisk Throughput volume is 32 TiB.
3 The maximum total disk size applies to all Persistent Disk and Hyperdisk disk types attached to the VM.
N2D high-mem
| Machine types | Max number of disks per VM, across all disks1 | Max number of Hyperdisk volumes per VM2 | Max total disk size (TiB) across all disks3 | Local SSD |
|---|---|---|---|---|
n2d-highmem-2 | 128 | 20 | 257 | Yes |
n2d-highmem-4 | 128 | 24 | 257 | Yes |
n2d-highmem-8 | 128 | 32 | 257 | Yes |
n2d-highmem-16 | 128 | 48 | 257 | Yes |
n2d-highmem-32 | 128 | 64 | 512 | Yes |
n2d-highmem-48 | 128 | 64 | 512 | Yes |
n2d-highmem-64 | 128 | 64 | 512 | Yes |
n2d-highmem-80 | 128 | 64 | 512 | Yes |
n2d-highmem-96 | 128 | 64 | 512 | Yes |
1 The maximum size per Persistent Disk volume is 64 TiB.
2 The maximum size per Hyperdisk Throughput volume is 32 TiB.
3 The maximum total disk size applies to all Persistent Disk and Hyperdisk disk types attached to the VM.
N2D high-cpu
| Machine types | Max number of disks per VM1 | Max number of Hyperdisk volumes per VM2 | Max total disk size (TiB) across all disks3 | Local SSD |
|---|---|---|---|---|
n2d-highcpu-2 | 128 | 20 | 257 | Yes |
n2d-highcpu-4 | 128 | 24 | 257 | Yes |
n2d-highcpu-8 | 128 | 32 | 257 | Yes |
n2d-highcpu-16 | 128 | 48 | 257 | Yes |
n2d-highcpu-32 | 128 | 64 | 512 | Yes |
n2d-highcpu-48 | 128 | 64 | 512 | Yes |
n2d-highcpu-64 | 128 | 64 | 512 | Yes |
n2d-highcpu-80 | 128 | 64 | 512 | Yes |
n2d-highcpu-96 | 128 | 64 | 512 | Yes |
n2d-highcpu-128 | 128 | 64 | 512 | Yes |
n2d-highcpu-224 | 128 | 64 | 512 | Yes |
1 The maximum size per Persistent Disk volume is 64 TiB.
2 The maximum size per Hyperdisk Throughput volume is 32 TiB.
3 The maximum total disk size applies to all Persistent Disk and Hyperdisk disk types attached to the VM.
N2 machine series
The N2 machine series has flexible sizing from 2 to 128 vCPUs and 0.5 to 8 GB of memory per vCPU. Machine types in this series run on the following processors:
- Ice Lake: offered in specific regions and zones. It is the default processor for larger machine types.
- Cascade Lake: the default for machine types with up to 80 vCPUs. To create VMs with Ice Lake, you must set it as the minimum CPU platform.
You can find more details about these two processors on the CPU platforms page.
Workloads that can take advantage of the higher clock frequency are a good choice for this series. These workloads can get higher per-thread performance while benefiting from all the flexibility that the general-purpose machine family offers.
In summary, the N2 machine series:
- Supports up to 128 vCPUs and 864 GB of memory.
- Supports 50 Gbps, 75 Gbps, and 100 Gbps high-bandwidth network configurations.
- Is available in predefined and custom VMs.
- Has higher memory-to-core ratios for VMs created with the extended memory feature. Using the extended memory feature helps control per-CPU software licensing costs while providing access to more than 8 GB of memory per vCPU.
- Supports the following discount and consumption options:
N2 machine types
The amount of memory configured per vCPU differs depending on the machine type:
- standard: 4 GB of system memory per vCPU
- highmem: 8 GB of system memory per vCPU
- highcpu: 1 GB of system memory per vCPU
N2 standard
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
n2-standard-2 | 2 | 8 | Up to 10 | N/A |
n2-standard-4 | 4 | 16 | Up to 10 | N/A |
n2-standard-8 | 8 | 32 | Up to 16 | N/A |
n2-standard-16 | 16 | 64 | Up to 32 | N/A |
n2-standard-32 | 32 | 128 | Up to 32 | Up to 50 |
n2-standard-48 | 48 | 192 | Up to 32 | Up to 50 |
n2-standard-64 | 64 | 256 | Up to 32 | Up to 75 |
n2-standard-80 | 80 | 320 | Up to 32 | Up to 100 |
n2-standard-96 | 96 | 384 | Up to 32 | Up to 100 |
n2-standard-128 | 128 | 512 | Up to 32 | Up to 100 |
1 A vCPU is implemented as a single hardware thread, or logical core, on one of the available CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types. For Windows OS images, the maximum network bandwidth is limited to 50 Gbps.
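The Tier_1 egress column above follows a simple step function of vCPU count, with Windows images capped at 50 Gbps (footnote 3). A sketch that mirrors the N2 standard table (the function name is illustrative):

```python
def n2_tier1_bandwidth_gbps(vcpus: int, windows: bool = False):
    """Maximum per-VM Tier_1 egress bandwidth for predefined N2 machine
    types, mirroring the table above. Returns None where Tier_1
    networking is not offered (fewer than 32 vCPUs)."""
    if vcpus < 32:
        return None
    tier1 = 50 if vcpus < 64 else 75 if vcpus < 80 else 100
    # Windows OS images are limited to 50 Gbps regardless of machine size.
    return min(tier1, 50) if windows else tier1
```

For example, an n2-standard-64 supports up to 75 Gbps with Tier_1 networking, but only 50 Gbps when running a Windows image.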
N2 high-mem
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
n2-highmem-2 | 2 | 16 | Up to 10 | N/A |
n2-highmem-4 | 4 | 32 | Up to 10 | N/A |
n2-highmem-8 | 8 | 64 | Up to 16 | N/A |
n2-highmem-16 | 16 | 128 | Up to 32 | N/A |
n2-highmem-32 | 32 | 256 | Up to 32 | Up to 50 |
n2-highmem-48 | 48 | 384 | Up to 32 | Up to 50 |
n2-highmem-64 | 64 | 512 | Up to 32 | Up to 75 |
n2-highmem-80 | 80 | 640 | Up to 32 | Up to 100 |
n2-highmem-96 | 96 | 768 | Up to 32 | Up to 100 |
n2-highmem-128 | 128 | 864 | Up to 32 | Up to 100 |
1 A vCPU is implemented as a single hardware thread, or logical core, on one of the available CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types. For Windows OS images, the maximum network bandwidth is limited to 50 Gbps.
N2 high-cpu
| Machine types | vCPUs1 | Memory (GB) | Default egress bandwidth (Gbps)2 | Tier_1 egress bandwidth (Gbps)3 |
|---|---|---|---|---|
n2-highcpu-2 | 2 | 2 | Up to 10 | N/A |
n2-highcpu-4 | 4 | 4 | Up to 10 | N/A |
n2-highcpu-8 | 8 | 8 | Up to 16 | N/A |
n2-highcpu-16 | 16 | 16 | Up to 32 | N/A |
n2-highcpu-32 | 32 | 32 | Up to 32 | Up to 50 |
n2-highcpu-48 | 48 | 48 | Up to 32 | Up to 50 |
n2-highcpu-64 | 64 | 64 | Up to 32 | Up to 75 |
n2-highcpu-80 | 80 | 80 | Up to 32 | Up to 100 |
n2-highcpu-96 | 96 | 96 | Up to 32 | Up to 100 |
1 A vCPU is implemented as a single hardware thread, or logical core, on one of the available CPU platforms.
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
3 Supports high-bandwidth networking for larger machine types. For Windows OS images, the maximum network bandwidth is limited to 50 Gbps.
For pricing information, see the following:
- For machine type pricing, see the VM pricing page.
- Disk usage and network usage are charged separately from machine type pricing. For details, see Disk and image pricing and Network pricing.
- For per VM Tier_1 network performance billing rates, see Tier_1 higher bandwidth network pricing.
Supported disk types for N2
N2 VMs can use the following block storage types:
- Zonal and regional standard Persistent Disk (pd-standard)
- Zonal and regional balanced Persistent Disk (pd-balanced)
- Zonal and regional SSD Persistent Disk (pd-ssd)
- Extreme Persistent Disk (pd-extreme)
- Hyperdisk Extreme (hyperdisk-extreme). Not supported with custom N2 machine types.
- Hyperdisk Throughput (hyperdisk-throughput)
- Local SSD
N2 standard
| Machine types | Max number of disks per VM, across all disks* | Max number of Hyperdisk Extreme volumes per VM† | Max number of Hyperdisk Throughput volumes per VM† | Max total disk size (TiB) across all disks‡ | Local SSD |
|---|---|---|---|---|---|
n2-standard-2 | 128 | 0 | 20 | 257 | Yes |
n2-standard-4 | 128 | 0 | 24 | 257 | Yes |
n2-standard-8 | 128 | 0 | 32 | 257 | Yes |
n2-standard-16 | 128 | 0 | 48 | 257 | Yes |
n2-standard-32 | 128 | 0 | 64 | 512 | Yes |
n2-standard-48 | 128 | 0 | 64 | 512 | Yes |
n2-standard-64 | 128 | 0 | 64 | 512 | Yes |
n2-standard-80 | 128 | 8 | 64 | 512 | Yes |
n2-standard-96 | 128 | 8 | 64 | 512 | Yes |
n2-standard-128 | 128 | 8 | 64 | 512 | Yes |
* The maximum size per Persistent Disk volume is 64 TiB.
† The maximum size per Hyperdisk Extreme volume is 64 TiB. The maximum size per Hyperdisk Throughput volume is 32 TiB.
‡ You can attach a mixture of Hyperdisk and Persistent Disk volumes to a VM, but the total Persistent Disk capacity can't exceed 257 TiB.
N2 high-mem
| Machine types | Max number of disks per VM, across all disks* | Max number of Hyperdisk Extreme volumes per VM† | Max number of Hyperdisk Throughput volumes per VM† | Max total disk size (TiB) across all disks‡ | Local SSD |
|---|---|---|---|---|---|
n2-highmem-2 | 128 | 0 | 20 | 257 | Yes |
n2-highmem-4 | 128 | 0 | 24 | 257 | Yes |
n2-highmem-8 | 128 | 0 | 32 | 257 | Yes |
n2-highmem-16 | 128 | 0 | 48 | 257 | Yes |
n2-highmem-32 | 128 | 0 | 64 | 512 | Yes |
n2-highmem-48 | 128 | 0 | 64 | 512 | Yes |
n2-highmem-64 | 128 | 0 | 64 | 512 | Yes |
n2-highmem-80 | 128 | 8 | 64 | 512 | Yes |
n2-highmem-96 | 128 | 8 | 64 | 512 | Yes |
n2-highmem-128 | 128 | 8 | 64 | 512 | Yes |
* The maximum size per Persistent Disk volume is 64 TiB.
† The maximum size per Hyperdisk Extreme volume is 64 TiB. The maximum size per Hyperdisk Throughput volume is 32 TiB.
‡ You can attach a mixture of Hyperdisk and Persistent Disk volumes to a VM, but the total Persistent Disk capacity can't exceed 257 TiB.
N2 high-cpu
| Machine types | Max number of disks per VM, across all disks* | Max number of Hyperdisk Extreme volumes per VM† | Max number of Hyperdisk Throughput volumes per VM† | Max total disk size (TiB) across all disks‡ | Local SSD |
|---|---|---|---|---|---|
n2-highcpu-2 | 128 | 0 | 20 | 257 | Yes |
n2-highcpu-4 | 128 | 0 | 24 | 257 | Yes |
n2-highcpu-8 | 128 | 0 | 32 | 257 | Yes |
n2-highcpu-16 | 128 | 0 | 48 | 257 | Yes |
n2-highcpu-32 | 128 | 0 | 64 | 512 | Yes |
n2-highcpu-48 | 128 | 0 | 64 | 512 | Yes |
n2-highcpu-64 | 128 | 0 | 64 | 512 | Yes |
n2-highcpu-80 | 128 | 8 | 64 | 512 | Yes |
n2-highcpu-96 | 128 | 8 | 64 | 512 | Yes |
* The maximum size per Persistent Disk volume is 64 TiB.
† The maximum size per Hyperdisk Extreme volume is 64 TiB. The maximum size per Hyperdisk Throughput volume is 32 TiB.
‡ You can attach a mixture of Hyperdisk and Persistent Disk volumes to a VM, but the total Persistent Disk capacity can't exceed 257 TiB.
E2 machine series
The cost-optimized E2 machine series has between 2 and 32 vCPUs with a ratio of 0.5 GB to 8 GB of memory per vCPU for standard VMs, and 0.25 to 1 vCPUs with 0.5 GB to 8 GB of memory for shared-core E2 machine types. The E2 machine series offers both Intel and AMD EPYC processors. The processor is selected for you at the time of VM creation. Machine types in this series are available in all regions and zones and support a virtio memory balloon device.
In summary, the E2 machine series:
- Supports up to 32 vCPUs and 128 GB of memory.
- Supports Intel and AMD EPYC Rome and Milan processors.
- Is available in predefined and custom VMs.
- Offers the lowest on-demand pricing across the general-purpose machine types.
- Supports the following discount and consumption options:
- Doesn't offer sustained use discounts (SUDs); however, it provides consistently low on-demand and committed-use pricing.
- Doesn't support GPUs, Local SSDs, sole-tenant nodes, or nested virtualization.
Shared-core VMs
E2 shared-core machine types are cost-effective, have a virtio memory balloon device, and are ideal for small workloads. The E2 machine series shared-core machine types use context-switching for multitasking, and time-share a single physical core for a specific fraction of time. Different shared-core machine types sustain different amounts of time on a physical core.
- e2-micro sustains 2 vCPUs, each at 12.5% of CPU time, totaling 25% CPU time.
- e2-small sustains 2 vCPUs, each at 25% of CPU time, totaling 50% CPU time.
- e2-medium sustains 2 vCPUs, each at 50% of CPU time, totaling 100% CPU time.
Unlike predefined machine types and custom machine types, shared-core machinetypes have a predefined price that includes both vCPUs and memory. For moreinformation, seeVM instance pricing.
CPU bursting
Shared-core machine types offer bursting capabilities that let instances use additional physical CPU for short periods of time. Bursting happens automatically when your VM requires more physical CPU than originally allocated. During these spikes, each vCPU can burst up to 100% of CPU time for short periods before returning to its normal CPU time-sharing limitations. Bursts are not permanent and are only possible periodically.
e2-micro, e2-small, and e2-medium shared-core VMs can burst for dozens of seconds. If the CPU is utilized at 100%, then the burst lasts as follows:
- e2-micro: 30 seconds
- e2-small: 60 seconds
- e2-medium: 120 seconds
The exact burst time is determined by a token bucket, which means that utilizing the CPU at less than 100% results in longer bursts.
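One way to picture this is a token-bucket sketch: tokens refill at the vCPU's baseline share and drain at its actual utilization, and the burst durations above pin down the bucket size. This model is illustrative only; Compute Engine's actual accounting isn't published.

```python
# Illustrative token-bucket model of E2 shared-core bursting. Bucket
# capacities are back-solved from the documented 100%-utilization burst
# durations; treat this as a sketch, not Compute Engine's real algorithm.
BASELINE = {"e2-micro": 0.125, "e2-small": 0.25, "e2-medium": 0.5}
BURST_AT_FULL_LOAD = {"e2-micro": 30, "e2-small": 60, "e2-medium": 120}

def burst_seconds(machine_type: str, utilization: float) -> float:
    """Seconds a vCPU can run above its baseline share at the given
    utilization (0 < utilization <= 1.0). Tokens refill at the baseline
    rate and drain at the utilization rate."""
    base = BASELINE[machine_type]
    capacity = BURST_AT_FULL_LOAD[machine_type] * (1.0 - base)
    if utilization <= base:
        return float("inf")  # at or below baseline, the bucket never drains
    return capacity / (utilization - base)
```

At 100% utilization the model reproduces the 30/60/120-second figures above; at 50% utilization, for example, an e2-micro vCPU bursts for 70 seconds in this model, consistent with lower utilization yielding longer bursts.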
Bursting doesn't incur any additional charges. You are charged the listed on-demand price for E2 shared-core VMs and for N1 f1-micro and g1-small shared-core VMs.
E2 Limitations
- The E2 machine series doesn't offer sustained use discounts (SUDs); however, it provides consistently low on-demand and committed-use pricing.
- The E2 machine series doesn't support GPUs, Local SSDs, sole-tenant nodes,or nested virtualization.
E2 machine types
E2 is available in standard, highmem, and highcpu configurations, as well as shared-core machine types. In general, E2 shared-core machine types can be more cost-effective for running small, non-resource-intensive applications than standard, high-memory, or high-CPU machine types.
The amount of memory configured per vCPU differs depending on the machine type:
- standard: 4 GB of system memory per vCPU
- highmem: 8 GB of system memory per vCPU
- highcpu: 1 GB of system memory per vCPU
- Shared-core:
  - micro: 0.5 GB of system memory per vCPU
  - small: 1 GB of system memory per vCPU
  - medium: 2 GB of system memory per vCPU
E2 standard
| Machine types | vCPUs | Memory (GB) | Max number of Persistent Disk (PDs)1 | Max total PD size (TiB) | Local SSD | Maximum egress bandwidth (Gbps)2 |
|---|---|---|---|---|---|---|
e2-standard-2 | 2 | 8 | 128 | 257 | No | Up to 4 |
e2-standard-4 | 4 | 16 | 128 | 257 | No | Up to 8 |
e2-standard-8 | 8 | 32 | 128 | 257 | No | Up to 16 |
e2-standard-16 | 16 | 64 | 128 | 257 | No | Up to 16 |
e2-standard-32 | 32 | 128 | 128 | 257 | No | Up to 16 |
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
E2 high-mem
| Machine types | vCPUs | Memory (GB) | Max number of Persistent Disk (PDs)1 | Max total Persistent Disk size (TiB) | Local SSD | Maximum egress bandwidth (Gbps)2 |
|---|---|---|---|---|---|---|
e2-highmem-2 | 2 | 16 | 128 | 257 | No | Up to 4 |
e2-highmem-4 | 4 | 32 | 128 | 257 | No | Up to 8 |
e2-highmem-8 | 8 | 64 | 128 | 257 | No | Up to 16 |
e2-highmem-16 | 16 | 128 | 128 | 257 | No | Up to 16 |
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
E2 high-cpu
| Machine types | vCPUs | Memory (GB) | Max number of Persistent Disk (PDs)1 | Max total PD size (TiB) | Local SSD | Maximum egress bandwidth (Gbps)2 |
|---|---|---|---|---|---|---|
e2-highcpu-2 | 2 | 2 | 128 | 257 | No | Up to 4 |
e2-highcpu-4 | 4 | 4 | 128 | 257 | No | Up to 8 |
e2-highcpu-8 | 8 | 8 | 128 | 257 | No | Up to 16 |
e2-highcpu-16 | 16 | 16 | 128 | 257 | No | Up to 16 |
e2-highcpu-32 | 32 | 32 | 128 | 257 | No | Up to 16 |
2 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. SeeNetwork bandwidth.
E2 shared-core
| Machine types | vCPUs | Fractional vCPUs1 | Memory (GB) | Max number of Persistent Disk (PDs)2 | Max total PD size (TiB) | Local SSD | Maximum egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|---|---|
e2-micro | 2 | 0.25¹ | 1 | 16 | 3 | No | Up to 1 |
e2-small | 2 | 0.5¹ | 2 | 16 | 3 | No | Up to 1 |
e2-medium | 2 | 1¹ | 4 | 16 | 3 | No | Up to 2 |
2 Persistent Disk and Hyperdisk usage is charged separately from machine pricing.
3 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
Supported disk types for E2 VMs
E2 VMs can use the following block storage types:
- Zonal and regional balanced Persistent Disk (pd-balanced)
- Zonal and regional SSD Persistent Disk (pd-ssd)
- Zonal and regional standard Persistent Disk (pd-standard)
N1 machine series
The N1 machine series is Compute Engine's first generation general-purpose machine series, available on Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge CPU platforms.
In summary, the N1 machine series offers the following features:
- Supports up to 96 vCPUs and 624 GB of memory.
- Has both predefined machine types and custom machine types. Custom machine types can be created with a wide range of memory-to-core ratios, ranging from 1 GB per vCPU to 6.5 GB per vCPU.
- Offers higher memory-to-core ratios for VMs created with the extended memory feature.
- Supports the following discount and consumption options:
- Resource-based and flexible committed use discounts (CUDs)
- Sustained use discounts (SUDs); the N1 machine series offers a higher SUD percentage than the N2 machine series.
- Spot VMs
- Reservations
- Supports Tensor Processing Units (TPUs) in select zones.
- Can support up to ten virtual interfaces per instance.
N1 machine types
N1 is available in standard, highmem, and highcpu configurations, as well as shared-core machine types. Different shared-core machine types sustain different amounts of time on a physical core.
- An f1-micro VM instance sustains a single vCPU for up to 20% of CPU time.
- A g1-small VM instance sustains a single vCPU for up to 50% of CPU time.
The amount of memory configured per vCPU differs depending on the machine type:
- standard: 3.75 GB of system memory per vCPU
- highmem: 6.5 GB of system memory per vCPU
- highcpu: 0.9 GB of system memory per vCPU
- Shared-core:
  - f1-micro: 0.6 GB of system memory per vCPU
  - g1-small: 1.7 GB of system memory per vCPU
N1 standard
| Machine types | vCPUs1 | Memory (GB) | Max number of Persistent Disk (PDs)2 | Max total PD size (TiB) | Local SSD | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
n1-standard-1 | 1 | 3.75 | 128 | 257 | Yes | Up to 2 | N/A |
n1-standard-2 | 2 | 7.50 | 128 | 257 | Yes | Up to 10 | N/A |
n1-standard-4 | 4 | 15 | 128 | 257 | Yes | Up to 10 | N/A |
n1-standard-8 | 8 | 30 | 128 | 257 | Yes | Up to 16 | N/A |
n1-standard-16 | 16 | 60 | 128 | 257 | Yes | Up to 32⁴ | N/A |
n1-standard-32 | 32 | 120 | 128 | 257 | Yes | Up to 32⁴ | N/A |
n1-standard-64 | 64 | 240 | 128 | 257 | Yes | Up to 32⁴ | N/A |
n1-standard-96 | 96 | 360 | 128 | 257 | Yes | Up to 32⁴ | N/A |
1 A vCPU is implemented as a single hardware thread, or logical core, on one of the available CPU platforms.
2 Persistent Disk and Hyperdisk usage is charged separately from machine type pricing.
3 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 32 Gbps for Skylake or later CPU platforms. 16 Gbps for all other platforms.
N1 high-memory
| Machine types | vCPUs1 | Memory (GB) | Max number of Persistent Disk (PDs)2 | Max total PD size (TiB) | Local SSD | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
n1-highmem-2 | 2 | 13 | 128 | 257 | Yes | Up to 10 | N/A |
n1-highmem-4 | 4 | 26 | 128 | 257 | Yes | Up to 10 | N/A |
n1-highmem-8 | 8 | 52 | 128 | 257 | Yes | Up to 16 | N/A |
n1-highmem-16 | 16 | 104 | 128 | 257 | Yes | Up to 32⁴ | N/A |
n1-highmem-32 | 32 | 208 | 128 | 257 | Yes | Up to 32⁴ | N/A |
n1-highmem-64 | 64 | 416 | 128 | 257 | Yes | Up to 32⁴ | N/A |
n1-highmem-96 | 96 | 624 | 128 | 257 | Yes | Up to 32⁴ | N/A |
1 A vCPU is implemented as a single hardware thread, or logical core, on one of the available CPU platforms.
2 Persistent Disk and Hyperdisk usage is charged separately from machine type pricing.
3 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 32 Gbps for Skylake or later CPU platforms. 16 Gbps for all other platforms.
N1 high-cpu
| Machine types | vCPUs1 | Memory (GB) | Max number of Persistent Disk (PDs)2 | Max total PD size (TiB) | Local SSD | Default egress bandwidth (Gbps)3 | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
n1-highcpu-2 | 2 | 1.80 | 128 | 257 | Yes | Up to 10 | N/A |
n1-highcpu-4 | 4 | 3.60 | 128 | 257 | Yes | Up to 10 | N/A |
n1-highcpu-8 | 8 | 7.20 | 128 | 257 | Yes | Up to 16 | N/A |
n1-highcpu-16 | 16 | 14.4 | 128 | 257 | Yes | Up to 32⁴ | N/A |
n1-highcpu-32 | 32 | 28.8 | 128 | 257 | Yes | Up to 32⁴ | N/A |
n1-highcpu-64 | 64 | 57.6 | 128 | 257 | Yes | Up to 32⁴ | N/A |
n1-highcpu-96 | 96 | 86.4 | 128 | 257 | Yes | Up to 32⁴ | N/A |
1 A vCPU is implemented as a single hardware thread, or logical core, on one of the available CPU platforms.
2 Persistent Disk and Hyperdisk usage is charged separately from machine type pricing.
3 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
4 32 Gbps for Skylake or later CPU platforms. 16 Gbps for all other platforms.
N1 shared-core
| Machine types | vCPUs | Fractional vCPUs1 | Memory (GB) | Max number of Persistent Disk (PDs)2 | Max total PD size (TiB) | Local SSD | Maximum egress bandwidth (Gbps)3 |
|---|---|---|---|---|---|---|---|
f1-micro | 1 | 0.2¹ | 0.60 | 16 | 3 | No | Up to 1 |
g1-small | 1 | 0.5¹ | 1.70 | 16 | 3 | No | Up to 1 |
1 Fractional vCPU of 0.2 or 0.5, with 1 vCPU exposed to the guest operating system.
2 Persistent Disk and Hyperdisk usage is charged separately from VM pricing.
3 Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
Supported disk types for N1 VMs
N1 VMs can use the following block storage types:
- Zonal and regional balanced Persistent Disk (pd-balanced)
- Zonal and regional SSD Persistent Disk (pd-ssd)
- Zonal and regional standard Persistent Disk (pd-standard)
- Local SSD disks
Tau T2A machine series
The Tau T2A machine series runs on the Ampere Altra Arm processor with a base frequency of 3.0 GHz. Tau T2A offers predefined machine types with 1 to 48 vCPUs, supports 4 GB of memory per vCPU, and offers a maximum of 32 Gbps of outbound data transfer.
This series is available only in select regions and zones.
The Tau T2A machine series doesn't support simultaneous multithreading(SMT); each vCPU is equivalent to an entire core.
Tau T2A machine types
Tau T2A standard machine types have 4 GB of system memory per vCPU.
| Machine types | vCPUs* | Memory (GB) | Max number of Persistent Disk (PDs)† | Max total PD size (TiB) | Local SSD | Default egress bandwidth (Gbps)‡ | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
t2a-standard-1 | 1 | 4 | 128 | 257 | No | Up to 10 | N/A |
t2a-standard-2 | 2 | 8 | 128 | 257 | No | Up to 10 | N/A |
t2a-standard-4 | 4 | 16 | 128 | 257 | No | Up to 10 | N/A |
t2a-standard-8 | 8 | 32 | 128 | 257 | No | Up to 16 | N/A |
t2a-standard-16 | 16 | 64 | 128 | 257 | No | Up to 32 | N/A |
t2a-standard-32 | 32 | 128 | 128 | 257 | No | Up to 32 | N/A |
t2a-standard-48 | 48 | 192 | 128 | 257 | No | Up to 32 | N/A |
* SMT is not supported. Each vCPU is equivalent to an entire core. See CPU platforms.
‡ Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
Tau T2A Limitations
The Tau T2A machine series doesn't support:
- Custom machine types
- Sole tenant nodes
- Nested virtualization
- Extreme Persistent Disk
- Local SSD
- Regional Persistent Disk
- Virtio-SCSI Storage Controller and Virtio-Net Ethernet Adapter
- Windows Server or Windows Client OS
- 32-bit mode EL0 (guest userspace support)
- Committed use discounts (CUDs) or sustained use discounts (SUDs); however, it offers Spot VM discounts.
- Virtual display devices
T2A supports the Secure Boot feature, but not all public OS images for T2A support Secure Boot.
Supported disk types for T2A
T2A VMs support only the NVMe disk interface and can use the following block storage types:
- Zonal standard Persistent Disk (pd-standard)
- Zonal balanced Persistent Disk (pd-balanced)
- Zonal SSD (performance) Persistent Disk (pd-ssd)
For instances running Microsoft Windows and using the NVMe disk interface, the combined number of attached Hyperdisk and Persistent Disk volumes can't exceed a total of 16 disks. See Known issues. Local SSD volumes are excluded from this issue.
Tau T2D machine series
The Tau T2D machine series runs on the third generation AMD EPYC Milan processor with a base frequency of 2.45 GHz, an effective frequency of 2.8 GHz, and a max boost frequency of 3.5 GHz. This series has predefined machine types with up to 60 vCPUs, supports 4 GB of memory per vCPU, and offers a maximum of 32 Gbps of outbound data transfer. It also supports the following discount and consumption options:
This series is available only in select regions and zones.
Machine types in the Tau T2D machine series have simultaneous multithreading (SMT) disabled; therefore, a vCPU is equivalent to an entire core.
Tau T2D Limitations
Tau T2D VMs don't support:
- Local SSD
- Regional Persistent Disk
- Custom VMs
- Sole-tenant nodes
- Extreme Persistent Disk
- GPUs
- Nested virtualization
- Flexible CUDs
- Sustained use discounts (SUDs)
- Confidential VMs
Tau T2D machine types
Tau T2D standard machine types have 4 GB of system memory per vCPU.
| Machine types | vCPUs* | Memory (GB) | Default egress bandwidth (Gbps)‡ | Tier_1 egress bandwidth (Gbps) |
|---|---|---|---|---|
t2d-standard-1 | 1 | 4 | Up to 10 | N/A |
t2d-standard-2 | 2 | 8 | Up to 10 | N/A |
t2d-standard-4 | 4 | 16 | Up to 10 | N/A |
t2d-standard-8 | 8 | 32 | Up to 16 | N/A |
t2d-standard-16 | 16 | 64 | Up to 32 | N/A |
t2d-standard-32 | 32 | 128 | Up to 32 | N/A |
t2d-standard-48 | 48 | 192 | Up to 32 | N/A |
t2d-standard-60 | 60 | 240 | Up to 32 | N/A |
* SMT is not supported. Each vCPU is equivalent to an entire core. See CPU platforms.
‡ Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
For pricing information, see the following:
- For machine type pricing, see the VM pricing page.
- Disk usage and network usage are charged separately from machine type pricing. For details, see Disk and image pricing and Network pricing.
Supported disk types for T2D
T2D VMs can use the following block storage types:
- Zonal standard Persistent Disk (pd-standard)
- Zonal balanced Persistent Disk (pd-balanced)
- Zonal SSD (performance) Persistent Disk (pd-ssd)
- Hyperdisk Throughput (hyperdisk-throughput)
| Machine types | Max number of disks per VM* | Max number of Hyperdisk volumes per VM† | Max total disk size (TiB) across all disks‡ | Local SSD |
|---|---|---|---|---|
t2d-standard-1 | 128 | 20 | 257 | No |
t2d-standard-2 | 128 | 20 | 257 | No |
t2d-standard-4 | 128 | 24 | 257 | No |
t2d-standard-8 | 128 | 32 | 257 | No |
t2d-standard-16 | 128 | 48 | 257 | No |
t2d-standard-32 | 128 | 64 | 512 | No |
t2d-standard-48 | 128 | 64 | 512 | No |
t2d-standard-60 | 128 | 64 | 512 | No |
* The maximum size per Persistent Disk volume is 64 TiB.
† The maximum size per Hyperdisk Throughput volume is 32 TiB.
‡ You can attach a mixture of Hyperdisk and Persistent Disk volumes to a VM, but the total Persistent Disk capacity can't exceed 257 TiB.
Custom machine types
If none of the predefined machine types in the general-purpose machine familymatch your workload needs, you can create a VM with a custom machine type.
Creating a VM with a custom machine type is ideal for workloads that requiremore processing power or more memory, but don't need all of the upgrades thatare provided by the next larger predefined machine type.
It costs slightly more to use a custom machine type than an equivalentpredefined machine type, and there are limitations in the amount of memory andvCPUs that you can select. The on-demand prices for custom machine types includea 5% premium over the on-demand and commitment prices for predefined machinetypes.
You can create a VM with a custom machine type for only the N and E machine series in the general-purpose machine family. Custom machine types are not available for the C and Tau machine series. Custom machine types are subject to the same Persistent Disk limits as E2, N2, and N1 predefined machine types: the maximum total Persistent Disk size for each VM is 257 TiB, and the maximum number of Persistent Disk volumes is 128. N4, N4A, and N4D custom machine types are subject to the limitations of Hyperdisk capacity.
If a custom machine type doesn't meet your requirements, you can customize the number of visible CPU cores on many machine types. You can also set the number of threads per core for certain machine types. You can make these changes during VM instance creation, or by editing an existing VM instance. Reducing the number of visible cores might impact the cost of your VMs. Be sure to review pricing prior to making any changes.
Review the following sections for the custom machine type limits for each machine series.
N4A custom machine types
- For N4A custom machine types, you can create a machine type with 1 to 64 vCPUs and memory between 2 and 512 GB. You can adjust the vCPU count in increments of 1 vCPU and the memory in increments of 1 GB.
- By default, the memory per vCPU that you can select for a custom machine type is determined by the machine series you use. For the N4A machine series, select between 2 GB and 8 GB per vCPU. You can access more memory beyond the default option by enabling extended memory.
- N4A custom machine types are available only in select regions and zones.
Examples of invalid machine types:
- 2 vCPUs, 0.5 GB of total memory. Invalid because the total memory is less than the minimum 2 GB and does not use an increment of 1 GB for an N4A VM.
- 100 vCPUs, 200 GB of memory. Invalid because the vCPU count is too large. N4A custom machine types can use a maximum of 64 vCPUs.
Examples of valid machine types:
- 36 vCPUs, 72 GB of total memory. Valid because the amount of memory per vCPU is within the acceptable range of 2 GB to 8 GB per vCPU.
- 5 vCPUs, 14 GB of total memory. Valid because it has 5 vCPUs, which is in the acceptable range of 1 to 64 vCPUs, and the amount of memory per vCPU uses an increment of 1 GB and is within the acceptable range of 2 GB to 8 GB per vCPU.
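The N4A shape rules above can be condensed into a small validity check. This is an illustrative sketch of the documented default limits, not part of any Google Cloud API, and it doesn't model extended memory, which relaxes the per-vCPU upper bound:

```python
def is_valid_n4a_custom(vcpus: int, memory_gb: float) -> bool:
    """Sketch of the documented N4A custom shape rules (default limits only)."""
    if not 1 <= vcpus <= 64:       # 1 to 64 vCPUs
        return False
    if memory_gb % 1 != 0:         # memory moves in 1 GB increments
        return False
    if not 2 <= memory_gb <= 512:  # total memory between 2 and 512 GB
        return False
    # The default ratio is 2 GB to 8 GB of memory per vCPU.
    return 2 * vcpus <= memory_gb <= 8 * vcpus
```

The valid and invalid examples above map directly onto this check.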
N4D custom machine types
- The maximum number of vCPUs allowed for a custom machine type is determined by the machine series you choose. For the N4D machine series, which supports the AMD EPYC Turin platform, you can deploy custom machine types with 2 to 96 vCPUs and 1 to 768 GB of memory.
- You can create N4D custom machine types with 2, 4, 8, or 16 vCPUs. After 16, you can increment the number of vCPUs by 16, up to 96 vCPUs. The minimum acceptable number of vCPUs is 2.
- By default, the memory per vCPU that you can select for a custom machine type is determined by the machine series you choose. For N4D machine types, select between 0.5 GB and 8 GB per vCPU in 256 MB increments. Higher amounts of memory are possible by enabling extended memory.
- N4D custom machine types are available only in select regions and zones.
- N4D custom machine types are available only with standard networking, with a maximum egress limit of 50 Gbps.
Examples of invalid machine types:
- 2 vCPUs, 0.4 GB of total memory. Invalid because the total memory is less than the minimum 1 GB for an N4D VM and not in increments of 256 MB.
- 34 vCPUs, 34 GB of total memory. Invalid because the total number of vCPUs is not divisible by 16.
- 1 vCPU, 1024 MB of memory. Invalid because the vCPU count is too small. N4D custom machine types require a minimum of 2 vCPUs.
Examples of valid machine types:
- 32 vCPUs, 16 GB of total memory. Valid because the total number of vCPUs is a multiple of 16 and the total memory is a multiple of 256 MB. The amount of memory per vCPU is 0.5 GB, which satisfies the minimum requirement. Because the number of vCPUs is larger than 8 vCPUs, the number of vCPUs must be divisible by 16.
- 2 vCPUs, 7 GB of total memory. Valid because it has 2 vCPUs, which is the minimum value, and the total memory is a multiple of 256 MB. The amount of memory per vCPU is also within the acceptable range of 0.5 GB to 8 GB per vCPU.
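As a sketch of the N4D rules above (the vCPU step pattern, 256 MB memory increments, and the default 0.5 GB to 8 GB per-vCPU range), a hypothetical validator might look like this. Extended memory is not modeled:

```python
def is_valid_n4d_custom(vcpus: int, memory_mb: int) -> bool:
    """Sketch of the documented N4D custom shape rules (default limits only)."""
    # vCPU count: 2, 4, 8, or 16, then multiples of 16 up to 96.
    if vcpus not in (2, 4, 8) and not (16 <= vcpus <= 96 and vcpus % 16 == 0):
        return False
    if memory_mb % 256 != 0:  # memory moves in 256 MB increments
        return False
    # 0.5 GB to 8 GB per vCPU, and at least 1 GB of total memory.
    return max(1024, 512 * vcpus) <= memory_mb <= 8192 * vcpus
```

For example, 32 vCPUs with 16 GB (16384 MB) passes, while 34 vCPUs fails the step rule.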
N4 custom machine types
- For N4 custom machine types, you can create a machine type with 2 to 80 vCPUs, with the vCPU count in multiples of 2, and memory between 4 and 640 GB.
- By default, the memory per vCPU that you can select for a custom machine type is determined by the machine series you use. For the N4 machine series, select between 2 GB and 8 GB per vCPU in 256 MB increments. When creating a standard N4 machine type, the minimum memory you can select is 4 GB. Higher amounts of memory are possible by enabling extended memory.
Examples of invalid machine types:
- 2 vCPUs, 0.5 GB of total memory. Invalid because the total memory is less than the minimum 4 GB for an N4 VM.
- 1 vCPU, 8 GB of memory. Invalid because the vCPU count is too small. N4 custom machine types require a minimum of 2 vCPUs.
Examples of valid machine types:
- 36 vCPUs, 72 GB of total memory. Valid because the total number of vCPUs is even and the total memory is a multiple of 256 MB. The amount of memory per vCPU is also within the acceptable range of 2 GB to 8 GB per vCPU.
- 2 vCPUs, 14 GB of total memory. Valid because it has 2 vCPUs, which is the minimum value, and the total memory is a multiple of 256 MB. The amount of memory per vCPU is also within the acceptable range of 2 GB to 8 GB per vCPU.
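The N4 rules above (even vCPU counts, 256 MB increments, a 2 GB to 8 GB per-vCPU range, and the 4 GB floor) can be sketched as a check. This is illustrative only and ignores extended memory:

```python
def is_valid_n4_custom(vcpus: int, memory_mb: int) -> bool:
    """Sketch of the documented N4 custom shape rules (default limits only)."""
    if not (2 <= vcpus <= 80 and vcpus % 2 == 0):  # even vCPU counts from 2 to 80
        return False
    if memory_mb % 256 != 0:  # memory moves in 256 MB increments
        return False
    # 2 GB to 8 GB per vCPU, and never below the 4 GB standard minimum.
    return max(4096, 2048 * vcpus) <= memory_mb <= 8192 * vcpus
```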
N2D custom machine types
- The maximum number of vCPUs allowed for a custom machine type is determined by the machine series you choose. For the N2D machine series, which supports the AMD EPYC Rome and Milan platforms, you can deploy custom machine types with 2 to 96 vCPUs.
- You can create N2D custom machine types with 2, 4, 8, or 16 vCPUs. After 16, you can increment the number of vCPUs by 16, up to 96 vCPUs. The minimum acceptable number of vCPUs is 2.
- By default, the memory per vCPU that you can select for a custom machine type is determined by the machine series you choose. For N2D machine types, select between 0.5 GB and 8.0 GB per vCPU in 256 MB increments. Higher amounts of memory are possible by enabling extended memory.
- N2D custom machine types are available only in select regions and zones.
- N2D custom machine types support per VM Tier_1 networking performance maximum egress limits of 50 Gbps to 100 Gbps. When enabled:
- VMs with 48 to 94 vCPUs have a total egress limit of 50 Gbps.
- VMs with 96 vCPUs have a total egress limit of 100 Gbps.
Examples of invalid machine types:
- 2 vCPUs, 0.4 GB of total memory. Invalid because the total memory is less than the minimum 1 GB for an N2D VM.
- 34 vCPUs, 34 GB of total memory. Invalid because the total number of vCPUs is not divisible by 16.
- 1 vCPU, 1024 MB of memory. Invalid because the vCPU count is too small. N2D custom machine types require a minimum of 2 vCPUs.
Examples of valid machine types:
- 32 vCPUs, 16 GB of total memory. Valid because the total number of vCPUs is even and the total memory is a multiple of 256 MB. The amount of memory per vCPU is 1 GB, which satisfies the minimum requirement. Because the number of vCPUs is larger than 8 vCPUs, the number of vCPUs must be divisible by 16.
- 2 vCPUs, 7 GB of total memory. Valid because it has 2 vCPUs, which is the minimum value, and the total memory is a multiple of 256 MB. The amount of memory per vCPU is also within the acceptable range of 1 GB to 8 GB per vCPU.
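The N2D shape rules follow the same pattern as the examples above. A hypothetical validator for the documented default limits (extended memory not modeled):

```python
def is_valid_n2d_custom(vcpus: int, memory_mb: int) -> bool:
    """Sketch of the documented N2D custom shape rules (default limits only)."""
    # vCPU count: 2, 4, 8, or 16, then multiples of 16 up to 96.
    if vcpus not in (2, 4, 8) and not (16 <= vcpus <= 96 and vcpus % 16 == 0):
        return False
    if memory_mb % 256 != 0:  # memory moves in 256 MB increments
        return False
    # 0.5 GB to 8 GB per vCPU, and at least 1 GB of total memory.
    return max(1024, 512 * vcpus) <= memory_mb <= 8192 * vcpus
```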
N2 custom machine types
- For N2 custom machine types, you can create a machine type with 2 to 80 vCPUs and memory between 1 and 864 GB. For machine types with up to 32 vCPUs, you can select a vCPU count that is a multiple of 2. For machine types with greater than 32 vCPUs, you must select a vCPU count that is a multiple of 4 (for example, 36, 40, 56, or 80).
- You can create N2 custom machine types on different processors:
- Cascade Lake, the 2nd generation of the Intel Xeon processor. This is the default processor for N2 custom machine types with fewer than 80 vCPUs.
- Ice Lake, the 3rd generation of the Intel Xeon processor. Ice Lake processors are available in specificregions and zones.
- By default, the memory per vCPU that you can select for a custom machine type is determined by the machine series you use. For the N2 machine series, select between 0.5 GB and 8.0 GB per vCPU in 256 MB increments. Higher amounts of memory are possible by enabling extended memory.
- N2 custom machine types have an option for a per VM Tier_1 networking performance maximum egress of 50 Gbps to 100 Gbps with a minimum of 30 vCPUs.
- 32 to 62 vCPUs have a total egress of 50 Gbps
- 64 to 78 vCPUs have a total egress of 75 Gbps
- 80 vCPUs have a total egress of 100 Gbps
Examples of invalid machine types:
- 2 vCPUs, 0.5 GB of total memory. Invalid because the total memory is less than the minimum 1 GB for an N2 VM.
- 34 vCPUs, 34 GB of total memory. Invalid because the total number of vCPUs is not divisible by 4.
- 1 vCPU, 1024 MB of memory. Invalid because the vCPU count is too small. N2 custom machine types require a minimum of 2 vCPUs.
Examples of valid machine types:
- 36 vCPUs, 18 GB of total memory. Valid because the total number of vCPUs is even and the total memory is a multiple of 256 MB. The amount of memory per vCPU is also within the acceptable range of 0.5 GB to 8 GB per vCPU. Because the number of vCPUs is larger than 32 vCPUs, the number of vCPUs must be divisible by 4.
- 2 vCPUs, 7 GB of total memory. Valid because it has 2 vCPUs, which is the minimum value, and the total memory is a multiple of 256 MB. The amount of memory per vCPU is also within the acceptable range of 0.5 GB to 8 GB per vCPU.
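For N2, the vCPU step changes at 32 vCPUs: multiples of 2 up to 32, multiples of 4 above. A sketch that encodes this along with the default memory limits (illustrative only; extended memory not modeled):

```python
def is_valid_n2_custom(vcpus: int, memory_mb: int) -> bool:
    """Sketch of the documented N2 custom shape rules (default limits only)."""
    if not 2 <= vcpus <= 80:
        return False
    step = 2 if vcpus <= 32 else 4  # multiples of 2 up to 32, then multiples of 4
    if vcpus % step != 0:
        return False
    if memory_mb % 256 != 0:        # memory moves in 256 MB increments
        return False
    # 0.5 GB to 8 GB per vCPU, and at least 1 GB of total memory.
    return max(1024, 512 * vcpus) <= memory_mb <= 8192 * vcpus
```

The 34-vCPU example above fails because 34 is not a multiple of 4.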
E2 custom machine types
- E2 custom machine types support predefined platforms with Intel or AMD EPYC processors. You can create E2 custom machine types with vCPUs in multiples of 2, up to 32 vCPUs. The minimum acceptable number of vCPUs for a VM is 2.
- By default, the general-purpose machine series you choose determines the memory per vCPU that you can select for a custom machine type. For E2, the ratio of memory per vCPU is 0.5 GB to 8 GB inclusive. When creating a standard E2 machine type, the minimum memory you can select is 1 GB.
- An exception to the minimum vCPU limitation is to create an e2-standard-2 VM, then customize the visible cores to 1 vCPU. The resulting VM is an e2-custom VM. For example, you create an E2 VM using the e2-standard-2 machine type, stop the VM, and edit it by changing the visible cores to 1 vCPU with 1.25 GB of memory. As a result, the machine type changes to e2-custom-2-1280. Pricing is described in the Customize the number of visible CPU cores document.
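The e2-custom-2-1280 name in the example above follows the pattern e2-custom-&lt;vCPUs&gt;-&lt;memory in MB&gt;. A minimal sketch of that naming (the helper function is hypothetical, not a Google Cloud API):

```python
def e2_custom_machine_type(vcpus: int, memory_mb: int) -> str:
    """Build the machine type name for an E2 custom shape: e2-custom-<vCPUs>-<MB>."""
    return f"e2-custom-{vcpus}-{memory_mb}"

# A 2-vCPU VM with 1.25 GB (1280 MB) of memory:
print(e2_custom_machine_type(2, 1280))  # e2-custom-2-1280
```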
Examples of invalid machine types:
- 1 vCPU, 1024 MB of memory. Invalid because the vCPU count is too small. E2 custom machine types require a minimum of 2 vCPUs.
- 32 vCPUs, 1 GB of total memory. Invalid because the ratio of memory to vCPUs is too low. Each vCPU requires at least 0.5 GB of memory.
Examples of valid machine types:
- 32 vCPUs, 16 GB of total memory. Valid because the total number of vCPUs is even and the total memory is an acceptable ratio of memory to vCPU.
- 2 vCPUs, 8 GB of total memory. Valid because it has 2 vCPUs, which is the minimum value, and the total memory is a multiple of 256 MB. The amount of memory per vCPU is also within the acceptable range of 0.5 GB to 8 GB per vCPU.
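The E2 custom rules above (even vCPU counts up to 32, the 0.5 GB to 8 GB per-vCPU ratio, and the 1 GB minimum) can be sketched as a check. This is illustrative only and excludes the shared-core shapes described below:

```python
def is_valid_e2_custom(vcpus: int, memory_mb: int) -> bool:
    """Sketch of the documented E2 custom shape rules (shared-core shapes excluded)."""
    if not (2 <= vcpus <= 32 and vcpus % 2 == 0):  # even vCPU counts from 2 to 32
        return False
    if memory_mb % 256 != 0:  # memory moves in 256 MB increments
        return False
    # 0.5 GB to 8 GB per vCPU, and at least 1 GB for a standard custom VM.
    return max(1024, 512 * vcpus) <= memory_mb <= 8192 * vcpus
```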
E2 shared-core custom machine types
E2 shared-core machine types support predefined Intel or AMD EPYC processors, which are preselected for you at the time of VM creation. You can create shared-core machine types with a vCPU range of 0.25 to 1 vCPU. The memory range is 1 to 8 GB, with a maximum ratio of 8 GB per vCPU.
You can't customize the number of visible cores on a shared-core E2 VM.
- e2-micro: 0.25 vCPU, 1 to 2 GB of memory
- e2-small: 0.5 vCPU, 1 to 4 GB of memory
- e2-medium: 1 vCPU, 1 to 8 GB of memory
N1 custom machine types
- You can create N1 custom machine types with 1 or more vCPUs. For VMs with more than 1 vCPU, you must increment the number of vCPUs by 2, up to 96 vCPUs for the Intel Skylake platform, or up to 64 vCPUs for the Intel Broadwell, Haswell, or Ivy Bridge CPU platforms.
- By default, the memory per vCPU that you can select for a custom machine type is determined by the machine series you choose. For N1 machine types, select between 0.9 GB and 6.5 GB per vCPU, inclusive. N1 custom machine types with 1 or 2 vCPUs require a minimum of 1 GB per vCPU. Higher amounts of memory are possible by enabling extended memory.
Examples of invalid machine types:
- 1 vCPU, 0.2 GB of total memory. Invalid because the total memory is less than the minimum 1 GB for an N1 VM.
- 3 vCPUs, 1 GB of total memory. Invalid because the number of vCPU cores must be 1 or an even number up to 96.
Examples of valid machine types:
- 32 vCPUs, 29 GB of total memory. Valid because the total number of vCPUs is even and the total memory is a multiple of 256 MB. The total memory is an acceptable ratio of memory to vCPU.
- 1 vCPU, 1 GB of total memory. Valid because it has one vCPU, which is the minimum value, and the total memory is a multiple of 256 MB. The amount of memory per vCPU is also within the acceptable range of 1 GB to 6.5 GB per vCPU.
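The N1 rules above can be sketched as a final validator. The vCPU ceiling depends on the CPU platform, so it's a parameter here; this is an illustrative sketch of the default limits, with extended memory not modeled:

```python
def is_valid_n1_custom(vcpus: int, memory_mb: int, max_vcpus: int = 96) -> bool:
    """Sketch of the documented N1 rules. max_vcpus is 96 on Skylake,
    64 on Broadwell, Haswell, or Ivy Bridge."""
    # 1 vCPU, or an even count up to the platform's maximum.
    if vcpus != 1 and (vcpus % 2 != 0 or vcpus > max_vcpus):
        return False
    if memory_mb % 256 != 0:  # memory moves in 256 MB increments
        return False
    # 0.9 GB to 6.5 GB per vCPU; 1- and 2-vCPU shapes need at least 1 GB per vCPU.
    low = 1024 * vcpus if vcpus <= 2 else int(0.9 * 1024 * vcpus)
    return low <= memory_mb <= int(6.5 * 1024 * vcpus)
```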
What's next
- Network bandwidth
- Configuring a VM with a high-bandwidth network
- Virtual machine instances
- VM instance pricing
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-15 UTC.