CPU platforms
When you create a virtual machine (VM) or bare metal instance using Compute Engine, you specify a machine series and a machine type for the instance. Each machine series is associated with one or more CPU platforms. If there are multiple CPU platforms available for a machine series, you can select a minimum CPU platform for the compute instance.
A CPU platform offers multiple physical processors, and each of these processors is referred to as a core. For the processors available on Compute Engine, a single CPU core can run as multiple hardware threads through simultaneous multithreading (SMT), which is known on Intel processors as Intel Hyper-Threading Technology. On Compute Engine, each hardware thread is called a virtual CPU (vCPU). Some machine series don't use SMT, and each vCPU instead represents a core. When vCPUs are reported to the instance as occupying different virtual cores, Compute Engine verifies that these vCPUs never share the same physical core.
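You can observe this mapping from inside a Linux instance by reading the CPU topology that the kernel exposes. The following is a minimal sketch, not part of the documented procedure; it assumes a standard Linux guest with the usual sysfs topology files. On an SMT machine series, two vCPUs share each physical core ID, while on a one-vCPU-per-core series each core lists a single vCPU.

```python
#!/usr/bin/env python3
"""Print the vCPU-to-physical-core mapping that the guest kernel sees.

Reads /sys/devices/system/cpu/cpu*/topology/ (standard Linux sysfs).
On SMT machine series, two vCPUs report the same core ID; on
one-vCPU-per-core series, each vCPU has its own core ID.
"""
from collections import defaultdict
from pathlib import Path

cores = defaultdict(list)
for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    topo = cpu_dir / "topology"
    if not topo.is_dir():
        continue  # skip offline vCPUs that don't expose topology info
    core_id = int((topo / "core_id").read_text())
    package = int((topo / "physical_package_id").read_text())
    cores[(package, core_id)].append(cpu_dir.name)

for (package, core_id), vcpus in sorted(cores.items()):
    print(f"socket {package} core {core_id}: {', '.join(vcpus)}")

print(f"threads per core: {max(len(v) for v in cores.values())}")
```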
The machine type of your compute instance specifies its number of vCPUs, and you can infer its number of physical CPU cores using the default vCPU-per-core ratio for that machine series:
- For the C4A, N4A (Preview), T2D, T2A, H4D, H3, and A4X machine series, Compute Engine instances always have one vCPU per core.
- For all other machine series, the compute instances have two vCPUs per core by default.
You can optionally set the number of threads per core to a non-default value, which might benefit some workloads. Importantly, when you do this, the machine type of your compute instance no longer reflects the correct number of vCPUs. Instead, the pricing and the number of physical CPU cores remain the same as they would be for the default two-vCPUs-per-core ratio, and the number of vCPUs is half of the value indicated by the machine type.
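As an illustration, the following sketch creates an instance with one thread per core using the google-cloud-compute Python client library. The project, zone, instance name, machine type, and boot image are placeholder values, and the field names reflect the client library rather than anything stated in this page, so treat it as a sketch and check the client reference for your version.

```python
# Minimal sketch: create an instance with SMT disabled (one thread per core)
# using the google-cloud-compute client (pip install google-cloud-compute).
# Project, zone, names, machine type, and image below are placeholders.
from google.cloud import compute_v1


def create_single_threaded_instance(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/n2-standard-8",
        # Setting threads_per_core=1 turns off SMT, so the instance gets
        # half the vCPUs implied by the machine type name.
        advanced_machine_features=compute_v1.AdvancedMachineFeatures(
            threads_per_core=1,
        ),
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # Block until the create operation finishes.


# Example call (placeholder project ID):
# create_single_threaded_instance("my-project", "us-central1-a", "smt-off-test")
```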
Arm processors
For Arm processors, Compute Engine uses one thread per core. Each vCPU maps to a physical core with no SMT.
The following table describes the Arm processors that are available for Compute Engine instances.
| CPU processor | Processor SKU | Supported machine series and types |
|---|---|---|
| NVIDIA Grace Processors with Arm Neoverse V2 cores | Superchip | A4X |
| Google Axion Processors | | C4A, N4A (Preview) |
| Ampere Altra | Q64-30 | Tau T2A |
x86 processors
For most x86 processors, each vCPU is implemented as a single hardware thread.
Intel processors
On Intel Xeon processors, Intel Hyper-Threading Technology supports multiple threads running concurrently on each core. The machine type of your compute instance determines its number of vCPUs and amount of memory.
The H3 machine series doesn't use hyper-threading, and one vCPU represents one physical core.
| CPU processor | Processor SKU | Supported machine series and types | Base frequency (GHz) | All-core turbo frequency (GHz) | Single-core max turbo frequency (GHz) |
|---|---|---|---|---|---|
| Intel Xeon Scalable Processor (Granite Rapids) 6th generation | Intel Xeon Platinum 6985P-C Processor | | 2.8¹ | 3.9 | 4.2 |
| Intel Xeon Scalable Processor (Emerald Rapids) 5th generation | Intel Xeon Platinum 8581C Processor | | 2.1 | 2.9 | 4.0 |
| Intel Xeon Scalable Processor (Emerald Rapids) 5th generation | | | 2.3 | 3.1 | 4.0 |
| Intel Xeon Scalable Processor (Emerald Rapids) 5th generation | | | 2.1 | 2.9 | 3.3 |
| Intel Xeon Scalable Processor (Sapphire Rapids) 4th generation | Intel Xeon Platinum 8490H Processor | | 1.9 | 2.9 | 3.5 |
| Intel Xeon Scalable Processor (Sapphire Rapids) 4th generation | Intel Xeon Platinum 8481C Processor | | 2.2 | 3.0 | 3.0 |
| Intel Xeon Scalable Processor (Sapphire Rapids) 4th generation | | | 2.2 | 3.0 | 3.8 |
| Intel Xeon Scalable Processor (Sapphire Rapids) 4th generation | | | 2.0 | 3.8 | 2.9 |
| Intel Xeon Scalable Processor (Ice Lake) 3rd generation | Intel Xeon Platinum 8373C Processor | | 2.6 | 3.4 | 3.5 |
| Intel Xeon Scalable Processor (Cascade Lake) 2nd generation | Intel Xeon Gold 6268CL Processor | | 2.8 | 3.4 | 3.9 |
| Intel Xeon Scalable Processor (Cascade Lake) 2nd generation | Intel Xeon Gold 6253CL Processor | | 3.1 | 3.8 | 3.9 |
| Intel Xeon Scalable Processor (Cascade Lake) 2nd generation | Intel Xeon Platinum 8280L Processor | | 2.5 | 3.4 | 4.0 |
| Intel Xeon Scalable Processor (Cascade Lake) 2nd generation | Intel Xeon Platinum 8273CL Processor | | 2.2 | 2.9 | 3.7 |
| Intel Xeon Scalable Processor (Skylake) 1st generation | Intel Xeon Scalable Platinum 8173M Processor | | 2.0 | 2.7 | 3.5 |
| Intel Xeon E7 (Broadwell E7) | Intel Xeon E7-8880V4 Processor | | 2.2 | 2.6 | 3.3 |
| Intel Xeon E5 v4 (Broadwell E5) | Intel Xeon E5-2696V4 Processor | | 2.2 | 2.8 | 3.7 |
| Intel Xeon E5 v3 (Haswell) | Intel Xeon E5-2696V3 Processor | | 2.3 | 2.8 | 3.8 |
| Intel Xeon E5 v2 (Ivy Bridge) | Intel Xeon E5-2696V2 Processor | | 2.5 | 3.1 | 3.5 |
| Intel Xeon E5 (Sandy Bridge) | Intel Xeon E5-2689 Processor | | 2.6 | 3.2 | 3.6 |

¹ C4 machine types that use the Intel Granite Rapids CPU have a base frequency of 2.8 GHz; however, the vPMU presents 2.3 GHz for compatibility purposes.
² N2 machine types that have 96 or more vCPUs require the Intel Ice Lake CPU platform.
AMD processors
AMD processors provide optimized performance and scalability using SMT. In almost all cases, Compute Engine uses two threads per core, and each vCPU is one thread. H4D and Tau T2D are the exceptions, where Compute Engine uses one thread per core and each vCPU maps to a physical core.
The machine type of your compute instance determines the number of vCPUs and amount of memory allocated to the instance.
| CPU processor | Processor SKU | Supported machine series | Base frequency (GHz) | Effective frequency (GHz) | Max boost frequency (GHz) |
|---|---|---|---|---|---|
| AMD EPYC Turin 5th Generation | AMD EPYC 9B45 | | 2.7 | 3.5 | 4.1 |
| AMD EPYC Genoa 4th Generation | AMD EPYC 9B14 | | 2.6 | 3.3 | 3.7 |
| AMD EPYC Milan 3rd Generation | AMD EPYC 7B13 | | 2.45 | 2.8 | 3.5 |
| AMD EPYC Rome 2nd Generation | AMD EPYC 7B12 | | 2.25 | 2.7 | 3.3 |
Frequency behavior
The previous tables describe the hardware specifications of the CPUs that are available with Compute Engine, but keep the following points in mind:
Frequency: A CPU's frequency, or clock speed, measures the number of cycles the CPU executes per second, expressed in gigahertz (GHz). Generally, higher frequencies indicate better performance. However, different CPU designs handle instructions differently, so an older CPU with a higher clock speed can be outperformed by a newer CPU with a lower clock speed, because the newer architecture handles instructions more efficiently.
Base frequency: The frequency at which the CPU runs when the system is idle or under light load. When running at its base frequency, the CPU draws less power and produces less heat.
A compute instance's guest environment reflects the base frequency, regardless of the frequency at which the CPU is actually running. The sketch after these frequency notes shows how to read the advertised values from inside a Linux instance.
All-core turbo frequency: The frequency at which each CPU core typically runs when all cores in the socket are active at the same time. Different workloads place different demands on a system's CPU. Boost technologies address this difference and help the processor adapt to workload demands by increasing the CPU's frequency.
- Most compute instances get the all-core turbo frequency, even if only the base frequency is advertised to the guest environment.
- Ampere Altra Arm processors can provide more predictable performance because the frequency for Arm processors is always the all-core turbo frequency.
C4 instances are able to run at the all-core-max turbo frequency by setting the `AdvancedMachineFeatures` field to `ALL_CORE_MAX`. If this field is unset, the instance runs at the default setting, which is unrestricted frequency. The `ALL_CORE_MAX` setting isn't available with C4D or C4A compute instances.
Max turbo frequency: The frequency a CPU targets when stressed by a demanding application like a video game or design modeling application. It's the maximum single-core frequency that a CPU achieves without overclocking.
Processor power management technologies: Intel processors support multiple technologies to optimize power consumption. These technologies are divided into two categories, or states:
- C-states are states in which the CPU has reduced or turned off selected functions.
- P-states provide a way to scale the frequency and voltage at which the processor runs so as to reduce the power consumption of the CPU.
All C4 machine types, and certain C2 (30 and 60 vCPUs), C2D (56 and 112 vCPUs), and M2 (208 and 416 vCPUs) machine types, support instance-provided C-state hints by way of the `MWAIT` instruction. Compute Engine instances don't provide any facilities for customer control of P-states.
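To see what the guest environment advertises, you can read the frequency fields that the Linux kernel exposes. This is a minimal sketch for Linux guests only; as noted above, the reported values generally reflect the base frequency even when the cores are running at a turbo frequency, and the cpufreq files may be absent on some images, so the code treats them as optional.

```python
#!/usr/bin/env python3
"""Print the CPU frequency information that the guest kernel exposes.

The values advertised to the guest generally reflect the base frequency,
even when the cores are actually running at a turbo frequency. The cpufreq
sysfs files are optional and may not exist on every image or machine series.
"""
from pathlib import Path

# Model name and advertised MHz for the first vCPU from /proc/cpuinfo.
for line in Path("/proc/cpuinfo").read_text().splitlines():
    if line.startswith(("model name", "cpu MHz")):
        print(line)
        if line.startswith("cpu MHz"):
            break

# Optional cpufreq information, if the kernel exposes it for this instance.
cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")
for name in ("cpuinfo_min_freq", "cpuinfo_max_freq", "scaling_cur_freq"):
    freq_file = cpufreq / name
    if freq_file.exists():
        khz = int(freq_file.read_text().strip())
        print(f"{name}: {khz / 1_000_000:.2f} GHz")
```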
CPU features
Chip manufacturers add advanced technologies for computations, graphics, virtualization, and memory management to the CPUs they produce. Google Cloud supports the use of some of these advanced features with Compute Engine.
Advanced Vector Extensions
Advanced Vector Extensions (AVX) are single instruction, multiple data (SIMD) extensions to the x86 instruction set architecture for microprocessors from Intel and Advanced Micro Devices (AMD). AVX provides new instructions and a new coding scheme.
For more information, see Advanced Vector Extensions.
AVX is available with all x86 processors used by Compute Engine.
Advanced Vector Extensions (AVX2)
AVX2 (also known as Haswell New Instructions) introduces the following additions to AVX:
- Expands most vector integer SSE and AVX instructions to 256 bits
- Adds support for Gather, enabling vector elements to be loaded from non-contiguous memory locations
- Any-to-any permutes with DWORD- and QWORD-granularity
- Vector shifts
AVX2 is available with the following CPU platforms:
- Intel Xeon E5 v3 (Haswell) and newer processors
- All AMD processors
Advanced Vector Extensions (AVX512)
AVX-512 expands AVX to 512-bit support using the EVEX prefix encoding. AVX-512 provides built-in acceleration for demanding workloads that involve heavy vector-based processing. AVX-512 provides 32 512-bit registers; each register can hold eight double-precision or 16 single-precision floating-point numbers, or eight 64-bit or 16 32-bit integers.
For more information about AVX-512, see What is Intel AVX-512?.
AVX-512 is available with the following CPU platforms:
- Intel Xeon Scalable Processor (Skylake) 1st Generation and newer processors
- AMD EPYC Genoa 4th Generation and newer processors
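One way to confirm which of these vector extensions are exposed to a given instance is to inspect the CPU flags that Linux reports. The following is a minimal sketch; the flag names follow the Linux `/proc/cpuinfo` convention, where `avx512f` is the AVX-512 foundation flag and additional `avx512*` flags cover optional sub-features.

```python
#!/usr/bin/env python3
"""Report which AVX-family extensions the guest CPU advertises (Linux).

Checks the flags line of /proc/cpuinfo. avx512f is the AVX-512 foundation
flag; other avx512* flags describe optional sub-features.
"""
from pathlib import Path

flags = set()
for line in Path("/proc/cpuinfo").read_text().splitlines():
    if line.startswith("flags"):
        flags = set(line.split(":", 1)[1].split())
        break

for feature in ("avx", "avx2", "avx512f"):
    status = "available" if feature in flags else "not available"
    print(f"{feature}: {status}")
```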
Advanced Matrix Extensions
Intel Advanced Matrix Extensions (AMX) is a new instruction set architecture (ISA) extension designed to accelerate artificial intelligence (AI) and machine learning (ML) workloads. AMX introduces new instructions that can be used to perform matrix multiplication and convolution operations, which are two of the most common operations in AI and ML.
AMX introduces 2-dimensional registers called tiles upon which accelerators can perform operations. AMX is intended as an extensible architecture. The first accelerator implemented is called the tile matrix multiply unit (TMUL). Each CPU core of the Sapphire Rapids processor has an independent AMX TMUL unit.
For technical details about Intel AMX, see Intel AMX support in 5.16. Intel offers a tutorial on AMX at Code Sample: Intel Advanced Matrix Extensions (Intel AMX) - Intrinsics Functions.
AMX is available with Intel Xeon 4th generation (Sapphire Rapids) and later processors. AMX is not available with AMD or Arm processors.
Requirements for using AMX
Intel AMX instructions have the following minimum software requirements; the sketch after this list shows a quick runtime check of the guest-visible ones:
- For custom images, AMX is supported with Linux kernel version 5.16 or later.
- Compute Engine offers support for AMX in the following public images:
- CentOS Stream 9
- Container-Optimized OS 109 LTS or later
- RHEL 8 (latest build) or later
- Rocky Linux 8 (latest build) or later
- Ubuntu 22.04 or later
- Windows Server 2022 or later
- TensorFlow 2.9.1 or later
- Intel Extension for PyTorch (Intel Optimization for PyTorch)
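The two guest-visible requirements, a recent enough kernel and CPU support, can be checked at runtime with a minimal sketch like the following. It applies to Linux guests only and uses the `amx_tile`, `amx_bf16`, and `amx_int8` flag names that the upstream kernel reports.

```python
#!/usr/bin/env python3
"""Quick check of guest-visible AMX prerequisites on Linux.

Verifies that the kernel is 5.16 or later and that the CPU advertises
the amx_tile / amx_bf16 / amx_int8 flags in /proc/cpuinfo.
"""
import platform
from pathlib import Path

release = platform.release()  # e.g. "6.1.0-18-cloud-amd64"
major, minor = (int(part) for part in release.split(".")[:2])
kernel_ok = (major, minor) >= (5, 16)
print(f"kernel {release}: {'ok' if kernel_ok else 'older than 5.16'}")

flags_line = next(
    (line for line in Path("/proc/cpuinfo").read_text().splitlines()
     if line.startswith("flags")),
    "flags\t\t:",
)
flags = set(flags_line.split(":", 1)[1].split())
for feature in ("amx_tile", "amx_bf16", "amx_int8"):
    print(f"{feature}: {'present' if feature in flags else 'absent'}")
```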
CPU features available to bare metal instances
In addition to offering all the raw compute resources of the server, bare metal instances that run on 4th generation and later Intel Xeon Scalable Processors can use several on-board, function-specific accelerators and offloads, as sketched after this list:
- Intel QAT: Intel QuickAssist Technology (Intel QAT) accelerates compression, encryption, and decryption.
- Intel DLB: Intel Dynamic Load Balancer (Intel DLB) helps to speed up data queues.
- Intel IAA: Intel In-Memory Analytics Accelerator (Intel IAA) improves query processing performance.
- Intel DSA: Intel Data Streaming Accelerator (Intel DSA) helps to copy and move data faster.
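There is no single documented flag for these offloads inside the guest, but on a bare metal instance they usually appear as PCI devices. The following rough sketch assumes the pciutils package (`lspci`) is installed and that the local pci.ids database labels the devices with product names similar to the ones above, which may not hold on every image; treat a miss as inconclusive rather than proof that an accelerator is absent.

```python
#!/usr/bin/env python3
"""Best-effort scan for Intel accelerator PCI devices on a bare metal instance.

Assumes `lspci` (pciutils) is installed and that the local pci.ids database
uses product names containing the keywords below. A miss is inconclusive.
"""
import subprocess

KEYWORDS = {
    "Intel QAT": "QuickAssist",
    "Intel DLB": "Dynamic Load Balancer",
    "Intel IAA": "In-Memory Analytics",
    "Intel DSA": "Data Streaming Accelerator",
}

lspci_output = subprocess.run(
    ["lspci"], capture_output=True, text=True, check=True
).stdout

for name, keyword in KEYWORDS.items():
    matches = [
        line for line in lspci_output.splitlines()
        if keyword.lower() in line.lower()
    ]
    print(f"{name}: {len(matches)} device(s) found")
```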
Confidential Computing
To protect your data while it's in use, CPU platforms that support Confidential Computing technologies can be used to create Confidential VM instances.
To learn more about the requirements for creating a Confidential VM instance, see Supported configurations.
What's next
- Learn more about Machine families.
- Learn more about Compute Engine instances.
- Learn more about Images.
- Learn how to Specify a minimum CPU platform.