Storage-optimized machine family for Compute Engine
The storage-optimized machine family is suitable for workloads that are low in core usage and high in storage density. For example, the Z3 machine series is useful for scale-out analytics workloads, flash-optimized databases, and other database workloads.
Z3 also offers two machine types with different amounts of Titanium SSD storage: standardlssd and highlssd. These machine types are ideal for high performance workloads that need fast access to data stored in local storage, such as data streaming, SQL and NoSQL databases, data search, data analytics, and data warehousing. For more information, see Z3 machine types.
| Machine series | Workloads |
|---|---|
| Z3 | Scale-out analytics, flash-optimized databases and other database workloads, data streaming, data search, data warehousing |
Z3 machine series
Z3 instances are powered by the fourth generation Intel Xeon Scalable processor (code-named Sapphire Rapids), DDR5 memory, and Titanium offload processors. Z3 machine types are optimized for the underlying NUMA architecture to deliver optimal, reliable, and consistent performance.
The Z3 machine series offers the following Local SSD storage capacities using Titanium SSD:
- Up to 36,000 GiB with VM instances
- 72,000 GiB with bare metal instances
Titanium SSD is custom-designed Local SSD based on Titanium I/O offload processing. It offers enhanced security, performance, and management compared to Local SSD.
Z3 offers the following features:
- Uses Titanium to offload networking and storage processing from the host CPU onto silicon devices deployed throughout the data center
- Delivers high performance block storage with Google Cloud Hyperdisk
- Offers the largest amount of Local SSD storage capacity of any Compute Engine machine series with Titanium SSD
- Supports Intel Advanced Matrix Extensions (AMX), which is a built-in accelerator that significantly improves the performance of deep-learning training and inference on the CPU. You can verify AMX support from inside the guest OS, as shown in the sketch after this list.
- Offers bare metal instances that provide access to several onboard, function-specific accelerators and offloads like Intel QAT, Intel DLB, Intel DSA, Intel TDX, and Intel IAA.
- Supports the following discount and consumption options:
- Resource-based committed use discounts (CUDs)
- Flexible CUDs
- Spot VMs (excluding bare metal machine types)
- Reservations
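The AMX support noted in the feature list can be confirmed from inside a Linux guest by inspecting the CPU feature flags reported by the kernel. The following is a minimal sketch, not an official tool; it assumes a Linux guest OS where /proc/cpuinfo is available and looks for the amx_tile, amx_bf16, and amx_int8 flags.

```python
# Minimal sketch: check for Intel AMX CPU flags inside a Linux guest.
# Assumes a Linux guest OS; /proc/cpuinfo and the flag names shown here
# are Linux kernel conventions, not a Compute Engine API.

def read_cpu_flags(path: str = "/proc/cpuinfo") -> set[str]:
    """Return the set of CPU feature flags reported by the kernel."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = read_cpu_flags()
    amx_flags = {"amx_tile", "amx_bf16", "amx_int8"}
    present = amx_flags & flags
    if present:
        print(f"AMX available: {sorted(present)}")
    else:
        print("No AMX flags reported; check the machine series and guest kernel version.")
```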
Z3 instances use Titanium to enable higher levels of networking performance, isolation, and security. The Z3 machine series supports a default network bandwidth of up to 100 Gbps and up to 200 Gbps with per VM Tier_1 networking performance.
For details on pricing, see the VM pricing page. Disk usage and network usage are charged separately from machine type pricing. For more information, see Disk and image pricing and Network pricing. For Titanium SSD pricing, see Storage-optimized machine type family pricing.
Z3 limitations
The following restrictions apply:
- You can't use regional Persistent Disk with Z3 instances.
- Z3 instances are only available in select zones and regions. For regional availability of bare metal instances, see Bare metal instances.
- You can't use GPUs with Z3 instances.
- Z3 doesn't support sole tenancy.
- You can't suspend a Z3 instance.
- You can't create custom machine types for Z3 instances.
- Live migration is only supported for Z3 instances with 18 TiB or less of attached Titanium SSD.
- Z3 isn't supported on Windows images.
Z3 machine types
Note: In June 2025, some Z3 machine types listed in the following table were renamed. z3-highmem-176 is now z3-highmem-176-standardlssd, and z3-highmem-88 is now z3-highmem-88-highlssd. The allocated resources remain the same.
The Z3 machine series supports the following predefined lssd machine subtypes:
- standardlssd: offers high performance search and data analysis for medium-sized data sets. This machine type has a vCPU to Titanium SSD capacity ratio of less than 1:350 and offers the highest Titanium SSD performance per vCPU.
- highlssd: offers high performance and storage intensive streaming and data analysis for large-sized data sets. This machine type has a vCPU to Titanium SSD capacity ratio between 1:350 and 1:600 and offers a higher total Titanium SSD capacity than standardlssd.
To create a bare metal instance with Z3, use the z3-highmem-192-highlssd-metal machine type.
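As an illustration of how these machine types are referenced when you create an instance programmatically, the following is a minimal sketch using the google-cloud-compute Python client. The project ID, zone, instance name, and boot image are placeholder assumptions, and the sketch relies on the Titanium SSD disks being added automatically for Z3 machine types; confirm that the chosen zone offers Z3 before running it.

```python
# Minimal sketch: create a Z3 instance with the google-cloud-compute client.
# PROJECT_ID, ZONE, the instance name, and the boot image are placeholders.
from google.cloud import compute_v1

PROJECT_ID = "my-project"   # placeholder
ZONE = "us-central1-a"      # placeholder; use a zone that offers Z3
MACHINE_TYPE = "z3-highmem-14-standardlssd"

def create_z3_instance(instance_name: str) -> None:
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            # Placeholder image; choose an image that supports gVNIC.
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=64,
            disk_type=f"zones/{ZONE}/diskTypes/hyperdisk-balanced",
        ),
    )
    nic = compute_v1.NetworkInterface(
        network="global/networks/default",
        nic_type="GVNIC",  # Z3 VM instances require gVNIC network interfaces
    )
    instance = compute_v1.Instance(
        name=instance_name,
        machine_type=f"zones/{ZONE}/machineTypes/{MACHINE_TYPE}",
        disks=[boot_disk],
        network_interfaces=[nic],
    )
    op = compute_v1.InstancesClient().insert(
        project=PROJECT_ID, zone=ZONE, instance_resource=instance
    )
    op.result()  # wait for the create operation to finish

if __name__ == "__main__":
    create_z3_instance("z3-example")
```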
Z3 standardlssd
| Machine types | vCPUs¹ | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)² | Tier_1 egress bandwidth (Gbps)² |
|---|---|---|---|---|---|
| z3-highmem-14-standardlssd | 14 | 112 | (1 x 3000 GiB) 3,000 GiB | Up to 23 | N/A |
| z3-highmem-22-standardlssd | 22 | 176 | (2 x 3000 GiB) 6,000 GiB | Up to 23 | N/A |
| z3-highmem-44-standardlssd | 44 | 352 | (3 x 3000 GiB) 9,000 GiB | Up to 32 | Up to 50 |
| z3-highmem-88-standardlssd | 88 | 704 | (6 x 3000 GiB) 18,000 GiB | Up to 62 | Up to 100 |
| z3-highmem-176-standardlssd | 176 | 1,408 | (12 x 3000 GiB) 36,000 GiB | Up to 100 | Up to 200 |
¹ A vCPU is implemented as a single hardware thread on the available CPU platform.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
Z3 highlssd
| Machine types | vCPUs¹ | Memory (GB) | Titanium SSD | Default egress bandwidth (Gbps)² | Tier_1 egress bandwidth (Gbps)² |
|---|---|---|---|---|---|
| z3-highmem-8-highlssd | 8 | 64 | (1 x 3000 GiB) 3,000 GiB | Up to 23 | N/A |
| z3-highmem-16-highlssd | 16 | 128 | (2 x 3000 GiB) 6,000 GiB | Up to 23 | N/A |
| z3-highmem-22-highlssd | 22 | 176 | (3 x 3000 GiB) 9,000 GiB | Up to 23 | N/A |
| z3-highmem-32-highlssd | 32 | 256 | (4 x 3000 GiB) 12,000 GiB | Up to 32 | N/A |
| z3-highmem-44-highlssd | 44 | 352 | (6 x 3000 GiB) 18,000 GiB | Up to 32 | Up to 50 |
| z3-highmem-88-highlssd | 88 | 704 | (12 x 3000 GiB) 36,000 GiB | Up to 62 | Up to 100 |
| z3-highmem-192-highlssd-metal | 192³ | 1,536 | (12 x 6000 GiB) 72,000 GiB | Up to 100 | Up to 200 |
¹ A vCPU is implemented as a single hardware thread on the available CPU platform.
² Maximum egress bandwidth cannot exceed the number given. Actual egress bandwidth depends on the destination IP address and other factors. See Network bandwidth.
³ For bare metal instances, the number of vCPUs is equivalent to the number of hardware threads on the host server.
Supported disk types for Z3
Z3 VMs support only the NVMe disk interface and can use the following block storage types:
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Balanced High Availability (hyperdisk-balanced-high-availability)
- Hyperdisk Extreme (hyperdisk-extreme)
- Hyperdisk Throughput (hyperdisk-throughput)
- Balanced Persistent Disk (pd-balanced)
- SSD (performance) Persistent Disk (pd-ssd)
- Titanium SSD
Z3 bare metal instances can use the following block storage types:
- Hyperdisk Balanced (hyperdisk-balanced)
- Hyperdisk Extreme (hyperdisk-extreme)
- Titanium SSD
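For example, attaching one of the supported Hyperdisk types to an existing Z3 instance might look like the following sketch with the google-cloud-compute Python client. The project, zone, instance name, disk size, and the provisioned IOPS and throughput values are placeholder assumptions; see the Hyperdisk documentation for the provisioning ranges that actually apply.

```python
# Minimal sketch: create a Hyperdisk Balanced volume and attach it to an
# existing Z3 instance. Names and provisioning values are placeholders.
from google.cloud import compute_v1

PROJECT_ID = "my-project"   # placeholder
ZONE = "us-central1-a"      # placeholder
INSTANCE = "z3-example"     # placeholder existing Z3 instance

def create_and_attach_hyperdisk(disk_name: str) -> None:
    disk = compute_v1.Disk(
        name=disk_name,
        size_gb=500,                  # placeholder size
        type_=f"zones/{ZONE}/diskTypes/hyperdisk-balanced",
        provisioned_iops=3000,        # placeholder value
        provisioned_throughput=140,   # placeholder value, MiBps
    )
    disks_client = compute_v1.DisksClient()
    disks_client.insert(project=PROJECT_ID, zone=ZONE, disk_resource=disk).result()

    attached = compute_v1.AttachedDisk(
        source=f"projects/{PROJECT_ID}/zones/{ZONE}/disks/{disk_name}",
        auto_delete=False,
    )
    compute_v1.InstancesClient().attach_disk(
        project=PROJECT_ID,
        zone=ZONE,
        instance=INSTANCE,
        attached_disk_resource=attached,
    ).result()

if __name__ == "__main__":
    create_and_attach_hyperdisk("z3-data-disk")
```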
Every machine type in the Z3 machine series comes with locally attached Titanium SSD disks. The disks are added automatically when you create an instance. The capacity and performance for Titanium SSD disks for Z3 are listed in the following tables:
Z3 standardlssd
| Machine type | # of attached Titanium SSD disks | Disk size (GiB) | Total size (GiB) | Read IOPS | Write IOPS | Read throughput (MiBps) | Write throughput (MiBps) |
|---|---|---|---|---|---|---|---|
| z3-highmem-14-standardlssd | 1 | 3,000 | 3,000 | 750,000 | 500,000 | 3,000 | 2,500 |
| z3-highmem-22-standardlssd | 2 | 3,000 | 6,000 | 1,500,000 | 1,000,000 | 6,000 | 5,000 |
| z3-highmem-44-standardlssd | 3 | 3,000 | 9,000 | 2,250,000 | 1,500,000 | 9,000 | 7,500 |
| z3-highmem-88-standardlssd | 6 | 3,000 | 18,000 | 4,500,000 | 3,000,000 | 18,000 | 15,000 |
| z3-highmem-176-standardlssd | 12 | 3,000 | 36,000 | 9,000,000 | 6,000,000 | 36,000 | 30,000 |
Z3 highlssd
| Machine type | # of attached Titanium SSD disks | Disk size (GiB) | Total size (GiB) | Read IOPS | Write IOPS | Read throughput (MiBps) | Write throughput (MiBps) |
|---|---|---|---|---|---|---|---|
| z3-highmem-8-highlssd | 1 | 3,000 | 3,000 | 750,000 | 500,000 | 3,000 | 2,500 |
| z3-highmem-16-highlssd | 2 | 3,000 | 6,000 | 1,500,000 | 1,000,000 | 6,000 | 5,000 |
| z3-highmem-22-highlssd | 3 | 3,000 | 9,000 | 2,250,000 | 1,500,000 | 9,000 | 7,500 |
| z3-highmem-32-highlssd | 4 | 3,000 | 12,000 | 3,000,000 | 2,000,000 | 12,000 | 10,000 |
| z3-highmem-44-highlssd | 6 | 3,000 | 18,000 | 4,500,000 | 3,000,000 | 18,000 | 15,000 |
| z3-highmem-88-highlssd | 12 | 3,000 | 36,000 | 9,000,000 | 6,000,000 | 36,000 | 30,000 |
| z3-highmem-192-highlssd-metal | 12 | 6,000 | 72,000 | 9,000,000 | 6,000,000 | 36,000 | 30,000 |
For the performance limits of Hyperdisk and Persistent Disk, see the respective Hyperdisk and Persistent Disk performance documentation.
Disk and capacity limits
For details about the capacity limits, see Hyperdisk size and attachment limits and Persistent Disk maximum capacity.
Z3 storage limits are described in the following tables:
Z3 standardlssd
| Machine type | Maximum disks per VM¹ | Hyperdisk volumes per VM | Hyperdisk Balanced volumes | Hyperdisk Throughput volumes | Hyperdisk Extreme volumes |
|---|---|---|---|---|---|
| z3-highmem-14-standardlssd | 128 | 16 | 16 | 16 | 0 |
| z3-highmem-22-standardlssd | 128 | 32 | 32 | 32 | 0 |
| z3-highmem-44-standardlssd | 128 | 32 | 32 | 32 | 0 |
| z3-highmem-88-standardlssd | 128 | 32 | 32 | 32 | 8 |
| z3-highmem-176-standardlssd | 128 | 32 | 32 | 32 | 8 |
¹ The maximum size per Hyperdisk volume is 64 TiB.
Z3 highlssd
| Machine type | Maximum disks per VM¹ | Hyperdisk volumes per VM | Hyperdisk Balanced volumes | Hyperdisk Throughput volumes | Hyperdisk Extreme volumes |
|---|---|---|---|---|---|
| z3-highmem-8-highlssd | 128 | 16 | 16 | 16 | 0 |
| z3-highmem-16-highlssd | 128 | 16 | 16 | 16 | 0 |
| z3-highmem-22-highlssd | 128 | 32 | 32 | 32 | 0 |
| z3-highmem-32-highlssd | 128 | 32 | 32 | 32 | 0 |
| z3-highmem-44-highlssd | 128 | 32 | 32 | 32 | 0 |
| z3-highmem-88-highlssd | 128 | 32 | 32 | 32 | 8 |
| z3-highmem-192-highlssd-metal | 32 | 32 | 16 | 0 | 16 |
¹ The maximum size per Hyperdisk volume is 64 TiB.
Network support for Z3 VMs
The following network interface drivers are required:
- Z3 instances require gVNIC network interfaces.
- Z3 bare metal instances require the Intel IDPF LAN PF device driver.
Z3 supports up to 100 Gbps network bandwidth for standard networking and up to 200 Gbps with per VM Tier_1 networking performance for VM and bare metal instances.
Before migrating to Z3 or creating Z3 VMs or bare metal instances, make sure that the operating system image that you use supports the IDPF network driver for bare metal instances or the gVNIC driver for VM instances. To get the best possible performance on Z3 VMs, choose an OS image that supports both "Tier_1 Networking" and "200 Gbps network bandwidth". These images include an updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0. If your Z3 VM uses an operating system with an older version of the gVNIC driver, it is still supported, but the VM might experience suboptimal performance, such as lower network bandwidth or higher latency.
If you use a custom OS image to create a Z3 VM, you can manually install the most recent gVNIC driver. The gVNIC driver version v1.4.2 or later is recommended for use with Z3 VMs. Google recommends using the latest gVNIC driver version to benefit from additional features and bug fixes.
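To illustrate the driver and bandwidth requirements described above, the following is a minimal sketch of the instance fields involved when using the google-cloud-compute Python client: the network interface is set to gVNIC and per VM Tier_1 networking is requested. The names and zone are assumptions for illustration and this is not a complete instance definition; the achievable Tier_1 bandwidth also depends on the machine type, as shown in the earlier tables.

```python
# Minimal sketch: the instance fields relevant to Z3 networking when using
# the google-cloud-compute client. Not a complete instance definition.
from google.cloud import compute_v1

ZONE = "us-central1-a"  # placeholder

instance = compute_v1.Instance(
    name="z3-tier1-example",  # placeholder
    machine_type=f"zones/{ZONE}/machineTypes/z3-highmem-88-standardlssd",
    network_interfaces=[
        compute_v1.NetworkInterface(
            network="global/networks/default",
            nic_type="GVNIC",  # required NIC type for Z3 VM instances
        )
    ],
    # Request per VM Tier_1 networking (up to 200 Gbps on the largest shapes).
    network_performance_config=compute_v1.NetworkPerformanceConfig(
        total_egress_bandwidth_tier="TIER_1"
    ),
)
```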
Maintenance experience for Z3 instances
During the lifecycle of a Compute Engine instance, the host machine that your instance runs on undergoes multiple host events. A host event can include the regular maintenance of Compute Engine infrastructure or, in rare cases, a host error. Compute Engine also applies some non-disruptive lightweight upgrades for the hypervisor and network in the background.
The Z3 machine series supports on-demand maintenance and offers the following features related to host maintenance:
| Attached Titanium SSD (TiB) | Typical scheduled maintenance event frequency | Maintenance behavior | Advanced notification | On-demand maintenance | Simulate maintenance |
|---|---|---|---|---|---|
| 18 or less | Minimum of 30 days | Live migrate | 7 days | Yes | Yes |
| 36 | Minimum of 30 days | Terminates with Local SSD data persistence | 7 days | Yes | Yes |
| 72 (bare metal) | Minimum of 30 days | Terminates with Local SSD data persistence | 7 days | Yes | Yes |
The maintenance frequencies shown in the previous table are approximations, not guarantees. Compute Engine might occasionally perform maintenance more frequently.
Compute Engine preserves data on the local Titanium SSD disks for Z3 instances during maintenance events.
If a host event occurs, Compute Engine tries to recover any Titanium SSD disks attached to the instance. By default, Compute Engine spends up to 1 hour recovering the data. For Z3 instances, Compute Engine spends up to 6 hours trying to recover the Titanium SSD data before reaching the timeout limit. This timeout limit is customizable. For more information about Local SSD and Titanium SSD recovery options, see Disk persistence following instance termination.
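Because the recovery timeout is customizable, you can adjust it on an instance's scheduling policy. The following is a minimal sketch using the google-cloud-compute Python client; it assumes the local_ssd_recovery_timeout scheduling field is available in your client library version, and the project, zone, and instance names are placeholders.

```python
# Minimal sketch: set the Local SSD (Titanium SSD) recovery timeout on an
# existing instance's scheduling policy. Assumes local_ssd_recovery_timeout
# is available in your google-cloud-compute client version.
from google.cloud import compute_v1

PROJECT_ID = "my-project"   # placeholder
ZONE = "us-central1-a"      # placeholder
INSTANCE = "z3-example"     # placeholder

def set_ssd_recovery_timeout(hours: int = 6) -> None:
    client = compute_v1.InstancesClient()
    # Start from the current scheduling settings so other fields are preserved.
    scheduling = client.get(project=PROJECT_ID, zone=ZONE, instance=INSTANCE).scheduling
    scheduling.local_ssd_recovery_timeout = compute_v1.Duration(seconds=hours * 3600)
    client.set_scheduling(
        project=PROJECT_ID,
        zone=ZONE,
        instance=INSTANCE,
        scheduling_resource=scheduling,
    ).result()

if __name__ == "__main__":
    set_ssd_recovery_timeout(6)
```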
What's next
- Creating and starting a virtual machine instance
- Learn about the different Storage options for your VM
- Move your workload to a new compute instance
- VM instance pricing