About Hyperdisk ML
This document describes the features of Hyperdisk ML, which offers the highest throughput of all Google Cloud Hyperdisk types. Google recommends using Hyperdisk ML for machine learning and for workloads that require high read throughput on immutable datasets. The high throughput that Hyperdisk ML provides results in faster data load times, shorter accelerator idle times, and lower compute costs.
For large inference, training, and HPC workloads, you can attach a single Hyperdisk ML volume to multiple compute instances in read-only mode.
You can specify up to 1,200,000 MiB/s of throughput for a single Hyperdisk ML volume. You can't provision an IOPS level, but each MiB/s of provisioned throughput comes with 16 IOPS, up to 19,200,000 IOPS.
For more information about Hyperdisk and the other Hyperdisk types, see About Hyperdisk.
To create a Hyperdisk ML volume, see Create a Hyperdisk volume.
Use cases
Hyperdisk ML is a good fit for the following use cases:
- HPC workloads
- Machine learning
- Accelerator-optimized workloads
Machine series support
You can use Hyperdisk ML with the machine series listed in Performance limits when attached to an instance.
About provisioned performance
You don't have to provision performance when you create Hyperdisk volumes. If you don't provision performance, Compute Engine creates the volume with default values that you can modify later. For details about default values, see Default IOPS and throughput values.
If you know your performance needs, you can specify a throughput limit for a Hyperdisk ML volume when you create the volume, and you can change the provisioned value after you create the volume. You can't specify a throughput level if you don't specify a size.
Important: Hyperdisk volumes can't reach the provisioned performance unless the compute instance supports that level of performance. For detailed performance limits for all supported instances by machine type, see Performance limits when attached to an instance.

Size and performance limits
The following limits apply to the size, throughput, and IOPS values you can specify for a Hyperdisk ML volume.
Size: between 4 GiB and 64 TiB. The default size is 100 GiB.
Throughput: between 400 MiB/s and 1,200,000 MiB/s. The minimum and maximum throughput each have their own limits based on the size of the volume, as follows:
Minimum throughput: for volumes that are 4 to 3,341 GiB in size, the minimum value is 400 MiB/s. For volumes that are 3,342 GiB or greater in size, the minimum value depends on the size and ranges from 401 MiB/s to 7,680 MiB/s.
Maximum throughput: for volumes that are 750 GiB or greater in size, the maximum value is 1,200,000 MiB/s. For volumes that are 749 GiB or smaller in size, the maximum value depends on the size and ranges from 6,400 MiB/s to 1,200,000 MiB/s.
For examples, see Limits for provisioned throughput.
IOPS: you can't specify an IOPS limit for Hyperdisk ML volumes. Instead, the provisioned IOPS depends on the provisioned throughput. Each Hyperdisk ML volume is provisioned with 16 IOPS for each MiB/s of throughput, up to a maximum of 19,200,000 IOPS.
Limits for provisioned throughput
The following table lists the limits for provisioned throughput for common volume sizes. If a size isn't listed, use the following formulas to calculate the allowable values, where x is the volume's size in GiB:
- Minimum configurable throughput: MAX(400, 0.12x)
- Maximum configurable throughput: MIN(1,200,000, 1600x)
| Size (GiB) | Min throughput (MiB/s) | Max throughput (MiB/s) |
|---|---|---|
| 4 | 400 | 6,400 |
| 10 | 400 | 16,000 |
| 50 | 400 | 80,000 |
| 64 | 400 | 102,400 |
| 100 | 400 | 160,000 |
| 300 | 400 | 480,000 |
| 500 | 400 | 800,000 |
| 1,000 | 400 | 1,200,000 |
| 5,000 | 600 | 1,200,000 |
| 25,000 | 3,000 | 1,200,000 |
| 64,000 | 7,680 | 1,200,000 |
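The formulas above can be checked with a short Python sketch. The function `throughput_limits` is an illustrative helper, not part of any Google Cloud API, and it assumes the minimum rounds down to a whole MiB/s so that the formula agrees with the stated 4 to 3,341 GiB boundary:

```python
def throughput_limits(size_gib: int) -> tuple[int, int]:
    """Return (min, max) configurable throughput in MiB/s for a
    Hyperdisk ML volume, using the formulas MAX(400, 0.12x) and
    MIN(1,200,000, 1600x). Rounding the minimum down is an
    assumption, chosen so a 3,341 GiB volume still yields 400."""
    min_tput = max(400, int(0.12 * size_gib))
    max_tput = min(1_200_000, 1600 * size_gib)
    return min_tput, max_tput

# Matches the table rows above:
print(throughput_limits(4))       # (400, 6400)
print(throughput_limits(100))     # (400, 160000)
print(throughput_limits(64_000))  # (7680, 1200000)
```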
Default size, IOPS, and throughput values
If you don't specify a size or throughput value when you create a Hyperdisk ML volume, Compute Engine assigns default values.
The default size for Hyperdisk ML volumes is 100 GiB.
The default IOPS and throughput are based on the following formulas.
- Default throughput: MAX(24x, 400) MiB/s, where x is the volume's size in GiB.
- Default IOPS: 16t, where t is the default throughput. You can't directly configure the IOPS level.
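As a sketch of these defaults (the helper name is illustrative, not a Google Cloud API; any interaction with the 1,200,000 MiB/s configurable maximum isn't modeled here):

```python
def default_performance(size_gib: int = 100) -> tuple[int, int]:
    """Return (default throughput in MiB/s, default IOPS) for a
    Hyperdisk ML volume, per the formulas above. Illustrative only."""
    throughput = max(24 * size_gib, 400)
    iops = 16 * throughput  # IOPS is derived; it can't be set directly
    return throughput, iops

# The default 100 GiB volume gets 2,400 MiB/s and 38,400 IOPS.
print(default_performance())  # (2400, 38400)
```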
Change the provisioned performance or size
You can change the provisioned size every 4 hours and the provisioned throughput every 6 hours. For instructions on modifying size or performance, see Modify a Hyperdisk volume.
Performance limits when attached to an instance
This section lists the maximum performance that Hyperdisk ML volumes can achieve for each supported instance. A Hyperdisk ML volume's performance when it's attached to an instance can't exceed the limits for the instance's machine type. The performance limits are also shared across all Hyperdisk ML volumes attached to the same instance, regardless of each volume's provisioned performance.
Scenarios that require multiple instances to reach provisioned performance
The provisioned throughput for a Hyperdisk ML volume is shared between each instance the volume is attached to, up to the maximum limit for the machine type that's listed in the following table. If a Hyperdisk ML volume's provisioned performance is higher than an instance's performance limit, the volume can achieve its provisioned performance only if it is attached to multiple instances.

For example, suppose you have a Hyperdisk ML volume provisioned with 500,000 MiB/s of throughput and you want to attach the volume to a3-ultragpu-8 instances, which have a throughput limit of 4,000 MiB/s. A single a3-ultragpu-8 instance can't achieve more than 4,000 MiB/s of throughput. Therefore, to achieve the volume's provisioned throughput, you must attach the volume to at least 125 (500,000 / 4,000) a3-ultragpu-8 instances. On the other hand, for the a2-highgpu-1g machine type, with its 1,800 MiB/s limit, you would need at least 278 instances.
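The instance-count arithmetic above generalizes to a one-line calculation; `instances_needed` is a hypothetical helper, with the per-machine-type limits taken from the following table:

```python
import math

def instances_needed(provisioned_mibps: int, per_instance_limit_mibps: int) -> int:
    """Minimum number of attached instances needed for a Hyperdisk ML
    volume to reach its provisioned throughput. Illustrative helper."""
    return math.ceil(provisioned_mibps / per_instance_limit_mibps)

# 500,000 MiB/s volume on a3-ultragpu-8 (4,000 MiB/s limit): 125 instances.
print(instances_needed(500_000, 4_000))  # 125
# Same volume on a2-highgpu-1g (1,800 MiB/s limit): 278 instances.
print(instances_needed(500_000, 1_800))  # 278
```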
| Instance machine type | Maximum IOPS | Maximum throughput (MiB/s) |
|---|---|---|
| **A2** | | |
| a2-*-1g | 28,800 | 1,800 |
| a2-*-2g | 38,400 | 2,400 |
| a2-*-4g | 38,400 | 2,400 |
| a2-*-8g | 38,400 | 2,400 |
| a2-megagpu-16g | 38,400 | 2,400 |
| **A3** | | |
| a3-*-1g | 28,800 | 1,800 |
| a3-*-2g | 38,400 | 2,400 |
| a3-*-4g | 38,400 | 2,400 |
| a3-*-8g (in read-only mode)¹ | 64,000 | 4,000 |
| a3-*-8g (in read-write mode)¹ | 38,400 | 2,400 |
| **C3** | | |
| c3-*-4 | 6,400 | 400 |
| c3-*-8 | 12,800 | 800 |
| c3-*-22 | 28,800 | 1,800 |
| c3-*-44 | 38,400 | 2,400 |
| c3-*-88 | 38,400 | 2,400 |
| c3-*-176 | 38,400 | 2,400 |
| c3-*-192 | 38,400 | 2,400 |
| **C3D** | | |
| c3d-*-4 | 6,400 | 400 |
| c3d-*-8 | 12,800 | 800 |
| c3d-*-16 | 19,200 | 1,200 |
| c3d-*-30 | 19,200 | 1,200 |
| c3d-*-60 | 38,400 | 2,400 |
| c3d-*-90 | 38,400 | 2,400 |
| c3d-*-180 | 38,400 | 2,400 |
| c3d-*-360 | 38,400 | 2,400 |
| **G2** | | |
| g2-standard-4 | 12,800 | 800 |
| g2-standard-8 | 19,200 | 1,200 |
| g2-standard-12 | 28,800 | 1,800 |
| g2-standard-16 | 38,400 | 2,400 |
| g2-standard-24 | 38,400 | 2,400 |
| g2-standard-32 | 38,400 | 2,400 |
| g2-standard-48 | 38,400 | 2,400 |
| g2-standard-96 | 38,400 | 2,400 |
| **TPU v6e** | | |
| ct6e-standard-1t | 19,200 | 1,200 |
| ct6e-standard-4t | 28,800 | 1,800 |
| ct6e-standard-8t | 28,800 | 1,800 |

¹ For a3-*-8g instances, performance depends on whether the Hyperdisk ML volume is attached to the instance in read-only or read-write mode.
Regional availability for Hyperdisk ML
Hyperdisk ML is available in the following regions and zones:
| Region | Available zones |
|---|---|
| Changhua County, Taiwan—asia-east1 | asia-east1-a, asia-east1-b, asia-east1-c |
| Tokyo, Japan—asia-northeast1 | asia-northeast1-a, asia-northeast1-b, asia-northeast1-c |
| Seoul, South Korea—asia-northeast3 | asia-northeast3-a, asia-northeast3-b |
| Jurong West, Singapore—asia-southeast1 | asia-southeast1-a, asia-southeast1-b, asia-southeast1-c |
| Mumbai, India—asia-south1 | asia-south1-b, asia-south1-c |
| St. Ghislain, Belgium—europe-west1 | europe-west1-b, europe-west1-c |
| London, England—europe-west2 | europe-west2-a, europe-west2-b |
| Frankfurt, Germany—europe-west3 | europe-west3-b |
| Eemshaven, Netherlands—europe-west4 | europe-west4-a, europe-west4-b, europe-west4-c |
| Zurich, Switzerland—europe-west6 | europe-west6-b, europe-west6-c |
| Tel Aviv, Israel—me-west1 | me-west1-b, me-west1-c |
| Council Bluffs, Iowa—us-central1 | us-central1-a, us-central1-b, us-central1-c, us-central1-f |
| Moncks Corner, South Carolina—us-east1 | us-east1-b, us-east1-c, us-east1-d |
| Ashburn, Virginia—us-east4 | us-east4-a, us-east4-b, us-east4-c |
| Columbus, Ohio—us-east5 | us-east5-a, us-east5-b, us-east5-c |
| Dallas, Texas—us-south1 | us-south1-a |
| The Dalles, Oregon—us-west1 | us-west1-a, us-west1-b, us-west1-c |
| Salt Lake City, Utah—us-west3 | us-west3-b |
| Las Vegas, Nevada—us-west4 | us-west4-a, us-west4-b, us-west4-c |
Disaster protection for Hyperdisk ML volumes
You can back up a Hyperdisk ML volume with standard snapshots. Snapshots back up the data on a Hyperdisk ML volume at a specific point in time.
Cross-zonal replication
You can't replicate Hyperdisk ML volumes to another zone. To replicate data to another zone within the same region, you must use Hyperdisk Balanced High Availability volumes.
Share a Hyperdisk ML volume between VMs
For accelerator-optimized machine learning workloads, you can attach the same Hyperdisk ML volume to multiple instances. This enables concurrent read-only access to a single volume from multiple VMs. This is more cost effective than having multiple disks with the same data.
There are no additional costs associated with sharing a disk between VMs. Attaching a disk in read-only mode to multiple VMs doesn't affect the disk's performance. Each VM can still reach the maximum disk performance possible for the VM's machine series.
Limitations for sharing Hyperdisk ML between instances
- Hyperdisk ML volumes don't support multi-writer mode; you can share a Hyperdisk ML volume among multiple instances only if the volume is in read-only mode.
- Hyperdisk ML volumes can't be attached to a single instance in read-only mode.
- If you share a Hyperdisk ML volume in read-only mode, you can't re-enable write access to the disk.
- You can attach a Hyperdisk ML volume to up to 100 instances during every 30-second interval.
- For Hyperdisk ML volumes, the maximum number of instances depends on the provisioned size, as follows:
- Volumes with up to 256 GiB of capacity: 2,500 instances
- Volumes with capacity from 257 GiB to 1 TiB: 600 instances
- Volumes with capacity between 1.001 TiB and 2 TiB: 300 instances
- Volumes with capacity between 2.001 TiB and 16 TiB: 128 instances
- Volumes with capacity of 16.001 TiB or more: 30 instances
If the volume is attached to more than 20 VMs, then you must provision at least100 MiB/s of throughput for each VM. For example, if you attach a disk to500 VMs, you must provision the volume with at least 50,000 MiB/s ofthroughput.
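This rule can be sketched as a small helper (illustrative, not a Google Cloud API); it assumes the 400 MiB/s volume-wide minimum still applies at 20 or fewer attached VMs:

```python
def min_shared_throughput(num_vms: int) -> int:
    """Minimum provisioned throughput (MiB/s) for a Hyperdisk ML volume
    attached to num_vms instances. The 100 MiB/s-per-VM rule applies
    only above 20 VMs; below that, this sketch assumes the ordinary
    400 MiB/s volume minimum is the only floor."""
    return max(400, 100 * num_vms) if num_vms > 20 else 400

# 500 attached VMs require at least 50,000 MiB/s, as in the example above.
print(min_shared_throughput(500))  # 50000
```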
To learn more, see Read-only mode for Hyperdisk.
Pricing
You're billed for the total provisioned size and throughput of your Hyperdisk ML volumes until you delete them. You incur charges even if a volume isn't attached to any instances, or if the instance it's attached to is suspended or stopped. For more information, see Disk pricing.
Limitations
- Hyperdisk ML volumes are zonal and can only be accessed from the zone where you created the volume.
- You can't create a machine image from a Hyperdisk volume.
- You can't create an instant snapshot from a Hyperdisk ML volume.
- You can't use Hyperdisk ML volumes as boot disks.
- You can't create a Hyperdisk ML disk in read-write-single mode from a snapshot or a disk image. You must create the disk in read-only-many mode.
- You can change a Hyperdisk ML volume's size every 4 hours,and its throughput every 6 hours.
What's next
Add a Hyperdisk ML volume to your VM
Last updated 2025-12-15 UTC.