Optimize Persistent Disk performance
Persistent Disks give you the performance described in the disk type chart if the VM drives usage that is sufficient to reach the performance limits. After you size your Persistent Disk volumes to meet your performance needs, your workload and operating system might need some tuning.
The following sections describe VM and workload characteristics that impact disk performance and discuss a few key elements that can be tuned for better performance. They also explain how to apply some of the suggestions to specific types of workloads.
Factors that affect disk performance
The following sections describe factors that impact disk performance for a VM.
- Network egress caps on write throughput
- Simultaneous reads and writes
- Logical volume size
- Multiple disks attached to a single VM instance
Network egress caps on write throughput
Your VM has a network egress cap that depends on the machine type of the VM.
Compute Engine stores data on Persistent Disk with multiple parallel writes to ensure built-in redundancy. Also, each write request has some overhead that uses additional write bandwidth.
The maximum write traffic that a VM instance can issue is the network egress cap divided by a bandwidth multiplier that accounts for the replication and overhead.
The network egress caps are listed in the Default egress bandwidth (Gbps) column in the machine type tables for the general purpose, compute-optimized, storage-optimized, memory-optimized, and accelerator-optimized machine families.
The bandwidth multiplier is approximately 1.16x at full network utilization, meaning that 16% of bytes written are overhead. For regional Persistent Disk, the bandwidth multiplier is approximately 2.32x to account for additional replication overhead.
In a situation where Persistent Disk read and write operations compete with network egress bandwidth, 60% of the maximum network egress bandwidth, defined by the machine type, is allocated to Persistent Disk writes. The remaining 40% is available for all other network egress traffic. Refer to egress bandwidth for details about other network egress traffic.
The following example shows how to calculate the maximum write bandwidth for a Persistent Disk on an N1 VM instance. The bandwidth allocation is the portion of network egress bandwidth allocated to Persistent Disk. The maximum write bandwidth is the maximum write bandwidth of the Persistent Disk adjusted for overhead.
| VM vCPU Count | Network egress cap (MB/s) | Bandwidth allocation (MB/s) | Maximum write bandwidth (MB/s) | Maximum write bandwidth at full network utilization (MB/s) |
|---|---|---|---|---|
| 1 | 250 | 150 | 216 | 129 |
| 2-7 | 1,250 | 750 | 1,078 | 647 |
| 8-15 | 2,000 | 1,200 | 1,724 | 1,034 |
| 16+ | 4,000 | 2,400 | 3,448 | 2,069 |
You can calculate the maximum Persistent Disk bandwidth using the following formulas:
N1 VM with 1 vCPU
The network egress cap is:
2 Gbps / 8 bits = 0.25 GB per second = 250 MB per second
Persistent Disk bandwidth allocation at full network utilization is:
250 MB per second * 0.6 = 150 MB per second.
Persistent Disk maximum write bandwidth with no network contention is:
- Zonal disks: 250 MB per second / 1.16 ~= 216 MB per second
- Regional disks: 250 MB per second / 2.32 ~= 108 MB per second
Persistent Disk maximum write bandwidth at full network utilization is:
- Zonal disks: 150 MB per second / 1.16 ~= 129 MB per second
- Regional disks: 150 MB per second / 2.32 ~= 65 MB per second
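The following shell sketch applies these formulas to an arbitrary egress cap. The 1.16x and 2.32x multipliers and the 60% allocation come from this section; the script itself and the example cap value are illustrative, not part of the product.

```bash
# Illustrative calculation of Persistent Disk write-bandwidth limits from a
# network egress cap (MB/s). Multipliers are taken from this section:
# 1.16x for zonal disks, 2.32x for regional disks, and a 60% allocation to
# disk writes when they compete with other network egress traffic.
egress_mb_per_s=250   # example: N1 VM with 1 vCPU (2 Gbps / 8 = 250 MB/s)

awk -v egress="$egress_mb_per_s" 'BEGIN {
  printf "Zonal, no contention:        %.0f MB/s\n", egress / 1.16
  printf "Regional, no contention:     %.0f MB/s\n", egress / 2.32
  printf "Zonal, full network use:     %.0f MB/s\n", (egress * 0.6) / 1.16
  printf "Regional, full network use:  %.0f MB/s\n", (egress * 0.6) / 2.32
}'
```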
The network egress limits provide an upper bound on performance. Other factors may limit performance below this level. See the following sections for information on other performance constraints.
Simultaneous reads and writes
For standard Persistent Disk, simultaneous reads and writes share the same resources. When your VM is using more read throughput or IOPS, it is able to perform fewer writes. Conversely, instances that use more write throughput or IOPS are able to perform fewer reads.
Persistent Disk volumes cannot simultaneously reach their maximum throughput and IOPS limits for both reads and writes.
The calculation for throughput is IOPS * I/O size. To take advantage of the maximum throughput limits for simultaneous reads and writes on SSD Persistent Disk, use an I/O size such that read and write IOPS combined don't exceed the IOPS limit.
The following table lists the IOPS limits per VM for simultaneous reads and writes.
| Standard Persistent Disk read | Standard Persistent Disk write | SSD Persistent Disk (8 vCPUs) read | SSD Persistent Disk (8 vCPUs) write | SSD Persistent Disk (32+ vCPUs) read | SSD Persistent Disk (32+ vCPUs) write | SSD Persistent Disk (64+ vCPUs) read | SSD Persistent Disk (64+ vCPUs) write |
|---|---|---|---|---|---|---|---|
| 7,500 | 0 | 15,000 | 0 | 60,000 | 0 | 100,000 | 0 |
| 5,625 | 3,750 | 11,250 | 3,750 | 45,000 | 15,000 | 75,000 | 25,000 |
| 3,750 | 7,500 | 7,500 | 7,500 | 30,000 | 30,000 | 50,000 | 50,000 |
| 1,875 | 11,250 | 3,750 | 11,250 | 15,000 | 45,000 | 25,000 | 75,000 |
| 0 | 15,000 | 0 | 15,000 | 0 | 60,000 | 0 | 100,000 |
The IOPS numbers in this table are based on an 8 KB I/O size. Other I/O sizes, such as 16 KB, might have different IOPS numbers but maintain the same read/write distribution.
The following table lists the throughput limits (MiB per second) per instance for simultaneous reads and writes.
| Standard Persistent Disk read | Standard Persistent Disk write | SSD Persistent Disk (6-14 vCPUs) read | SSD Persistent Disk (6-14 vCPUs) write | SSD Persistent Disk (16+ vCPUs) read | SSD Persistent Disk (16+ vCPUs) write |
|---|---|---|---|---|---|
| 1,200 | 0 | 800* | 800* | 1,200* | 1,200* |
| 900 | 100 | | | | |
| 600 | 200 | | | | |
| 300 | 300 | | | | |
| 0 | 400 | | | | |

\* SSD Persistent Disk can reach its maximum read throughput and maximum write throughput simultaneously, so there is no read/write tradeoff to list.
Logical volume size
Persistent Disk can be up to 64 TiB in size, and you can create single logical volumes of up to 257 TiB using logical volume management inside your VM. A larger volume size impacts performance in the following ways:
- Not all local file systems work well at this scale. Common operations, such as mounting and file system checking, might take longer than expected.
- Maximum Persistent Disk performance is achieved at smaller sizes. Disks take longer to fully read or write with this much storage on one VM. If your application supports it, consider using multiple VMs for greater total-system throughput.
- Snapshotting large amounts of Persistent Disk storage might take longer than expected to complete, and might provide an inconsistent view of your logical volume without careful coordination with your application.
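If you do build a large logical volume, the sketch below shows one way to do it with LVM. It is illustrative only: the device names (/dev/sdb through /dev/sde), the volume group, logical volume, and mount point names are assumptions, and the stripe settings should be tuned for your workload.

```bash
# Combine four attached Persistent Disks into one striped logical volume.
# Device names, group/volume names, and the mount point are illustrative.
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Stripe across all four physical volumes with a 4 MB stripe size.
sudo lvcreate --extents 100%FREE --stripes 4 --stripesize 4M \
    --name lv_data vg_data

# Format without lazy initialization and with discard, then mount.
sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0,discard \
    /dev/vg_data/lv_data
sudo mkdir -p /mnt/disks/data
sudo mount -o discard /dev/vg_data/lv_data /mnt/disks/data
```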
Multiple disks attached to a single VM instance
The performance limits of disks when you have multiple disks attached to a VM depend on whether the disks are of the same type or different types.
Multiple disks of the same type
If you have multiple disks of the same type attached to a VM instance in the same mode (for example, read/write), the performance limits are the same as the limits of a single disk that has the combined size of those disks. If you use all the disks at 100%, the aggregate performance limit is split evenly among the disks regardless of relative disk size.
For example, suppose you have a 200 GB pd-standard disk and a 1,000 GB pd-standard disk. If you don't use the 1,000 GB disk, then the 200 GB disk can reach the performance limit of a 1,200 GB standard disk. If you use both disks at 100%, then each has the performance limit of a 600 GB pd-standard disk (1,200 GB / 2 disks = 600 GB per disk).
Multiple disks of different types
If you attach different types of disks to a VM, the maximum possible performance is the performance limit of the fastest disk that the VM supports. The cumulative performance of the attached disks does not exceed the performance limits of the fastest disk the VM supports.
Optimize your disks for IOPS-oriented or throughput-oriented workloads
Performance recommendations depend on whether you want to maximize IOPS or throughput.
IOPS-oriented workloads
Databases, whether SQL or NoSQL, have usage patterns of random access to data. Google recommends the following values for IOPS-oriented workloads:
- An I/O queue depth value of 1 for each 400 to 800 IOPS, up to a limit of 64 on large volumes.
- One free CPU for every 2,000 random read IOPS and one free CPU for every 2,500 random write IOPS.
- If available for your VM machine type, use Google Cloud Hyperdisk Extreme disks, which enable you to change the provisioned IOPS.

Lower readahead values are typically suggested in best practices documents for MongoDB, Apache Cassandra, and other database applications.
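For example, the following commands check and then lower the readahead value for a data disk. The device name and the value 8 (in 512-byte sectors, that is, 4 KB) are assumptions for illustration; use the value recommended by your database's documentation.

```bash
# Check the current readahead value (in 512-byte sectors) for the disk.
sudo blockdev --getra /dev/sdb
# Lower it to 8 sectors (4 KB); the device and value are illustrative.
sudo blockdev --setra 8 /dev/sdb
```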
Throughput-oriented workloads
Streaming operations, such as a Hadoop job, benefit from fast sequential reads, and larger I/O sizes can increase streaming performance.
- Use an I/O size of 256 KB or larger.
- If available for your VM machine type, use Hyperdisk Throughput disks, which enable you to change the provisioned throughput.
- For standard Persistent Disk, use 8 or more parallel sequential I/O streams when possible. Standard Persistent Disk is designed to optimize I/O performance for sequential disk access, similar to a physical HDD hard drive.
- Make sure your application is optimized for a reasonable data locality on large disks. If your application accesses data that is distributed across different parts of a disk over a short period of time (hundreds of GB per vCPU), you won't achieve optimal IOPS. For best performance, optimize for data locality, weighing factors like the fragmentation of the disk and the randomness of accessed parts of the disk.
For SSD Persistent Disk, make sure the I/O scheduler in the operating system is configured to meet your specific needs.
On Linux-based systems, check if the I/O scheduler is set to none. This I/O scheduler doesn't reorder requests and is ideal for fast, random I/O devices.
On the command line, verify the I/O scheduler that is used by your Linux machine:

cat /sys/block/sda/queue/scheduler

The output is similar to the following:

[mq-deadline] none

The I/O scheduler that is currently active is displayed in square brackets ([]). If your I/O scheduler is not set to none, perform one of the following steps:

- To change your default I/O scheduler to none, set elevator=none in the GRUB_CMDLINE_LINUX entry of the GRUB configuration file. Usually this file is located in /etc/default/grub, but on some earlier distributions, it might be located in a different directory.

  GRUB_CMDLINE_LINUX="elevator=none vconsole.keymap=us console=ttyS0,38400n8 vconsole.font=latarcyrheb-sun16"

  After updating the GRUB configuration file, configure the bootloader on the system so that it can boot on Compute Engine.

- Alternatively, you can change the I/O scheduler at runtime:

  echo 'none' | sudo tee /sys/block/sda/queue/scheduler

  If you use this method, the system switches back to the default I/O scheduler on reboot. Run the cat command again to verify your I/O scheduler.
Workload changes that can improve disk performance
Certain workload behaviors can improve the performance of I/O operations on the attached disks.
Use a high I/O queue depth
Persistent Disks have higher latency than locally attached disks such as Local SSD disks because they are network-attached devices. They can provide very high IOPS and throughput, but you must make sure that sufficient I/O requests are done in parallel. The number of I/O requests done in parallel is referred to as the I/O queue depth.
The following tables list the recommended I/O queue depth to ensure you can achieve a certain performance level. The tables use a slight overestimate of typical latency in order to show conservative recommendations. The example assumes that you are using an I/O size of 16 KB.
For SSD, Balanced, and Extreme Persistent Disk:
| Desired IOPS | Queue depth |
|---|---|
| 500 | 1 |
| 1,000 | 2 |
| 2,000 | 4 |
| 4,000 | 8 |
| 8,000 | 16 |
| 16,000 | 32 |
| 32,000 | 64 |
| 64,000 | 128 |
| 100,000 | 200 |
| Desired throughput (MB/s) | Queue depth |
|---|---|
| 8 | 1 |
| 16 | 2 |
| 32 | 4 |
| 64 | 8 |
| 128 | 16 |
| 256 | 32 |
| 512 | 64 |
| 1,000 | 128 |
| 1,200 | 153 |
For standard Persistent Disk:
| Desired IOPS | Queue depth |
|---|---|
| 200 | 1 |
| 400 | 2 |
| 800 | 4 |
| 1,600 | 8 |
| 3,200 | 16 |
| 6,400 | 32 |
| 12,800 | 64 |
| 15,000 | 75 |
| Desired throughput (MB/s) | Queue depth |
|---|---|
| 3.2 | 1 |
| 6.4 | 2 |
| 12.8 | 4 |
| 25.6 | 8 |
| 51.2 | 16 |
| 102.4 | 32 |
| 204.8 | 64 |
| 400 | 125 |
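As an illustration, the following fio command drives a random-read workload at a queue depth of 64 with 16 KB I/Os, roughly the operating point that the SSD Persistent Disk table above associates with about 32,000 IOPS. The target directory, file size, and runtime are assumptions; adjust them for your environment.

```bash
# Illustrative fio run: random 16 KB reads at an I/O queue depth of 64.
# The target directory, file size, and runtime are assumptions.
sudo fio --name=queue-depth-test --directory=/mnt/disks/data \
    --rw=randread --bs=16k --iodepth=64 --ioengine=libaio --direct=1 \
    --size=10G --runtime=60 --time_based --group_reporting
```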
Generate enough I/Os using large I/O size
Use large I/O size
To ensure IOPS limits and latency don't bottleneck your application performance, use an I/O size of at least 256 KB.
Use large stripe sizes for distributed file system applications. A random I/O workload using large stripe sizes (4 MB or larger) achieves good performance on standard Persistent Disk, because the workload closely mimics multiple sequential stream disk access.
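For example, the following fio command issues large sequential reads across 8 parallel streams, the access pattern described earlier for throughput-oriented workloads on standard Persistent Disk. The directory, block size, file size, and runtime are assumptions for illustration.

```bash
# Illustrative fio run: 8 parallel sequential-read streams with 1 MB I/Os.
# The target directory, file size, and runtime are assumptions.
sudo fio --name=seq-read-test --directory=/mnt/disks/data \
    --rw=read --bs=1M --iodepth=16 --ioengine=libaio --direct=1 \
    --numjobs=8 --size=10G --runtime=120 --time_based --group_reporting
```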
Make sure your application is generating enough I/O
Make sure your application is generating enough I/Os to fully use the IOPS and throughput limits of the disk. To better understand your workload I/O pattern, review disk usage and performance metrics in Cloud Monitoring.
Make sure there is enough available CPU on the instance that is generating the I/O
If your VM instance is starved for CPU, your app won't be able to manage the IOPS described earlier. We recommend that you have one available CPU for every 2,000–2,500 IOPS of expected traffic.
Limit heavy I/O loads to a maximum span
A span refers to a contiguous range of logical block addresses on a single physical disk. Heavy I/O loads achieve maximum performance when limited to a certain maximum span, which depends on the machine type of the VM to which the disk is attached, as listed in the following table.
| Machine type | Recommended maximum span |
|---|---|
| 25 TB |
| All other machine types | 50 TB |
Spans on separate Persistent Disks that add up to 50 TB or less can be considered equal to a single 50 TB span for performance purposes.
Operating system changes to improve disk performance
In some cases, you can enable or disable features at the operating system level, or configure the attached disks in specific ways, to improve the disk performance.
Avoid using ext3 file systems in Linux
Using the ext3 file system in a Linux VM can result in very poor performance under heavy write loads. Use ext4 when possible. The ext4 file system driver is backward compatible with ext3/ext2 and supports mounting ext3 file systems. The ext4 file system is the default on most Linux operating systems.
If you can't migrate to ext4, as a workaround you can mount ext3 file systems with the data=journal mount option. This improves write IOPS at the cost of write throughput. Migrating to ext4 can result in up to a 7x improvement in some benchmarks.
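If you take the ext3 workaround, a minimal sketch looks like the following. The device name and mount point are assumptions, and the data mode is set when the file system is mounted rather than changed on a live file system.

```bash
# Illustrative ext3 workaround: mount with data=journal.
# Device and mount point are assumptions.
sudo umount /mnt/disks/legacy            # the data mode is set at mount time
sudo mount -o data=journal /dev/sdb1 /mnt/disks/legacy
```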
Disable lazy initialization and enable DISCARD commands
Persistent Disks support discard operations, or TRIM commands, which allow operating systems to inform the disks when blocks are no longer in use. Discard support allows the operating system to mark disk blocks as no longer needed, without incurring the cost of zeroing out the blocks.
On most Linux operating systems, you enable discard operations when you mount a Persistent Disk on your VM. Windows Server 2012 R2 VMs enable discard operations by default when you mount a Persistent Disk.
Enabling discard operations can boost general runtime performance, and it can also speed up the performance of your disk when it is first mounted. Formatting an entire disk volume can be time consuming, so lazy formatting is a common practice. The downside of lazy formatting is that the cost is often then paid the first time the volume is mounted. By disabling lazy initialization and enabling discard operations, you can get fast format and mount operations.
Disable lazy initialization and enable discard operations when formatting a disk by passing the following parameters to mkfs.ext4:

-E lazy_itable_init=0,lazy_journal_init=0,discard

The lazy_journal_init=0 parameter does not work on instances with CentOS 6 or RHEL 6 images. For VMs that use those operating systems, format the Persistent Disk without that parameter:

-E lazy_itable_init=0,discard

Enable discard operations when mounting a disk by passing the following flag to the mount command:

-o discard

Persistent Disk works well with discard operations enabled. However, you can optionally run fstrim periodically in addition to, or instead of, using discard operations. If you do not use discard operations, run fstrim before you create a snapshot of your boot disk. Trimming the file system lets you create smaller snapshot images, which reduces the cost of storing snapshots.
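For reference, a minimal fstrim invocation looks like the following; the mount point is an assumption for illustration.

```bash
# Trim unused blocks on a mounted file system (mount point is illustrative).
sudo fstrim -v /mnt/disks/data
```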
Adjust the readahead value
To improve I/O performance, operating systems employ techniques such as readahead, where more of a file than was requested is read into memory with the assumption that subsequent reads are likely to need that data. Higher readahead increases throughput at the expense of memory and IOPS. Lower readahead increases IOPS at the expense of throughput.
On Linux systems, you can get and set the readahead value with the blockdev command:

$ sudo blockdev --getra /dev/DEVICE_ID
$ sudo blockdev --setra VALUE /dev/DEVICE_ID

The readahead value is <desired_readahead_bytes> / 512 bytes.
For example, for an 8 MB readahead, 8 MB is 8388608 bytes (8 * 1024 * 1024).

8388608 bytes / 512 bytes = 16384

You set blockdev to 16384:

$ sudo blockdev --setra 16384 /dev/DEVICE_ID

Modify your VM or create a new VM
There are limits associated with each VM machine type that can impact the performance you can get from the attached disks. These limits include:
- Persistent Disk performance increases as the number of available vCPUs increases.
- Hyperdisk isn't supported with all machine types.
- Network egress rates increase as the number of available vCPUs increases.
Ensure you have free CPUs
Reading and writing to Persistent Disk volumes requires CPU cycles from your VM. To achieve very high, consistent IOPS levels, you must have CPUs free to process I/O.
To increase the number of vCPUs available with your VM, you can create a new VM, or you can edit the machine type of a VM instance.
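As a sketch, changing the machine type with the gcloud CLI looks like the following. The instance name, zone, and target machine type are assumptions, and the VM must be stopped before its machine type can change.

```bash
# Illustrative machine-type change; instance name, zone, and machine type
# are assumptions. The VM must be stopped before changing its machine type.
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-machine-type my-vm \
    --zone=us-central1-a --machine-type=n2-standard-16
gcloud compute instances start my-vm --zone=us-central1-a
```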
Consider using Google Cloud Hyperdisk
For higher IOPS and throughput, consider using Hyperdisk volumes instead of Persistent Disk if your machine series supports Hyperdisk.
To determine if your machine series supports Hyperdisk, see Machine series support for Hyperdisk.
If your instance's machine series supports Hyperdisk, follow these steps to use Hyperdisk volumes:

1. Choose a Hyperdisk type that meets your workload's needs.
2. To switch to the Hyperdisk type you chose, see the instructions in Change the disk type.
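One possible shape of that migration with the gcloud CLI is sketched below: snapshot the existing disk, then create a Hyperdisk volume from the snapshot. The disk and snapshot names, zone, size, disk type, and provisioned IOPS value are assumptions; follow Change the disk type for the complete, supported procedure.

```bash
# Illustrative migration sketch; names, zone, size, type, and IOPS are
# assumptions. See "Change the disk type" for the full procedure.
gcloud compute snapshots create my-disk-snapshot \
    --source-disk=my-disk --source-disk-zone=us-central1-a
gcloud compute disks create my-hyperdisk \
    --type=hyperdisk-extreme --size=500GB --provisioned-iops=50000 \
    --source-snapshot=my-disk-snapshot --zone=us-central1-a
```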
Change the instance's machine series to improve performance
New VM machine series typically run on newer CPUs, which can offer better performance than their predecessors. Also, newer CPUs can support additional functionality that improves the performance of your workloads, such as Advanced Matrix Extensions (AMX) or Intel Advanced Vector Extensions (AVX-512).
What's next
- Monitor your disk's performance by reviewing disk performance metrics and monitoring disk health.
- Benchmark Persistent Disk volumes attached to Linux VMs.
- Learn about Persistent Disk pricing.