Benchmark Persistent Disk performance on a Linux VM


This document describes how to benchmark Persistent Disk performance on Linux virtual machines (VMs). For Windows VMs, see Benchmark persistent disk performance on a Windows VM.

To benchmark Persistent Disk performance on Linux, use Flexible I/O tester (FIO) instead of other disk benchmarking tools such as dd. By default, dd uses a very low I/O queue depth, and might not accurately test disk performance. In general, avoid using special devices such as /dev/urandom, /dev/random, and /dev/zero in your Persistent Disk performance benchmarks.

To measure IOPS and throughput of a disk in use on a running instance, benchmark the file system with its intended configuration. Use this option to test a realistic workload without losing the contents of your existing disk. Note that when you benchmark the file system on an existing disk, there are many factors specific to your development environment that may affect benchmarking results, and you may not reach the disk performance limits.

To measure the raw performance of a persistent disk, benchmark the block device directly. Use this option to compare raw disk performance to disk performance limits.

The following commands work with Debian or Ubuntu operating systems with the apt package manager.

Benchmarking IOPS and throughput of a disk on a running instance

If you want to measure IOPS and throughput for a realistic workload on an active disk on a running instance without losing the contents of your disk, benchmark against a new directory on the existing file system. Each fio test runs for five minutes.

  1. Connect to your instance.

  2. Install dependencies:

    sudo apt update
    sudo apt install -y fio
  3. In the terminal, list the disks that are attached to your VM and find the disk that you want to test. If your persistent disk is not yet formatted, format and mount the disk.

    sudo lsblk
    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda      8:0    0   10G  0 disk
    └─sda1   8:1    0   10G  0 part /
    sdb      8:32   0  2.5T  0 disk /mnt/disks/mnt_dir

    In this example, we test a 2,500 GB SSD persistent disk with device ID sdb.

  4. Create a new directory, fiotest, on the disk. In this example, the disk is mounted at /mnt/disks/mnt_dir:

    TEST_DIR=/mnt/disks/mnt_dir/fiotest
    sudo mkdir -p $TEST_DIR
  5. Test write throughput by performing sequential writes with multiple parallel streams (16+), using an I/O block size of 1 MB and an I/O depth of at least 64:

    sudo fio --name=write_throughput --directory=$TEST_DIR --numjobs=16 \
      --size=10G --time_based --runtime=5m --ramp_time=2s --ioengine=libaio \
      --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write \
      --group_reporting=1 --iodepth_batch_submit=64 \
      --iodepth_batch_complete_max=64
  6. Test write IOPS by performing random writes, using an I/O block size of 4 KB and an I/O depth of at least 256:

    sudo fio --name=write_iops --directory=$TEST_DIR --size=10G \
      --time_based --runtime=5m --ramp_time=2s --ioengine=libaio --direct=1 \
      --verify=0 --bs=4K --iodepth=256 --rw=randwrite --group_reporting=1 \
      --iodepth_batch_submit=256 --iodepth_batch_complete_max=256
  7. Test read throughput by performing sequential reads with multiple parallel streams (16+), using an I/O block size of 1 MB and an I/O depth of at least 64:

    sudo fio --name=read_throughput --directory=$TEST_DIR --numjobs=16 \
      --size=10G --time_based --runtime=5m --ramp_time=2s --ioengine=libaio \
      --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=read \
      --group_reporting=1 \
      --iodepth_batch_submit=64 --iodepth_batch_complete_max=64
  8. Test read IOPS by performing random reads, using an I/O block size of 4 KB and an I/O depth of at least 256:

    sudo fio --name=read_iops --directory=$TEST_DIR --size=10G \
      --time_based --runtime=5m --ramp_time=2s --ioengine=libaio --direct=1 \
      --verify=0 --bs=4K --iodepth=256 --rw=randread --group_reporting=1 \
      --iodepth_batch_submit=256 --iodepth_batch_complete_max=256
  9. Clean up:

    sudo rm $TEST_DIR/write* $TEST_DIR/read*
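
To compare the numbers from these tests against the documented limits, it can help to capture fio's machine-readable output by adding --output-format=json to each command and redirecting it to a file. The sketch below assumes fio's JSON schema (a jobs array with per-direction iops and bw fields, where bw is reported in KiB/s); the sample file is an abbreviated stand-in with illustrative values, not real measurements.

```shell
# Abbreviated stand-in for fio's --output-format=json output.
# The numbers are illustrative only.
cat > results.json <<'EOF'
{"jobs": [{"jobname": "write_iops",
           "write": {"iops": 14980.5, "bw": 59922}}]}
EOF

# Pull the headline numbers out of the JSON report.
python3 - <<'EOF'
import json

with open("results.json") as f:
    data = json.load(f)

job = data["jobs"][0]
# fio reports bandwidth (bw) in KiB/s in its JSON output.
print("%s: %.0f IOPS, %.1f MiB/s"
      % (job["jobname"], job["write"]["iops"], job["write"]["bw"] / 1024))
EOF
```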

Benchmarking raw persistent disk performance

If you want to measure the performance of persistent disks alone, outside of your development environment, test read and write performance for a block device on a throwaway persistent disk and VM. Each fio test runs for five minutes.

The following commands assume a 2,500 GB SSD persistent disk attached to your VM. If your device size is different, modify the value of the --filesize argument. This disk size is necessary to reach the throughput limits of a VM with 32 vCPUs. For more information, see Block storage performance.
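
The 16-job bandwidth tests in this procedure space each job's starting offset with --offset_increment=100G. If you benchmark a different disk size or job count, one quick way to keep all start offsets inside the disk is to divide capacity by job count. This is a convenience sketch, not something fio provides; disk_gb and numjobs are the example values from this page.

```shell
# Sketch: compute a --offset_increment that spreads N jobs' start
# offsets evenly across the disk without running past the end.
disk_gb=2500
numjobs=16
offset_gb=$(awk -v d="$disk_gb" -v n="$numjobs" \
    'BEGIN { printf "%d", int(d / n) }')
echo "--offset_increment=${offset_gb}G"
```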

Warning: The commands in this section overwrite the contents of /dev/sdb. We strongly recommend using a throwaway VM and disk.
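
Because every fio command in this procedure is destructive, a small pre-flight check can save the wrong disk from being overwritten. The helper below is illustrative only and not part of fio; the check_target name and the /dev/sdb default are assumptions for this example.

```shell
# Illustrative guard: refuse to run destructive tests if the target
# device is missing, or if it (or one of its partitions) is mounted.
check_target() {
  dev=$1
  if [ ! -b "$dev" ]; then
    echo "no such block device: $dev"
    return 1
  fi
  # Matches the device itself and any partition entries like /dev/sdb1.
  if grep -q "^$dev" /proc/mounts; then
    echo "refusing: $dev is mounted"
    return 1
  fi
  echo "ok to benchmark $dev"
}

check_target /dev/sdb || true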
  1. Create and start a VM instance.

  2. Add the Persistent Disk that you intend to benchmark to your VM instance.

  3. Connect to your instance.

  4. Install dependencies:

    sudo apt-get update
    sudo apt-get install -y fio
  5. Fill the disk with nonzero data. Persistent disk reads from empty blocks have a latency profile that is different from blocks that contain data. We recommend filling the disk before running any read latency benchmarks.

    # Running this command causes data loss on the second device.
    # We strongly recommend using a throwaway VM and disk.
    sudo fio --name=fill_disk \
      --filename=/dev/sdb --filesize=2500G \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=128K --iodepth=64 --rw=randwrite \
      --iodepth_batch_submit=64 --iodepth_batch_complete_max=64
  6. Test write bandwidth by performing sequential writes with multiple parallel streams (16+), using 1 MB as the I/O size and an I/O depth of at least 64.

    # Running this command causes data loss on the second device.
    # We strongly recommend using a throwaway VM and disk.
    sudo fio --name=write_bandwidth_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=5m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=1M --iodepth=64 --iodepth_batch_submit=64 --iodepth_batch_complete_max=64 \
      --rw=write --numjobs=16 --offset_increment=100G
  7. Test write IOPS. To achieve maximum PD IOPS, you must maintain a deep I/O queue. If, for example, the write latency is 1 millisecond, the VM can achieve, at most, 1,000 IOPS for each I/O in flight. To achieve 15,000 write IOPS, the VM must maintain at least 15 I/Os in flight. If your disk and VM are able to achieve 30,000 write IOPS, the number of I/Os in flight must be at least 30. If the I/O size is larger than 4 KB, the VM might reach the bandwidth limit before it reaches the IOPS limit.

    # Running this command causes data loss on the second device.
    # We strongly recommend using a throwaway VM and disk.
    sudo fio --name=write_iops_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=5m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=4K --iodepth=256 --rw=randwrite \
      --iodepth_batch_submit=256 --iodepth_batch_complete_max=256
  8. Test write latency. While testing I/O latency, the VM must not reach maximum bandwidth or IOPS; otherwise, the observed latency won't reflect actual persistent disk I/O latency. For example, if the IOPS limit is reached at an I/O depth of 30 and the fio command specifies double that depth, the total IOPS remains the same and the reported I/O latency doubles.

    # Running this command causes data loss on the second device.
    # We strongly recommend using a throwaway VM and disk.
    sudo fio --name=write_latency_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=5m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=4K --iodepth=4 --rw=randwrite --iodepth_batch_submit=4 \
      --iodepth_batch_complete_max=4
  9. Test read bandwidth by performing sequential reads with multiple parallel streams (16+), using 1 MB as the I/O size and an I/O depth of at least 64.

    sudo fio --name=read_bandwidth_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=5m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=1M --iodepth=64 --rw=read --numjobs=16 --offset_increment=100G \
      --iodepth_batch_submit=64 --iodepth_batch_complete_max=64
  10. Test read IOPS. To achieve the maximum PD IOPS, you must maintain a deep I/O queue. If the I/O size is larger than 4 KB, the VM might reach the bandwidth limit before it reaches the IOPS limit. To achieve the maximum of 100,000 read IOPS, specify --iodepth=256 for this test.

    sudo fio --name=read_iops_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=5m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=4K --iodepth=256 --rw=randread \
      --iodepth_batch_submit=256 --iodepth_batch_complete_max=256
  11. Test read latency. It's important to fill the disk with data to get a realistic latency measurement. The VM must not reach IOPS or throughput limits during this test, because after the persistent disk reaches its saturation limit, it pushes back on incoming I/Os, which appears as an artificial increase in I/O latency.

    sudo fio --name=read_latency_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=5m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --bs=4K --iodepth=4 --rw=randread \
      --iodepth_batch_submit=4 --iodepth_batch_complete_max=4
  12. Test sequential read bandwidth.

    sudo fio --name=read_bandwidth_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=5m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --numjobs=4 --thread --offset_increment=500G \
      --bs=1M --iodepth=64 --rw=read \
      --iodepth_batch_submit=64 --iodepth_batch_complete_max=64
  13. Test sequential write bandwidth.

    sudo fio --name=write_bandwidth_test \
      --filename=/dev/sdb --filesize=2500G \
      --time_based --ramp_time=2s --runtime=5m \
      --ioengine=libaio --direct=1 --verify=0 --randrepeat=0 \
      --numjobs=4 --thread --offset_increment=500G \
      --bs=1M --iodepth=64 --rw=write \
      --iodepth_batch_submit=64 --iodepth_batch_complete_max=64
  14. Clean up the throwaway Persistent Disk and VM:

    1. Delete the disk used for benchmarking performance.
    2. Delete the VM created for benchmarking performance.
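
The queue-depth rule of thumb from the write IOPS step (step 7) can be sketched numerically: the number of I/Os that must be in flight is roughly the target IOPS multiplied by the per-I/O latency. The figures below are the examples from the text, not measurements.

```shell
# Rule of thumb: required in-flight I/Os = target IOPS x per-I/O
# latency (in seconds). 15,000 IOPS at 1 ms latency needs >= 15.
target_iops=15000
latency_ms=1
min_depth=$(awk -v iops="$target_iops" -v lat="$latency_ms" \
    'BEGIN { printf "%d", iops * lat / 1000 }')
echo "need at least ${min_depth} I/Os in flight for ${target_iops} IOPS at ${latency_ms} ms"
```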



Last updated 2025-12-15 UTC.