Benchmarking Local SSD performance

Local SSD performance limits provided in the Choose a storage option section were achieved by using specific settings on the Local SSD instance. If your virtual machine (VM) instance is having trouble reaching these performance limits and you have already configured the instance using the recommended Local SSD settings, you can compare your measured performance against the published limits by replicating the settings used by the Compute Engine team.

Warning: The script used in this section is intended for benchmarking and performance comparisons only and is not intended to optimize your disk for performance. We strongly recommend against running this script on a VM with a Local SSD where you want to keep the data, because this script discards any data on your Local SSD.

These instructions assume that you are using a Linux operating system with the apt package manager installed.

Note: These instructions create Local SSDs connected to an instance using the NVMe interface. If you use the SCSI interface instead, replace the disk location /dev/disk/by-id/google-local-nvme-ssd-0 with /dev/disk/by-id/google-local-ssd-0 in the following commands.
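The substitution described in the note is mechanical, so it can also be scripted. A minimal sketch; the path below is an illustrative example, not read from a real VM:

```shell
# Rewrite an NVMe Local SSD path to its SCSI equivalent by dropping the "nvme-" part.
# The example path is illustrative.
nvme_path=/dev/disk/by-id/google-local-nvme-ssd-0
echo "$nvme_path" | sed 's/-nvme-/-/'
```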

Create a VM with one Local SSD device

The number of Local SSD disks that a VM can have is based on the machine type you use to create the VM. For details, see Choosing a valid number of Local SSDs.

  1. Create a Local SSD instance that has four or eight vCPUs for each device, depending on your workload.

    For example, the following command creates a C3 VM with 4 vCPUs and 1 Local SSD.

    gcloud compute instances create c3-ssd-test-instance \
        --machine-type "c3-standard-4-lssd"

    For second generation and earlier machine types, you specify the number of Local SSD disks to attach to the VM using the --local-ssd flag. The following command creates an N2 VM with 8 vCPUs and 1 Local SSD that uses the NVMe disk interface:

    gcloud compute instances create ssd-test-instance \
        --machine-type "n2-standard-8" \
        --local-ssd interface=nvme
  2. Run the following script on your VM. The script replicates the settings used to achieve the SSD performance figures provided in the performance section. Note that the --bs parameter defines the block size, which affects the results for different types of read and write operations.

    # install tools
    sudo apt-get -y update
    sudo apt-get install -y fio util-linux

    # discard Local SSD sectors
    sudo blkdiscard /dev/disk/by-id/google-local-nvme-ssd-0

    # full write pass - measures write bandwidth with 1M blocksize
    sudo fio --name=writefile \
    --filename=/dev/disk/by-id/google-local-nvme-ssd-0 --bs=1M --nrfiles=1 \
    --direct=1 --sync=0 --randrepeat=0 --rw=write --end_fsync=1 \
    --iodepth=128 --ioengine=libaio

    # rand read - measures max read IOPS with 4k blocks
    sudo fio --time_based --name=readbenchmark --runtime=30 --ioengine=libaio \
    --filename=/dev/disk/by-id/google-local-nvme-ssd-0 --randrepeat=0 \
    --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
    --numjobs=4 --rw=randread --blocksize=4k --group_reporting

    # rand write - measures max write IOPS with 4k blocks
    sudo fio --time_based --name=writebenchmark --runtime=30 --ioengine=libaio \
    --filename=/dev/disk/by-id/google-local-nvme-ssd-0 --randrepeat=0 \
    --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
    --numjobs=4 --rw=randwrite --blocksize=4k --group_reporting
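As a rough way to interpret the --bs trade-off, sequential bandwidth is approximately IOPS multiplied by block size, so small blocks maximize IOPS while large blocks maximize throughput. A minimal sketch with illustrative numbers (not published Local SSD limits):

```shell
# Approximate bandwidth implied by an IOPS figure and a block size.
# The IOPS value below is illustrative, not a published limit.
iops=680000
block_kib=4
echo "$(( iops * block_kib / 1024 )) MiB/s"
```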

Create a VM with the maximum number of Local SSD disks

  1. If you want to attach 24 or more Local SSD devices to an instance, use a machine type with 32 or more vCPUs.

    The following commands create a VM with the maximum allowed number of Local SSD disks using the NVMe interface:

    Attach Local SSD to VM

    gcloud compute instances create ssd-test-instance \
        --machine-type "n1-standard-32" \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme \
        --local-ssd interface=nvme

    Use -lssd machine types

    Newer machine series offer -lssd machine types that come with a predetermined number of Local SSD disks. For example, to benchmark a VM with 32 Local SSD disks (12 TiB capacity), use the following command:

    gcloud compute instances create ssd-test-instance \
        --machine-type "c3-standard-176-lssd"
  2. Install the mdadm tool. The install process for mdadm includes a user prompt that halts scripts, so run the process manually:

    Debian and Ubuntu

    sudo apt update && sudo apt install mdadm --no-install-recommends

    CentOS and RHEL

    sudo yum install mdadm -y

    SLES and openSUSE

    sudo zypper install -y mdadm
  3. Use the find command to identify all of the Local SSDs that you want to mount together:

    find /dev/ | grep google-local-nvme-ssd

    The output looks similar to the following:

    /dev/disk/by-id/google-local-nvme-ssd-23
    /dev/disk/by-id/google-local-nvme-ssd-22
    /dev/disk/by-id/google-local-nvme-ssd-21
    /dev/disk/by-id/google-local-nvme-ssd-20
    /dev/disk/by-id/google-local-nvme-ssd-19
    /dev/disk/by-id/google-local-nvme-ssd-18
    /dev/disk/by-id/google-local-nvme-ssd-17
    /dev/disk/by-id/google-local-nvme-ssd-16
    /dev/disk/by-id/google-local-nvme-ssd-15
    /dev/disk/by-id/google-local-nvme-ssd-14
    /dev/disk/by-id/google-local-nvme-ssd-13
    /dev/disk/by-id/google-local-nvme-ssd-12
    /dev/disk/by-id/google-local-nvme-ssd-11
    /dev/disk/by-id/google-local-nvme-ssd-10
    /dev/disk/by-id/google-local-nvme-ssd-9
    /dev/disk/by-id/google-local-nvme-ssd-8
    /dev/disk/by-id/google-local-nvme-ssd-7
    /dev/disk/by-id/google-local-nvme-ssd-6
    /dev/disk/by-id/google-local-nvme-ssd-5
    /dev/disk/by-id/google-local-nvme-ssd-4
    /dev/disk/by-id/google-local-nvme-ssd-3
    /dev/disk/by-id/google-local-nvme-ssd-2
    /dev/disk/by-id/google-local-nvme-ssd-1
    /dev/disk/by-id/google-local-nvme-ssd-0

    find does not guarantee an ordering. It's fine if the devices are listed in a different order, as long as the number of output lines matches the expected number of SSD devices.

    If you are using SCSI devices, use the following find command:

    find /dev/ | grep google-local-ssd

    NVMe devices are all of the form google-local-nvme-ssd-#, and SCSI devices are all of the form google-local-ssd-#.

  4. Use the mdadm tool to combine multiple Local SSD devices into a single array named /dev/md0. The following example merges twenty-four Local SSD devices that use the NVMe interface. For Local SSD devices that use SCSI, use the device names returned from the find command in step 3.

    sudo mdadm --create /dev/md0 --level=0 --raid-devices=24 \
        /dev/disk/by-id/google-local-nvme-ssd-0 \
        /dev/disk/by-id/google-local-nvme-ssd-1 \
        /dev/disk/by-id/google-local-nvme-ssd-2 \
        /dev/disk/by-id/google-local-nvme-ssd-3 \
        /dev/disk/by-id/google-local-nvme-ssd-4 \
        /dev/disk/by-id/google-local-nvme-ssd-5 \
        /dev/disk/by-id/google-local-nvme-ssd-6 \
        /dev/disk/by-id/google-local-nvme-ssd-7 \
        /dev/disk/by-id/google-local-nvme-ssd-8 \
        /dev/disk/by-id/google-local-nvme-ssd-9 \
        /dev/disk/by-id/google-local-nvme-ssd-10 \
        /dev/disk/by-id/google-local-nvme-ssd-11 \
        /dev/disk/by-id/google-local-nvme-ssd-12 \
        /dev/disk/by-id/google-local-nvme-ssd-13 \
        /dev/disk/by-id/google-local-nvme-ssd-14 \
        /dev/disk/by-id/google-local-nvme-ssd-15 \
        /dev/disk/by-id/google-local-nvme-ssd-16 \
        /dev/disk/by-id/google-local-nvme-ssd-17 \
        /dev/disk/by-id/google-local-nvme-ssd-18 \
        /dev/disk/by-id/google-local-nvme-ssd-19 \
        /dev/disk/by-id/google-local-nvme-ssd-20 \
        /dev/disk/by-id/google-local-nvme-ssd-21 \
        /dev/disk/by-id/google-local-nvme-ssd-22 \
        /dev/disk/by-id/google-local-nvme-ssd-23

    The response is similar to the following:

    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.

    You can confirm the details of the array with mdadm --detail. Adding the --prefer=by-id flag lists the devices using the /dev/disk/by-id paths.

    sudo mdadm --detail --prefer=by-id /dev/md0

    The output should look similar to the following for each device in the array.

    ...
    Number   Major   Minor   RaidDevice   State
       0     259         0            0   active sync   /dev/disk/by-id/google-local-nvme-ssd-0
    ...
  5. Run the following script on your VM. The script replicates the settings used to achieve the SSD performance figures provided in the performance section. Note that the --bs parameter defines the block size, which affects the results for different types of read and write operations.

    # install tools
    sudo apt-get -y update
    sudo apt-get install -y fio util-linux

    # full write pass - measures write bandwidth with 1M blocksize
    sudo fio --name=writefile \
    --filename=/dev/md0 --bs=1M --nrfiles=1 \
    --direct=1 --sync=0 --randrepeat=0 --rw=write --end_fsync=1 \
    --iodepth=128 --ioengine=libaio

    # rand read - measures max read IOPS with 4k blocks
    sudo fio --time_based --name=benchmark --runtime=30 \
    --filename=/dev/md0 --ioengine=libaio --randrepeat=0 \
    --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
    --numjobs=48 --rw=randread --blocksize=4k --group_reporting --norandommap

    # rand write - measures max write IOPS with 4k blocks
    sudo fio --time_based --name=benchmark --runtime=30 \
    --filename=/dev/md0 --ioengine=libaio --randrepeat=0 \
    --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
    --numjobs=48 --rw=randwrite --blocksize=4k --group_reporting --norandommap
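Rather than typing out all twenty-four device paths for the mdadm command in step 4, the list can be generated with a loop. A minimal sketch, assuming the NVMe device names shown in step 3:

```shell
# Build the list of 24 NVMe Local SSD device paths for mdadm.
devices=""
for i in $(seq 0 23); do
  devices="$devices /dev/disk/by-id/google-local-nvme-ssd-$i"
done

# Count the generated paths as a sanity check; expect 24.
echo "$devices" | wc -w

# The array could then be created with (run on the VM, not here):
#   sudo mdadm --create /dev/md0 --level=0 --raid-devices=24 $devices
```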

Benchmarking Storage Optimized VMs

  1. Storage Optimized VMs (like the Z3 family) should be benchmarked directly against the device partitions. You can get the partition names with lsblk:

    lsblk -o name,size -lpn | grep 2.9T | awk '{print $1}'

    The output looks similar to the following:

    /dev/nvme1n1
    /dev/nvme2n1
    /dev/nvme3n1
    /dev/nvme4n1
    /dev/nvme5n1
    /dev/nvme6n1
    /dev/nvme7n1
    /dev/nvme8n1
    /dev/nvme9n1
    /dev/nvme10n1
    /dev/nvme11n1
    /dev/nvme12n1
  2. Run the benchmarks directly against the Local SSD partitions, specifying each partition as a separate fio job.

    # install benchmarking tools
    sudo apt-get -y update
    sudo apt-get install -y fio util-linux

    # Full Write Pass.
    # SOVM achieves max read performance on previously written/discarded ranges.
    sudo fio --readwrite=write --blocksize=1m --iodepth=4 --ioengine=libaio \
    --direct=1 --group_reporting \
    --name=job1 --filename=/dev/nvme1n1 --name=job2 --filename=/dev/nvme2n1 \
    --name=job3 --filename=/dev/nvme3n1 --name=job4 --filename=/dev/nvme4n1 \
    --name=job5 --filename=/dev/nvme5n1 --name=job6 --filename=/dev/nvme6n1 \
    --name=job7 --filename=/dev/nvme7n1 --name=job8 --filename=/dev/nvme8n1 \
    --name=job9 --filename=/dev/nvme9n1 --name=job10 --filename=/dev/nvme10n1 \
    --name=job11 --filename=/dev/nvme11n1 --name=job12 --filename=/dev/nvme12n1

    # rand read - measures max read IOPS with 4k blocks
    sudo fio --readwrite=randread --blocksize=4k --iodepth=128 \
    --numjobs=4 --direct=1 --runtime=30 --group_reporting --ioengine=libaio \
    --name=job1 --filename=/dev/nvme1n1 --name=job2 --filename=/dev/nvme2n1 \
    --name=job3 --filename=/dev/nvme3n1 --name=job4 --filename=/dev/nvme4n1 \
    --name=job5 --filename=/dev/nvme5n1 --name=job6 --filename=/dev/nvme6n1 \
    --name=job7 --filename=/dev/nvme7n1 --name=job8 --filename=/dev/nvme8n1 \
    --name=job9 --filename=/dev/nvme9n1 --name=job10 --filename=/dev/nvme10n1 \
    --name=job11 --filename=/dev/nvme11n1 --name=job12 --filename=/dev/nvme12n1

    # rand write - measures max write IOPS with 4k blocks
    sudo fio --readwrite=randwrite --blocksize=4k --iodepth=128 \
    --numjobs=4 --direct=1 --runtime=30 --group_reporting --ioengine=libaio \
    --name=job1 --filename=/dev/nvme1n1 --name=job2 --filename=/dev/nvme2n1 \
    --name=job3 --filename=/dev/nvme3n1 --name=job4 --filename=/dev/nvme4n1 \
    --name=job5 --filename=/dev/nvme5n1 --name=job6 --filename=/dev/nvme6n1 \
    --name=job7 --filename=/dev/nvme7n1 --name=job8 --filename=/dev/nvme8n1 \
    --name=job9 --filename=/dev/nvme9n1 --name=job10 --filename=/dev/nvme10n1 \
    --name=job11 --filename=/dev/nvme11n1 --name=job12 --filename=/dev/nvme12n1
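When comparing runs against the published limits, the figure you usually want is the aggregate IOPS from fio's group_reporting summary. A minimal sketch of extracting it with awk; the summary line below is a hypothetical example of fio output, not captured from a real run:

```shell
# Pull the IOPS value out of a fio group_reporting summary line with awk.
# The line is a hypothetical example of fio's output format.
line='read: IOPS=680k, BW=2656MiB/s (2785MB/s)(77.8GiB/30001msec)'
echo "$line" | awk -F'IOPS=' '{split($2, a, ","); print a[1]}'
```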

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-15 UTC.