About Persistent Disk
This document describes the features, types, performance, and benefits of Persistent Disk volumes. If you need block storage for a virtual machine (VM) instance or container, such as for a boot disk or data disk, use Persistent Disk volumes if Google Cloud Hyperdisk isn't available for your compute instance. To learn about the other block storage options in Compute Engine, see Choose a disk type.
Persistent Disk volumes are durable network storage devices that your instances can access like physical disks in a desktop or a server. Persistent Disk volumes aren't attached to the physical machine hosting the instance. Instead, they are attached to the instance as network block devices. When you read from or write to a Persistent Disk volume, data is transmitted over the network.
The data on each Persistent Disk volume is distributed across several physical disks. Compute Engine manages the physical disks and the data distribution for you to ensure redundancy and optimal performance.
You can detach or move the volumes to keep your data even after you delete your instances. Persistent Disk performance increases with size, so you can resize your existing Persistent Disk volumes or add more Persistent Disk volumes to a VM to meet your performance and storage space requirements.
Add a non-boot disk to your instance when you need reliable and affordable storage with consistent performance characteristics.
Persistent Disk types
When you create a Persistent Disk volume, you can select one of the following disk types:
- Balanced Persistent Disk (pd-balanced)
  - An alternative to SSD (Performance) Persistent Disk.
  - Balance of performance and cost. For most Compute Engine machine types, these disks have the same maximum IOPS as SSD Persistent Disk and lower IOPS per GiB. This disk type offers performance levels suitable for most general-purpose applications at a price point between that of standard and SSD Persistent Disk.
  - Backed by solid-state drives (SSD).
- SSD (Performance) Persistent Disk (pd-ssd)
  - Suitable for enterprise applications and high-performance databases that require lower latency and more IOPS than standard Persistent Disk provides.
  - Backed by solid-state drives (SSD).
- Standard Persistent Disk (pd-standard)
  - Suitable for large data processing workloads that primarily use sequential I/Os.
  - Backed by standard hard disk drives (HDD).
- Extreme Persistent Disk (pd-extreme)
  - Offers consistently high performance for both random access workloads and bulk throughput.
  - Designed for high-end database workloads.
  - Lets you provision the target IOPS.
  - Backed by solid-state drives (SSD).
  - Available with a limited number of machine types.
If you create a disk in the Google Cloud console, the default disk type is pd-balanced. If you create a disk using the gcloud CLI or the Compute Engine API, the default disk type is pd-standard.
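For example, the following gcloud CLI commands are a minimal sketch of creating disks of different types. The disk names, sizes, zone, and provisioned IOPS value are placeholder assumptions for illustration, not recommendations.

```bash
# Create a balanced disk (the default type in the Google Cloud console).
gcloud compute disks create my-balanced-disk \
    --type=pd-balanced \
    --size=100GB \
    --zone=us-central1-a

# Create a standard disk (the default type for the gcloud CLI and the API).
gcloud compute disks create my-standard-disk \
    --type=pd-standard \
    --size=500GB \
    --zone=us-central1-a

# Create an extreme disk and provision its target IOPS.
gcloud compute disks create my-extreme-disk \
    --type=pd-extreme \
    --size=1000GB \
    --provisioned-iops=50000 \
    --zone=us-central1-a
```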
For information about machine type support, see the Machine series support section later on this page.
Durability of Persistent Disk
Disk durability represents the probability of data loss, by design, for a typical disk in a typical year, using a set of assumptions about hardware failures, the likelihood of catastrophic events, isolation practices and engineering processes in Google data centers, and the internal encodings used by each disk type. Persistent Disk data loss events are extremely rare and have historically been the result of coordinated hardware failures, software bugs, or a combination of the two. Google also takes many steps to mitigate the industry-wide risk of silent data corruption. Human error by a Google Cloud customer, such as when a customer accidentally deletes a disk, is outside the scope of Persistent Disk durability.
There is a very small risk of data loss occurring with a regional Persistent Disk volume due to its internal data encodings and replication. Regional Persistent Disk provides high availability and can be used for disaster recovery if an entire data center is lost and can't be recovered. Regional Persistent Disk provides twice as many disk replicas as zonal Persistent Disk, with the replicas distributed between two zones in the same region. If the primary zone becomes unavailable during an outage, the replica in the second zone can be accessed immediately.
For more information about region-specific considerations, see Geography and regions.
The following table shows the durability that each disk type is designed for. 99.999% durability means that with 1,000 disks, you would likely go a hundred years without losing a single one.
Note: Durability is in the aggregate for each disk type, and doesn't represent a financially backed service level agreement (SLA).

| Zonal standard Persistent Disk | Zonal balanced Persistent Disk | Zonal SSD Persistent Disk | Zonal extreme Persistent Disk | Regional standard Persistent Disk | Regional balanced Persistent Disk | Regional SSD Persistent Disk |
|---|---|---|---|---|---|---|
| Better than 99.99% | Better than 99.999% | Better than 99.999% | Better than 99.9999% | Better than 99.999% | Better than 99.9999% | Better than 99.9999% |
Machine series support
Select a machine series to see its supported Persistent Disk (PD) types.
| Machine series | SSD PD | Balanced PD | Extreme PD | Standard PD |
|---|---|---|---|---|
| C4 | — | — | — | — |
| C4A | — | — | — | — |
| C4D | — | — | — | — |
| C3 | ✓ | ✓ | — | — |
| C3D | ✓ | ✓ | — | — |
| N4 | — | — | — | — |
| N2 | ✓ | ✓ | ✓ | ✓ |
| N2D | ✓ | ✓ | — | ✓ |
| N1 | ✓ | ✓ | — | ✓ |
| T2D | ✓ | ✓ | — | ✓ |
| T2A | ✓ | ✓ | — | ✓ |
| E2 | ✓ | ✓ | — | ✓ |
| Z3 | ✓ | ✓ | — | — |
| H3 | — | ✓ | — | — |
| C2 | ✓ | ✓ | — | ✓ |
| C2D | ✓ | ✓ | — | ✓ |
| X4 | — | — | — | — |
| M4 | — | — | — | — |
| M3 | ✓ | ✓ | ✓ | — |
| M2 | ✓ | ✓ | ✓ | ✓ |
| M1 | ✓ | ✓ | ✓ | ✓ |
| N1+GPU | ✓ | ✓ | — | ✓ |
| A4X | — | — | — | — |
| A4 | — | — | — | — |
| A3 (H200) | — | — | — | — |
| A3 (H100) | ✓ | ✓ | — | — |
| A2 | ✓ | ✓ | — | ✓ |
| G2 | ✓ | ✓ | — | — |
Maximum capacity
Persistent Disk volumes can be up to 64 TiB in size. You can add up to 127 secondary, non-boot zonal Persistent Disk volumes to a VM instance. However, the combined total capacity of all Persistent Disk volumes attached to a single VM can't exceed 257 TiB.
You can create single logical volumes of up to 257 TiB using logical volume management inside your VM. For information about how to ensure maximum performance with large volumes, see Logical volume size.
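The following commands are a sketch of that logical volume management approach, assuming two Persistent Disk volumes are already attached to a Linux VM as /dev/sdb and /dev/sdc; the device paths, volume names, and mount point are illustrative assumptions.

```bash
# Combine two attached Persistent Disk volumes into one logical volume with LVM.
sudo pvcreate /dev/sdb /dev/sdc               # register the disks as physical volumes
sudo vgcreate vg_data /dev/sdb /dev/sdc       # group them into a single volume group
sudo lvcreate -l 100%FREE -n lv_data vg_data  # create one logical volume spanning both disks

# Format and mount the logical volume.
sudo mkfs.ext4 /dev/vg_data/lv_data
sudo mkdir -p /mnt/data
sudo mount /dev/vg_data/lv_data /mnt/data
```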
Zonal Persistent Disk
A zonal Persistent Disk is a Persistent Disk that's accessible only within one specific zone, for example, europe-west2-a.
Ease of use
Compute Engine handles most disk management tasks for you so that you don't need to deal with partitioning, redundant disk arrays, or subvolume management. Generally, you don't need to create larger logical volumes. However, you can extend your secondary attached Persistent Disk capacity to 257 TiB per VM and apply these practices to your Persistent Disk volumes. You can save time and get the best performance if you format your Persistent Disk volumes with a single file system and no partition tables.
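As an illustration, the following commands format and mount a secondary disk this way, assuming the disk appears inside a Linux VM as /dev/sdb; the device path and mount point are assumptions.

```bash
# Format the whole device with a single ext4 file system and no partition table.
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb

# Mount the disk.
sudo mkdir -p /mnt/disks/data
sudo mount -o discard,defaults /dev/sdb /mnt/disks/data
```

Formatting the whole device also keeps later resizes simple, because there is no partition to grow before the file system.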
If you need to separate your data into multiple unique volumes, create additional disks rather than dividing your existing disks into multiple partitions.
When you require additional space on your Persistent Disk volumes, resize your disks rather than repartitioning and formatting.
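A minimal resize sketch with the gcloud CLI, assuming a zonal disk named my-data-disk that holds a single ext4 file system on the whole device (the disk name, size, zone, and device path are placeholders):

```bash
# Increase the disk's provisioned size (size can only grow, never shrink).
gcloud compute disks resize my-data-disk \
    --size=1TB \
    --zone=us-central1-a

# Inside the VM, grow the ext4 file system to use the new capacity.
sudo resize2fs /dev/sdb
```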
Performance
Persistent Disk performance is predictable and scales linearly with provisioned capacity until the limits for a VM's provisioned vCPUs are reached. For more information about performance scaling limits and optimization, see Configure disks to meet performance requirements.
Standard Persistent Disk volumes are efficient and economical for handling sequential read/write operations, but they aren't optimized to handle high rates of random input/output operations per second (IOPS). If your apps require high rates of random IOPS, use SSD or extreme Persistent Disk. SSD Persistent Disk is designed for single-digit millisecond latencies. Observed latency is application specific.
Compute Engine optimizes performance and scaling on Persistent Disk volumes automatically. You don't need to stripe multiple disks together or pre-warm disks to get the best performance. When you need more disk space or better performance, resize your disks and possibly add more vCPUs to add more storage space, throughput, and IOPS. Persistent Disk performance is based on the total Persistent Disk capacity attached to a VM and the number of vCPUs that the VM has.
For boot devices, you can reduce costs by using a standard Persistent Disk. Small, 10 GiB Persistent Disk volumes can work for basic boot and package management use cases. However, to ensure consistent performance for more general use of the boot device, use a balanced Persistent Disk as your boot disk.
Because Persistent Disk write operations contribute to your VM's cumulative network egress traffic, they are capped by the network egress cap for your VM.
Reliability
Persistent Disk has built-in redundancy to protect your data against equipment failure and to ensure data availability through data center maintenance events. Checksums are calculated for all Persistent Disk operations, so we can ensure that what you read is what you wrote.
Additionally, you can create snapshots of Persistent Disk to protect against data loss due to user error. Snapshots are incremental, and take only minutes to create even if you snapshot disks that are attached to running VMs.
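For example, the following gcloud CLI command sketches creating a snapshot of an attached zonal disk; the snapshot name, disk name, and zone are placeholders.

```bash
# Snapshot a zonal disk; the disk can stay attached to a running VM.
gcloud compute snapshots create my-snapshot \
    --source-disk=my-data-disk \
    --source-disk-zone=us-central1-a
```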
Regional Persistent Disk
Regional Persistent Disk volumes have storage qualities that are similar to zonal Persistent Disk. However, regional Persistent Disk volumes provide durable storage and replication of data between two zones in the same region.
About synchronous disk replication
When you create a new Persistent Disk, you can either create the disk in one zone, or replicate it across two zones within the same region.
For example, if you create one disk in a zone, such as in us-west1-a, you have one copy of the disk. A disk created in only one zone is referred to as a zonal disk. You can increase the disk's availability by storing another copy of the disk in a different zone within the region, such as in us-west1-b.
Persistent Disk volumes replicated across two zones in the same region are called Regional Persistent Disk. You can also use Hyperdisk Balanced High Availability for cross-zonal synchronous replication of Google Cloud Hyperdisk.
It's unlikely for a region to fail altogether, but zonal failures can happen. Replicating within the region to different zones, as shown in the following image, helps with availability and reduces disk latency. If both replication zones fail, it's considered a region-wide failure.
Figure: A disk is replicated in two zones within the same region.
In the replicated scenario, the data is available in the local zone (us-west1-a), which is the zone the virtual machine (VM) is running in. Then, the data is replicated to another zone (us-west1-b). One of the zones must be the same zone that the VM is running in.
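The following gcloud CLI command is a sketch of creating such a regional disk, using the zones from the example above; the disk name, type, and size are assumptions.

```bash
# Create a balanced disk that is synchronously replicated across two zones
# in the us-west1 region.
gcloud compute disks create my-regional-disk \
    --type=pd-balanced \
    --size=200GB \
    --region=us-west1 \
    --replica-zones=us-west1-a,us-west1-b
```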
If a zonal outage occurs, you can usually fail over your workload running on Regional Persistent Disk to another zone. To learn more, see Regional Persistent Disk failover.
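The failover step can be sketched as follows, assuming a standby VM named standby-vm already exists in the healthy zone; the --force-attach flag lets you attach the regional disk even if it is still attached to the VM in the impaired zone.

```bash
# Attach the regional disk to a standby VM in the healthy zone.
gcloud compute instances attach-disk standby-vm \
    --disk=my-regional-disk \
    --disk-scope=regional \
    --zone=us-west1-b \
    --force-attach
```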
Note: Disk replication only provides high availability of the disks. Zonal outages might also affect the VMs or other components, which can also cause outages.

Design considerations for Regional Persistent Disk
If you're designing robust systems or high availability services on Compute Engine, use Regional Persistent Disk combined with other best practices such as backing up your data using snapshots. Regional Persistent Disk volumes are also designed to work with regional managed instance groups.
Performance
Regional Persistent Disk volumes are designed for workloads that require a lower Recovery Point Objective (RPO) and Recovery Time Objective (RTO) compared to using Persistent Disk snapshots.
Regional Persistent Disk is an option when write performance is less critical than data redundancy across multiple zones.
Like zonal Persistent Disk, Regional Persistent Disk can achieve greater IOPS and throughput performance on VMs with a greater number of vCPUs. For more information about this and other limitations, see Configure disks to meet performance requirements.
When you need more disk space or better performance, you can resize your regional disks to add more storage space, throughput, and IOPS.
Reliability
Compute Engine replicates the data of your regional Persistent Disk to the zones you selected when you created your disks. The data of each replica is spread across multiple physical machines within the zone to ensure redundancy.
Similar to zonal Persistent Disk, you can create snapshots of Persistent Disk to protect against data loss due to user error. Snapshots are incremental, and take only minutes to create even if you snapshot disks that are attached to running VMs.
Limitations for Regional Persistent Disk
- You can attach regional Persistent Disk only to VMs that use E2, N1, N2, and N2D machine types.
- You can attach Hyperdisk Balanced High Availability only to supported machine types.
- You can't create a regional Persistent Disk from an OS image, or from a disk that was created from an OS image.
- You can't create a Hyperdisk Balanced High Availability disk by cloning a zonal disk. To create a Hyperdisk Balanced High Availability disk from a zonal disk, complete the steps in Change a zonal disk to a Hyperdisk Balanced High Availability disk.
- When using read-only mode, you can attach a regional balanced Persistent Disk to a maximum of 10 VM instances.
- The minimum size of a regional standard Persistent Disk is 200 GiB.
- You can only increase the size of a regional Persistent Disk or Hyperdisk Balanced High Availability volume; you can't decrease its size.
- Regional Persistent Disk and Hyperdisk Balanced High Availability volumes have different performance characteristics than their corresponding zonal disks. For more information, see About Persistent Disk performance and Hyperdisk Balanced High Availability performance limits.
- You can't use a Hyperdisk Balanced High Availability volume that's in multi-writer mode as a boot disk.
- If you create a replicated disk by cloning a zonal disk, then the two zonal replicas aren't fully in sync at the time of creation. After creation, you can use the regional disk clone within 3 minutes, on average. However, you might need to wait for tens of minutes before the disk reaches a fully replicated state and the recovery point objective (RPO) is close to zero. Learn how to check if your replicated disk is fully replicated.
Storage interface types
The storage interface is chosen automatically for you when you create your instance or add Persistent Disk volumes to a VM. Tau T2A and third-generation VMs (such as M3) use the NVMe interface for Persistent Disk. Confidential VM instances also use NVMe Persistent Disk. All other Compute Engine machine series use the SCSI disk interface for Persistent Disk.
Most public images include both NVMe and SCSI drivers, and include a kernel with optimized drivers that allow your VM to achieve the best performance using NVMe. Your imported Linux images achieve the best performance with NVMe if they include kernel version 4.14.68 or later.
To determine if an operating system version supports NVMe, see the operating system details page.
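From inside a Linux VM, a quick way to check which interface your disks use is sketched below; the by-id symlinks assume the Google guest environment is installed on the VM.

```bash
# NVMe-attached Persistent Disk volumes appear as /dev/nvme* devices,
# while SCSI-attached volumes appear as /dev/sd* devices.
lsblk

# Compute Engine's guest environment also creates symlinks that map each
# attached disk's name to its device node.
ls -l /dev/disk/by-id/google-*
```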
Multi-writer mode
Caution: Google recommends using Hyperdisk Balanced or Hyperdisk Balanced High Availability (Preview) volumes in multi-writer mode instead of SSD Persistent Disk volumes.

Preview: This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
You can attach an SSD Persistent Disk in multi-writer mode to up to two N2VMs simultaneously so that both VMs can read and write to the disk.
Persistent Disk in multi-writer mode provides a shared block storage capability and presents an infrastructural foundation for building highly available shared file systems and databases. These specialized file systems and databases should be designed to work with shared block storage and handle cache coherency between VMs by using tools such as SCSI Persistent Reservations.
However, Persistent Disk with multi-writer mode should generally not be used directly. Many file systems such as EXT4, XFS, and NTFS aren't designed to be used with shared block storage. For more information about the best practices when sharing Persistent Disk between VMs, see Best practices.
If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine VMs.
To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API. For more information, see Share Persistent Disk volumes between VMs.
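A sketch of that workflow follows, with placeholder disk and VM names; the example assumes the beta gcloud command group for the --multi-writer flag.

```bash
# Create an SSD Persistent Disk in multi-writer mode.
gcloud beta compute disks create my-shared-disk \
    --type=pd-ssd \
    --size=500GB \
    --zone=us-central1-a \
    --multi-writer

# Attach the same disk to two N2 VMs so that both can read and write to it.
gcloud compute instances attach-disk vm-1 --disk=my-shared-disk --zone=us-central1-a
gcloud compute instances attach-disk vm-2 --disk=my-shared-disk --zone=us-central1-a
```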
Persistent Disk encryption
Compute Engine automatically encrypts your data before it travels outside of your VM to the Persistent Disk storage space. Each Persistent Disk remains encrypted either with system-defined keys or with customer-supplied keys. Google distributes Persistent Disk data across multiple physical disks in a manner that users don't control.
When you delete a Persistent Disk volume, Google discards the cipher keys,rendering the data irretrievable. This process is irreversible.
If you want to control the encryption keys that are used to encrypt your data,create your disks with your own encryption keys.
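For example, the following command is a sketch of creating a disk protected with a customer-managed Cloud KMS key; all resource names in the command are placeholder assumptions.

```bash
# Create a disk encrypted with a customer-managed key from Cloud KMS.
gcloud compute disks create my-encrypted-disk \
    --type=pd-balanced \
    --size=100GB \
    --zone=us-central1-a \
    --kms-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```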
Restrictions
You can't attach a Persistent Disk volume to a VM in another project.
You can attach a balanced Persistent Disk to a maximum of 10 VMs inread-only mode.
For custom machine types or predefined machine types with a minimum of 1 vCPU, you can attach up to 128 Persistent Disk volumes.
Each Persistent Disk volume can be up to 64 TiB in size, so there is no need to manage arrays of disks to create large logical volumes. Each VM can attach only a limited amount of total Persistent Disk space and a limited number of individual Persistent Disk volumes. Predefined machine types and custom machine types have the same Persistent Disk limits.
Most VMs can have up to 128 Persistent Disk volumes and up to 257 TiB of total disk space attached. Total disk space for a VM includes the size of the boot disk.
Shared-core machine types are limited to 16 Persistent Disk volumes and 3 TiB of total Persistent Disk space.
Creating logical volumes larger than 64 TiB might require special consideration. For more information about larger logical volume performance, see Logical volume size.
Persistent Disk and Colossus
Persistent Disk is designed to run in tandem with Google's distributed file system, Colossus. Persistent Disk drivers automatically encrypt data on the VM before it's transmitted from the VM onto the network. Then, Colossus persists the data. When Colossus reads the data, the driver decrypts the incoming data.
Persistent Disk volumes use Colossus for the storage backend.
Having disks as a service is useful in a number of cases, for example:
- Resizing the disks while the instance is running becomes easier than stopping the instance first. You can increase the disk size without stopping the instance.
- Attaching and detaching disks becomes easier when disks and VMs don't have to share the same lifecycle or be co-located. It's possible to stop a VM and use its Persistent Disk boot disk to boot another VM.
- High availability features like replication become easier because the disk driver can hide replication details and provide automatic write-time replication.
What's next
- Learn how to add a Persistent Disk volume to your VM.
- Review disk and image pricing information.
- Learn how to clone a Persistent Disk volume.
- Learn how to share Persistent Disk volumes between VMs.
- Learn how to optimize Persistent Disk performance.