Share disks between instances
You can access the same disk from multiple virtual machine (VM) instances by attaching the disk to each instance. You can attach a disk in read-only mode or multi-writer mode to an instance.
With read-only mode, multiple instances can only read data from the disk. None of the instances can write to the disk. Sharing a disk in read-only mode between instances is less expensive than having copies of the same data on multiple disks.
With multi-writer mode, multiple instances can read and write to the same disk. This is useful for highly available (HA) shared file systems and databases like SQL Server Failover Cluster Infrastructure (FCI).
You can share a zonal disk only between instances in the same zone. Regional disks can be shared only with instances in the same zones as the disk's replicas.
There are no additional costs associated with sharing a disk between instances. Compute Engine instances don't have to use the same machine type to share a disk, but each instance must use a machine type that supports disk sharing.
This document discusses multi-writer and read-only disk sharing in Compute Engine, including the supported disk types and performance considerations.
Before you begin
- If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
After installing the Google Cloud CLI, initialize it by running the following command:

```
gcloud init
```
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
Note: If you installed the gcloud CLI previously, make sure you have the latest version by running `gcloud components update`.
- Set a default region and zone.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
After installing the Google Cloud CLI, initialize it by running the following command:

```
gcloud init
```
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
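For illustration only, here's a minimal sketch of calling the Compute Engine REST API with credentials from the gcloud CLI; PROJECT_ID and ZONE are placeholders you'd replace with your own values:

```
# List the disks in a zone, authenticating with a gcloud access token.
# PROJECT_ID and ZONE are placeholders.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/disks"
```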
Enable disk sharing
You can attach an existing Hyperdisk or Persistent Disk volume to multiple instances. However, for Hyperdisk volumes, you must first put the disk in multi-writer or read-only mode by setting its access mode.
A Hyperdisk volume's access mode is a property that determines how instances can access the disk.
The available access modes are as follows:
- Single-writer mode (`READ_WRITE_SINGLE`): the default access mode. It allows the disk to be attached to at most one instance at any time. The instance has read-write access to the disk.
- Read-only mode (`READ_ONLY_MANY`): enables simultaneous attachments to multiple instances in read-only mode. Instances can't write to the disk in this mode. Required for read-only sharing.
- Multi-writer mode (`READ_WRITE_MANY`): enables simultaneous attachments to multiple instances in read-write mode. Required for multi-writer sharing.
Support for each access mode varies by Hyperdisk type, as stated in the following table. You can't set the access mode for Hyperdisk Throughput volumes.
| Hyperdisk type | Supported access modes |
|---|---|
| Hyperdisk Balanced, Hyperdisk Balanced High Availability | `READ_WRITE_SINGLE` (default), `READ_WRITE_MANY` |
| Hyperdisk Extreme | `READ_WRITE_SINGLE` (default), `READ_WRITE_MANY` (Preview) |
| Hyperdisk ML | `READ_WRITE_SINGLE` (default), `READ_ONLY_MANY` |
| Hyperdisk Throughput | `READ_WRITE_SINGLE` only |
For disks that can be shared between instances, you can set the access mode at or after disk creation. For instructions on setting the access mode, see Set the disk's access mode.
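As a sketch, setting the access mode at creation time might look like the following gcloud command; the disk name, zone, type, and size are illustrative:

```
# Create a Hyperdisk Balanced volume in multi-writer mode
# (illustrative name, zone, and size).
gcloud compute disks create shared-disk \
    --zone=us-central1-a \
    --type=hyperdisk-balanced \
    --size=100GB \
    --access-mode=READ_WRITE_MANY
```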
Read-only mode for Hyperdisk and Persistent Disk
This section discusses sharing a single disk in read-only mode between multiple instances.
Note: A disk's read-only setting applies to all instances that the disk is attached to. You can't attach a disk in read-write mode to one instance and attach the same disk in read-only mode to another instance.

Supported disk types for read-only mode
You can attach these disk types to multiple instances in read-only mode:
- Hyperdisk ML
- Zonal and regional Balanced Persistent Disk
- SSD Persistent Disk
- Standard Persistent Disk
Performance in read-only mode
Attaching a disk in read-only mode to multiple instances doesn't affect the disk's performance. Each instance can still reach the maximum disk performance possible for the instance's machine type.
Limitations for sharing disks in read-only mode
- If you share a Hyperdisk ML volume in read-only mode, you can't re-enable write access to the disk.
- You can attach a Hyperdisk ML volume to up to 100 instances during every 30-second interval.
- The maximum number of instances a disk can be attached to varies by disk type:
  - For Hyperdisk ML volumes, the maximum number of instances depends on the provisioned size, as follows:
    - Volumes less than 256 GiB in size: 2,500 VMs
    - Volumes with capacity of 256 GiB or more, and less than 1 TiB: 1,500 VMs
    - Volumes with capacity of 1 TiB or more, and less than 2 TiB: 600 VMs
    - Volumes with 2 TiB or more of capacity: 30 VMs
  - Zonal or regional Balanced Persistent Disk volumes in read-only mode support at most 10 instances.
  - For SSD Persistent Disk, Google recommends at most 100 instances.
  - For Standard Persistent Disk volumes, the recommended maximum is 10 instances.
How to share a disk in read-only mode between instances
If you're not using Hyperdisk ML, attach the disk to multiple instances by following the instructions in Attach a non-boot disk to an instance.
To attach a Hyperdisk ML volume in read-only mode to multiple instances, you must first set the disk's access mode to read-only mode. After you set the access mode, attach the Hyperdisk ML volume to your instances.
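For example, the workflow might look like the following gcloud sketch; the disk and instance names are illustrative:

```
# Put an existing Hyperdisk ML volume in read-only mode, then attach it
# to an instance in read-only mode (illustrative names).
gcloud compute disks update ml-training-disk \
    --zone=us-central1-a \
    --access-mode=READ_ONLY_MANY

gcloud compute instances attach-disk training-vm-1 \
    --disk=ml-training-disk \
    --zone=us-central1-a \
    --mode=ro
```

Repeat the attach command for each additional instance that needs to read the disk.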
Multi-writer mode for Hyperdisk
Disks in multi-writer mode are suitable for use cases like the following:
- Implementing SQL Server FCI.
- Clustered file systems where multiple instances all write to the same disk.
- Highly available systems in active-active or active-passive mode. Attaching the same disk to multiple instances can prevent disruptions because if one instance fails, other instances still have access to the disk and can continue to run the workload.
If your primary goal is shared file storage among compute instances, consider one of the following options:
- Filestore, Google's managed file storage solution
- Cloud Storage
- A network file server on Compute Engine
Supported Hyperdisk and machine types for multi-writer mode
You can use Hyperdisk Balanced, Hyperdisk Balanced High Availability, and Hyperdisk Extreme volumes (Preview) in multi-writer mode. You can attach a single Hyperdisk Balanced or Hyperdisk Balanced High Availability volume in multi-writer mode to at most 8 instances. You can attach a single Hyperdisk Extreme volume in multi-writer mode (Preview) to at most 16 instances. You can't attach volumes in multi-writer mode to bare metal instances.
Hyperdisk Balanced supports multi-writer mode for the following machine types:
Hyperdisk Balanced High Availability supports multi-writer mode for the following machine types:
Hyperdisk Extreme supports multi-writer mode (Preview) for the following machine types:
Multi-writer mode for Hyperdisk supports the NVMe interface. If you're attaching a disk in multi-writer mode to an instance, the instance's boot disk must also be attached with NVMe.
Supported file systems for multi-writer mode
Warning: If you use single-instance file systems, such as EXT4, XFS, or NTFS, on a disk in multi-writer mode, you might experience data loss if multiple VMs access the disk at the same time. To mitigate this issue, you can use clustering software that ensures exclusive access for a single VM at a time, such as SQL Server FCI using NTFS. Otherwise, avoid using single-instance file systems for shared storage.

To access a disk from multiple instances, use one of the following options:
- Persistent Reservations (PR), especially for HA systems such as SQL Server FCI and NetApp ONTAP. Google recommends using PR commands to provide I/O fencing and maintain data integrity. For a list of the supported PR commands, see I/O fencing with persistent reservations.
- Clustered file systems that support multiple instances writing to the same volume. Examples of such file systems include OCFS2, VMFS, and GFS2.
- Scale-out software systems like Lustre and IBM Spectrum Scale.
- Your own synchronization mechanism to coordinate concurrent reads and writes.
Hyperdisk performance in multi-writer mode
When you attach a Hyperdisk Balanced or Hyperdisk Balanced High Availability disk in multi-writer mode to multiple instances, the disk's provisioned performance is divided evenly across all instances, even among instances that aren't running or that aren't actively using the disk. However, the maximum performance for each instance is ultimately limited by the throughput and IOPS limits of each instance's machine type.
For example, suppose you attach a Hyperdisk Balanced volume provisioned with 100,000 IOPS to 2 instances. Each instance gets 50,000 IOPS concurrently.
The following table shows how much performance each instance in this example would get depending on how many instances you attach the disk to. Each time you attach a disk to another instance, Compute Engine asynchronously adjusts the performance allotted to each previously attached instance.
| # of instances attached | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Max IOPS per instance | 100,000 | 50,000 | ~33,333 | 25,000 | 20,000 | ~16,667 | ~14,286 | 12,500 |
| Max throughput per instance (MiBps) | 1,200 | 600 | 400 | 300 | 240 | 200 | ~172 | 150 |
When you attach a Hyperdisk Extreme disk in multi-writer mode (Preview) to multiple instances, the disk's provisioned performance is allocated to each instance based on how much performance each instance requires. For example, a single instance could consume the entire provisioned performance of the disk if the other attached instances are idle.
Limitations for sharing Hyperdisk volumes in multi-writer mode
- You can attach a single Hyperdisk Balanced or Hyperdisk Balanced High Availability volume in multi-writer mode to at most 8 instances. You can attach a single Hyperdisk Extreme volume in multi-writer mode (Preview) to at most 16 instances.
- You can't create machine images or disk images from Hyperdisk volumes in multi-writer mode.
- You can't create a Hyperdisk volume in multi-writer mode when you're creating or editing an instance. You must create the Hyperdisk volume separately first, and then attach it to the instance.
- You can't resize a Hyperdisk volume in multi-writer mode unless you detach the volume from all instances.
You can make the following changes to a Hyperdisk volume that's in multi-writer mode, even if the volume is attached to multiple instances:
- Modify its provisioned IOPS or throughput
- Attach the disk to additional instances
When you make one of these changes, Compute Engine redistributes the Hyperdisk volume's provisioned performance across all the attached instances. This process might take up to 6 hours to complete.
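For example, modifying the provisioned performance of an attached multi-writer volume might look like the following gcloud sketch; the disk name, zone, and values are illustrative:

```
# Change the provisioned IOPS and throughput of a Hyperdisk Balanced
# volume without detaching it (illustrative values).
gcloud compute disks update shared-disk \
    --zone=us-central1-a \
    --provisioned-iops=80000 \
    --provisioned-throughput=1000
```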
- You can't enable auto-delete for Hyperdisk volumes in multi-writer mode.
- You can't use a Hyperdisk volume in multi-writer mode as the boot disk for an instance.
- Hyperdisk volumes in multi-writer mode can't be used with instances on sole-tenancy nodes.
- You must attach the disk with the same interface type as the instance's boot disk.
- You can't change the machine type of an instance that's attached to a disk in multi-writer mode.
- Storage pools support only Hyperdisk Balanced volumes in multi-writer mode. Storage pools don't support Hyperdisk Balanced High Availability and Hyperdisk Extreme volumes.
You can use Windows Server Failover Clusters with Hyperdisk volumes in multi-writer mode on machine types that use SCSI or NVMe storage interfaces. However, when using machine types with the NVMe storage interface, the following limitations apply:
- You must use Windows Server 2022 or later.
- You must create each clustered disk directly on a multi-writer Hyperdisk volume. Striping or pooling of multi-writer Hyperdisk volumes isn't supported.
- Persistent reservation commands might fail if the Hyperdisk volume is attached to a running VM. To resolve this, restart the VM or attach the Hyperdisk volume only when the VM is stopped.
Available regions
You can enable multi-writer mode in all the regions where Hyperdisk Balanced, Hyperdisk Balanced High Availability, and Hyperdisk Extreme are available. For a list of supported regions, view the regional availability for your Hyperdisk volume:
- Regional availability for Hyperdisk Balanced
- Regional availability for Hyperdisk Balanced High Availability
- Regional availability for Hyperdisk Extreme
I/O fencing with persistent reservations
Google recommends using persistent reservations (PR) with disks in multi-writer mode to provide I/O fencing. Persistent reservations manage access to the disk between instances. This prevents data corruption from instances simultaneously writing to the same portion of the disk.
Hyperdisk volumes in multi-writer mode support NVMe (spec 1.2.1) reservations.
Supported reservation modes
The following reservation modes are supported:
- Write Exclusive: there is a single reservation holder and a single writer. All other registrants and non-registrants have read-only access.
- Write Exclusive - Registrants Only: there is a single reservation holder. All registrants have read and write access to the disk. Non-registrants have read-only access.
The following reservation modes aren't supported:
- Write Exclusive - All Registrants
- Exclusive Access
- Exclusive Access - Registrants Only
- Exclusive Access - All Registrants
NVMe `Get Features - Host Identifier` is supported. The instance number is used as the default Host ID.
The following NVMe reservation features are not supported:
- Set Features - Host Identifier
- Reservation notifications:
- Get Log Page
- Reservation Notification Mask
Supported commands
NVMe reservations support the following commands:
- Reservation Register Action (`RREGA`) - Replace/Register/Unregister - `IEKEY`
- Reservation Acquire Action (`RACQA`) - Acquire/Preempt - `IEKEY`
- Reservation Release Action (`RRELA`) - Release/Clear - `IEKEY`
- Reservation Report
- Reservation Capabilities (`RESCAP`) field in the Identify Namespace data structure
NVMe reservations don't support the following commands:
- Preempt and Abort
- Disabling Persist Through Power Loss (PTPL). PTPL is always enabled.
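As an illustration, the supported reservation commands map onto standard NVMe tooling. The following is a minimal sketch using the open-source nvme-cli utility, which this page doesn't prescribe; the device path and reservation keys are hypothetical:

```
# Register a reservation key, acquire a Write Exclusive reservation
# (rtype 1), then report reservation status (hypothetical device/keys).
sudo nvme resv-register /dev/nvme0n2 --namespace-id=1 \
    --nrkey=0xABCD --rrega=0            # RREGA 0 = register
sudo nvme resv-acquire /dev/nvme0n2 --namespace-id=1 \
    --crkey=0xABCD --rtype=1 --racqa=0  # RACQA 0 = acquire
sudo nvme resv-report /dev/nvme0n2 --namespace-id=1
```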
How to share a disk in multi-writer mode
Before you attach a disk in multi-writer mode to multiple instances, you must set the disk's access mode to multi-writer. You can set the access mode for a disk when you create it.
You can also set the access mode for an existing disk, but you must first detach the disk from all instances.
To create and use a new disk in multi-writer mode, follow these steps:
- Create the disk, setting its access mode to multi-writer. For instructions, see Add a Hyperdisk to your instance.
- Attach the disk to each instance.
To use an existing disk in multi-writer mode, follow these steps, as shown in the sketch after this list:
- Detach the disk from all instances.
- Set the disk's access mode to multi-writer.
- Attach the disk to each instance.
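Here's a minimal gcloud sketch of those steps, assuming an illustrative disk named shared-disk attached to two instances in us-central1-a:

```
# 1. Detach the disk from every instance it's attached to.
gcloud compute instances detach-disk instance-1 \
    --disk=shared-disk --zone=us-central1-a
gcloud compute instances detach-disk instance-2 \
    --disk=shared-disk --zone=us-central1-a

# 2. Set the disk's access mode to multi-writer.
gcloud compute disks update shared-disk \
    --zone=us-central1-a \
    --access-mode=READ_WRITE_MANY

# 3. Attach the disk to each instance.
gcloud compute instances attach-disk instance-1 \
    --disk=shared-disk --zone=us-central1-a
gcloud compute instances attach-disk instance-2 \
    --disk=shared-disk --zone=us-central1-a
```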
Multi-writer mode for Persistent Disk volumes
Caution: Google recommends that you use Hyperdisk Balanced volumes in multi-writer mode instead of SSD Persistent Disk volumes.

Preview
This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
You can attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 virtual machine (VM) instances simultaneously so that both VMs can read and write to the disk.
If you have more than 2 N2 VMs or you're using any other machine series, you can use one of the following options:
- Connect your instances to Cloud Storage
- Connect your instances to Filestore
- Create a network file server on Compute Engine
To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk volume and specify the `--multi-writer` flag in the gcloud CLI or the `multiWriter` property in the Compute Engine API.
Persistent Disk volumes in multi-writer mode provide a shared block storage capability and present an infrastructural foundation for building distributed storage systems and similar highly available services. When using Persistent Disk volumes in multi-writer mode, use a scale-out storage software system that has the ability to coordinate access to Persistent Disk devices across multiple VMs. Examples of these storage systems include Lustre and IBM Spectrum Scale. Most single-VM file systems, such as EXT4, XFS, and NTFS, are not designed to be used with shared block storage.
For more information, see Best practices in this document. If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine instances.
Persistent Disk volumes in multi-writer mode support a subset of SCSI-3 Persistent Reservations (SCSI PR) commands. High-availability applications can use these commands for I/O fencing and failover configurations.
The following SCSI PR commands are supported:
- IN {REPORT CAPABILITIES, READ FULL STATUS, READ RESERVATION, READ KEYS}
- OUT {REGISTER, REGISTER AND IGNORE EXISTING KEY, RESERVE, PREEMPT, CLEAR, RELEASE}
For instructions, see Share an SSD Persistent Disk volume in multi-writer mode between VMs.
Supported Persistent Disk types for multi-writer mode
You can simultaneously attach SSD Persistent Disk volumes in multi-writer mode to up to 2 N2 VMs.
Best practices for multi-writer mode
- I/O fencing using SCSI PR commands results in a crash-consistent state of Persistent Disk data. Some file systems don't have crash consistency and therefore might become corrupt if you use SCSI PR commands.
- Many file systems, such as EXT4, XFS, and NTFS, are not designed to be used with shared block storage and don't have mechanisms to synchronize or perform operations that originate from multiple VM instances.
- Before you use Persistent Disk volumes in multi-writer mode, ensure that you understand your file system and how it can be safely used with shared block storage and simultaneous access from multiple instances.
Persistent Disk performance in multi-writer mode
Persistent Disk volumes created in multi-writer mode have specific IOPS and throughput limits.
| Zonal SSD Persistent Disk in multi-writer mode | |
|---|---|
| Maximum sustained IOPS | |
| Read IOPS per GB | 30 |
| Write IOPS per GB | 30 |
| Read IOPS per instance | 15,000–100,000* |
| Write IOPS per instance | 15,000–100,000* |
| Maximum sustained throughput (MB/s) | |
| Read throughput per GB | 0.48 |
| Write throughput per GB | 0.48 |
| Read throughput per instance | 240–1,200* |
| Write throughput per instance | 240–1,200* |
To learn how to share persistent disks between multiple VMs, see Share persistent disks between VMs.
Restrictions for sharing a disk in multi-writer mode
- Multi-writer mode is only supported for SSD-type Persistent Disk volumes.
- You can create a Persistent Disk volume in multi-writer mode in any zone, but you can only attach that disk to VMs in the following locations:
  - `australia-southeast1`
  - `europe-west1`
  - `us-central1` (`us-central1-a` and `us-central1-c` zones only)
  - `us-east1` (`us-east1-d` zone only)
  - `us-west1` (`us-west1-b` and `us-west1-c` zones only)
- Attached VMs must have an N2 machine type.
- Minimum disk size is 10 GiB.
- Disks in multi-writer mode don't support attaching more than 2 VMs at a time.
- Multi-writer mode Persistent Disk volumes don't support Persistent Disk metrics.
- Disks in multi-writer mode cannot change to read-only mode.
- You cannot use disk images or snapshots to create Persistent Disk volumes in multi-writer mode.
- You can't create snapshots or images from Persistent Disk volumes in multi-writer mode.
- Lower IOPS limits. See disk performance for details.
- You can't resize a multi-writer Persistent Disk volume.
- When creating an instance using the Google Cloud CLI, you can't create a multi-writer Persistent Disk volume using the `--create-disk` flag.
Share an SSD Persistent Disk volume in multi-writer mode between VMs
Preview
This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
Caution: Google recommends you use Hyperdisk Balanced volumes in multi-writer mode.

You can share an SSD Persistent Disk volume in multi-writer mode between N2 VMs in the same zone. See Persistent Disk multi-writer mode for details about how this mode works. You can create and attach multi-writer Persistent Disk volumes using the following process:
gcloud
Create and attach a zonal Persistent Disk volume by using the gcloud CLI:
1. Use the `gcloud beta compute disks create` command to create a zonal Persistent Disk volume. Include the `--multi-writer` flag to indicate that the disk must be shareable between the VMs in multi-writer mode.

   ```
   gcloud beta compute disks create DISK_NAME \
       --size DISK_SIZE \
       --type pd-ssd \
       --multi-writer
   ```

   Replace the following:
   - DISK_NAME: the name of the new disk
   - DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
2. After you create the disk, attach it to any running or stopped VM with an N2 machine type. Use the `gcloud compute instances attach-disk` command:

   ```
   gcloud compute instances attach-disk INSTANCE_NAME \
       --disk DISK_NAME
   ```

   Replace the following:
   - INSTANCE_NAME: the name of the N2 VM where you are adding the new zonal Persistent Disk volume
   - DISK_NAME: the name of the new disk that you are attaching to the VM
3. Repeat the `gcloud compute instances attach-disk` command, but replace INSTANCE_NAME with the name of your second VM.
After you create and attach a new disk to an instance, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk. You cannot mount the disk to multiple VMs using the same process you would normally use to mount the disk to a single VM.
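As an example of this step, the following hypothetical sketch formats the disk with OCFS2, one of the clustered file systems mentioned earlier. It assumes ocfs2-tools is installed and an O2CB cluster is already configured on both VMs; the device path is illustrative:

```
# On one VM only: format the shared disk with OCFS2 for two nodes.
sudo mkfs.ocfs2 -N 2 -L shared-vol /dev/sdb

# On each VM: mount the shared volume.
sudo mkdir -p /mnt/shared
sudo mount /dev/sdb /mnt/shared
```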
REST
Use the Compute Engine API to create and attach an SSD Persistent Disk volume to N2 VMs in multi-writer mode.
1. In the API, construct a `POST` request to create a zonal Persistent Disk volume using the `disks.insert` method. Include the `name`, `sizeGb`, and `type` properties. To create this new disk as an empty and unformatted non-boot disk, don't specify a source image or a source snapshot for this disk. Include the `multiWriter` property with a value of `True` to indicate that the disk must be sharable between the VMs in multi-writer mode.

   ```
   POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks

   {
     "name": "DISK_NAME",
     "sizeGb": "DISK_SIZE",
     "type": "zones/ZONE/diskTypes/pd-ssd",
     "multiWriter": "True"
   }
   ```
   Replace the following:
   - PROJECT_ID: your project ID
   - ZONE: the zone where your VM and new disk are located
   - DISK_NAME: the name of the new disk
   - DISK_SIZE: the size, in GB, of the new disk. Acceptable sizes range from 1 GB to 65,536 GB for SSD Persistent Disk volumes, or 200 GB to 65,536 GB for standard Persistent Disk volumes in multi-writer mode.
2. To attach the disk to an instance, construct a `POST` request to the `compute.instances.attachDisk` method. Include the URL to the zonal Persistent Disk volume that you just created:

   ```
   POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk

   {
     "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
   }
   ```
   Replace the following:
   - PROJECT_ID: your project ID
   - ZONE: the zone where your VM and new disk are located
   - INSTANCE_NAME: the name of the VM where you are adding the new Persistent Disk volume
   - DISK_NAME: the name of the new disk
3. To attach the disk to a second VM, repeat the `instances.attachDisk` request from the previous step, setting INSTANCE_NAME to the name of the second VM.
After you create and attach a new disk to an instance, format and mount the disk using a shared-disk file system. Most file systems are not capable of using shared storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk.
What's next
- Learn about cross-zonal synchronous disk replication.
- Learn about Asynchronous Replication.