Create Hyperdisk Storage Pools

Hyperdisk Storage Pools are a block storage resource that helps you manage your Hyperdisk block storage in aggregate. Hyperdisk Storage Pools are available in Hyperdisk Throughput Storage Pool and Hyperdisk Balanced Storage Pool variants.

You must specify the following properties when creating a storage pool:

  • Zone
  • Storage pool type
  • Capacity provisioning type
  • Pool provisioned capacity
  • Performance provisioning type
  • Pool provisioned IOPS and throughput

You can use Standard capacity, Advanced capacity, Standard performance, or Advanced performance provisioning types with Hyperdisk Storage Pools (a command sketch follows this list):

  • Standard capacity: The capacity provisioned for each disk created in the storage pool is deducted from the total provisioned capacity of the storage pool.
  • Advanced capacity: The storage pool benefits from thin-provisioning and data reduction. Only the amount of actual written data is deducted from the total provisioned capacity of the storage pool.
  • Standard performance: The performance provisioned for each disk created in the storage pool is deducted from the total provisioned performance of the storage pool.
  • Advanced performance: The performance provisioned for each disk benefits from thin-provisioning. Only the amount of performance used by a disk is deducted from the total provisioned performance of the storage pool.
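
For example, the following command sketch creates a Hyperdisk Throughput Storage Pool that combines Advanced capacity with Standard performance provisioning. The pool name, zone, and provisioned values are placeholders chosen only for illustration; the flags are described in the gcloud section later on this page.

gcloud compute storage-pools create example-throughput-pool \
    --zone=us-central1-a \
    --storage-pool-type=hyperdisk-throughput \
    --capacity-provisioning-type=advanced \
    --performance-provisioning-type=standard \
    --provisioned-capacity=10240 \
    --provisioned-throughput=500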

Before you begin

Required roles and permissions

To get the permissions that you need to create a storage pool, ask your administrator to grant you the following IAM roles on the project:

  • Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)
  • To connect to a VM instance that can run as a service account: Service Account User (roles/iam.serviceAccountUser)

For more information about granting roles, see Manage access to projects, folders, and organizations.
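
As a minimal sketch, an administrator could grant one of these roles with the gcloud CLI; the project ID and user email below are placeholders:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.instanceAdmin.v1"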

These predefined roles contain the permissions required to create a storage pool. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to create a storage pool:

  • compute.storagePools.create on the project
  • compute.storagePools.setLabels on the project

You might also be able to get these permissions with custom roles or other predefined roles.

Limitations

Take note of the following limitations when creating Hyperdisk Storage Pools:

Resource limits:

  • You can create a Hyperdisk Storage Pool with up to 5 PiB of provisioned capacity.
  • You can create a maximum of 5 storage pools per hour.
  • You can create a maximum of 10 storage pools per day.
  • You can create at most 10 storage pools per project.
  • You can't change the capacity or performance provisioning type of a storage pool after you create it. For example, you can't change a Standard capacity storage pool to an Advanced capacity storage pool, or an Advanced performance storage pool to a Standard performance storage pool.
  • Storage pools are a zonal resource.
  • You can create up to 1,000 disks in a storage pool.
  • You can use Hyperdisk Storage Pools only with Compute Engine. Cloud SQL instances can't use Hyperdisk Storage Pools.
  • You can change the provisioned capacity or performance of a storage pool at most two times in a 24-hour period (see the sketch after this list).
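
The following sketch shows how such a change might look with the gcloud CLI. It assumes the gcloud compute storage-pools update command is available in your gcloud CLI version and uses placeholder values; check the command reference for the exact flags.

gcloud compute storage-pools update example-balanced-pool \
    --zone=us-central1-a \
    --provisioned-capacity=20480 \
    --provisioned-iops=20000 \
    --provisioned-throughput=2048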

Limits for disks in a storage pool:

  • Only new disks in the same project and zone can be created in a storage pool (see the example command after this list).
  • You can't move existing disks into or out of a storage pool. Instead, recreate the disk from a snapshot. For more information, see Change the disk type.
  • To create boot disks in a storage pool, you must use a Hyperdisk Balanced Storage Pool.
  • Storage pools don't support regional disks.
  • You can't clone, create instant snapshots of, or configure Asynchronous Replication for disks in a storage pool.
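
As a sketch of creating a new disk directly in a storage pool, the following command assumes the --storage-pool flag of gcloud compute disks create and uses placeholder names and values; adjust the disk type and performance settings to match your pool.

gcloud compute disks create example-disk \
    --zone=us-central1-a \
    --type=hyperdisk-balanced \
    --size=100GB \
    --storage-pool=example-balanced-pool \
    --provisioned-iops=3000 \
    --provisioned-throughput=140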

Capacity ranges and provisioned performance limits

When creating a storage pool, the provisioned capacity, IOPS, and throughput are subject to the limits described in Limits for storage pools.

Create a Hyperdisk Storage Pool

To create a new Hyperdisk Storage Pool, use the Google Cloud console, Google Cloud CLI, or REST.

Console

  1. Go to the Create a storage pool page in the Google Cloud console.
    Go to the Create Storage Pool page
  2. In the Name field, enter a unique name for the storage pool.
  3. Optional: In the Description field, enter a description for the storage pool.
  4. Select the Region and Zone in which to create the storage pool.
  5. Choose a value for the Storage pool type.
  6. Choose a provisioning type in the Capacity type field and specify the capacity to provision for the storage pool in the Storage pool capacity field. You can specify a size from 10 TiB to 1 PiB.

    To create a storage pool with large capacity, you might have to request a quota adjustment.

  7. Choose a provisioning type in the Performance type field.

  8. For Hyperdisk Balanced Storage Pools, in the Provisioned IOPS field, enter the IOPS to provision for the storage pool.

  9. For a Hyperdisk Throughput Storage Pool or Hyperdisk Balanced Storage Pool, in the Provisioned throughput field, enter the throughput to provision for the storage pool.

  10. Click Submit to create the storage pool.

gcloud

To create a Hyperdisk Storage Pool, use the gcloud compute storage-pools create command.

gcloud compute storage-pools create NAME \
    --zone=ZONE \
    --storage-pool-type=STORAGE_POOL_TYPE \
    --capacity-provisioning-type=CAPACITY_TYPE \
    --provisioned-capacity=POOL_CAPACITY \
    --performance-provisioning-type=PERFORMANCE_TYPE \
    --provisioned-iops=IOPS \
    --provisioned-throughput=THROUGHPUT \
    --description=DESCRIPTION

Replace the following:

  • NAME: the unique storage pool name.
  • ZONE: the zone in which to create the storage pool, for example, us-central1-a.
  • STORAGE_POOL_TYPE: the type of disk to store in the storage pool. The allowed values are hyperdisk-throughput and hyperdisk-balanced.
  • CAPACITY_TYPE: Optional: the capacity provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
  • POOL_CAPACITY: the total capacity to provision for the new storage pool, specified in GiB by default.
  • PERFORMANCE_TYPE: Optional: the performance provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
  • IOPS: the IOPS to provision for the storage pool. You can use this flag only with Hyperdisk Balanced Storage Pools.
  • THROUGHPUT: the throughput in MBps to provision for the storage pool.
  • DESCRIPTION: Optional: a text string that describes the storage pool.
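
For example, a filled-in command might look like the following; the pool name, zone, and provisioned values are placeholders for illustration only:

gcloud compute storage-pools create example-balanced-pool \
    --zone=us-central1-a \
    --storage-pool-type=hyperdisk-balanced \
    --capacity-provisioning-type=advanced \
    --provisioned-capacity=10240 \
    --performance-provisioning-type=advanced \
    --provisioned-iops=10000 \
    --provisioned-throughput=1024 \
    --description="Example Hyperdisk Balanced Storage Pool"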

REST

Construct a POST request to create a Hyperdisk Storage Pool by using the storagePools.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/storagePools

{
  "name": "NAME",
  "description": "DESCRIPTION",
  "poolProvisionedCapacityGb": "POOL_CAPACITY",
  "storagePoolType": "projects/PROJECT_ID/zones/ZONE/storagePoolTypes/STORAGE_POOL_TYPE",
  "poolProvisionedIops": "IOPS",
  "poolProvisionedThroughput": "THROUGHPUT",
  "capacityProvisioningType": "CAPACITY_TYPE",
  "performanceProvisioningType": "PERFORMANCE_TYPE"
}

Replace the following:

  • PROJECT_ID: the project ID.
  • ZONE: the zone in which to create the storage pool, for example, us-central1-a.
  • NAME: a unique name for the storage pool.
  • DESCRIPTION: Optional: a text string that describes the storage pool.
  • POOL_CAPACITY: the total capacity to provision for the new storage pool, specified in GiB by default.
  • STORAGE_POOL_TYPE: the type of disk to store in the storage pool. The allowed values are hyperdisk-throughput and hyperdisk-balanced.
  • IOPS: Optional: the IOPS to provision for the storage pool. You can use this field only with Hyperdisk Balanced Storage Pools.
  • THROUGHPUT: Optional: the throughput in MBps to provision for the storage pool.
  • CAPACITY_TYPE: Optional: the capacity provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
  • PERFORMANCE_TYPE: Optional: the performance provisioning type of the storage pool. The allowed values are advanced and standard. If not specified, the value advanced is used.
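
As a sketch, you could send this request with curl, using an access token from the gcloud CLI; the project, zone, and provisioned values below are placeholders:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
          "name": "example-balanced-pool",
          "poolProvisionedCapacityGb": "10240",
          "storagePoolType": "projects/example-project/zones/us-central1-a/storagePoolTypes/hyperdisk-balanced",
          "poolProvisionedIops": "10000",
          "poolProvisionedThroughput": "1024",
          "capacityProvisioningType": "advanced",
          "performanceProvisioningType": "advanced"
        }' \
    "https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-a/storagePools"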

Go

import (
  "context"
  "fmt"
  "io"

  compute "cloud.google.com/go/compute/apiv1"
  computepb "cloud.google.com/go/compute/apiv1/computepb"
  "google.golang.org/protobuf/proto"
)

// createHyperdiskStoragePool creates a new Hyperdisk storage pool in the specified project and zone.
func createHyperdiskStoragePool(w io.Writer, projectId, zone, storagePoolName, storagePoolType string) error {
  // projectID := "your_project_id"
  // zone := "europe-west4-b"
  // storagePoolName := "your_storage_pool_name"
  // storagePoolType := "projects/your_project_id/zones/europe-west4-b/storagePoolTypes/hyperdisk-balanced"
  ctx := context.Background()
  client, err := compute.NewStoragePoolsRESTClient(ctx)
  if err != nil {
    return fmt.Errorf("NewStoragePoolsRESTClient: %v", err)
  }
  defer client.Close()

  // Create the storage pool resource
  resource := &computepb.StoragePool{
    Name:                        proto.String(storagePoolName),
    Zone:                        proto.String(zone),
    StoragePoolType:             proto.String(storagePoolType),
    CapacityProvisioningType:    proto.String("advanced"),
    PerformanceProvisioningType: proto.String("advanced"),
    PoolProvisionedCapacityGb:   proto.Int64(10240),
    PoolProvisionedIops:         proto.Int64(10000),
    PoolProvisionedThroughput:   proto.Int64(1024),
  }

  // Create the insert storage pool request
  req := &computepb.InsertStoragePoolRequest{
    Project:             projectId,
    Zone:                zone,
    StoragePoolResource: resource,
  }

  // Send the insert storage pool request
  op, err := client.Insert(ctx, req)
  if err != nil {
    return fmt.Errorf("Insert storage pool request failed: %v", err)
  }

  // Wait for the insert storage pool operation to complete
  if err = op.Wait(ctx); err != nil {
    return fmt.Errorf("unable to wait for the operation: %w", err)
  }

  // Retrieve and return the created storage pool
  storagePool, err := client.Get(ctx, &computepb.GetStoragePoolRequest{
    Project:     projectId,
    Zone:        zone,
    StoragePool: storagePoolName,
  })
  if err != nil {
    return fmt.Errorf("Get storage pool request failed: %v", err)
  }
  fmt.Fprintf(w, "Hyperdisk Storage Pool created: %v\n", storagePool.GetName())

  return nil
}

Java

import com.google.cloud.compute.v1.InsertStoragePoolRequest;
import com.google.cloud.compute.v1.Operation;
import com.google.cloud.compute.v1.StoragePool;
import com.google.cloud.compute.v1.StoragePoolsClient;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CreateHyperdiskStoragePool {
  public static void main(String[] args)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Google Cloud project you want to use.
    String projectId = "YOUR_PROJECT_ID";
    // Name of the zone in which you want to create the storagePool.
    String zone = "us-central1-a";
    // Name of the storagePool you want to create.
    String storagePoolName = "YOUR_STORAGE_POOL_NAME";
    // The type of disk you want to create.
    // Storage types can be "hyperdisk-throughput" or "hyperdisk-balanced"
    String storagePoolType = String.format(
        "projects/%s/zones/%s/storagePoolTypes/hyperdisk-balanced", projectId, zone);
    // Optional: the capacity provisioning type of the storage pool.
    // The allowed values are advanced and standard. If not specified, the value advanced is used.
    String capacityProvisioningType = "advanced";
    // The total capacity to provision for the new storage pool, specified in GiB by default.
    long provisionedCapacity = 128;
    // The IOPS to provision for the storage pool.
    // You can use this flag only with Hyperdisk Balanced Storage Pools.
    long provisionedIops = 3000;
    // The throughput in MBps to provision for the storage pool.
    long provisionedThroughput = 140;
    // The allowed values are lowercase strings "advanced" and "standard".
    // If not specified, "advanced" is used.
    String performanceProvisioningType = "advanced";

    createHyperdiskStoragePool(projectId, zone, storagePoolName, storagePoolType,
        capacityProvisioningType, provisionedCapacity, provisionedIops, provisionedThroughput,
        performanceProvisioningType);
  }

  // Creates a hyperdisk storagePool in a project
  public static StoragePool createHyperdiskStoragePool(String projectId, String zone,
      String storagePoolName, String storagePoolType, String capacityProvisioningType,
      long capacity, long iops, long throughput, String performanceProvisioningType)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (StoragePoolsClient client = StoragePoolsClient.create()) {
      // Create a storagePool.
      StoragePool resource = StoragePool.newBuilder()
          .setZone(zone)
          .setName(storagePoolName)
          .setStoragePoolType(storagePoolType)
          .setCapacityProvisioningType(capacityProvisioningType)
          .setPoolProvisionedCapacityGb(capacity)
          .setPoolProvisionedIops(iops)
          .setPoolProvisionedThroughput(throughput)
          .setPerformanceProvisioningType(performanceProvisioningType)
          .build();

      InsertStoragePoolRequest request = InsertStoragePoolRequest.newBuilder()
          .setProject(projectId)
          .setZone(zone)
          .setStoragePoolResource(resource)
          .build();

      // Wait for the insert disk operation to complete.
      Operation operation = client.insertAsync(request).get(1, TimeUnit.MINUTES);

      if (operation.hasError()) {
        System.out.println("StoragePool creation failed!");
        throw new Error(operation.getError().toString());
      }

      // Wait for server update
      TimeUnit.SECONDS.sleep(10);

      StoragePool storagePool = client.get(projectId, zone, storagePoolName);

      System.out.printf("Storage pool '%s' has been created successfully", storagePool.getName());

      return storagePool;
    }
  }
}

Node.js

// Import the Compute library
const computeLib = require('@google-cloud/compute');
const compute = computeLib.protos.google.cloud.compute.v1;

// Instantiate a storagePoolClient
const storagePoolClient = new computeLib.StoragePoolsClient();
// Instantiate a zoneOperationsClient
const zoneOperationsClient = new computeLib.ZoneOperationsClient();

/**
 * TODO(developer): Update/uncomment these variables before running the sample.
 */
// Project ID or project number of the Google Cloud project you want to use.
const projectId = await storagePoolClient.getProjectId();
// Name of the zone in which you want to create the storagePool.
const zone = 'us-central1-a';
// Name of the storagePool you want to create.
// storagePoolName = 'storage-pool-name';
// The type of disk you want to create. This value uses the following format:
// "projects/{projectId}/zones/{zone}/storagePoolTypes/(hyperdisk-throughput|hyperdisk-balanced)"
const storagePoolType = `projects/${projectId}/zones/${zone}/storagePoolTypes/hyperdisk-balanced`;
// Optional: The capacity provisioning type of the storage pool.
// The allowed values are advanced and standard. If not specified, the value advanced is used.
const capacityProvisioningType = 'advanced';
// The total capacity to provision for the new storage pool, specified in GiB by default.
const provisionedCapacity = 10240;
// The IOPS to provision for the storage pool.
// You can use this flag only with Hyperdisk Balanced Storage Pools.
const provisionedIops = 10000;
// The throughput in MBps to provision for the storage pool.
const provisionedThroughput = 1024;
// Optional: The performance provisioning type of the storage pool.
// The allowed values are advanced and standard. If not specified, the value advanced is used.
const performanceProvisioningType = 'advanced';

async function callCreateComputeHyperdiskPool() {
  // Create a storagePool.
  const storagePool = new compute.StoragePool({
    name: storagePoolName,
    poolProvisionedCapacityGb: provisionedCapacity,
    poolProvisionedIops: provisionedIops,
    poolProvisionedThroughput: provisionedThroughput,
    storagePoolType,
    performanceProvisioningType,
    capacityProvisioningType,
    zone,
  });

  const [response] = await storagePoolClient.insert({
    project: projectId,
    storagePoolResource: storagePool,
    zone,
  });

  let operation = response.latestResponse;

  // Wait for the create storage pool operation to complete.
  while (operation.status !== 'DONE') {
    [operation] = await zoneOperationsClient.wait({
      operation: operation.name,
      project: projectId,
      zone: operation.zone.split('/').pop(),
    });
  }

  console.log(`Storage pool: ${storagePoolName} created.`);
}

await callCreateComputeHyperdiskPool();

What's next?
