Reduce latency by using compact placement policies

This document describes how to reduce network latency among your Compute Engine instances by creating and applying compact placement policies to them. To learn more about placement policies, including their supported machine series, restrictions, and pricing, see Placement policies overview.

A compact placement policy specifies that your instances should be physically placed closer to each other. This can help improve performance and reduce network latency among your instances when, for example, you run high performance computing (HPC), machine learning (ML), or database server workloads.

Before you begin

Required roles

To get the permissions that you need to create and apply a compact placement policy to compute instances, ask your administrator to grant you the following IAM roles on your project:

For more information about granting roles, see Manage access to projects, folders, and organizations.

These predefined roles contain the permissions required to create and apply a compact placement policy to compute instances. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to create and apply a compact placement policy to compute instances:

  • To create placement policies: compute.resourcePolicies.create on the project
  • To apply a placement policy to existing instances: compute.instances.addResourcePolicies on the project
  • To create instances:
    • compute.instances.create on the project
    • To use a custom image to create the VM: compute.images.useReadOnly on the image
    • To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
    • To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
    • To assign a legacy network to the VM: compute.networks.use on the project
    • To specify a static IP address for the VM: compute.addresses.use on the project
    • To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
    • To specify a subnet for the VM: compute.subnetworks.use on the project or on the chosen subnet
    • To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
    • To set VM instance metadata for the VM: compute.instances.setMetadata on the project
    • To set tags for the VM: compute.instances.setTags on the VM
    • To set labels for the VM: compute.instances.setLabels on the VM
    • To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
    • To create a new disk for the VM: compute.disks.create on the project
    • To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
    • To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk
  • To create a reservation: compute.reservations.create on the project
  • To create an instance template: compute.instanceTemplates.create on the project
  • To create a managed instance group (MIG): compute.instanceGroupManagers.create on the project
  • To view the details of an instance: compute.instances.get on the project

You might also be able to get these permissions with custom roles or other predefined roles.

Create a compact placement policy

Before you create a compact placement policy, consider the following:

  • If you want to apply a compact placement policy to a compute instance other than M3, M2, M1, N2D, or N2, then we recommend that you specify a maximum distance value.

  • By default, you can't apply compact placement policies with a max distance value to A3 Mega, A3 High, or A3 Edge instances. To request access to this feature, contact your account team or the sales team.

To create a compact placement policy, select one of the following options:

gcloud

  • To apply the compact placement policy to M3, M2, M1, N2D, or N2 instances, create the policy using the gcloud compute resource-policies create group-placement command with the --collocation=collocated flag.

    gcloud compute resource-policies create group-placement POLICY_NAME \
        --collocation=collocated \
        --region=REGION

    Replace the following:

    • POLICY_NAME: the name of the compact placement policy.

    • REGION: the region in which to create the placement policy.
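
    For example, the following command creates a policy named example-policy in us-central1. The policy name and region here are illustrative values; substitute your own:

    gcloud compute resource-policies create group-placement example-policy \
        --collocation=collocated \
        --region=us-central1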

  • To apply the compact placement policy to any other supported instances, create the policy using the gcloud beta compute resource-policies create group-placement command with the --collocation=collocated and --max-distance flags.

    Preview — The --max-distance flag

    This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

    gcloud beta compute resource-policies create group-placement POLICY_NAME \
        --collocation=collocated \
        --max-distance=MAX_DISTANCE \
        --region=REGION

    Replace the following:

    • POLICY_NAME: the name of the compact placement policy.

    • MAX_DISTANCE: the maximum distance configuration for your instances. The value must be between 1, which specifies to place your instances in the same rack for the lowest network latency possible, and 3, which specifies to place your instances in adjacent clusters. If you want to apply the compact placement policy to a reservation, or to an A4 or A3 Ultra instance, then you can't specify a value of 1.

    • REGION: the region in which to create the placement policy.
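
    For example, the following command creates a policy named example-policy-md2 in us-central1 that places instances within the same cluster (a maximum distance of 2). The policy name and region are illustrative values only:

    gcloud beta compute resource-policies create group-placement example-policy-md2 \
        --collocation=collocated \
        --max-distance=2 \
        --region=us-central1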

REST

  • To apply the compact placement policy to M3, M2, M1, N2D, or N2 instances, create the policy by making a POST request to the resourcePolicies.insert method. In the request body, include the collocation field and set it to COLLOCATED.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/resourcePolicies

    {
      "name": "POLICY_NAME",
      "groupPlacementPolicy": {
        "collocation": "COLLOCATED"
      }
    }

    Replace the following:

    • PROJECT_ID: the ID of the project where you want to create the placement policy.

    • REGION: the region in which to create the placement policy.

    • POLICY_NAME: the name of the compact placement policy.

  • To apply the compact placement policy to any other supported instances, create the policy by making a POST request to the beta.resourcePolicies.insert method. In the request body, include the following:

    • The collocation field set to COLLOCATED.

    • The maxDistance field.

    Preview — The maxDistance field

    This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

    POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/regions/REGION/resourcePolicies

    {
      "name": "POLICY_NAME",
      "groupPlacementPolicy": {
        "collocation": "COLLOCATED",
        "maxDistance": MAX_DISTANCE
      }
    }

    Replace the following:

    • PROJECT_ID: the ID of the project where you want to create the placement policy.

    • REGION: the region in which to create the placement policy.

    • POLICY_NAME: the name of the compact placement policy.

    • MAX_DISTANCE: the maximum distance configuration for your instances. The value must be between 1, which specifies to place your instances in the same rack for the lowest network latency possible, and 3, which specifies to place your instances in adjacent clusters. If you want to apply the compact placement policy to a reservation, or to an A4 or A3 Ultra instance, then you can't specify a value of 1.
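
    For example, you could send the preceding request from the command line with curl, authenticating through the gcloud CLI. The project, region, policy name, and maximum distance below are illustrative values only:

    curl -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        -d '{
          "name": "example-policy",
          "groupPlacementPolicy": {
            "collocation": "COLLOCATED",
            "maxDistance": 2
          }
        }' \
        "https://compute.googleapis.com/compute/beta/projects/example-project/regions/us-central1/resourcePolicies"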

Apply a compact placement policy

You can apply a compact placement policy to an existing compute instance or managed instance group (MIG), or when creating instances, instance templates, MIGs, or reservations of instances.

To apply a compact placement policy to a Compute Engine resource, select one of the following methods:

After you apply a compact placement policy to an instance, you can verify the physical location of the instance in relation to other instances that specify the same placement policy.

Apply the policy to an existing instance

Before applying a compact placement policy to an existing compute instance, make sure of the following:

Otherwise, applying the compact placement policy to the instance fails. If the instance already specifies a placement policy and you want to replace it, then see Replace a placement policy in an instance instead.

To apply a compact placement policy to an existing instance, select one of the following options:

gcloud

  1. Stop the instance.

  2. To apply a compact placement policy to an existing instance, use the gcloud compute instances add-resource-policies command.

    gcloud compute instances add-resource-policies INSTANCE_NAME \
        --resource-policies=POLICY_NAME \
        --zone=ZONE

    Replace the following:

    • INSTANCE_NAME: the name of an existing instance.

    • POLICY_NAME: the name of an existing compact placement policy.

    • ZONE: the zone where the instance is located.

  3. Restart the instance.
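
For example, the following sequence stops an instance named example-instance in us-central1-a, applies the policy example-policy, and then restarts the instance. All names and the zone are illustrative values:

gcloud compute instances stop example-instance --zone=us-central1-a

gcloud compute instances add-resource-policies example-instance \
    --resource-policies=example-policy \
    --zone=us-central1-a

gcloud compute instances start example-instance --zone=us-central1-a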

REST

  1. Stop the instance.

  2. To apply a compact placement policy to an existing instance, make a POST request to the instances.addResourcePolicies method.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/addResourcePolicies

    {
      "resourcePolicies": [
        "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
      ]
    }

    Replace the following:

    • PROJECT_ID: the ID of the project where the compact placement policy and the instance are located.

    • ZONE: the zone where the instance is located.

    • INSTANCE_NAME: the name of an existing instance.

    • REGION: the region where the compact placement policy is located.

    • POLICY_NAME: the name of an existing compact placement policy.

  3. Restart the instance.
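
For example, you could send the request with curl, authenticating through the gcloud CLI. The project, zone, instance, region, and policy names below are illustrative values only:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
      "resourcePolicies": [
        "projects/example-project/regions/us-central1/resourcePolicies/example-policy"
      ]
    }' \
    "https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-a/instances/example-instance/addResourcePolicies"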

Apply the policy while creating an instance

You can only create a compute instance that specifies a compact placement policy in the same region as the placement policy.

To create an instance that specifies a compact placement policy, select one of the following options:

gcloud

To create an instance that specifies a compact placement policy, use the gcloud compute instances create command with the --maintenance-policy and --resource-policies flags.

gcloud compute instances create INSTANCE_NAME \
    --machine-type=MACHINE_TYPE \
    --maintenance-policy=MAINTENANCE_POLICY \
    --resource-policies=POLICY_NAME \
    --zone=ZONE

Replace the following:

  • INSTANCE_NAME: the name of the instance to create.

  • MACHINE_TYPE: a supported machine type for compact placement policies.

  • MAINTENANCE_POLICY: the host maintenance policy of the instance. If the compact placement policy you specify uses a maximum distance value of 1 or 2, or your chosen machine type doesn't support live migration, then you can only specify TERMINATE. Otherwise, you can specify MIGRATE or TERMINATE.

  • POLICY_NAME: the name of an existing compact placement policy.

  • ZONE: the zone in which to create the instance.
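
For example, the following command creates an instance named example-instance that uses the policy example-policy. The instance name, machine type, and zone are illustrative values; use a machine type that is supported for compact placement policies and a zone in the policy's region:

gcloud compute instances create example-instance \
    --machine-type=c2-standard-60 \
    --maintenance-policy=TERMINATE \
    --resource-policies=example-policy \
    --zone=us-central1-a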

REST

To create an instance that specifies a compact placement policy, make a POST request to the instances.insert method. In the request body, include the onHostMaintenance and resourcePolicies fields.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "INSTANCE_NAME",
  "machineType": "zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks": [
    {
      "boot": true,
      "initializeParams": {
        "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
      }
    }
  ],
  "networkInterfaces": [
    {
      "network": "global/networks/default"
    }
  ],
  "resourcePolicies": [
    "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
  ],
  "scheduling": {
    "onHostMaintenance": "MAINTENANCE_POLICY"
  }
}

Replace the following:

  • PROJECT_ID: the ID of the project where the compact placement policy is located.

  • ZONE: the zone in which to create the instance and where the machine type is located. You can only specify a zone in the region of the compact placement policy.

  • INSTANCE_NAME: the name of the instance to create.

  • MACHINE_TYPE: a supported machine type for compact placement policies.

  • IMAGE_PROJECT: the image project that contains the image—for example, debian-cloud. For more information about the supported image projects, see Public images.

  • IMAGE: specify one of the following:

    • A specific version of the OS image—for example, debian-12-bookworm-v20240617.

    • An image family, which must be formatted as family/IMAGE_FAMILY. This specifies the most recent, non-deprecated OS image. For example, if you specify family/debian-12, the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices.

  • REGION: the region where the compact placement policy is located.

  • POLICY_NAME: the name of an existing compact placement policy.

  • MAINTENANCE_POLICY: the host maintenance policy of the instance. If the compact placement policy you specify uses a maximum distance value of 1 or 2, or your chosen machine type doesn't support live migration, then you can only specify TERMINATE. Otherwise, you can specify MIGRATE or TERMINATE.

For more information about the configuration options to create an instance, see Create and start an instance.

Apply the policy while creating instances in bulk

You can only create compute instances in bulk with a compact placement policy in the same region as the placement policy.

To create instances in bulk that specify a compact placement policy, select one of the following options:

gcloud

To create instances in bulk that specify a compact placement policy, use the gcloud compute instances bulk create command with the --maintenance-policy and --resource-policies flags.

For example, to create instances in bulk in a single zone and specify a name pattern for the instances, run the following command:

gcloud compute instances bulk create \
    --count=COUNT \
    --machine-type=MACHINE_TYPE \
    --maintenance-policy=MAINTENANCE_POLICY \
    --name-pattern=NAME_PATTERN \
    --resource-policies=POLICY_NAME \
    --zone=ZONE

Replace the following:

  • COUNT: the number of instances to create, which can't be higher than the supported maximum number of instances of the specified compact placement policy.

  • MACHINE_TYPE: a supported machine type for compact placement policies.

  • MAINTENANCE_POLICY: the host maintenance policy of the instance. If the compact placement policy you specify uses a maximum distance value of 1 or 2, or your chosen machine type doesn't support live migration, then you can only specify TERMINATE. Otherwise, you can specify MIGRATE or TERMINATE.

  • NAME_PATTERN: the name pattern for the instances. To replace a sequence of numbers in an instance name, use a sequence of hash (#) characters. For example, using vm-# for the name pattern generates instances with names starting with vm-1, vm-2, and continuing up to the number of instances specified by COUNT.

  • POLICY_NAME: the name of an existing compact placement policy.

  • ZONE: the zone in which to create the instances in bulk.
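
For example, the following command creates four instances named vm-1 through vm-4 that share the policy example-policy. The count, machine type, names, and zone are illustrative values; use a machine type that is supported for compact placement policies:

gcloud compute instances bulk create \
    --count=4 \
    --machine-type=c2-standard-60 \
    --maintenance-policy=TERMINATE \
    --name-pattern=vm-# \
    --resource-policies=example-policy \
    --zone=us-central1-a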

REST

To create instances in bulk that specify a compact placement policy, make a POST request to the instances.bulkInsert method. In the request body, include the onHostMaintenance and resourcePolicies fields.

For example, to create instances in bulk in a single zone and specify a name pattern for the instances, make a POST request as follows:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/bulkInsert

{
  "count": "COUNT",
  "namePattern": "NAME_PATTERN",
  "instanceProperties": {
    "machineType": "MACHINE_TYPE",
    "disks": [
      {
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
        }
      }
    ],
    "networkInterfaces": [
      {
        "network": "global/networks/default"
      }
    ],
    "resourcePolicies": [
      "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
    ],
    "scheduling": {
      "onHostMaintenance": "MAINTENANCE_POLICY"
    }
  }
}

Replace the following:

  • PROJECT_ID: the ID of the project where the compact placement policy is located.

  • ZONE: the zone in which to create the instances in bulk.

  • COUNT: the number of instances to create, which can't be higher than the supported maximum number of instances of the specified compact placement policy.

  • NAME_PATTERN: the name pattern for the instances. To replace a sequence of numbers in an instance name, use a sequence of hash (#) characters. For example, using vm-# for the name pattern generates instances with names starting with vm-1, vm-2, and continuing up to the number of instances specified by COUNT.

  • MACHINE_TYPE: a supported machine type for compact placement policies.

  • IMAGE_PROJECT: the image project that contains the image—for example, debian-cloud. For more information about the supported image projects, see Public images.

  • IMAGE: specify one of the following:

    • A specific version of the OS image—for example, debian-12-bookworm-v20240617.

    • An image family, which must be formatted as family/IMAGE_FAMILY. This specifies the most recent, non-deprecated OS image. For example, if you specify family/debian-12, the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices.

  • REGION: the region where the compact placement policy is located.

  • POLICY_NAME: the name of an existing compact placement policy.

  • MAINTENANCE_POLICY: the host maintenance policy of the instance. If the compact placement policy you specify uses a maximum distance value of 1 or 2, or your chosen machine type doesn't support live migration, then you can only specify TERMINATE. Otherwise, you can specify MIGRATE or TERMINATE.

For more information about the configuration options to create instances in bulk, see Create instances in bulk.

Apply the policy while creating a reservation

If you want to create an on-demand, single-project reservation that specifies a compact placement policy, then you must create a specifically targeted reservation. When you create instances to consume the reservation, make sure of the following:

  • The instances must specify the same compact placement policy applied to the reservation.

  • The instances must specifically target the reservation to consume it. For more information, see Consume instances from a specific reservation.

To create a single-project reservation with a compact placement policy, select one of the following methods:

Note: You can apply compact placement policies only to single-project, standalone reservations. Shared reservations or reservations attached to commitments aren't supported. For the full list of requirements, see the requirements for reservations with compact placement policies.

To create a single-project reservation with a compact placement policy by specifying properties directly, select one of the following options:

gcloud

To create a single-project reservation with a compact placement policy by specifying properties directly, use the gcloud compute reservations create command with the --require-specific-reservation and --resource-policies=policy flags.

gcloud compute reservations create RESERVATION_NAME \
    --machine-type=MACHINE_TYPE \
    --require-specific-reservation \
    --resource-policies=policy=POLICY_NAME \
    --vm-count=NUMBER_OF_INSTANCES \
    --zone=ZONE

Replace the following:

  • RESERVATION_NAME: the name of the reservation.

  • MACHINE_TYPE: a supported machine type for compact placement policies.

  • POLICY_NAME: the name of an existing compact placement policy.

  • NUMBER_OF_INSTANCES: the number of instances to reserve, which can't be higher than the supported maximum number of instances of the specified compact placement policy.

  • ZONE: the zone in which to reserve instances. You can only reserve instances in a zone in the region of the specified compact placement policy.
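
For example, the following command reserves four instances that use the policy example-policy. The reservation name, machine type, count, and zone are illustrative values; use a machine type that is supported for compact placement policies:

gcloud compute reservations create example-reservation \
    --machine-type=c2-standard-60 \
    --require-specific-reservation \
    --resource-policies=policy=example-policy \
    --vm-count=4 \
    --zone=us-central1-a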

REST

To create a single-project reservation with a compact placement policy by specifying properties directly, make a POST request to the reservations.insert method. In the request body, include the resourcePolicies field, and the specificReservationRequired field set to true.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/reservations

{
  "name": "RESERVATION_NAME",
  "resourcePolicies": {
    "policy": "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
  },
  "specificReservation": {
    "count": "NUMBER_OF_INSTANCES",
    "instanceProperties": {
      "machineType": "MACHINE_TYPE"
    }
  },
  "specificReservationRequired": true
}

Replace the following:

  • PROJECT_ID: the ID of the project where the compact placement policy is located.

  • ZONE: the zone in which to reserve instances. You can only reserve instances in a zone in the region of the specified compact placement policy.

  • RESERVATION_NAME: the name of the reservation.

  • REGION: the region where the compact placement policy is located.

  • POLICY_NAME: the name of an existing compact placement policy.

  • NUMBER_OF_INSTANCES: the number of instances to reserve, which can't be higher than the supported maximum number of instances of the specified compact placement policy.

  • MACHINE_TYPE: a supported machine type for compact placement policies.

For more information about the configuration options to create single-project reservations, see Create a reservation for a single project.

Apply the policy while creating an instance template

If you want to create a regional instance template, then you must create the template in the same region as the compact placement policy. Otherwise, creating the instance template fails.

After creating an instance template that specifies a compact placement policy, you can use the template to do the following:

To create an instance template that specifies a compact placement policy, select one of the following options:

gcloud

To create an instance template that specifies a compact placement policy, use the gcloud compute instance-templates create command with the --maintenance-policy and --resource-policies flags.

For example, to create a global instance template that specifies a compact placement policy, run the following command:

gcloud compute instance-templates create INSTANCE_TEMPLATE_NAME \
    --machine-type=MACHINE_TYPE \
    --maintenance-policy=MAINTENANCE_POLICY \
    --resource-policies=POLICY_NAME

Replace the following:

  • INSTANCE_TEMPLATE_NAME: the name of the instance template.

  • MACHINE_TYPE: a supported machine type for compact placement policies.

  • MAINTENANCE_POLICY: the host maintenance policy of the instance. If the compact placement policy you specify uses a maximum distance value of 1 or 2, or your chosen machine type doesn't support live migration, then you can only specify TERMINATE. Otherwise, you can specify MIGRATE or TERMINATE.

  • POLICY_NAME: the name of an existing compact placement policy.
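
For example, the following command creates a global template named example-template that specifies the policy example-policy. The template name and machine type are illustrative values; use a machine type that is supported for compact placement policies:

gcloud compute instance-templates create example-template \
    --machine-type=c2-standard-60 \
    --maintenance-policy=TERMINATE \
    --resource-policies=example-policy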

REST

To create an instance template that specifies a compact placement policy, make a POST request to one of the following methods:

In the request body, include the onHostMaintenance and resourcePolicies fields.

For example, to create a global instance template that specifies a compact placement policy, make a POST request as follows:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "INSTANCE_TEMPLATE_NAME",
  "properties": {
    "disks": [
      {
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
        }
      }
    ],
    "machineType": "MACHINE_TYPE",
    "networkInterfaces": [
      {
        "network": "global/networks/default"
      }
    ],
    "resourcePolicies": [
      "POLICY_NAME"
    ],
    "scheduling": {
      "onHostMaintenance": "MAINTENANCE_POLICY"
    }
  }
}

Replace the following:

  • PROJECT_ID: the ID of the project where the compact placement policy is located.

  • INSTANCE_TEMPLATE_NAME: the name of the instance template.

  • IMAGE_PROJECT: the image project that contains the image—for example, debian-cloud. For more information about the supported image projects, see Public images.

  • IMAGE: specify one of the following:

    • A specific version of the OS image—for example, debian-12-bookworm-v20240617.

    • An image family, which must be formatted as family/IMAGE_FAMILY. This specifies the most recent, non-deprecated OS image. For example, if you specify family/debian-12, the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices.

  • MACHINE_TYPE: a supported machine type for compact placement policies.

  • POLICY_NAME: the name of an existing compact placement policy.

  • MAINTENANCE_POLICY: the host maintenance policy of the instance. If the compact placement policy you specify uses a maximum distance value of 1 or 2, or your chosen machine type doesn't support live migration, then you can only specify TERMINATE. Otherwise, you can specify MIGRATE or TERMINATE.

For more information about the configuration options to create an instance template, see Create instance templates.

Apply the policy to instances in a MIG

After you create an instance template that specifies a compact placement policy, you can use the template to do the following:

Caution: Avoid using a compact placement policy (or its instance template) for multiple MIGs. A policy's maximum distance value affects the maximum number of compute instances that the policy supports. If you share a policy among multiple MIGs, then it restricts the number of instances that you can create in the MIGs. Instead, use a new instance template with a new compact placement policy for each MIG.

Apply the policy while creating a MIG

You can only create compute instances that specify a compact placement policy if the instances are located in the same region as the placement policy.

To create a MIG using an instance template that specifies a compact placement policy, select one of the following options:

gcloud

To create a MIG using an instance template that specifies a compact placement policy, use the gcloud compute instance-groups managed create command.

For example, to create a zonal MIG using a global instance template that specifies a compact placement policy, run the following command:

gcloud compute instance-groups managed create INSTANCE_GROUP_NAME \
    --size=SIZE \
    --template=INSTANCE_TEMPLATE_NAME \
    --zone=ZONE

Replace the following:

  • INSTANCE_GROUP_NAME: the name of the MIG to create.

  • SIZE: the size of the MIG.

  • INSTANCE_TEMPLATE_NAME: the name of an existing global instance template that specifies a compact placement policy.

  • ZONE: the zone in which to create the MIG, which must be in the region where the compact placement policy is located.
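
For example, the following command creates a four-instance MIG from the hypothetical template example-template; the MIG name, size, and zone are illustrative values:

gcloud compute instance-groups managed create example-mig \
    --size=4 \
    --template=example-template \
    --zone=us-central1-a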

REST

To create a MIG using an instance template that specifies a compact placement policy, make a POST request to one of the following methods:

For example, to create a zonal MIG using a global instance template that specifies a compact placement policy, make a POST request as follows:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers

{
  "name": "INSTANCE_GROUP_NAME",
  "targetSize": SIZE,
  "versions": [
    {
      "instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE_NAME"
    }
  ]
}

Replace the following:

  • PROJECT_ID: the ID of the project where the compact placement policy and the instance template that specifies the placement policy are located.

  • ZONE: the zone in which to create the MIG, which must be in the region where the compact placement policy is located.

  • INSTANCE_GROUP_NAME: the name of the MIG to create.

  • INSTANCE_TEMPLATE_NAME: the name of an existing global instance template that specifies a compact placement policy.

  • SIZE: the size of the MIG.

For more information about the configuration options to create MIGs, see Basic scenarios for creating MIGs.

Apply the policy to an existing MIG

You can only apply a compact placement policy to an existing MIG if the MIG is located in the same region as the placement policy or, for zonal MIGs, in a zone in the same region as the placement policy.

To update a MIG to use an instance template that specifies a compact placement policy, select one of the following options:

gcloud

To update a MIG to use an instance template that specifies a compact placement policy, use the gcloud compute instance-groups managed rolling-action start-update command.

For example, to update a zonal MIG to use an instance template that specifies a compact placement policy, and replace the existing instances from the MIG with new instances that specify the template's properties, run the following command:

gcloud compute instance-groups managed rolling-action start-update MIG_NAME \
    --version=template=INSTANCE_TEMPLATE_NAME \
    --zone=ZONE

Replace the following:

  • MIG_NAME: the name of an existing MIG.

  • INSTANCE_TEMPLATE_NAME: the name of an existing global instance template that specifies a compact placement policy.

  • ZONE: the zone where the MIG is located. You can only apply the compact placement policy to a MIG located in the same region as the placement policy.
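
For example, assuming an existing MIG named example-mig and a template named example-template (illustrative names and zone):

gcloud compute instance-groups managed rolling-action start-update example-mig \
    --version=template=example-template \
    --zone=us-central1-a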

REST

To update a MIG to use an instance template that specifies a compact placement policy, and automatically apply the properties of the template and the placement policy to existing instances in the MIG, make a PATCH request to one of the following methods:

For example, to update a zonal MIG to use a global instance template that specifies a compact placement policy, and replace the existing instances from the MIG with new instances that specify the template's properties, make the following PATCH request:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers/MIG_NAME

{
  "instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE_NAME"
}

Replace the following:

  • PROJECT_ID: the ID of the project where the MIG, the compact placement policy, and the instance template that specifies the placement policy are located.

  • ZONE: the zone where the MIG is located. You can only apply the compact placement policy to a MIG located in the same region as the placement policy.

  • MIG_NAME: the name of an existing MIG.

  • INSTANCE_TEMPLATE_NAME: the name of an existing global instance template that specifies a compact placement policy.

For more information about the configuration options to update the instances in a MIG, see Update and apply new configurations to instances in a MIG.

Verify the physical location of an instance

After applying a compact placement policy to a compute instance, you can view the instance's physical location in relation to other instances. This comparison is limited to instances in your project that specify the same compact placement policy. Viewing the physical location of an instance helps you do the following:

  • Confirm that the policy was successfully applied.

  • Identify which instances are closest to each other.

To view the physical location of an instance in relation to other instances that specify the same compact placement policy, select one of the following options:

gcloud

To view the physical location of an instance that specifies a compact placement policy, use the gcloud compute instances describe command with the --format flag.

gcloud compute instances describe INSTANCE_NAME \
    --format="table[box,title=VM-Position](resourcePolicies.scope():sort=1,resourceStatus.physicalHost:label=location)" \
    --zone=ZONE

Replace the following:

  • INSTANCE_NAME: the name of an existing instance that specifies a compact placement policy.

  • ZONE: the zone where the instance is located.
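
For example, for a hypothetical instance named example-instance in us-central1-a:

gcloud compute instances describe example-instance \
    --format="table[box,title=VM-Position](resourcePolicies.scope():sort=1,resourceStatus.physicalHost:label=location)" \
    --zone=us-central1-a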

The output is similar to the following:

VM-Position
RESOURCE_POLICIES: us-central1/resourcePolicies/example-policy
PHYSICAL_HOST: /CCCCCCC/BBBBBB/AAAA

The value for the PHYSICAL_HOST field is composed of three parts. These parts each represent the cluster, rack, and host where the instance is located.

When comparing the position of two instances that use the same compact placement policy in your project, the more parts of the PHYSICAL_HOST field the instances share, the closer they are physically located to each other. For example, assume that two instances both specify one of the following sample values for the PHYSICAL_HOST field:

  • /CCCCCCC/xxxxxx/xxxx: the two instances are placed in the same cluster, which equals a maximum distance value of 2. Instances placed in the same cluster experience low network latency.

  • /CCCCCCC/BBBBBB/xxxx: the two instances are placed in the same rack, which equals a maximum distance value of 1. Instances placed in the same rack experience lower network latency than instances placed in the same cluster.

  • /CCCCCCC/BBBBBB/AAAA: the two instances share the same host. Instances placed in the same host minimize network latency as much as possible.

REST

To view the physical location of an instance that specifies a compact placement policy, make a GET request to the instances.get method.

GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME

Replace the following:

  • PROJECT_ID: the ID of the project where the instance is located.

  • ZONE: the zone where the instance is located.

  • INSTANCE_NAME: the name of an existing instance that specifies a compact placement policy.

The output is similar to the following:

{
  ...
  "resourcePolicies": [
    "https://www.googleapis.com/compute/v1/projects/example-project/regions/us-central1/resourcePolicies/example-policy"
  ],
  "resourceStatus": {
    "physicalHost": "/xxxxxxxx/xxxxxx/xxxxx"
  },
  ...
}

The value for the physicalHost field is composed of three parts. These parts each represent the cluster, rack, and host where the instance is located.

When comparing the position of two instances that use the same compact placement policy in your project, the more parts of the physicalHost field the instances share, the closer they are physically located to each other. For example, assume that two instances both specify one of the following sample values for the physicalHost field:

  • /CCCCCCC/xxxxxx/xxxx: the two instances are placed in the same cluster, which equals a maximum distance value of 2. Instances placed in the same cluster experience low network latency.

  • /CCCCCCC/BBBBBB/xxxx: the two instances are placed in the same rack, which equals a maximum distance value of 1. Instances placed in the same rack experience lower network latency than instances placed in the same cluster.

  • /CCCCCCC/BBBBBB/AAAA: the two instances share the same host. Instances placed in the same host minimize network latency as much as possible.

What's next?
