Move your workload to a new compute instance

In certain situations, you might want to move your workload from an existing virtual machine (VM) instance to a newer VM. Reasons to move to a new VM include the following:

  • Take advantage of the new machine types for faster storage or networking speeds. For example, upgrade from C2 to H3 for improved networking bandwidth.
  • Benefit from greater price performance relative to the source VM instance. For example, upgrade from N1 to N4 for greater value on the 5th generation Intel Xeon processor.
  • Use features available only on the new VM instance. For example, upgrade from N4 to C4 to take advantage of additional performance and maintenance options, or upgrade from H3 to H4D to get Cloud RDMA support.
  • Change a virtual machine (VM) instance to a bare metal instance.
  • Add Local SSD disks to your C3 or C3D VM instance.

When upgrading to the newest generation machine series, you might be able to use the simpler procedure described in Edit the machine type of a compute instance if the following conditions are met by the current (source) VM:

  • The operating system (OS) version is supported by the new machine series.
  • The disk type of the boot disk attached to the source VM is supported by the new machine series.
  • The VM doesn't use Local SSD storage.
  • If your VM has attached GPUs, it uses a G2 machine type. See Add or remove GPUs for details.
  • The VM uses only features that are supported by the new machine series.
  • The VM isn't part of a managed instance group (MIG).
  • You don't need to create additional network interfaces for your instance to use the new features.

Before you begin

Required roles

To get the permissions that you need to edit or change a VM, ask your administrator to grant you the following IAM roles on the project:

For more information about granting roles, see Manage access to projects, folders, and organizations.

These predefined roles contain the permissions required to edit or change a VM. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to edit or change a VM:

  • To change the machine type:
    • compute.instances.stop on the project
    • compute.instances.create on the project
    • compute.instances.start on the project
    • compute.instances.setMachineType on the instance
  • To create a snapshot of the disk:
    • compute.snapshots.create on the project
    • compute.disks.createSnapshot on the disk
  • To create a new disk:
    • compute.disks.list on the project
    • compute.disks.create on the project
    • compute.disks.update on the project
  • To attach a disk to a VM:
    • compute.instances.attachDisk on the instance
    • compute.disks.use on the disk
  • To delete a disk: compute.disks.delete on the project
  • To make changes to the network type:
    • compute.networks.list on the project
    • compute.networks.update on the project

You might also be able to get these permissions with custom roles or other predefined roles.

Evaluate VM migration options

Migrating from one machine type to another depends on several factors, including the regional availability of the new machine type and the compatibility of the storage options and network interfaces with the guest OS of the source and new machine series.

Compute requirements

Review the following requirements for your current instance and the new machine type:

  • Explore the machine family resource documentation to identify which machine types are suitable for your workload. Consider whether your application requires specific hardware (GPUs), high performance, or lower costs.
  • Review the features of the disk types supported by the new machine type. Hyperdisk supports most, but not all, Persistent Disk features, and also provides additional features that aren't available with Persistent Disk.
  • Review the features of the prospective machine series. The new machine series might not support the same features as your current machine series, such as custom machine types, Local SSD, or Shielded VM.
  • Review the regions and zones to ensure the new machine series is available in all the same regions as your current VM. You might need to adjust your deployment, high availability, and disaster recovery plans.
  • Review your OS migration plan:
    • If the new VM requires a newer version of the OS, verify that your applications are compatible with the newer OS version.
    • If you're moving to Arm and an Arm image is not available for your current OS version, choose a new OS or OS version to run your applications on and verify that your applications are compatible with it.
  • You can migrate from a C3 VM instance to a C3 bare metal instance, as long as the source C3 VM instance uses a supported operating system and network driver.
  • If you're moving from a machine series other than C3 to a bare metal instance, you must create a new instance. You might have to run your own hypervisor; however, you can also run any operating system that supports bare metal instances as long as the IDPF driver is enabled. Bare metal instances use the IDPF network interface, presented only as a physical function, not a virtual function.

Storage requirements

Review the following storage requirements for your current instance and the new instance type:

  • Review the supported storage types and the supported storage interfaces for the new machine series.
    • By default, first and second generation machine series use only the Persistent Disk storage type and the VirtIO-SCSI interface.
    • Third generation and newer machine series (like M3, C3, and N4) support only the NVMe interface, and some support only the Hyperdisk and Local SSD storage types.
    • Bare metal instances support only Hyperdisk.
  • Disk compatibility:
    • If the boot disk uses a disk type that isn't supported by the new machine series, for example pd-standard, then you must create a new boot disk for the new VM.
    • If you are upgrading the OS to a new version, and the operating system doesn't support in-place upgrades, then you must create a new boot disk. All data on the source boot disk is lost unless you copy it to a temporary non-boot disk. Next, you create a new boot disk and copy the data stored on the temporary non-boot disk to the new boot disk.
    • If you aren't upgrading the OS version, then you can take a snapshot of your current boot disk and restore it to the new, supported disk type. When you create a VM, you can then use this new disk as the boot disk.
    • If a non-boot disk uses a disk type that isn't supported by the new machine series, you can use a snapshot to change the source disk to a new disk type, as described in Change the disk type.
  • Local SSD disks can't be moved to a new VM. You can attach a disk large enough to store all the Local SSD data to your current VM, and then use a snapshot to change the source disk to a new disk type, as described in Change the disk type. After you create a VM with attached Local SSD disks, you can copy the data back to the Local SSD disks.
  • If your current VM instance uses disks in a storage pool, but you are moving your workload to a VM in a different region, then you must recreate the disks and storage pool in the new region.
  • If the new machine series uses a different disk interface (for example, NVMe instead of SCSI), then the disk device names in the guest OS are different. Make sure your applications and scripts use either persistent device names or symlinks when referencing the attached disks.
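On that last point, persistent names work because the entries under /dev/disk/by-id are symlinks that keep resolving to the right kernel device node even when the interface (and therefore the device name) changes. A minimal sketch of that resolution, using a temp directory instead of real devices so it runs anywhere; the google-my-data name is a hypothetical example:

```shell
# Simulate a /dev/disk/by-id style stable symlink with temp files.
tmp=$(mktemp -d)
touch "$tmp/nvme0n1"                        # stands in for the kernel device node /dev/nvme0n1
ln -s "$tmp/nvme0n1" "$tmp/google-my-data"  # stands in for /dev/disk/by-id/google-my-data
device=$(readlink -f "$tmp/google-my-data")
echo "stable name resolves to: $device"
```

On a real VM, `ls -l /dev/disk/by-id/` shows these symlinks; scripts that reference the stable name keep working after the interface changes from SCSI to NVMe.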

Networking requirements

Review the following networking requirements for your current instance and the new instance type:

  • Review the supported networking interfaces for the new VM.

    • By default, first and second generation machine series use only the VirtIO network interface.
    • Third generation and newer machine series (like M3, C3, and N4) support only the gVNIC network interface.
    • Bare metal instances support only the IDPF network interface.
  • Make sure your application and operating system support the interfaces available for the machine series.

  • Review the network configuration for your VM to determine whether you need to keep the assigned IP addresses. If so, you must promote the IP addresses to static IP addresses.

  • If you use per-VM Tier_1 networking performance with your current VM, make sure it is available or needed with the new machine series. For example, you can use Tier_1 networking with a C2 machine type, but it is not available with an H3 VM.

To determine the network interface type of your current VM, use the gcloud compute instances describe command to view the VM's nicType:

  gcloud compute instances describe VM_NAME --zone=ZONE

If your VM was created with the default NIC type (VirtIO), the NIC type is automatically changed to gVNIC when you change the machine type to a third generation or later machine type.
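To print only the NIC type instead of the full describe output, you can add a --format projection. This is a hedged sketch with placeholder VM_NAME and ZONE values, shown as a dry run so it doesn't require gcloud credentials:

```shell
# Build the command with placeholder values; echo it instead of running it.
VM_NAME=my-vm              # placeholder: your VM's name
ZONE=us-central1-a         # placeholder: your VM's zone
CMD="gcloud compute instances describe $VM_NAME --zone=$ZONE --format=value(networkInterfaces[].nicType)"
echo "would run: $CMD"
```

To execute for real, run the gcloud command directly with your own VM name and zone.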

Prepare to move your existing VMs

After you've completed the evaluation section, the next step is to prepare to move your VM instances by requesting resources for the new VM instance and preparing backups of the source VM instance.

Prepare compute resources

Complete the following steps to prepare for moving your current instance to a new instance:

  1. Request quota in the region and zones where you plan to move your resources. If you have existing quota for a machine type, you can request to move that quota. The process takes a few days to complete.
  2. Create a reservation for the new VM instances to ensure the machine resources are available in the new region and zones. Make sure you understand how reserved resources are consumed and test that you can consume reserved resources.
  3. Extend your high availability and disaster recovery plans to include the new region.
  4. If needed, upgrade the OS on the current VM.
    1. If supported by the operating system vendor, perform an in-place upgrade of your OS to a version that is supported by the new machine series and verify that your workload performs as expected on the new OS version.
    2. If an in-place upgrade of the OS isn't supported, then, when you create a new VM, you must create a new boot disk. Determine what information you need to copy from the current boot disk, and copy it to a temporary location on a non-boot disk so it can be transferred to the new VM. If you don't have any non-boot disks attached to your current VM:
  5. If applicable to your Linux distribution, check the udev rules under /etc/udev/rules.d/. These files might contain entries relevant to the hardware configuration of the current instance, but not the new instance. For example, the following entry ensures that eth0 is provided by the virtio-pci driver (VirtIO Net), which prevents the gve driver (gVNIC) from providing this interface. This can lead to networking startup script and connectivity issues on the new instance:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="virtio-pci", ATTR{dev_id}=="0x0", KERNELS=="0000:00:04.0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
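One way to handle such an entry is to comment it out before the move. The sketch below is hypothetical: it edits a temp copy of the rule so it's safe to run anywhere; on the real VM you would edit the actual rules file under /etc/udev/rules.d/ as root (the file name varies by distribution).

```shell
# Copy the offending rule into a temp file, then comment out any line that
# pins an interface to the virtio-pci driver.
rules=$(mktemp)
cat > "$rules" <<'EOF'
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="virtio-pci", ATTR{dev_id}=="0x0", KERNELS=="0000:00:04.0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
EOF
sed -i 's/^SUBSYSTEM.*virtio-pci.*/#&/' "$rules"   # prefix matching lines with '#'
cat "$rules"
```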

Prepare storage resources

Complete the following steps to prepare for moving the data in the disks attached to your current instance to a new instance:

  1. On Linux systems, test your updated applications and scripts to make sure they work with persistent device names or symlinks instead of the disk device names.
  2. If you're migrating from a VM that runs Microsoft Windows:
  3. If your new VM doesn't support the same disk types as your current VM, you might need to update your deployment scripts or instance templates to support the new machine series.
  4. If your current VM uses a disk type for the boot disk that isn't supported by the new machine series, and you are migrating multiple VMs with the same configuration, create a custom image to use when creating the new VMs:
    1. Create a snapshot of the pd-standard boot disk of your current VM.
    2. Create a custom image using the disk snapshot as the source.
  5. If you need to move Local SSD information, create a blank disk large enough to back up your Local SSD disks.
    1. If possible, use a disk type that is supported by the new VM.
    2. If there are no disk types that are supported by both the current VM and the new VM, then create a temporary disk using a disk type supported by the current VM.
    3. Attach the new disk to the current VM, then format and mount the disk.
    4. Copy the data from the Local SSD disks attached to the current VM to this temporary disk.
  6. Change the disk type of any disks attached to the current VM that use a disk type that isn't supported by the new VM. To move the disk data to new disks, create snapshots of the disks. You can alternatively transfer files from one VM to the other.

    1. You can take the snapshots while the VM is running, but any data written to the disks after you take the snapshot is not captured. Because snapshots are incremental, you can take a second snapshot after you stop the VM to capture the most recent changes. This approach minimizes the length of time the VM is unavailable while you switch to the new VM.
    2. Alternatively, you can take all the disk snapshots after you stop the VM. We recommend that you create a snapshot of all the disks attached to your VM, even if the disk type is supported by the new machine series. Include any temporary disks that contain the copied Local SSD data.
    3. The amount of time it takes to snapshot a disk depends on multiple factors, such as the disk size and the amount of data on the disk. For example, a snapshot of a 1 TiB disk that is 85% full might take 5 minutes to complete, while a snapshot of a 100 TiB disk that is 85% full might take 11 minutes. We recommend that you perform test snapshots of your disks before you start the migration process to understand how long snapshotting takes.
  7. If you have a disk that can be taken offline, you can use the following approach to move the data to a new disk while the source VM is still available:

    1. Detach the disk from your VM.
    2. Take a snapshot of the disk.
    3. Use the snapshot to create a new disk using a disk type that is supported by the new machine series. The new disk must be the same size or larger than the source disk.
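The copy in step 5 above is an ordinary file copy onto the mounted temporary disk. A minimal sketch, with temp directories standing in for the mount points so it runs without real disks attached; the real mount point paths are whatever you chose when mounting:

```shell
src=$(mktemp -d)   # stands in for the Local SSD mount point
dst=$(mktemp -d)   # stands in for the temporary disk mount point
echo "app data" > "$src/data.txt"
cp -a "$src/." "$dst/"   # -a preserves permissions, ownership, and timestamps
ls "$dst"
```

On a real VM you would copy between the actual mount points, and repeat the copy in the other direction on the new VM to restore the data onto its Local SSD disks.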

Prepare network resources

Complete the following steps to update the network configuration used by your current instance to support the new instance:

  1. If you're creating a VM in a new region, create a VPC network and subnets in the new region.
  2. If you configured custom NIC queue counts, see Queue allocations and changing the machine type.
  3. If you want to keep the IP addresses used by the source VM, promote the IP addresses to static IP addresses.
  4. Unassign the static IP addresses before you stop your source VM.
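Promotion in step 3 is done with gcloud compute addresses create, passing the in-use ephemeral address. A hedged dry-run sketch with placeholder values (the address name, IP, and region are examples):

```shell
ADDRESS_NAME=my-static-ip   # placeholder: a name for the promoted address
IP=203.0.113.10             # placeholder: the VM's current ephemeral external IP
REGION=us-central1          # placeholder: the region of the address
CMD="gcloud compute addresses create $ADDRESS_NAME --addresses=$IP --region=$REGION"
echo "would run: $CMD"
```

Because the address is already in use by the VM, creating a static address with that IP promotes it in place rather than allocating a new one.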

Prepare the SUSE Enterprise Linux Server operating system

To avoid hardware-specific dependencies, rebuild the initramfs (initial RAM filesystem). This includes a wider range of drivers and modules, making the operating system compatible with other instance types. Failing to do so triggers a known issue that prevents the VM from booting properly.

Before shutting down the system, run the following command as root to rebuild the initramfs with all drivers:

  sudo dracut --force --no-hostonly

Move your workload to the new VM

After preparing your VMs for migration, the next step is to move your workload to the new VM.

If you are moving your VMs from the first generation to the second generation machine series, read the instructions on the Edit the machine type of a VM page. If you want to change the name of your existing VM, review the information at Rename a VM.

This section describes how to move your workload from a first or second generation VM to a third (or newer) generation VM. During this procedure, you create a new VM instance, then move your workloads to the new VM.

Note: Moving from an H3 VM to any other machine series is not supported.

Create the new VM

When moving your workloads from first or second generation VMs (N1 or N2, for example) to third generation or later, you must first create a new VM and then move your workloads.

  1. If the source VM uses non-boot disks with a disk type that is supported by the new machine series, detach the disks from the VM.
  2. Stop your source VM.
  3. Create snapshots of all disks that are still attached to the source VM.
  4. Create a new compute VM instance using either a public image or a custom image that is configured to use gVNIC. When creating the new VM, choose the following options:
    • Select the machine type from the machine series that you have chosen.
    • Select a supported OS image, or use a custom image that you created previously.
    • Select a supported disk type for the boot disk, for example, Hyperdisk Balanced.
    • If you created new disks from snapshots of the original disks, include those new disks.
    • Specify the new VPC network, if you're creating the instance in a different region.
    • If both VirtIO and gVNIC are supported for the new instance, select gVNIC.
    • Specify the static IP addresses, if you promoted the ephemeral IP addresses of the source VM.
  5. Start the new VM.

After the instance starts

Now that the new instance has been created and started, complete the following steps to finish the configuration of the new instance and copy over all the data from the source instance.

  1. Attach the disks you detached from the source VM to the new VM.
  2. For any disks attached to the source VM that use a disk type not supported by the new VM, create a disk from a snapshot and attach it to the new instance. When creating the new disk, select a disk type that is supported by the new VM and specify a size that is at least as large as the original disk.
  3. If the original VM used a resource policy for any disks that were recreated for the new VM, you need to add the resource policy to the new disks.
  4. If you created the new VM using a public OS image, and not a custom image, then do the following:
    1. Configure the necessary users, drivers, packages, and file directories on the new instance to support your workload.
    2. Install your modified applications and programs on the new VM. Recompile the programs on the new OS or architecture, if required.
  5. Optional: If you moved the contents of Local SSD disks to a temporary disk, and the new VM has attached Local SSD storage, after you format and mount the disks, you can move the data from the temporary disk to the Local SSD disks.
  6. Reassign any static IP addresses associated with the source VM to the new VM.
  7. Complete any additional tasks required to make your new VM highly available, such as configuring load balancers and updating the forwarding rules.
  8. Optional: Update the DNS entries, if needed, for the new VM.
  9. Recommended: Schedule disk backups for the new disks.
  10. Recommended: If you changed the OS to a different version or architecture, recompile your applications.

If you have capacity issues when moving your workloads, reach out to your Technical Account Manager (TAM). For other issues, open a support case with Cloud Customer Care.

Migration example of n1-standard-8 to n4-standard-8

The following example is a migration of an n1-standard-8 VM to an n4-standard-8 VM. The n1-standard-8 VM has a PD-SSD boot disk running an Ubuntu 18.04 image and a PD-SSD data disk. You must use the CLI or REST API for this procedure.

Note: To receive security patches for Ubuntu 18.04, upgrade to Ubuntu Pro.

There are two options available to upgrade your N1 VM to an N4 VM:

Option 1: If your N1 VM uses the VirtIO network interface, then you must create a new N4 VM. N4 supports only the gVNIC network interface and Hyperdisk Balanced disks. You create snapshots of your Persistent Disk boot and data disks, create Hyperdisk Balanced disks from those snapshots, and then create the new N4 VM with the Hyperdisk Balanced disks attached.

You can also choose to create a new Hyperdisk Balanced boot disk using a more recent version of the Ubuntu OS. In this scenario, you can create a new Hyperdisk Balanced disk from the boot disk snapshot, but you attach that disk as a non-boot disk to the N4 VM. Then you can copy non-system data from the restored snapshot to the new boot disk.

Option 2: If your N1 VM uses the gVNIC network interface, its operating system has an NVMe storage device driver, it doesn't have any attached Local SSD disks or GPUs, and it isn't part of a managed instance group (MIG), then you can change the machine type from N1 to N4, but you still must change your Persistent Disk disk types to Hyperdisk Balanced. You must first detach your Persistent Disk boot and data disks, create snapshots of the disks, create Hyperdisk Balanced disks using the snapshots as the source, then attach the new Hyperdisk Balanced disks to your N4 VM after you change the machine type. If your VM has attached GPUs, then you must detach them first.

The time to snapshot a disk depends on multiple factors, such as the total size of the disk. For example, a snapshot of a 1 TB disk that is 85% full might take 5 minutes to complete, while a snapshot of a 100 TB disk that is 85% full might take 11 minutes. We recommend that you perform test snapshots of your disks before you start the migration process to understand how long snapshotting takes.

gcloud

Option 1: Create a new N4 VM with snapshotted disks:

  1. Stop the VM by using gcloud compute instances stop:

    gcloud compute instances stop VM_NAME \
        --zone=ZONE

    Replace the following:

    • VM_NAME: The name of your current n1-standard-8 VM.
    • ZONE: The zone where the VM is located.
  2. Snapshot your disks. Use the gcloud compute snapshots create command to create a snapshot of both the Persistent Disk boot disk and data disk attached to the VM.

    gcloud compute snapshots create SNAPSHOT_NAME \
        --source-disk=SOURCE_DISK_NAME \
        --source-disk-zone=SOURCE_DISK_ZONE

    Replace the following:

    • SNAPSHOT_NAME: The name of the snapshot you want to create.
    • SOURCE_DISK_NAME: The name of your source disk.
    • SOURCE_DISK_ZONE: The zone of your source disk.
  3. Create a new Hyperdisk Balanced disk from the boot disk snapshot by using the gcloud compute disks create command:

    gcloud compute disks create DISK_NAME \
        --project=PROJECT_NAME \
        --type=DISK_TYPE \
        --size=DISK_SIZE \
        --zone=ZONE \
        --source-snapshot=SNAPSHOT_NAME \
        --provisioned-iops=PROVISIONED_IOPS \
        --provisioned-throughput=PROVISIONED_THROUGHPUT

    Replace the following:

    • DISK_NAME: The name of the new disk you are creating from the snapshot.
    • PROJECT_NAME: The name of your project.
    • DISK_TYPE: The new disk type; in this example, a Hyperdisk Balanced disk.
    • DISK_SIZE: The size of the disk (example: 100GB).
    • ZONE: The zone where the new disk is located.
    • SNAPSHOT_NAME: The name of the source snapshot.
    • Optional: PROVISIONED_IOPS: The IOPS performance for the disk (example: 3600).
    • Optional: PROVISIONED_THROUGHPUT: The throughput performance to provision for the disk (example: 290).
  4. Repeat the previous step for each snapshotted disk.

  5. Create the n4-standard-8 VM and attach the Hyperdisk Balanced disks by using the gcloud compute instances create command:

    gcloud compute instances create VM_NAME \
        --project=PROJECT_NAME \
        --zone=ZONE \
        --machine-type=NEW_MACHINE_TYPE \
        --boot-disk-device-name=BOOT_DISK_NAME \
        --disk=name=NON_BOOT_DISK_NAME,boot=no \
        --network-interface=nic-type=GVNIC

    Replace the following:

    • VM_NAME: The name of the new VM instance.
    • PROJECT_NAME: The name of your project.
    • ZONE: The zone where the new VM is located.
    • NEW_MACHINE_TYPE: The machine type; in this example, n4-standard-8.
    • BOOT_DISK_NAME: The name of the Hyperdisk Balanced boot disk you created from the snapshot of the boot disk attached to the n1-standard-8 VM.
    • NON_BOOT_DISK_NAME: The name of the Hyperdisk Balanced data disk you created from the snapshot of the data disk attached to the n1-standard-8 VM.
  6. Start the n4-standard-8 VM by using the gcloud compute instances start command:

    gcloud compute instances start VM_NAME

    Replace VM_NAME with the name of the new VM.

Option 2: Perform an in-place machine upgrade:

This option is only available if your N1 VM uses the gVNIC network interface, its operating system has an NVMe storage device driver, it doesn't have any attached Local SSD disks or GPUs, and it isn't part of a managed instance group (MIG). Performing this procedure on an N1 VM with a VirtIO network interface generates a VM incompatibility error.

  1. Stop the VM.
  2. Detach the disks from the VM.
  3. Create a snapshot of the boot and data disks.
  4. Create Hyperdisk Balanced boot and data disks using a disk snapshot as the source for each disk.
  5. Set the machine type to an N4 machine type.
  6. Attach the Hyperdisk Balanced boot disk and the Hyperdisk Balanced data disk.
  7. Start the N4 VM.
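The steps above map onto gcloud commands roughly as follows. This is a hedged sketch, not the documented procedure verbatim: all names (n1-vm, boot-disk, boot-snap, hdb-boot-disk) and the zone are placeholders, each command is printed rather than executed so you can review the plan first, and only the boot disk is shown; repeat the snapshot, create, and attach commands for each data disk.

```shell
# Dry run: print each command; replace the echo with "$@" to execute for real.
run() { echo "+ $*"; STEPS=$((STEPS+1)); }

STEPS=0
VM=n1-vm; ZONE=us-central1-a
run gcloud compute instances stop "$VM" --zone="$ZONE"
run gcloud compute instances detach-disk "$VM" --disk=boot-disk --zone="$ZONE"
run gcloud compute snapshots create boot-snap --source-disk=boot-disk --source-disk-zone="$ZONE"
run gcloud compute disks create hdb-boot-disk --type=hyperdisk-balanced \
    --source-snapshot=boot-snap --zone="$ZONE"
run gcloud compute instances set-machine-type "$VM" --machine-type=n4-standard-8 --zone="$ZONE"
run gcloud compute instances attach-disk "$VM" --disk=hdb-boot-disk --boot --zone="$ZONE"
run gcloud compute instances start "$VM" --zone="$ZONE"
```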

REST

Option 1: Create a new N4 VM with snapshotted disks:

  1. Stop the VM by using the instances.stop method:

     POST https://compute.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/ZONE/instances/VM_NAME/stop

    Replace the following:

    • PROJECT_NAME: The project ID.
    • ZONE: The zone containing the VM.
    • VM_NAME: The name of your current n1-standard-8 VM.
  2. Snapshot your disks by using the disks.createSnapshot method to create a snapshot of both the Persistent Disk boot disk and data disk attached to the instance.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/ZONE/disks/DISK_NAME/createSnapshot

    In the body of the request, include a name for the new snapshot.

    For example:

    {
      "name": "SNAPSHOT_NAME"
    }

    Replace the following:

    • PROJECT_NAME: The name of your project.
    • ZONE: The zone where your disk is located.
    • DISK_NAME: The disk you plan to snapshot.
    • SNAPSHOT_NAME: A name for the snapshot, such as hdb-boot-disk or hdb-data-disk.
  3. Create a Hyperdisk Balanced disk by using the disks.insert method. You perform this step two times: once for the Hyperdisk Balanced boot disk, and a second time for the data disk. In the request body, include the name of the disk, the sourceSnapshot for the new disk, the disk type (Hyperdisk Balanced), and the sizeGb of the disk.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/ZONE/disks

    Replace the following:

    • PROJECT_NAME: The name of your project.
    • ZONE: The zone where your disk is located.

    For example, include the following in the body of the request. Use "my-hdb-boot-disk" as the name in the boot disk request and "my-hdb-data-disk" in the data disk request:

    {
      "name": "my-hdb-boot-disk",
      "sourceSnapshot": "projects/your-project/global/snapshots/SNAPSHOT_NAME",
      "type": "projects/your-project/zones/us-central1-a/diskTypes/hyperdisk-balanced",
      "sizeGb": "100"
    }
  4. Use the instances.insert method to create the new N4 VM.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/ZONE/instances

    Replace the following:

    • PROJECT_NAME: The name of your project.
    • ZONE: The zone where your disk is located.

    In the body of the request, include the following:

      {
        "machineType": "projects/your-project/zones/us-central1-a/machineTypes/n4-standard-8",
        "name": "VM_NAME",
        "disks": [
          {
            "boot": true,
            "deviceName": "my-hdb-boot-disk",
            "source": "projects/your-project/zones/us-central1-a/disks/my-hdb-boot-disk",
            "type": "PERSISTENT"
          },
          {
            "boot": false,
            "deviceName": "my-hdb-data-disk",
            "source": "projects/your-project/zones/us-central1-a/disks/my-hdb-data-disk",
            "type": "PERSISTENT"
          }
        ],
        "networkInterfaces": [
          {
            "network": "global/networks/NETWORK_NAME",
            "subnetwork": "regions/REGION/subnetworks/SUBNET_NAME",
            "nicType": "GVNIC"
          }
        ]
      }

    Replace the following:

    • VM_NAME: The name of the VM.
    • NETWORK_NAME: The name of the network.
    • REGION: The name of the region.
    • SUBNET_NAME: The name of the subnet.
  5. Start the VM by using the instances.start method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/ZONE/instances/VM_NAME/start

    Replace the following:

    • PROJECT_NAME: The name of your project.
    • ZONE: The zone where your VM is located.
    • VM_NAME: The name of the VM.

Option 2: Perform an in-place machine upgrade:

This option is only available if your N1 VM uses the gVNIC network interface, doesn't have any attached Local SSD disks or GPUs, and isn't part of a managed instance group (MIG). Performing this procedure on an N1 VM with a VirtIO network interface generates a VM incompatibility error.

  1. Stop the VM by using the instances.stop method.

  2. Detach the disks by using the instances.detachDisk method to detach the original Persistent Disk boot disk from the N1 VM. You also must detach any data disks from the VM.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/ZONE/instances/VM_NAME/detachDisk?deviceName=DISK_NAME

    Replace the following:

    • PROJECT_NAME: The name of your project.
    • ZONE: The zone where your disk is located.
    • VM_NAME: The name of the source VM with the pd-ssd disk attached to it.
    • DISK_NAME: The disk you want to detach.
  3. Snapshot the disks. Use the disks.createSnapshot method to create a snapshot of both the Persistent Disk boot disk and data disks attached to the instance.

  4. Create Hyperdisk Balanced boot and data disks by using the disks.insert method. In the request body, include the name of the Hyperdisk Balanced disk, the sourceSnapshot for the new disk, the disk type (Hyperdisk Balanced), and the sizeGb of the disk.

  5. Perform an in-place machine type upgrade by using the instances.setMachineType method, including the machineType in the request body:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/ZONE/instances/VM_NAME/setMachineType

    Replace the following:

    • PROJECT_NAME: The name of your project.
    • ZONE: The zone where your disk is located.
    • VM_NAME: The name of the VM to upgrade.
    • MACHINE_TYPE: The new machine type.

    In the request body, include the following:

    {
      "machineType": "projects/PROJECT_NAME/zones/ZONE/machineTypes/MACHINE_TYPE"
    }
  6. Use the instances.attachDisk method to attach the new Hyperdisk Balanced boot disk and the Hyperdisk Balanced data disks to the N4 VM.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/ZONE/instances/VM_NAME/attachDisk

    Replace the following:

    • PROJECT_NAME: The name of your project.
    • ZONE: The zone where your disk is located.
    • VM_NAME: The name of the source VM instance with the pd-ssd disk attached to it.
    • DISK_NAME: The disk you want to attach.

    In the request body, include the following:

    {
      "source": "projects/your-project/zones/us-central1-a/disks/my-hdb-boot-disk",
      "deviceName": "my-hdb-boot-disk",
      "boot": true
    }

    {
      "source": "projects/your-project/zones/us-central1-a/disks/my-hdb-data-disk",
      "deviceName": "my-hdb-data-disk",
      "boot": false
    }
  7. Start the N4 VM by using the instances.start method.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_NAME/zones/ZONE/instances/VM_NAME/start

    Replace the following:

    • PROJECT_NAME: The name of your project.
    • ZONE: The zone where your disk is located.
    • VM_NAME: The name of the VM.

Clean up

After you verify that you can connect to the new VM, and that your workload is running as expected on the new VM, you can remove the resources that are no longer needed:

  1. The snapshots you created for the disks attached to the source VM.
  2. Any snapshot schedules for the disks that were attached to the source VM.
  3. The temporary disk created to copy the Local SSD data to the new VM.
  4. The source VM and all attached disks.
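A hedged dry-run sketch of this clean-up with gcloud; all resource names and the zone are placeholders, and each command is printed rather than executed so nothing is deleted until you review the plan (snapshot schedules are managed separately on the disk's resource policies):

```shell
# Dry run: print each command; replace the echo with "$@" to execute for real.
run() { echo "+ $*"; STEPS=$((STEPS+1)); }

STEPS=0
ZONE=us-central1-a
run gcloud compute snapshots delete boot-snap data-snap --quiet
run gcloud compute disks delete temp-localssd-disk --zone="$ZONE" --quiet
run gcloud compute instances delete source-vm --zone="$ZONE" --delete-disks=all --quiet
```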

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-15 UTC.