Migrate an existing workload to a stateful managed instance group

If you have an existing stateful application on standalone (unmanaged) Compute Engine virtual machine (VM) instances, you can migrate that application to a stateful managed instance group (MIG).

By configuring a stateful MIG and using managed instances, you can get the following benefits:

  • Preserved state: preservation of instance names, disks, and metadata even if an instance is recreated.
  • Autohealing: automatic recreation of VMs with failed workloads within the same zone.
  • Automated updates: graceful deployments of new instance configurations or software versions to VMs in a MIG.
Note: This document provides step-by-step instructions and conceptual information. If you want to automate the migration with a Python 3 script, see How to migrate a group of individual instances to a stateful MIG using Python script or the migration script on GitHub.

Limitations

  • You must stop your existing VMs to migrate their existing disks or, alternatively, to take consistent snapshots for use by the new managed instances.
  • You must delete existing VMs if you want to reuse their VM names.
  • Your application must be capable of running on VMs with the same machine type. If your existing application requires multiple instances of different machine types, create multiple instance templates and MIGs, one per machine type.
  • Your application must start when the VM starts. You can use a custom image or a startup script. Each option is discussed below.
  • You cannot update the operating system or software by rolling out boot image updates in a MIG if you choose to create stateful boot disks.
  • You can achieve multi-zone high availability only by creating redundant replicas in multiple zones and by configuring application-level data replication. A stateful MIG autoheals instances only within the same zone and does not orchestrate cross-zone failover.
  • You cannot use autoscaling with a stateful MIG.
  • Review the stateful MIG limitations.

Costs

This tutorial uses billable components of Google Cloud, including:

  • Compute Engine

Use the pricing calculator to generate a cost estimate based on your projected usage.

Before you begin

This guide uses the gcloud CLI. You can access this tool using Cloud Shell. Or, if you want to run the gcloud CLI on your local computer instead, download and install the latest gcloud CLI.
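If you run the gcloud CLI locally, the following is a minimal sketch of setting default properties so that you can omit the --project and --zone flags in later commands. The project and zone values are the ones used in this tutorial's example setup; adjust them for your environment.

# Authenticate and set defaults for the example project and zone.
gcloud auth login
gcloud config set project my-project
gcloud config set compute/zone europe-west1-c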

Migration overview

  1. Understand the components that make up a stateful MIG.
  2. Review your existing setup to determine common VM specifications.
  3. Optionally, create a custom image to serve as a common boot disk image.
  4. Create an instance template to specify common VM configuration for the MIG.
  5. Create an empty MIG.
  6. Convert your existing VMs into managed instances in the MIG, including per-instance configurations.
  7. Configure autohealing for the MIG to improve your application's resilience.
  8. Optionally, to reduce configuration overhead, replace per-instance configurations with a stateful policy.

Components

You configure your stateful MIG's managed instances through several components:

  • An instance template contains common configuration for VMs in the MIG, including machine type, boot disk image, optional specifications for additional disks, and an optional startup script.
  • An optional custom image contains your application and serves as a common boot disk image.
  • A per-instance configuration contains instance-specific stateful items. For example, you can attach an existing disk to a specific instance in the group. This disk might be a disk detached from an existing standalone instance, a disk recovered from a snapshot, or a regional disk. The disk's device name does not need to be defined in your instance template.
  • An optional stateful policy contains common stateful items. For example, it defines all disks with a specific device name (as defined in the instance template) as stateful for all instances in the group.

Which components do you need to use?

The components that you need to use depend on your existing setup. The following summary outlines some possible configurations for an application that runs on one or multiple instances, based on whether you must maintain stateful data or configuration on your boot disks and on how your application starts. Later in this tutorial you'll review your existing setup to determine which of these configurations you need to use.

  • No: boot disks are stateless
    • Application is configured on an existing boot disk:
      1. Use an instance template with a custom image.
      2. Add per-instance configurations (or a stateful policy) for stateful data disks.
    • Application is configured with a startup script:
      1. Use an instance template with a startup script.
      2. Add per-instance configurations (or a stateful policy) for stateful data disks.
  • Yes: at least one boot disk is stateful
    • Application is configured on an existing boot disk:
      1. Use an instance template with a custom image.
      2. Add per-instance configurations (or a stateful policy) for stateful boot and data disks.
    • Application is configured with a startup script:
      1. Use an instance template with a startup script.
      2. Add per-instance configurations (or a stateful policy) for stateful boot and data disks.

Review your existing setup

Review your existing standalone instances to inspect each instance's machine type, disks, and metadata.

Use the instances describe command for each of your instances.

gcloud compute instances describe INSTANCE_NAME

Answer the following questions to prepare for subsequent steps in this guide.

VM properties

  • Question: What is the machine type that you want to use for your group?
    Implication: Specify this machine type in your MIG's instance template.
  • Question: How does your application start: is it pre-configured on a boot disk, or is it installed, configured, and launched by a startup script?
    Implication: If your application is pre-configured on a boot disk, create a custom image and then specify that image in your MIG's instance template. If your application is launched by a startup script, specify that startup script in your MIG's instance template. If your application requires both a custom boot disk image and a startup script, specify both in the instance template.
  • Question: Do you want to preserve existing instance names?
    Implication: You must delete existing standalone instances to free up the instance names. If your boot disks remain stateless and if you ever want to use automated rolling updates in your MIG, review the documentation for Preserving instance names.

Stateful items

  • Question: For each instance, is there any instance-specific metadata that you need to preserve?
    Implication: Specify instance-specific metadata using per-instance configurations.
  • Question: Are your boot disks stateful? In other words, is there any data on any boot disk whose state you must preserve?
    Implication: If you need to preserve the state of your boot disks, then you cannot update the operating system or software by rolling out boot disk image updates.
    Pro Tip: Consider storing data on an additional persistent disk and keeping the boot disk, which contains the application, stateless. Such a configuration makes your application resilient to boot disk file system corruption. It also simplifies VM updates because the MIG can recreate boot disks based on the immutable source image that you specify in the MIG's instance template.
  • Question: Do all of the instances have the same kinds of disks? For example, do they all have one data disk, or do they have and require unique disk configurations?
    Implication: If all instances have a common disk configuration, then define those common device names in your instance template, for example, data-disk. This lets you use a stateful policy to declare those disks as stateful across your MIG, with less overhead than per-instance configurations.
  • Question: If you were to grow the group, is the size of the current disks sufficient?
    Implication: Specify the disk sizes you need in your instance template. New instances will get the disks you specify, provided those disks are not redefined in a stateful policy or per-instance configurations.

This guide starts by creating per-instance configurations for existing stateful disks. But you can convert those configurations to a stateful policy later, provided that the disks have common device names that you declare in the group's instance template.
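To speed up this review, you can script the describe calls. The following is a minimal sketch that prints only the fields relevant to the migration; the names my-instance-2 and my-instance-3 are hypothetical placeholders for the other VMs in the example setup, so adjust the instance names and zone to match your environment.

for vm in my-instance-1 my-instance-2 my-instance-3; do
  # Print only the fields that matter for planning the migration.
  gcloud compute instances describe "${vm}" \
      --zone=europe-west1-c \
      --format="yaml(name,machineType,disks,metadata)"
done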

Example setup

This guide uses the following basic example to illustrate the migration steps. Suppose you have a stateful application running on three standalone Compute Engine VMs. Assume the following VM specifications:

  • Each VM has the same machine type.
  • Each VM exists in the same project and zone.
  • Each VM's boot disk has the same application, which is configured on the boot disk to start when the VM starts.
  • Each VM's boot disk does not contain any other data or configuration that you must preserve.
  • Each VM has a secondary persistent disk with stateful data, that is, data for which you must maintain the current state.

This tutorial uses the following example values:

  • Machine type: n2-standard-2
  • Project: my-project
  • Zone: europe-west1-c
  • Name of one of the VMs to migrate: my-instance-1

Create a custom image

If your application or any of its requirements are already configured on an existing boot disk, create a custom image that you can reuse. Alternatively, if your application is installed, configured, and launched solely by using a startup script, skip this step and proceed to Create an instance template.

In the example scenario discussed earlier, the boot disk of each existing standalone VM contains the configured application. So you can follow the steps to create a custom image based on any one of those VMs.

  1. Stop one of the instances.

    gcloud compute instances stop my-instance-1
  2. Determine the source for the disk by describing the instance.

    gcloud compute instances describe my-instance-1

    The output is similar to the following:

    ...
    disks:
    - autoDelete: true
      boot: true
      ...
      source: https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-c/disks/my-instance-1
      ...

    Locate the source field in the output, and note the full URL of the boot disk in that field.

  3. Use the images create command to prepare a custom image that uses the same source.

    gcloud compute images create my-boot-image \
        --source-disk=https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-c/disks/my-instance-1

    The output is similar to the following:

    Created [https://www.googleapis.com/compute/v1/projects/my-project/global/images/my-boot-image].
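Alternatively, instead of passing the full source URL, you can reference the disk by name and zone. The following sketch is equivalent for the example setup:

gcloud compute images create my-boot-image \
    --source-disk=my-instance-1 \
    --source-disk-zone=europe-west1-c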

Create an instance template

An instance template is an immutable Compute Engine resource that stores VM configuration. Once you create a template, you cannot update it. If you need to change it later, create a new template and then roll out the new template to the group.
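For example, if you later build a replacement template, you can roll it out to the group with a rolling update. The following is a minimal sketch for the example MIG created later in this tutorial; my-instance-template-v2 is a hypothetical name for the replacement template.

gcloud compute instance-groups managed rolling-action start-update my-mig \
    --version=template=my-instance-template-v2 \
    --zone=europe-west1-c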

Follow the steps in Creating a new instance template, using the following settings.

  • Machine type: Specify a machine type that works for all of your existing instances.

  • Startup script: If you launch your application using a startup script, specify that script.

  • Boot disk:

    • Image: Specify a common boot disk image for all VMs in the MIG. For example, if you created a custom image based on an existing VM's boot disk, specify that image. If you need to use an existing boot disk for a specific VM, you can explicitly specify the boot disk for that VM with a per-instance configuration when you convert that VM to a managed instance, as explained later in this document.
    • Device name: Specify a device name that reflects the purpose of the disk, for example, boot-disk. This lets you configure a single stateful policy to preserve all disks in the MIG with that device name.
    • Size: Specify a boot disk size that is sufficient for existing instances as well as future instances, in case you want to add any.
  • Additional disks: By default, when you add instances to the MIG, the MIG creates disks based on the template. Note that an instance template does not support configuring regional disks, but you can configure regional disks later using per-instance configurations instead.

    • Device name: For each disk, specify a device name that reflects the purpose of the disk, for example, data-disk.
    • Size: Specify a disk size that is sufficient for future instances, in case you add any.

For the purpose of this migration, the specification that matters most for each additional disk is the device name, which you will use as a key for specifying which disks are stateful. Having a common device name for similar disks enables you to use a common stateful policy to preserve all of those disks across the MIG. The specification of size or image for additional disks in the instance template will only be used for creating new disks for new instances that you might create beyond those that you are migrating. When migrating existing instances, you will preserve existing data disks by detaching them from the original instances and then re-attaching those same disks to the new managed instances, as explained later in this document.

Pro Tip: Specify custom device names for your disks, such as boot-disk or data-disk, instead of using autogenerated names. Meaningful device names make your stateful configuration easier to set up, read, and understand.

The following instance-templates create command creates a template for the example scenario. The command includes an --image flag that points to the custom boot image created earlier, as well as an additional data disk.

gcloud compute instance-templates create my-instance-template \
    --machine-type=n2-standard-2 \
    --image=https://www.googleapis.com/compute/v1/projects/my-project/global/images/my-boot-image \
    --boot-disk-device-name=boot-disk \
    --create-disk=mode=rw,size=100,type=pd-standard,device-name=data-disk

The output is similar to the following:

Created [https://www.googleapis.com/compute/v1/projects/my-project/global/instanceTemplates/my-instance-template].
NAME                  MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
my-instance-template  n2-standard-2               2021-04-27T11:02:07.552-07:00

Note the URL of the template, which you can find in the first line of the output.
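Optionally, you can confirm that the template records the device names and disk settings that you expect. The following is a minimal sketch that prints only the template's disk configuration:

gcloud compute instance-templates describe my-instance-template \
    --format="yaml(properties.disks)"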

Create a managed instance group

The next step is to create a managed instance group (MIG). To create a single-zone MIG, follow the instructions to Create a MIG in a single zone. Or, if you want to protect against zonal failure by using a regional MIG, follow the instructions to Create a MIG with VMs in multiple zones in a region.

Note: Regional managed instance groups do not perform automatic stateful cross-zone failover. If your application supports data replication, then you can achieve resilience against zonal failure by doing the following: create redundant replicas in multiple zones and configure application-level data replication between them. Examples of applications that support this approach include Cassandra, Elasticsearch, and Kafka.

When you create your MIG, include the following specifications:

  • Set the group size to 0. You will add instances later.
  • If you are creating a regional MIG, set the instance redistribution type to NONE so that the MIG does not automatically redistribute instances across zones.

The following instance-groups managed create command creates a zonal MIG for the example setup described earlier. To create a regional MIG, replace --zone=ZONE with --region=REGION.

gcloud compute instance-groups managed create my-mig \
    --size=0 \
    --template=https://www.googleapis.com/compute/v1/projects/my-project/global/instanceTemplates/my-instance-template \
    --zone=europe-west1-c

The output is similar to the following:

Created [https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-c/instanceGroupManagers/my-mig].
NAME    LOCATION        SCOPE  BASE_INSTANCE_NAME  SIZE  TARGET_SIZE  INSTANCE_TEMPLATE     AUTOSCALED
my-mig  europe-west1-c  zone   my-mig              0     0            my-instance-template  no

After you create the MIG, you can interact with it, for example, to set policies on the group and to add or remove instances from the group.
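If you choose a regional MIG instead, the following sketch shows the equivalent command for the example setup, assuming the region europe-west1 and setting the instance redistribution type to NONE as recommended above:

gcloud compute instance-groups managed create my-mig \
    --size=0 \
    --template=https://www.googleapis.com/compute/v1/projects/my-project/global/instanceTemplates/my-instance-template \
    --region=europe-west1 \
    --instance-redistribution-type=NONE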

Convert existing VMs to managed instances

For each of your existing unmanaged VMs, use the following procedure to turn it into a managed instance in your MIG. This procedure migrates existing disks to the new managed instances. Alternatively, you can create snapshots of existing disks and then create disks based on those snapshots for use by the managed instances.

  1. Describe the existing VM.

    gcloud compute instances describe my-instance-1

    Make a note of items that you want to preserve from the existing VM, which can include the following:

    • Instance name
    • Boot disk
    • Secondary disks
    • Instance metadata
  2. Stop the existing VM.

    gcloud compute instances stop my-instance-1
  3. Detach all stateful disks, including the boot disk if you plan to reuse it.

    gcloud compute instances detach-disk my-instance-1 --disk=my-data-disk-1
  4. Delete the existing VM so that you can create another one with the same name. If you don't want to preserve instance names, you can delete the existing VM later to stop paying for it.

    gcloud compute instances delete my-instance-1
  5. Follow the steps to create a managed instance.

    For example, the following command creates a managed instance with the same name as the original VM and reuses the original data disk. The boot disk for the VM is created from the image that's specified in the group's instance template.

    gcloud compute instance-groups managed create-instance my-mig \
        --instance=my-instance-1 \
        --stateful-metadata=role=primary \
        --stateful-disk=device-name=data-disk,source=https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-c/disks/my-data-disk-1 \
        --zone=europe-west1-c

    If you need to reuse a boot disk from an old VM, use the same command with an additional --stateful-disk flag. Use the same device name for the boot disk as you specified in the instance template, for example:

    gcloud compute instance-groups managed create-instance my-mig \
        --instance=my-instance-1 \
        --stateful-metadata=role=secondary \
        --stateful-disk=device-name=data-disk,source=https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-c/disks/my-data-disk-1 \
        --stateful-disk=device-name=boot-disk,source=https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-c/disks/my-instance-1-boot-disk \
        --zone=europe-west1-c
  6. Repeat the steps for each of your existing unmanaged VMs.

If you want to view the resulting per-instance configurations, run the instance-configs list command.

gcloud compute instance-groups managed instance-configs list my-mig \
    --zone=europe-west1-c

To view the preserved state of an instance, run the describe-instance command.

gcloud compute instance-groups managed describe-instance my-mig \
    --instance=my-instance-1 \
    --zone=europe-west1-c

For more information, see Applying, viewing, and removing stateful configuration in MIGs.

Configuring autohealing

MIGs automatically heal managed instances that stop running. To further improve the availability of your application and to verify that your application is responding, set up an application-based health check and autohealing. See the example health check setup for sample commands.
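The following is a minimal sketch for the example setup, assuming that your application serves HTTP health responses on port 80 at the path /health; adjust the protocol, port, path, and timings for your application.

# Create an HTTP health check that probes the application's health endpoint.
gcloud compute health-checks create http my-health-check \
    --port=80 \
    --request-path=/health \
    --check-interval=30s \
    --timeout=10s \
    --healthy-threshold=1 \
    --unhealthy-threshold=3

# Attach the health check to the MIG to enable autohealing.
gcloud compute instance-groups managed update my-mig \
    --health-check=my-health-check \
    --initial-delay=300 \
    --zone=europe-west1-c

The --initial-delay value gives new VMs time to boot and start the application before autohealing begins evaluating the health check.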

Using a stateful policy instead of per-instance configurations

A stateful policy lets you declare disks that have a common device name as stateful across the MIG. A single stateful policy is less work to manage than multiple per-instance configurations. For example, with a stateful policy, you can designate all disks with device name data-disk to be stateful for all instances in the MIG.

If your MIG meets the following conditions, you can replace per-instance configurations with a stateful policy:

  • All VMs have the same device name (for example, data-disk) for similar stateful disks. This device name is defined in the MIG's instance template.
  • No VM has unique stateful metadata specified in a per-instance configuration. If you do have stateful metadata defined in a per-instance configuration, then you can remove the disk from the per-instance configuration, but you must keep the per-instance configuration to maintain that instance-specific stateful metadata.

Use the following steps to replace multiple per-instance configurations with asingle stateful policy.

  1. Configure stateful disks in a stateful policy. Follow the instructions in Setting and updating stateful configuration for disks in an existing MIG.

    For the example scenario, use the following command. It declares that all disks in the MIG that have a specific device name will be preserved.

    gcloud compute instance-groups managed update my-mig \
        --stateful-disk=device-name=data-disk,auto-delete=never
  2. If you need to preserve instance-specific metadata, update the per-instance configuration. Otherwise, delete the per-instance configuration. Apply the configuration change immediately with the --update-instance flag. For example, to delete the per-instance configuration, use the following command:

    gcloud compute instance-groups managed instance-configs delete my-mig \
        --instances=my-instance-1 \
        --update-instance
  3. Optional: Verify that the stateful items are now stored in the preserved state from policy (preservedStateFromPolicy) for each managed instance, for example by using the command shown after this list. For more information, see Viewing the preserved states of managed instances.
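The following sketch shows one way to check: it reuses the describe-instance command from earlier in this guide and prints only the policy-derived preserved state.

gcloud compute instance-groups managed describe-instance my-mig \
    --instance=my-instance-1 \
    --zone=europe-west1-c \
    --format="yaml(preservedStateFromPolicy)"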

Adding more VMs

If you need to add VMs to grow your application, you can add extra VMs by increasing your MIG's size or by manually creating more instances. The MIG creates all its VMs, including their persistent disks, based on the group's instance template. If the group has a stateful policy, any items you list in the stateful policy are preserved across restart, recreation, autohealing, and update operations for all new and existing instances in the group. If you need to configure stateful disks or metadata only for specific VMs in your group, use per-instance configurations.
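For example, the following sketch grows the example MIG to four instances (the three migrated VMs plus one new VM created from the instance template); adjust the target size for your needs.

gcloud compute instance-groups managed resize my-mig \
    --size=4 \
    --zone=europe-west1-c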

What's next
