Create and use Spot VMs

This page explains how to create and manage Spot VMs, including the following:

  • How to create, start, and identify Spot VMs
  • How to detect, handle, and test preemption of Spot VMs
  • Best practices for Spot VMs

Spot VMs are virtual machine (VM) instances with the spot provisioning model. Spot VMs are available at a discount of up to 91% off of the default price of standard VMs. However, Compute Engine might reclaim the resources by preempting Spot VMs at any time. Spot VMs are recommended only for fault-tolerant applications that can withstand VM preemption. Make sure your application can handle preemption before you decide to create Spot VMs.

Caution: Spot VMs are not covered by any Service Level Agreement and are excluded from the Compute Engine SLA. For more information, see limitations of Spot VMs.

If you want to create and manage Spot VMs with TPUs, then see the Cloud TPU documentation for Spot VMs instead.

Before you begin

Create a Spot VM

Create a Spot VM using the Google Cloud console, gcloud CLI, or the Compute Engine API. A Spot VM is any VM that is configured to use the spot provisioning model:

  • VM provisioning model set to Spot in the Google Cloud console
  • --provisioning-model=SPOT in the gcloud CLI
  • "provisioningModel": "SPOT" in the Compute Engine API

Note: You cannot set the automatic restart and host maintenance options for a Spot VM.

Console

  1. In the Google Cloud console, go to the Create an instance page.

    Go to Create an instance

  2. In the navigation menu, click Advanced. In the Advanced pane that appears, complete the following steps:

    1. In the Provisioning model section, select Spot from the VM provisioning model list.
    2. Optional: To select the termination action that happens when Compute Engine preempts the VM, complete the following steps:

      1. Expand the VM provisioning model advanced settings section.
      2. In the On VM termination list, select one of the following options:
        • To stop the VM during preemption, select Stop (default).
        • To delete the VM during preemption, select Delete.
  3. Optional: Specify other configuration options. For more information, see Configuration options during instance creation.

  4. To create and start the VM, click Create.

gcloud

To create a VM from the gcloud CLI, use the gcloud compute instances create command. To create Spot VMs, you must include the --provisioning-model=SPOT flag. Optionally, you can also specify a termination action for Spot VMs by including the --instance-termination-action flag.

gcloud compute instances create VM_NAME \
    --provisioning-model=SPOT \
    --instance-termination-action=TERMINATION_ACTION

Replace the following:

  • VM_NAME: the name of the new VM.
  • TERMINATION_ACTION: Optional: specify which action to take when Compute Engine preempts the VM, either STOP (default behavior) or DELETE.

For more information about the options you can specify when creating a VM, see Configuration options during instance creation. For example, to create Spot VMs with a specified machine type and image, use the following command:

gcloud compute instances create VM_NAME \
    --provisioning-model=SPOT \
    [--image=IMAGE | --image-family=IMAGE_FAMILY] \
    --image-project=IMAGE_PROJECT \
    --machine-type=MACHINE_TYPE \
    --instance-termination-action=TERMINATION_ACTION

Replace the following:

  • VM_NAME: the name of the new VM.
  • IMAGE or IMAGE_FAMILY: specify one of the following:
    • IMAGE: a specific version of a public image. For example, a specific image is --image=debian-10-buster-v20200309.
    • IMAGE_FAMILY: an image family. This creates the VM from the most recent, non-deprecated OS image. For example, if you specify --image-family=debian-10, Compute Engine creates a VM from the latest version of the OS image in the Debian 10 image family.
  • IMAGE_PROJECT: the project containing the image. For example, if you specify debian-10 as the image family, specify debian-cloud as the image project.
  • MACHINE_TYPE: the predefined or custom machine type for the new VM.
  • TERMINATION_ACTION: Optional: specify which action to take when Compute Engine preempts the VM, either STOP (default behavior) or DELETE.

    To get a list of the machine types available in a zone, use the gcloud compute machine-types list command with the --zones flag.
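    For example, the following command lists the machine types available in one zone (us-central1-a here is only an illustrative zone):

    gcloud compute machine-types list --zones=us-central1-a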

Terraform

You can use a Terraform resource to create a Spot VM by using the scheduling block, as shown in the following example:

resource "google_compute_instance" "spot_vm_instance" {  name         = "spot-instance-name"  machine_type = "f1-micro"  zone         = "us-central1-c"  boot_disk {    initialize_params {      image = "debian-cloud/debian-11"    }  }  scheduling {    preemptible                 = true    automatic_restart           = false    provisioning_model          = "SPOT"    instance_termination_action = "STOP"  }  network_interface {    # A default network is created for all GCP projects    network = "default"    access_config {    }  }}

REST

To create a VM from the Compute Engine API, use the instances.insert method. You must specify a machine type and name for the VM. Optionally, you can also specify an image for the boot disk.

To create Spot VMs, you must include the "provisioningModel": "SPOT" field. Optionally, you can also specify a termination action for Spot VMs by including the "instanceTerminationAction" field.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
 "machineType": "zones/ZONE/machineTypes/MACHINE_TYPE",
 "name": "VM_NAME",
 "disks": [
   {
     "initializeParams": {
       "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
     },
     "boot": true
   }
 ],
 "scheduling":
 {
     "provisioningModel": "SPOT",
     "instanceTerminationAction": "TERMINATION_ACTION"
 },
 ...
}

Replace the following:

  • PROJECT_ID: the project ID of the project to create the VM in.
  • ZONE: the zone to create the VM in. The zone must also support the machine type to use for the new VM.
  • MACHINE_TYPE: the predefined or custom machine type for the new VM.
  • VM_NAME: the name of the new VM.
  • IMAGE_PROJECT: the project containing the image. For example, if you specify family/debian-10 as the image family, specify debian-cloud as the image project.
  • IMAGE: specify one of the following:
    • A specific version of a public image. For example, a specific image is "sourceImage": "projects/debian-cloud/global/images/debian-10-buster-v20200309" where debian-cloud is the IMAGE_PROJECT.
    • An image family. This creates the VM from the most recent, non-deprecated OS image. For example, if you specify "sourceImage": "projects/debian-cloud/global/images/family/debian-10" where debian-cloud is the IMAGE_PROJECT, Compute Engine creates a VM from the latest version of the OS image in the Debian 10 image family.
  • TERMINATION_ACTION: Optional: specify which action to take when Compute Engine preempts the VM, either STOP (default behavior) or DELETE.

For more information about the options you can specify when creating a VM, see Configuration options during instance creation.

Go

import("context""fmt""io"compute"cloud.google.com/go/compute/apiv1""cloud.google.com/go/compute/apiv1/computepb""google.golang.org/protobuf/proto")// createSpotInstance creates a new Spot VM instance with Debian 10 operating system.funccreateSpotInstance(wio.Writer,projectID,zone,instanceNamestring)error{// projectID := "your_project_id"// zone := "europe-central2-b"// instanceName := "your_instance_name"ctx:=context.Background()imagesClient,err:=compute.NewImagesRESTClient(ctx)iferr!=nil{returnfmt.Errorf("NewImagesRESTClient: %w",err)}deferimagesClient.Close()instancesClient,err:=compute.NewInstancesRESTClient(ctx)iferr!=nil{returnfmt.Errorf("NewInstancesRESTClient: %w",err)}deferinstancesClient.Close()req:=&computepb.GetFromFamilyImageRequest{Project:"debian-cloud",Family:"debian-11",}image,err:=imagesClient.GetFromFamily(ctx,req)iferr!=nil{returnfmt.Errorf("getImageFromFamily: %w",err)}diskType:=fmt.Sprintf("zones/%s/diskTypes/pd-standard",zone)disks:=[]*computepb.AttachedDisk{{AutoDelete:proto.Bool(true),Boot:proto.Bool(true),InitializeParams:&computepb.AttachedDiskInitializeParams{DiskSizeGb:proto.Int64(10),DiskType:proto.String(diskType),SourceImage:proto.String(image.GetSelfLink()),},Type:proto.String(computepb.AttachedDisk_PERSISTENT.String()),},}req2:=&computepb.InsertInstanceRequest{Project:projectID,Zone:zone,InstanceResource:&computepb.Instance{Name:proto.String(instanceName),Disks:disks,MachineType:proto.String(fmt.Sprintf("zones/%s/machineTypes/%s",zone,"n1-standard-1")),NetworkInterfaces:[]*computepb.NetworkInterface{{Name:proto.String("global/networks/default"),},},Scheduling:&computepb.Scheduling{ProvisioningModel:proto.String(computepb.Scheduling_SPOT.String()),},},}op,err:=instancesClient.Insert(ctx,req2)iferr!=nil{returnfmt.Errorf("insert: %w",err)}iferr=op.Wait(ctx);err!=nil{returnfmt.Errorf("unable to wait for the operation: %w",err)}instance,err:=instancesClient.Get(ctx,&computepb.GetInstanceRequest{Project:projectID,Zone:zone,Instance:instanceName,})iferr!=nil{returnfmt.Errorf("createInstance: %w",err)}fmt.Fprintf(w,"Instance created: %v\n",instance)returnnil}

Java

import com.google.cloud.compute.v1.AccessConfig;
import com.google.cloud.compute.v1.AccessConfig.Type;
import com.google.cloud.compute.v1.Address.NetworkTier;
import com.google.cloud.compute.v1.AttachedDisk;
import com.google.cloud.compute.v1.AttachedDiskInitializeParams;
import com.google.cloud.compute.v1.ImagesClient;
import com.google.cloud.compute.v1.InsertInstanceRequest;
import com.google.cloud.compute.v1.Instance;
import com.google.cloud.compute.v1.InstancesClient;
import com.google.cloud.compute.v1.NetworkInterface;
import com.google.cloud.compute.v1.Scheduling;
import com.google.cloud.compute.v1.Scheduling.ProvisioningModel;
import java.io.IOException;
import java.util.UUID;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CreateSpotVm {

  public static void main(String[] args)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Google Cloud project you want to use.
    String projectId = "your-project-id";
    // Name of the virtual machine to check.
    String instanceName = "your-instance-name";
    // Name of the zone you want to use. For example: "us-west3-b"
    String zone = "your-zone";

    createSpotInstance(projectId, instanceName, zone);
  }

  // Create a new Spot VM instance with Debian 11 operating system.
  public static Instance createSpotInstance(String projectId, String instanceName, String zone)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    String image;
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (ImagesClient imagesClient = ImagesClient.create()) {
      image = imagesClient.getFromFamily("debian-cloud", "debian-11").getSelfLink();
    }
    AttachedDisk attachedDisk = buildAttachedDisk(image, zone);
    String machineTypes = String.format("zones/%s/machineTypes/%s", zone, "n1-standard-1");

    // Send an instance creation request to the Compute Engine API and wait for it to complete.
    Instance instance =
        createInstance(projectId, zone, instanceName, attachedDisk, true, machineTypes, false);

    System.out.printf("Spot instance '%s' has been created successfully", instance.getName());
    return instance;
  }

  // disks: a list of compute_v1.AttachedDisk objects describing the disks
  //     you want to attach to your new instance.
  // machine_type: machine type of the VM being created. This value uses the
  //     following format: "zones/{zone}/machineTypes/{type_name}".
  //     For example: "zones/europe-west3-c/machineTypes/f1-micro"
  // external_access: boolean flag indicating if the instance should have an external IPv4
  //     address assigned.
  // spot: boolean value indicating if the new instance should be a Spot VM or not.
  private static Instance createInstance(String projectId, String zone, String instanceName,
      AttachedDisk disk, boolean isSpot, String machineType, boolean externalAccess)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (InstancesClient client = InstancesClient.create()) {
      Instance instanceResource =
          buildInstanceResource(instanceName, disk, machineType, externalAccess, isSpot);

      InsertInstanceRequest build = InsertInstanceRequest.newBuilder()
          .setProject(projectId)
          .setRequestId(UUID.randomUUID().toString())
          .setZone(zone)
          .setInstanceResource(instanceResource)
          .build();
      client.insertCallable().futureCall(build).get(60, TimeUnit.SECONDS);

      return client.get(projectId, zone, instanceName);
    }
  }

  private static Instance buildInstanceResource(String instanceName, AttachedDisk disk,
      String machineType, boolean externalAccess, boolean isSpot) {
    NetworkInterface networkInterface = networkInterface(externalAccess);
    Instance.Builder builder = Instance.newBuilder()
        .setName(instanceName)
        .addDisks(disk)
        .setMachineType(machineType)
        .addNetworkInterfaces(networkInterface);

    if (isSpot) {
      // Set the Spot VM setting
      Scheduling.Builder scheduling = builder.getScheduling()
          .toBuilder()
          .setProvisioningModel(ProvisioningModel.SPOT.name())
          .setInstanceTerminationAction("STOP");
      builder.setScheduling(scheduling);
    }
    return builder.build();
  }

  private static NetworkInterface networkInterface(boolean externalAccess) {
    NetworkInterface.Builder build = NetworkInterface.newBuilder()
        .setNetwork("global/networks/default");

    if (externalAccess) {
      AccessConfig.Builder accessConfig = AccessConfig.newBuilder()
          .setType(Type.ONE_TO_ONE_NAT.name())
          .setName("External NAT")
          .setNetworkTier(NetworkTier.PREMIUM.name());
      build.addAccessConfigs(accessConfig.build());
    }
    return build.build();
  }

  private static AttachedDisk buildAttachedDisk(String sourceImage, String zone) {
    AttachedDiskInitializeParams initializeParams = AttachedDiskInitializeParams.newBuilder()
        .setSourceImage(sourceImage)
        .setDiskSizeGb(10)
        .setDiskType(String.format("zones/%s/diskTypes/pd-standard", zone))
        .build();
    return AttachedDisk.newBuilder()
        .setInitializeParams(initializeParams)
        // Remember to set auto_delete to True if you want the disk to be deleted
        // when you delete your VM instance.
        .setAutoDelete(true)
        .setBoot(true)
        .build();
  }
}

Python

from __future__ import annotations

import re
import sys
from typing import Any
import warnings

from google.api_core.extended_operation import ExtendedOperation
from google.cloud import compute_v1


def get_image_from_family(project: str, family: str) -> compute_v1.Image:
    """
    Retrieve the newest image that is part of a given family in a project.

    Args:
        project: project ID or project number of the Cloud project you want to get image from.
        family: name of the image family you want to get image from.

    Returns:
        An Image object.
    """
    image_client = compute_v1.ImagesClient()
    # List of public operating system (OS) images: https://cloud.google.com/compute/docs/images/os-details
    newest_image = image_client.get_from_family(project=project, family=family)
    return newest_image


def disk_from_image(
    disk_type: str,
    disk_size_gb: int,
    boot: bool,
    source_image: str,
    auto_delete: bool = True,
) -> compute_v1.AttachedDisk:
    """
    Create an AttachedDisk object to be used in VM instance creation. Uses an image as the
    source for the new disk.

    Args:
        disk_type: the type of disk you want to create. This value uses the following format:
            "zones/{zone}/diskTypes/(pd-standard|pd-ssd|pd-balanced|pd-extreme)".
            For example: "zones/us-west3-b/diskTypes/pd-ssd"
        disk_size_gb: size of the new disk in gigabytes
        boot: boolean flag indicating whether this disk should be used as a boot disk of an instance
        source_image: source image to use when creating this disk. You must have read access to this disk. This can be one
            of the publicly available images or an image from one of your projects.
            This value uses the following format: "projects/{project_name}/global/images/{image_name}"
        auto_delete: boolean flag indicating whether this disk should be deleted with the VM that uses it

    Returns:
        AttachedDisk object configured to be created using the specified image.
    """
    boot_disk = compute_v1.AttachedDisk()
    initialize_params = compute_v1.AttachedDiskInitializeParams()
    initialize_params.source_image = source_image
    initialize_params.disk_size_gb = disk_size_gb
    initialize_params.disk_type = disk_type
    boot_disk.initialize_params = initialize_params
    # Remember to set auto_delete to True if you want the disk to be deleted when you delete
    # your VM instance.
    boot_disk.auto_delete = auto_delete
    boot_disk.boot = boot
    return boot_disk


def wait_for_extended_operation(
    operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
    """
    Waits for the extended (long-running) operation to complete.

    If the operation is successful, it will return its result.
    If the operation ends with an error, an exception will be raised.
    If there were any warnings during the execution of the operation
    they will be printed to sys.stderr.

    Args:
        operation: a long-running operation you want to wait on.
        verbose_name: (optional) a more verbose name of the operation,
            used only during error and warning reporting.
        timeout: how long (in seconds) to wait for operation to finish.
            If None, wait indefinitely.

    Returns:
        Whatever the operation.result() returns.

    Raises:
        This method will raise the exception received from `operation.exception()`
        or RuntimeError if there is no exception set, but there is an `error_code`
        set for the `operation`.

        In case of an operation taking longer than `timeout` seconds to complete,
        a `concurrent.futures.TimeoutError` will be raised.
    """
    result = operation.result(timeout=timeout)

    if operation.error_code:
        print(
            f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
            file=sys.stderr,
            flush=True,
        )
        print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
        raise operation.exception() or RuntimeError(operation.error_message)

    if operation.warnings:
        print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
        for warning in operation.warnings:
            print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)

    return result


def create_instance(
    project_id: str,
    zone: str,
    instance_name: str,
    disks: list[compute_v1.AttachedDisk],
    machine_type: str = "n1-standard-1",
    network_link: str = "global/networks/default",
    subnetwork_link: str = None,
    internal_ip: str = None,
    external_access: bool = False,
    external_ipv4: str = None,
    accelerators: list[compute_v1.AcceleratorConfig] = None,
    preemptible: bool = False,
    spot: bool = False,
    instance_termination_action: str = "STOP",
    custom_hostname: str = None,
    delete_protection: bool = False,
) -> compute_v1.Instance:
    """
    Send an instance creation request to the Compute Engine API and wait for it to complete.

    Args:
        project_id: project ID or project number of the Cloud project you want to use.
        zone: name of the zone to create the instance in. For example: "us-west3-b"
        instance_name: name of the new virtual machine (VM) instance.
        disks: a list of compute_v1.AttachedDisk objects describing the disks
            you want to attach to your new instance.
        machine_type: machine type of the VM being created. This value uses the
            following format: "zones/{zone}/machineTypes/{type_name}".
            For example: "zones/europe-west3-c/machineTypes/f1-micro"
        network_link: name of the network you want the new instance to use.
            For example: "global/networks/default" represents the network
            named "default", which is created automatically for each project.
        subnetwork_link: name of the subnetwork you want the new instance to use.
            This value uses the following format:
            "regions/{region}/subnetworks/{subnetwork_name}"
        internal_ip: internal IP address you want to assign to the new instance.
            By default, a free address from the pool of available internal IP addresses of
            used subnet will be used.
        external_access: boolean flag indicating if the instance should have an external IPv4
            address assigned.
        external_ipv4: external IPv4 address to be assigned to this instance. If you specify
            an external IP address, it must live in the same region as the zone of the instance.
            This setting requires `external_access` to be set to True to work.
        accelerators: a list of AcceleratorConfig objects describing the accelerators that will
            be attached to the new instance.
        preemptible: boolean value indicating if the new instance should be preemptible
            or not. Preemptible VMs have been deprecated and you should now use Spot VMs.
        spot: boolean value indicating if the new instance should be a Spot VM or not.
        instance_termination_action: What action should be taken once a Spot VM is terminated.
            Possible values: "STOP", "DELETE"
        custom_hostname: Custom hostname of the new VM instance.
            Custom hostnames must conform to RFC 1035 requirements for valid hostnames.
        delete_protection: boolean value indicating if the new virtual machine should be
            protected against deletion or not.

    Returns:
        Instance object.
    """
    instance_client = compute_v1.InstancesClient()

    # Use the network interface provided in the network_link argument.
    network_interface = compute_v1.NetworkInterface()
    network_interface.network = network_link
    if subnetwork_link:
        network_interface.subnetwork = subnetwork_link

    if internal_ip:
        network_interface.network_i_p = internal_ip

    if external_access:
        access = compute_v1.AccessConfig()
        access.type_ = compute_v1.AccessConfig.Type.ONE_TO_ONE_NAT.name
        access.name = "External NAT"
        access.network_tier = access.NetworkTier.PREMIUM.name
        if external_ipv4:
            access.nat_i_p = external_ipv4
        network_interface.access_configs = [access]

    # Collect information into the Instance object.
    instance = compute_v1.Instance()
    instance.network_interfaces = [network_interface]
    instance.name = instance_name
    instance.disks = disks
    if re.match(r"^zones/[a-z\d\-]+/machineTypes/[a-z\d\-]+$", machine_type):
        instance.machine_type = machine_type
    else:
        instance.machine_type = f"zones/{zone}/machineTypes/{machine_type}"

    instance.scheduling = compute_v1.Scheduling()
    if accelerators:
        instance.guest_accelerators = accelerators
        instance.scheduling.on_host_maintenance = (
            compute_v1.Scheduling.OnHostMaintenance.TERMINATE.name
        )

    if preemptible:
        # Set the preemptible setting
        warnings.warn(
            "Preemptible VMs are being replaced by Spot VMs.", DeprecationWarning
        )
        instance.scheduling = compute_v1.Scheduling()
        instance.scheduling.preemptible = True

    if spot:
        # Set the Spot VM setting
        instance.scheduling.provisioning_model = (
            compute_v1.Scheduling.ProvisioningModel.SPOT.name
        )
        instance.scheduling.instance_termination_action = instance_termination_action

    if custom_hostname is not None:
        # Set the custom hostname for the instance
        instance.hostname = custom_hostname

    if delete_protection:
        # Set the delete protection bit
        instance.deletion_protection = True

    # Prepare the request to insert an instance.
    request = compute_v1.InsertInstanceRequest()
    request.zone = zone
    request.project = project_id
    request.instance_resource = instance

    # Wait for the create operation to complete.
    print(f"Creating the {instance_name} instance in {zone}...")

    operation = instance_client.insert(request=request)

    wait_for_extended_operation(operation, "instance creation")

    print(f"Instance {instance_name} created.")
    return instance_client.get(project=project_id, zone=zone, instance=instance_name)


def create_spot_instance(
    project_id: str, zone: str, instance_name: str
) -> compute_v1.Instance:
    """
    Create a new Spot VM instance with Debian 11 operating system.

    Args:
        project_id: project ID or project number of the Cloud project you want to use.
        zone: name of the zone to create the instance in. For example: "us-west3-b"
        instance_name: name of the new virtual machine (VM) instance.

    Returns:
        Instance object.
    """
    newest_debian = get_image_from_family(project="debian-cloud", family="debian-11")
    disk_type = f"zones/{zone}/diskTypes/pd-standard"
    disks = [disk_from_image(disk_type, 10, True, newest_debian.self_link)]
    instance = create_instance(project_id, zone, instance_name, disks, spot=True)
    return instance

To create multiple Spot VMs with the same properties, you can create an instance template, and use the template to create a managed instance group (MIG). For more information, see best practices.
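For example, a minimal sketch of this pattern with the gcloud CLI might look like the following; the template and group names, machine type, and zone are placeholders you would replace with your own values:

# Create an instance template that uses the spot provisioning model.
gcloud compute instance-templates create spot-template \
    --machine-type=e2-medium \
    --provisioning-model=SPOT \
    --instance-termination-action=DELETE

# Use the template to create a MIG of three Spot VMs.
gcloud compute instance-groups managed create spot-mig \
    --template=spot-template \
    --size=3 \
    --zone=us-central1-a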

Start Spot VMs

Like other VMs, Spot VMs start upon creation. Likewise, if Spot VMs are stopped, you can restart the VMs to resume the RUNNING state. You can stop and restart preempted Spot VMs as many times as you would like, as long as there is capacity. For more information, see VM instance life cycle.
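For example, you can stop and later restart a Spot VM with the gcloud CLI, where VM_NAME and ZONE are placeholders:

gcloud compute instances stop VM_NAME --zone=ZONE
gcloud compute instances start VM_NAME --zone=ZONE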

If Compute Engine stops one or more Spot VMs in an autoscaling managed instance group (MIG) or Google Kubernetes Engine (GKE) cluster, the group restarts the VMs when the resources become available again.

Identify a VM's provisioning model and termination action

Identify a VM's provisioning model to see if it is a standard VM, Spot VM, or preemptible VM. For a Spot VM, you can also identify the termination action. You can identify a VM's provisioning model and termination action using the Google Cloud console, gcloud CLI, or the Compute Engine API.

Console

  1. Go to the VM instances page.

    Go to the VM instances page

  2. Click the Name of the VM you want to identify. The VM instance details page opens.

  3. Go to the Management section at the bottom of the page. In the Availability policies subsection, check the following options:

    • If the VM provisioning model is set to Spot, the VM is a Spot VM.
      • On VM termination indicates which action to take when Compute Engine preempts the VM, either Stop or Delete the VM.
    • Otherwise, if the VM provisioning model is set to Standard:
      • If the Preemptibility option is set to On, the VM is a preemptible VM.
      • Otherwise, the VM is a standard VM.

gcloud

To describe a VM from the gcloud CLI, use the gcloud compute instances describe command:

gcloud compute instances describe VM_NAME

where VM_NAME is the name of the VM that you want to check.

In the output, check the scheduling field to identify the VM:

  • If the output includes the provisioningModel field set to SPOT, similar to the following, the VM is a Spot VM.

    ...
    scheduling:
      ...
      provisioningModel: SPOT
      instanceTerminationAction: TERMINATION_ACTION
    ...

    where TERMINATION_ACTION indicates which action to take when Compute Engine preempts the VM, either stop (STOP) or delete (DELETE) the VM. If the instanceTerminationAction field is missing, the default value is STOP.

  • Otherwise, if the output includes the provisioningModel field set to standard or if the output omits the provisioningModel field:

    • If the output includes the preemptible field set to true, the VM is a preemptible VM.
    • Otherwise, the VM is a standard VM.
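If you only need these fields, you can filter the describe output with a --format flag; for example, this sketch prints only the provisioning model and termination action (adjust the fields to your needs):

gcloud compute instances describe VM_NAME \
    --format="value(scheduling.provisioningModel, scheduling.instanceTerminationAction)"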

REST

To describe a VM from the Compute Engine API, use the instances.get method:

GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME

Replace the following:

  • PROJECT_ID: the project ID of the project that the VM is in.
  • ZONE: the zone where the VM is located.
  • VM_NAME: the name of the VM that you want to check.

In the output, check the scheduling field to identify the VM:

  • If the output includes the provisioningModel field set to SPOT, similar to the following, the VM is a Spot VM.

    {  ...  "scheduling":  {     ...     "provisioningModel": "SPOT",     "instanceTerminationAction": "TERMINATION_ACTION"     ...  },  ...}

    where TERMINATION_ACTION indicates which action to take when Compute Engine preempts the VM, either stop (STOP) or delete (DELETE) the VM. If the instanceTerminationAction field is missing, the default value is STOP.

  • Otherwise, if the output includes the provisioningModel field set to standard or if the output omits the provisioningModel field:

    • If the output includes the preemptible field set to true, the VM is a preemptible VM.
    • Otherwise, the VM is a standard VM.

Go

import("context""fmt""io"compute"cloud.google.com/go/compute/apiv1""cloud.google.com/go/compute/apiv1/computepb")// isSpotVM checks if a given instance is a Spot VM or not.funcisSpotVM(wio.Writer,projectID,zone,instanceNamestring)(bool,error){// projectID := "your_project_id"// zone := "europe-central2-b"// instanceName := "your_instance_name"ctx:=context.Background()client,err:=compute.NewInstancesRESTClient(ctx)iferr!=nil{returnfalse,fmt.Errorf("NewInstancesRESTClient: %w",err)}deferclient.Close()req:=&computepb.GetInstanceRequest{Project:projectID,Zone:zone,Instance:instanceName,}instance,err:=client.Get(ctx,req)iferr!=nil{returnfalse,fmt.Errorf("GetInstance: %w",err)}isSpot:=instance.GetScheduling().GetProvisioningModel()==computepb.Scheduling_SPOT.String()varisSpotMessagestringif!isSpot{isSpotMessage=" not"}fmt.Fprintf(w,"Instance %s is%s spot\n",instanceName,isSpotMessage)returninstance.GetScheduling().GetProvisioningModel()==computepb.Scheduling_SPOT.String(),nil}

Java

import com.google.cloud.compute.v1.Instance;
import com.google.cloud.compute.v1.InstancesClient;
import com.google.cloud.compute.v1.Scheduling;
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeoutException;

public class CheckIsSpotVm {

  public static void main(String[] args)
      throws IOException, ExecutionException, InterruptedException, TimeoutException {
    // TODO(developer): Replace these variables before running the sample.
    // Project ID or project number of the Google Cloud project you want to use.
    String projectId = "your-project-id";
    // Name of the virtual machine to check.
    String instanceName = "your-instance-name";
    // Name of the zone you want to use. For example: "us-west3-b"
    String zone = "your-zone";

    boolean isSpotVm = isSpotVm(projectId, instanceName, zone);
    System.out.printf("Is %s spot VM instance - %s", instanceName, isSpotVm);
  }

  // Check if a given instance is Spot VM or not.
  public static boolean isSpotVm(String projectId, String instanceName, String zone)
      throws IOException {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (InstancesClient client = InstancesClient.create()) {
      Instance instance = client.get(projectId, zone, instanceName);
      return instance.getScheduling().getProvisioningModel()
          .equals(Scheduling.ProvisioningModel.SPOT.name());
    }
  }
}

Python

from google.cloud import compute_v1


def is_spot_vm(project_id: str, zone: str, instance_name: str) -> bool:
    """
    Check if a given instance is Spot VM or not.

    Args:
        project_id: project ID or project number of the Cloud project you want to use.
        zone: name of the zone you want to use. For example: "us-west3-b"
        instance_name: name of the virtual machine to check.

    Returns:
        The Spot VM status of the instance.
    """
    instance_client = compute_v1.InstancesClient()
    instance = instance_client.get(
        project=project_id, zone=zone, instance=instance_name
    )
    return (
        instance.scheduling.provisioning_model
        == compute_v1.Scheduling.ProvisioningModel.SPOT.name
    )

Manage preemption of Spot VMs

To learn how to manage preemption of Spot VMs, review the following sections:

Handle preemption with a shutdown script

When Compute Engine preempts a Spot VM, you can use a shutdown script to try to perform cleanup actions before the VM is preempted. For example, you can gracefully stop a running process and copy a checkpoint file to Cloud Storage. Notably, the maximum length of the shutdown period is shorter for a preemption notice than for a user-initiated shutdown. For more information about the shutdown period for a preemption notice, see Preemption process in the conceptual documentation for Spot VMs.

The following is an example of a shutdown script that you can add to a running Spot VM or add while creating a new Spot VM. This script runs when the VM starts to shut down, before the operating system's normal kill command stops all remaining processes. After gracefully stopping the desired program, the script performs a parallel upload of a checkpoint file to a Cloud Storage bucket.

#!/bin/bash
MY_PROGRAM="PROGRAM_NAME" # For example, "apache2" or "nginx"
MY_USER="LOCAL_USER"
CHECKPOINT="/home/$MY_USER/checkpoint.out"
BUCKET_NAME="BUCKET_NAME" # For example, "my-checkpoint-files" (without gs://)

echo "Shutting down!  Seeing if ${MY_PROGRAM} is running."

# Find the newest copy of $MY_PROGRAM
PID="$(pgrep -n "$MY_PROGRAM")"

if [[ "$?" -ne 0 ]]; then
  echo "${MY_PROGRAM} not running, shutting down immediately."
  exit 0
fi

echo "Sending SIGINT to $PID"
kill -2 "$PID"

# Portable waitpid equivalent
while kill -0 "$PID"; do
   sleep 1
done

echo "$PID is done, copying ${CHECKPOINT} to gs://${BUCKET_NAME} as ${MY_USER}"

su "${MY_USER}" -c "gcloud storage cp $CHECKPOINT gs://${BUCKET_NAME}/"

echo "Done uploading, shutting down."

This script assumes the following:

  • The VM was created with at least read/write access to Cloud Storage. For instructions about how to create a VM with the appropriate scopes, see the authentication documentation.

  • You have an existing Cloud Storage bucket and permission to write to it.

To add this script to a VM, configure the script to work with an application on your VM and add it to the VM's metadata.

  1. Copy or download the shutdown script:

    • Copy the preceding shutdown script after replacing the following:

      • PROGRAM_NAME is the name of the process or program you want to shut down. For example, apache2 or nginx.
      • LOCAL_USER is the username you are logged into the virtual machine as.
      • BUCKET_NAME is the name of the Cloud Storage bucket where you want to save the program's checkpoint file. Note the bucket name does not start with gs:// in this case.
    • Download the shutdown script to your local workstation and then replace the following variables in the file:

      • [PROGRAM_NAME] is the name of the process or program you want to shut down. For example, apache2 or nginx.
      • [LOCAL_USER] is the username you are logged into the virtual machine as.
      • [BUCKET_NAME] is the name of the Cloud Storage bucket where you want to save the program's checkpoint file. Note the bucket name does not start with gs:// in this case.
  2. Add the shutdown script to a new VM or an existing VM. An example command for an existing VM follows this procedure.
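For example, to attach the script to an existing VM as shutdown-script metadata from a local file, you might run a command like the following; the file name shutdown.sh is a placeholder:

gcloud compute instances add-metadata VM_NAME \
    --zone=ZONE \
    --metadata-from-file=shutdown-script=shutdown.sh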

Detect preemption of Spot VMs

Determine if Spot VMs were preempted by Compute Engine using the Google Cloud console, gcloud CLI, or the Compute Engine API.

Console

You can check if a VM was preempted by checking the system activity logs.

  1. In the Google Cloud console, go to the Logs page.

    Go to Logs

  2. Select your project and clickContinue.

  3. Add compute.instances.preempted to the filter by label or text search field.

  4. Optionally, you can also enter a VM name if you want to see preemption operations for a specific VM.

  5. Press enter to apply the specified filters. The Google Cloud consoleupdates the list of logs to show only the operations where a VMwas preempted.

  6. Select an operation in the list to see details about the VM thatwas preempted.

gcloud

Use the gcloud compute operations list command with a filter parameter to get a list of preemption events in your project.

gcloud compute operations list \
    --filter="operationType=compute.instances.preempted"

Optionally, you can use additional filter parameters to further scopethe results. For example, to see preemption events only for instanceswithin a managed instance group, use the following command:

gcloud compute operations list \
    --filter="operationType=compute.instances.preempted AND targetLink:instances/BASE_INSTANCE_NAME"

where BASE_INSTANCE_NAME is the base name specified as a prefix for the names of all the VMs in this managed instance group.

The output is similar to the following:

NAME                  TYPE                         TARGET                                        HTTP_STATUS STATUS TIMESTAMP
systemevent-xxxxxxxx  compute.instances.preempted  us-central1-f/instances/example-instance-xxx  200         DONE   2015-04-02T12:12:10.881-07:00

An operation type of compute.instances.preempted indicates that the VM instance was preempted. You can use the gcloud compute operations describe command to get more information about a specific preemption operation.

gcloud compute operations describe SYSTEM_EVENT \
    --zone=ZONE

Replace the following:

  • SYSTEM_EVENT: the system event from the output of the gcloud compute operations list command—for example, systemevent-xxxxxxxx.
  • ZONE: the zone of the system event—for example, us-central1-f.

The output is similar to the following:

...
operationType: compute.instances.preempted
progress: 100
selfLink: https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-f/operations/systemevent-xxxxxxxx
startTime: '2015-04-02T12:12:10.881-07:00'
status: DONE
statusMessage: Instance was preempted.
...

REST

To get a list of recent system operations for a specific project and zone, use the zoneOperations.list method.

GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/operations

Replace the following:

  • PROJECT_ID: the project ID of the project to list operations for.
  • ZONE: the zone to list operations in.

Optionally, to scope the response to show only preemption operations, you can add a filter to your API request:

operationType="compute.instances.preempted"

Alternatively, to see preemption operations for a specific VM, add a targetLink parameter to the filter:

operationType="compute.instances.preempted" AND targetLink="https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME"

Replace the following:

  • PROJECT_ID: the project ID.
  • ZONE: the zone.
  • VM_NAME: the name of a specific VM in this zone and project.
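For example, you might send the filtered list request with curl, using gcloud to obtain an access token. This is only a sketch; the filter value is passed URL-encoded as the filter query parameter, and you might need to adjust the encoding for your shell:

curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/operations?filter=operationType%3D%22compute.instances.preempted%22"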

The response contains a list of recent operations. For example, apreemption looks similar to the following:

{  "kind": "compute#operation",  "id": "15041793718812375371",  "name": "systemevent-xxxxxxxx",  "zone": "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-f",  "operationType": "compute.instances.preempted",  "targetLink": "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-f/instances/example-instance",  "targetId": "12820389800990687210",  "status": "DONE",  "statusMessage": "Instance was preempted.",  ...}

Alternatively, you can determine if a VM was preempted from inside the VM itself. This is useful if you want to handle a shutdown due to a Compute Engine preemption differently from a normal shutdown in a shutdown script. To do this, check the metadata server for the preempted value in your VM's default metadata.

For example, use curl from within your VM to obtain the value for preempted:

curl "http://metadata.google.internal/computeMetadata/v1/instance/preempted" -H "Metadata-Flavor: Google"TRUE

If this value is TRUE, the VM was preempted by Compute Engine; otherwise, it is FALSE.
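For example, a shutdown script might branch on this value to run preemption-specific cleanup. The following is a minimal sketch:

#!/bin/bash
# Query the metadata server to check whether this shutdown is a preemption.
PREEMPTED=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/preempted" \
    -H "Metadata-Flavor: Google")

if [[ "$PREEMPTED" == "TRUE" ]]; then
  echo "Shutting down due to Compute Engine preemption."
  # Preemption-specific cleanup goes here.
else
  echo "Normal shutdown."
fi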

If you want to use this outside of a shutdown script, you can append ?wait_for_change=true to the URL. This performs a hanging HTTP GET request that only returns when the metadata has changed and the VM has been preempted.

curl "http://metadata.google.internal/computeMetadata/v1/instance/preempted?wait_for_change=true" -H "Metadata-Flavor: Google"TRUE

Test preemption settings

You can run simulated maintenance events on your VMs to force them to be preempted. Use this feature to test how your apps handle Spot VMs. Read Simulate a host maintenance event to learn how to test maintenance events on your instances.
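For example, one way to trigger a simulated maintenance event from the gcloud CLI is the following command, where VM_NAME and ZONE are placeholders:

gcloud compute instances simulate-maintenance-event VM_NAME --zone=ZONE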

Best practices

Here are some best practices to help you get the most out of Spot VMs.

  • Use instance templates. Rather than creating Spot VMs one at a time, you can use instance templates to create multiple Spot VMs with the same properties. Instance templates are required for using MIGs. Alternatively, you can also create multiple Spot VMs using the bulk instance API.

  • Use MIGs to regionally distribute and automatically recreate Spot VMs. Use MIGs to make workloads on Spot VMs more flexible and resilient. For example, use regional MIGs to distribute VMs across multiple zones, which helps mitigate resource-availability errors. Additionally, use autohealing to automatically recreate Spot VMs after they are preempted.

  • Pick smaller machine types. Resources for Spot VMs come out of excess and backup Google Cloud capacity. Capacity for Spot VMs is often easier to get for smaller machine types, meaning machine types with fewer resources such as vCPUs and memory. You might find more capacity for Spot VMs by selecting a smaller custom machine type, but capacity is even more likely for smaller predefined machine types. For example, compared to capacity for the n2-standard-32 predefined machine type, capacity for the n2-custom-24-96 custom machine type is more likely, but capacity for the n2-standard-16 predefined machine type is even more likely.

  • Run large clusters of Spot VMs during off-peak times. The load on Google Cloud data centers varies with location and time of day, but it is generally lowest at night and on weekends. As such, nights and weekends are the best times to run large clusters of Spot VMs.

  • Design your applications to be fault and preemption tolerant. Be prepared for preemption patterns to change at different points in time. For example, if a zone suffers a partial outage, large numbers of Spot VMs could be preempted to make room for standard VMs that need to be moved as part of the recovery. In that small window of time, the preemption rate would look very different than on any other day. If your application assumes that preemptions are always done in small groups, you might not be prepared for such an event.

  • Retry creating Spot VMs that have been preempted.If your Spot VMs have been preempted, try creating new Spot VMsonce or twice before falling back to standard VMs. Depending on yourrequirements, it might be a good idea to combine standard VMs and Spot VMsin your clusters to ensure that work proceeds at an adequate pace.

  • Use shutdown scripts. Manage shutdown and preemption notices with a shutdown script that can save a job's progress so that it can pick up where it left off, rather than start over from scratch.

What's next?
