Migrate to Cloud SQL from an XtraBackup physical file

This page describes how to migrate a MySQL database from an external server to Cloud SQL by using a Percona XtraBackup for MySQL physical file.

Cloud SQL supports the migration of MySQL databases on external servers to Cloud SQL for MySQL instances by using Percona XtraBackup. You generate physical files with the XtraBackup utility and then upload them to Cloud Storage. By using physical files, you can improve the overall speed of your migration by up to 10 times over a regular logical dump file-based migration.

Cloud SQL supports physical file-based migration for MySQL 5.7 and 8.0. MySQL 5.6 and 8.4 are not supported. Migration from Amazon Aurora or MySQL on Amazon RDS databases is not supported. In addition, the target replica instance in Cloud SQL for MySQL must be installed with the same MySQL major version as your external server. However, the target replica can use a later minor version. For example, if your external database is using MySQL 8.0.31, then your target replica must be Cloud SQL for MySQL version 8.0.31 or later.

Note: The procedure in this document uses the Cloud SQL for MySQL Admin API. You can also use Database Migration Service to perform this migration. For more information about using Database Migration Service, see Migrate your databases by using a Percona XtraBackup physical file.

Before you begin

This section provides the steps you need to take before you migrate your MySQL database to Google Cloud.

Set up a Google Cloud project

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.

  4. Enable the Cloud SQL Admin API.

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

    Enable the API

  5. Make sure you have the Cloud SQL Admin, Storage Admin, and Compute Viewer roles on your user account.

    Go to the IAM page

Set up a Cloud Storage bucket

If you haven't done so already, then create a Cloud Storage bucket.
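
For example, the following is a minimal sketch of creating a bucket with the gcloud CLI. BUCKET_NAME and REGION are placeholder values for your own bucket name and region:

gcloud storage buckets create gs://BUCKET_NAME --location=REGION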

Install the Google Cloud SDK

To use gcloud CLI commands on your external server, install the Google Cloud SDK.
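
After installation, initialize and authenticate the gcloud CLI on the external server. For example, with PROJECT_ID as a placeholder for your project ID:

gcloud init
gcloud auth login
gcloud config set project PROJECT_ID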

Prepare the external server for the migration

  1. Install a version of the XtraBackup utility that is compatible with your source server on your external server.

    For MySQL 8.0, you must install a version of XtraBackup that is equal to or later than your source server version. For more information, see Server version and backup version comparison in the Percona XtraBackup documentation.

  2. Ensure that your external server meets all the necessary requirements for replication. For more information, see Set up the external server for replication.

    In addition to the external server requirements for replication, migration from an XtraBackup physical file has the following requirements:

    • Your MySQL database must be an on-premises database or a self-managed MySQL database on a Compute Engine VM. Migration from Amazon Aurora or MySQL on Amazon RDS databases is not supported.
    • You must configure the innodb_data_file_path parameter with only one data file that uses the default data filename ibdata1. If your database is configured with two data files or has a data file with a different name, then you can't migrate the database using an XtraBackup physical file. For example, a database configured with innodb_data_file_path=ibdata01:50M:autoextend is not supported for the migration.
    • The innodb_page_size parameter on your source external database must be configured with the default value 16384.
  3. If you haven't set one up already, create a replication user account. You'll need the username and password for this user account. For a sketch of these checks and the user setup, see the example after this list.
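
    The following is a minimal sketch of these pre-migration checks and the replication user setup, run on the external server. REPLICATION_USER and REPLICATION_USER_PASSWORD are placeholder values; adjust the GRANT statement to the privileges that your replication setup requires.

    # Confirm the installed XtraBackup version.
    xtrabackup --version

    # Confirm the InnoDB settings required for a physical file-based migration.
    mysql --user=root --password \
      -e "SHOW VARIABLES LIKE 'innodb_data_file_path'; SHOW VARIABLES LIKE 'innodb_page_size';"

    # Create a replication user if you don't already have one.
    mysql --user=root --password \
      -e "CREATE USER 'REPLICATION_USER'@'%' IDENTIFIED BY 'REPLICATION_USER_PASSWORD';
          GRANT REPLICATION SLAVE ON *.* TO 'REPLICATION_USER'@'%';
          FLUSH PRIVILEGES;"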

Perform the migration

Complete all the steps in the following sections to migrate your external MySQL database to Cloud SQL.

Create and prepare the XtraBackup physical file

  1. On the external server, use XtraBackup to do a full backup of the source database. For more information about taking a full backup, see Create a full backup in the Percona XtraBackup documentation.

    Other types of backup, such as incremental and partial backup, are not supported.

    To improve the performance of the backup process, you can use the --parallel option to copy multiple data files concurrently. For example:

    sudo xtrabackup --backup \
      --target-dir=XTRABACKUP_PATH \
      --user=USERNAME \
      --password=PASSWORD \
      --parallel=THREADS

    Replace the following variables:

    • XTRABACKUP_PATH: the location of the output backup file
    • USERNAME: a user that has BACKUP_ADMIN privileges on the source database
    • PASSWORD: the password for the user
    • THREADS: the number of threads to use when copying multiple data files concurrently while creating a backup
  2. Use the XtraBackup utility to prepare the backup file. The file must be in a consistent state. For more information about preparing a full backup, see Prepare a full backup. For example:

    sudo xtrabackup --prepare \
      --target-dir=XTRABACKUP_PATH \
      --use-memory=MEMORY

    Replace the following variables:

    • XTRABACKUP_PATH: the location of the output backup file
    • MEMORY: the memory allocated for preparation. Specify a value between 1 GB and 2 GB. For more information about the --use-memory option, see the Percona XtraBackup documentation.

    The time required to prepare the backup file can vary depending on the size of the database.

Upload the XtraBackup physical file to Cloud Storage

Use the gcloud CLI to upload the backup file to Cloud Storage.

gcloud storage rsync XTRABACKUP_PATH CLOUD_STORAGE_BUCKET --recursive

Replace XTRABACKUP_PATH with the location of the output backup file and CLOUD_STORAGE_BUCKET with the path of the Cloud Storage bucket.

There is no limit to the size of your XtraBackup files. However, there is a 5 TB limit for the size of each single file that you can upload to a Cloud Storage bucket.
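
To confirm that the backup files are in the bucket before you continue, you can list the uploaded objects. For example:

gcloud storage ls --recursive CLOUD_STORAGE_BUCKET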

Define the source representation instance

  1. Create a source.json file that defines the source representation instance for your external server. A source representation instance provides metadata for the external server in Cloud SQL.

    In your source.json file, provide the following basic information about your external server.

    {"name":"SOURCE_NAME","region":"REGION","databaseVersion":"DATABASE_VERSION","onPremisesConfiguration":{"hostPort":"SOURCE_HOST:3306","username":"REPLICATION_USER_NAME","password":"REPLICATION_USER_PASSWORD","dumpFilePath":"CLOUD_STORAGE_BUCKET""caCertificate":"SOURCE_CERT","clientCertificate":"CLIENT_CERT","clientKey":"CLIENT_KEY"}}
    PropertyDescription
    SOURCE_NAMEThe name of the source representation instance to create.
    REGIONTheregion where you want the source representation instance to reside. Specify the same region where you'll create the target Cloud SQL replica instance.
    DATABASE_VERSIONThe database version running on your external server. The only supported options areMYSQL_5_7 orMYSQL_8_0.
    SOURCE_HOSTThe IPv4 address and port for the external server or the DNS address for the external server. If you use a DNS address, then it can contain up to 60 characters.
    USERNAMEThe replication user account on the external server.
    PASSWORDThe password for the replication user account.
    CLOUD_STORAGE_BUCKETThe name of the Cloud Storage bucket that contains the XtraBackup physical file.
    CLIENT_CA_CERTThe CA certificate on the external server.Include only if SSL/TLS is used on the external server.
    CLIENT_CERTThe client certificate on the external server.Required only forserver-client authentication.Include only if SSL/TLS is used on the external server.
    CLIENT_KEYThe private key file for the client certificate on the external server.Required only forserver-client authentication. Include only if SSL/TLS is used on theexternal server.
  2. Create the source representation instance by making a request to the Cloud SQL Admin API with the following curl command. In the data for the request, provide the source.json file that you created.

    gcloud auth login
    ACCESS_TOKEN="$(gcloud auth print-access-token)"
    curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
      --header 'Content-Type: application/json' \
      --data @./source.json \
      -X POST \
      https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances

    Replace the following:

    • PROJECT_ID: the ID for your project in Google Cloud.
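
    To confirm that the source representation instance was created, you can list your Cloud SQL instances with the gcloud CLI. For example:

    gcloud sql instances list --project=PROJECT_ID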

Identify a target replica instance

Create a file that identifies the target replica in Cloud SQL for the migration. You can migrate data either to a new instance by creating a replica, or to an existing Cloud SQL instance by demoting it to a replica.

Option 1: Create a replica instance

  1. To create a replica instance, use the following example replica.json file:

    {"name":"REPLICA_NAME","region":"REGION","databaseVersion":"DB_VERSION","settings":{"tier":"INSTANCE_TIER","dataDiskSizeGb":"DISK_SIZE_GB","edition":"EDITION_NAME"},"masterInstanceName":"SOURCE_NAME"}
    PropertyDescription
    REPLICA_NAMEThe name of the Cloud SQL replica to create.
    REGIONSpecify the same region that you assigned to the source representation instance.
    DATABASE_VERSIONThe database version to use with the Cloud SQL replica. The options for this version areMYSQL_5_7 orMYSQL_8_0. This database major version must match the database version that you specified for the external server. You can also specify a minor version, but the minor version must be the same or a later version than the version installed on the external server. For a list of available strings for MySQL, seeSqlDatabaseVersion.
    INSTANCE_TIERThe type of machine to host your replica instance. You must specify a machine type that matches with the edition of your instance and the architecture type of your external server. For example, if you selectENTERPRISE_PLUS for theedition field, then you must specify a db-perf-optimized machine type. For a list of supported machine types, seeMachine Type.
    DISK_SIZE_GBThe storage size for the Cloud SQL replica, in GB.Important: You must specify a storage size that is larger than the size of the physical file that you uploaded to Cloud Storage.
    EDITION_NAMEThe Cloud SQL edition to use for the replica. The possible values areENTERPRISE_PLUS (MySQL 8.0 only) orENTERPRISE.
    SOURCE_NAMEThe name that you assigned to the source representation instance.
  2. Create the target replica instance by making a request to the Cloud SQL Admin API with the following curl command. In the data for the request, provide the JSON file that you created.

    gcloud auth login
    ACCESS_TOKEN="$(gcloud auth print-access-token)"
    curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
      --header 'Content-Type: application/json' \
      --data @./replica.json \
      -X POST \
      https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances

    Replace the following:

    • PROJECT_ID: the ID for your project in Google Cloud.
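
    Creating the replica can take some time. To check the status of the create operation and of the new instance, you can use the gcloud CLI. For example:

    gcloud sql operations list --instance=REPLICA_NAME --project=PROJECT_ID --limit=5
    gcloud sql instances describe REPLICA_NAME --project=PROJECT_ID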

Option 2: Use an existing replica instance

  1. Ensure that the existing replica instance has the following attributes:

    • The same architecture type (x86 or ARM) as the external server.
    • At least the same amount of free disk space as the physical files that you uploaded to the Cloud Storage bucket. The instance must have sufficient disk space to download the same amount of data from Cloud Storage.
  2. To use an existing replica instance, use the following example replica.json file:

    {"demoteContext":{"sourceRepresentativeInstanceName":"SOURCE_NAME"}}
    PropertyDescription
    SOURCE_NAMEThe name that you assigned to the source representation instance.
  3. Demote the existing target replica instance by making a request to the demote Cloud SQL Admin API with the following curl command. In the data for the request, provide the JSON file that you created.

    gcloud auth login
    ACCESS_TOKEN="$(gcloud auth print-access-token)"
    curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
      --header 'Content-Type: application/json' \
      --data @./replica.json \
      -X POST \
      https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/EXISTING_INSTANCE_ID/demote

    Replace the following:

    • PROJECT_ID: the ID for your project in Google Cloud.
    • EXISTING_INSTANCE_ID: the ID for the existing replica instance that you want to use for the migration.

Verify your migration settings

Check that your instances are set up correctly for the migration by running the following command.

Important: In the migrationType field, you must specify the value PHYSICAL. If you don't specify this value, then the verification fails.

gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
  --header 'Content-Type: application/json' \
  --data '{
        "syncMode": "SYNC_MODE",
        "skipVerification": false,
        "migrationType": "PHYSICAL"
      }' \
  -X POST \
  https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/REPLICA_NAME/verifyExternalSyncSettings

Replace the following:

• SYNC_MODE: specify offline to configure the migration as a one-time process. To set up continuous replication from the external server, specify online.
• PROJECT_ID: the ID of your project in Google Cloud.
• REPLICA_NAME: the name that you assigned to the target replica instance.

As an initial response, this verification step returns a service account. You must provide this service account with Cloud Storage permissions to continue with the migration process. The insufficient permissions error message is expected. The following is an example response:

{    "kind": "sql#externalSyncSettingError",    "type": "INSUFFICIENT_GCS_PERMISSIONS",    "detail": "Service accountp703314288590-df3om0@my-project.iam.gserviceaccount.com              is missing necessary permissions storage.objects.list and              storage.objects.get to access Google Cloud Storage bucket"}

Add Cloud Storage permissions to the returned service account

To add the required permissions, do the following:

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. Click the Permissions tab.

  3. Click Grant Access.

  4. In the New principals field, type the name of the service account returned in the verification response. For example, in the sample output of the previous step, the returned service account name is p703314288590-df3om0@my-project.iam.gserviceaccount.com.

  5. In the Select a role drop-down, select the Storage Object Viewer role.

  6. Click Save.
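
Alternatively, you can grant the same role with the gcloud CLI. The following is a minimal sketch, where BUCKET_NAME and SERVICE_ACCOUNT_EMAIL are placeholders for your bucket and the returned service account:

gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
  --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
  --role="roles/storage.objectViewer"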

Run the verification again

After you have added the required permissions to the service account, re-run the verification step to make sure that the service account has access to the Cloud Storage bucket.

The verification step checks for the following:

  • Connectivity between the Cloud SQL replica and the external server is present, but only if the migration is continuous
  • Replication user privileges are sufficient
  • Versions are compatible
  • The Cloud SQL replica isn't already replicating
  • Binlogs are enabled on the external server

If any issues are detected, then Cloud SQL returns an error message.
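
If the verification reports a problem with the external server configuration, you can inspect the relevant server variables directly. The following is a minimal sketch, run against the external server with placeholder connection values:

mysql --host=SOURCE_HOST --user=USERNAME --password \
  -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'binlog_format'; SELECT @@version;"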

Add users to the Cloud SQL replica

Note: If you don't need to add any database user accounts to the Cloud SQL replica, then you can skip this step.

You can't import or migrate database user accounts from the external server. If you need to add any database user accounts to the Cloud SQL replica, then add the accounts before you start the replication. For more information, see Manage users with built-in authentication.
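
For example, one way to add a database user account to the replica is with the gcloud CLI. USER_NAME and USER_PASSWORD are placeholder values:

gcloud sql users create USER_NAME \
  --instance=REPLICA_NAME \
  --password=USER_PASSWORD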

Start the migration

After you have completed verification and no errors are returned, you are ready to start the migration. To migrate your external server, use the startExternalSync API.

Important: In the migrationType field, you must specify the value PHYSICAL. If you don't specify this value, then the migration fails.

Use the following command:

gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
  --header 'Content-Type: application/json' \
  --data '{
        "syncMode": "SYNC_MODE",
        "skipVerification": false,
        "migrationType": "PHYSICAL"
      }' \
  -X POST \
  https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/REPLICA_NAME/startExternalSync

Replace the following:

• SYNC_MODE: specify offline to configure the migration as a one-time process. To set up continuous replication from the external server, specify online.
• PROJECT_ID: the ID of your project in Google Cloud.
• REPLICA_NAME: the name that you assigned to the target replica instance.

Monitor the migration

To check the status of your migration, you can do the following:

  1. Retrieve the operation ID of the migration job from the response of the startExternalSync API. For example:

    {"kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/my-project/instances/replica-instance", "status": "PENDING", "user": "user@example.com", "insertTime": "******", "operationType": "START_EXTERNAL_SYNC", "name": "******", "targetId": "replica-instance", "selfLink": "https://sqladmin.googleapis.com/v1/projects/my-project/operations/OPERATION_ID", "targetProject": "my-project"}
  2. Use the operation ID in the following command.

    gcloud auth login
    ACCESS_TOKEN="$(gcloud auth print-access-token)"
    curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
      --header 'Content-Type: application/json' \
      -X GET \
      https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations/START_EXTERNAL_SYNC_OPERATION_ID

    Replace the following:

    • PROJECT_ID: the ID for your project in Google Cloud.
    • START_EXTERNAL_SYNC_OPERATION_ID: the operation ID of your migration job.
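
    Alternatively, you can check the same operation with the gcloud CLI, for example by describing it or waiting for it to complete:

    gcloud sql operations describe START_EXTERNAL_SYNC_OPERATION_ID --project=PROJECT_ID
    gcloud sql operations wait START_EXTERNAL_SYNC_OPERATION_ID --project=PROJECT_ID --timeout=3600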

Monitor replication

When the target replica instance in Cloud SQL finishes the initial data load, the instance connects to the external server and applies all updates that were made after the export operation.

To monitor the status of replication, see Confirm your replication status.

After the Cloud SQL replica has received all the changes from the external server and there's no replication delay on the Cloud SQL replica, connect to your database. Run the appropriate database commands to make sure that the contents are as expected when compared with the external server.
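
For example, a minimal sketch of this check is to connect to the replica, confirm the replication status, and compare row counts for a few representative tables against the external server. DATABASE_NAME and TABLE_NAME are placeholders.

gcloud sql connect REPLICA_NAME --user=root
# In the MySQL session on the replica (use SHOW SLAVE STATUS\G on MySQL 5.7):
#   SHOW REPLICA STATUS\G
#   SELECT COUNT(*) FROM DATABASE_NAME.TABLE_NAME;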

After you have promoted the target replica to a standalone instance, you can delete the XtraBackup physical files in your Cloud Storage bucket. Retain your external server until the necessary validations are done.
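
For example, you can promote the replica and then remove the uploaded backup files with the gcloud CLI. The bucket path is a placeholder for the location where you uploaded the XtraBackup files:

gcloud sql instances promote-replica REPLICA_NAME
gcloud storage rm --recursive gs://CLOUD_STORAGE_BUCKET/XTRABACKUP_FOLDER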

Limitations

This section lists limitations with the XtraBackup migration process:

  • You must use Percona XtraBackup to back up your data to the Cloud Storage bucket. Other backup utilities are not supported.
  • Migration is not supported to earlier database major or minor versions. For example, you can't migrate from MySQL 8.0 to 5.7 or from MySQL 8.0.36 to 8.0.16.
  • Database migration from an XtraBackup physical file is only supported for on-premises MySQL databases or a self-managed MySQL database running on a Compute Engine VM. Migration from Amazon Aurora or MySQL on Amazon RDS databases is not supported.
  • You can only migrate from a full backup. Other backup types, such as incremental or partial backups, are not supported.
  • Database migration does not include database users or privileges.
  • You must set the binary log format to ROW. If you configure the binary log to any other format, such as STATEMENT or MIXED, then replication might fail.
  • Cloud Storage limits the size of a file that you can upload to a bucket to 5 TB.
  • You can't migrate any plugins from your external database.
  • If you have configured high availability for your instance, then the SLA doesn't apply until the initial phase of the migration completes. This phase is considered complete when all data from the XtraBackup physical files has been imported to the Cloud SQL instance.
  • You can't migrate to or from a MySQL 8.4 database.
  • Migration of databases between machines with different architecture types isn't supported. For example, you can't migrate a MySQL database hosted on a machine with ARM architecture to a machine with x86 architecture.

Troubleshoot

This section lists common troubleshooting scenarios.

Failure to import

If you encounter an error message similar to Attempt 1/2: import failed when you migrate, then you need to specify PHYSICAL for the migrationType when you start the migration.

If you don't specify a migrationType, then the type defaults to LOGICAL.

Cancel or stop a migration

If you need to cancel or stop a migration, then you can run the following command:

gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
  --header 'Content-Type: application/json' \
  -X POST \
  https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/REPLICA_NAME/restart

Replace the following:

• PROJECT_ID: the ID of your project in Google Cloud.
• REPLICA_NAME: the name that you assigned to the target replica instance.
