Database Migration Service for PostgreSQL FAQ

What is Database Migration Service?
Database Migration Service makes it easier for you to migrate your data to Google Cloud. It helps you lift and shift your PostgreSQL workloads into Cloud SQL.
Which sources are supported?
  • Amazon RDS 9.6.10+, 10.5+, 11.1+, 12, 13, 14, 15, 16, 17.
  • Amazon Aurora 10.11+, 11.6+, 12.4+, 13.3+, 14.6+, 15.2+, 16, 17.
  • Self-managed PostgreSQL (on premises or on any cloud VM that you fully control) 9.4, 9.5, 9.6, 10, 11, 12, 13, 14, 15, 16, 17.
  • Cloud SQL for PostgreSQL 9.6, 10, 11, 12, 13, 14, 15, 16, 17.
  • Microsoft Azure Database for PostgreSQL Flexible Server 11+.
Which destinations are supported?
  • Cloud SQL for PostgreSQL 9.6, 10, 11, 12, 13, 14, 15, 16, 17.
Is there cross-version support?
Database Migration Service supports PostgreSQL-to-Cloud SQL migrations between any major versions, as long as the destination is at the same or a higher major version than the source database.
Which data, schema, and metadata components are migrated?
Database Migration Service migrates schema, data, and metadata from the source to the destination. All of the following data, schema, and metadata components are migrated as part of the database migration:

Data Migration
  • All schemas and all tables from the selected database.
Schema Migration
  • Naming
  • Primary key
  • Data type
  • Ordinal position
  • Default value
  • Nullability
  • Auto-increment attributes
  • Secondary indexes
Metadata Migration
  • Stored Procedures
  • Functions
  • Triggers
  • Views
  • Foreign key constraints
Which changes are replicated during continuous migration?
Only DML changes are automatically updated during the migration. Managing DDL so that the source and destination databases remain compatible is the responsibility of the user, and can be achieved in two ways:
  1. Stop writes to the source and run the DDL commands in both the source and the destination. Before running DDL commands on the destination, grant the cloudsqlexternalsync role to the Cloud SQL user applying the DDL changes. To enable querying or changing the data, grant the cloudsqlexternalsync role to the relevant Cloud SQL users. (A minimal GRANT sketch follows the examples below.)
  2. Use the pglogical.replicate_ddl_command function to run DDL on the source and destination at a consistent point. The user running this command must have the same username on both the source and the destination, and should be the superuser or the owner of the artifact being migrated (for example, the table, sequence, view, or database).

    Here are a few examples of using pglogical.replicate_ddl_command.

    To add a column to a database table, run the following command:

    select pglogical.replicate_ddl_command('ALTER TABLE [schema].[table] ADD COLUMN surname varchar(20)', '{default}');

    To change the name of a database table, run the following command:

    select pglogical.replicate_ddl_command('ALTER TABLE [schema].[table] RENAME TO [table_name]', '{default}');

    To create a database table, run the following commands:

    1. select pglogical.replicate_ddl_command(command := 'CREATE TABLE [schema].[table] (id INTEGER PRIMARY KEY, name VARCHAR);', replication_sets := ARRAY['default']);
    2. select pglogical.replication_set_add_table('default', '[schema].[table]');
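
For the first approach, granting the cloudsqlexternalsync role is a standard PostgreSQL GRANT. A minimal sketch, assuming a hypothetical user named migration_user:

    -- Allow this user to apply DDL (and query or change data) on the destination
    -- while Database Migration Service replication is active.
    GRANT cloudsqlexternalsync TO migration_user;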
What isn't migrated?

Users aren't migrated. To add users to the Cloud SQL destination instance, navigate to the instance and add users from the Users tab, or add them from the PostgreSQL client. Learn more about creating and managing PostgreSQL users.
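
For example, a user can be created from any PostgreSQL client connected to the destination; a minimal sketch, with a hypothetical user name and placeholder password:

    -- Create a login role on the Cloud SQL destination; adjust privileges as needed.
    CREATE USER app_user WITH PASSWORD 'change-me';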

Large objects can't be replicated because PostgreSQL's logical decoding facility doesn't support decoding changes to large objects. For tables that have columns of type oid referencing large objects, the rows are still synced, and new rows are replicated. However, trying to access the large object on the destination database (read using lo_get, export using lo_export, or check the catalog pg_largeobject for the given oid) fails with a message saying that the large object doesn't exist.
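
To check whether a source database actually uses large objects before migrating, the system catalogs can be inspected; a minimal sketch (the schema filter is illustrative):

    -- Find user-table columns declared with the oid type (possible large-object references).
    SELECT table_schema, table_name, column_name
    FROM information_schema.columns
    WHERE data_type = 'oid'
      AND table_schema NOT IN ('pg_catalog', 'information_schema');

    -- Count the large objects stored in the database (pg_largeobject_metadata
    -- is readable without superuser privileges, unlike pg_largeobject).
    SELECT count(*) FROM pg_largeobject_metadata;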

For tables that don't have primary keys, Database Migration Service supports migration of the initial snapshot and INSERT statements during the change data capture (CDC) phase. You should migrate UPDATE and DELETE statements manually.
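
If full CDC coverage is needed for such a table, one option is to add a primary key on the source before starting the migration; a hedged sketch, with hypothetical table and column names:

    -- Option A: promote an existing unique, non-null column to a primary key.
    ALTER TABLE inventory.events ADD PRIMARY KEY (event_id);

    -- Option B: no suitable column exists, so add a surrogate identity column
    -- (PostgreSQL 10+) and make it the primary key instead.
    ALTER TABLE inventory.events ADD COLUMN event_seq BIGINT GENERATED ALWAYS AS IDENTITY;
    ALTER TABLE inventory.events ADD PRIMARY KEY (event_seq);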

Database Migration Service doesn't migrate data from materialized views, just the view schema. To populate the views, run the following command: REFRESH MATERIALIZED VIEW view_name.
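
With many materialized views, they can all be refreshed in one pass by iterating over the pg_matviews catalog; a minimal sketch (dependency order between views isn't handled here):

    -- Refresh every materialized view on the destination after migration completes.
    DO $$
    DECLARE
      mv RECORD;
    BEGIN
      FOR mv IN SELECT schemaname, matviewname FROM pg_matviews LOOP
        EXECUTE format('REFRESH MATERIALIZED VIEW %I.%I', mv.schemaname, mv.matviewname);
      END LOOP;
    END $$;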

The SEQUENCE states (for example, last_value) on the new Cloud SQL destination might vary from the source SEQUENCE states.
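
After promotion, each sequence can be realigned with the data in its table; a hedged sketch, with hypothetical sequence, table, and column names:

    -- Advance the sequence past the highest id already present in the table.
    SELECT setval('public.orders_id_seq', (SELECT COALESCE(MAX(id), 1) FROM public.orders));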

Which networking methods are used?
To create a migration in Database Migration Service, connectivity must be established between the source and the Cloud SQL destination instance. A variety of methods are supported. Choose the one that works best for the specific workload.
IP allowlist
Works by configuring the source database server to accept connections from the public IP of the Cloud SQL instance. If you choose this method, then Database Migration Service guides you through the setup process during the migration creation.
Pros:
  • Easy to configure.
  • Recommended for short-lived migration scenarios (POC or small database migrations).
Cons:
  • Firewall configuration may require assistance from IT.
  • Exposes the source database to a public IP.
  • The connection isn't encrypted by default. Requires enabling SSL on the source database to encrypt the connection.

Reverse SSH tunnel through cloud-hosted VM
Establishes connectivity from the destination to the source through a secure reverse SSH tunnel. Requires a bastion host VM in the Google Cloud project and a machine (for example, a laptop on the network) that has connectivity to the source. Database Migration Service collects the required information at migration creation time, and auto-generates the script for setting it up.
Pros:
  • Easy to configure.
  • Doesn't require any custom firewall configuration.
  • Recommended for short-lived migration scenarios (POC or small database migrations).
Cons:
  • You own and manage the bastion VM.
  • May incur additional costs.

VPC peering
Works by configuring the VPCs to communicate with one another. Only applicable when both the source and destination are hosted in Google Cloud. Recommended for long-running or high-volume migrations.
Pros:
  • Google Cloud solution.
  • Easy to configure.
  • High bandwidth.
Cons:
  • Only available when the source is hosted in Google Cloud.

VPN
Sets up an IPsec VPN tunnel connecting the internal network and the Google Cloud VPC through a secure connection over the public Internet. Use Google Cloud VPN or any VPN solution that is set up for the internal network.
Pros:
  • Robust and scalable connectivity solution.
  • Medium-high bandwidth.
  • Security built in.
  • Offered as a Google Cloud solution or by third parties.
Cons:
  • Additional cost.
  • Non-trivial configuration (unless already in place).

Cloud Interconnect
Uses a highly available, low-latency connection between the on-premises network and Google Cloud.
Pros:
  • Highest bandwidth; ideal for long-running, high-volume migrations.
Cons:
  • Additional cost.
  • Connection isn't secure by default.
  • Non-trivial configuration (unless already in place).
What are the known limitations?
See Known limitations.
