Configure Cloud SQL and the external server for replication
This page describes how to configure your external server for replication to Cloud SQL, create a source representation instance on Cloud SQL, and replicate the data to Cloud SQL. You need to go through all the steps on this page before proceeding to the replication steps.
An alternative to the steps described on this page is the Database Migration Service, which offers continuous replication or one-time database migration from an external server to Cloud SQL.
Fast migration for Cloud SQL is now available. This feature improves the performance of data migrations from an external source to a destination Cloud SQL instance.
Before you begin
Terminology
External server. The PostgreSQL server external to Cloud SQL that you want to replicate data from. It's also referred to as the source database or the external database server. It can be another Cloud SQL instance or any other database server, such as on-premises, Amazon Relational Database Service (RDS), and so on.
Source representation instance. A mock of a Cloud SQL instance that represents the external server to the Cloud SQL replica. It's visible in the Google Cloud console and appears like a regular Cloud SQL instance, but it doesn't contain data, require configuration or maintenance, or affect billing.
Cloud SQL replica. The Cloud SQL instance that replicates from the external server. Also known as the external primary read replica.
Replication user account. The PostgreSQL user account on the external server with sufficient permissions to allow replication between the external server and the Cloud SQL replica.
Managed import. The process of importing data directly from the external server to the Cloud SQL replica. In this situation, Cloud SQL connects to the external server using the replication user account and runs the data dump directly on the external server to import data to the Cloud SQL replica.
Set up a Google Cloud project
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.
Make sure that billing is enabled for your Google Cloud project.
Enable the Cloud SQL Admin API.
Make sure you have the Cloud SQL Admin, Storage Admin, and Compute Viewer roles on your user account.
Install the Google Cloud SDK
To configure replication, install Google Cloud SDK for your external server. You might want to install the SDK on your external server unless it's already installed elsewhere.
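For example, on a Linux external server you could install the gcloud CLI with the interactive installer. This is a minimal sketch, assuming a bash shell and outbound internet access; use your distribution's package manager instead if you prefer.

# Download and run the interactive Google Cloud CLI installer
curl https://sdk.cloud.google.com | bash

# Restart the shell so that gcloud is on the PATH, then initialize and authenticate
exec -l $SHELL
gcloud init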
Set up the external server for replication
Cloud SQL supports continuous migrations from source databases to Cloud SQL destination databases.
Supported source databases for PostgreSQL include:
- Self-managed PostgreSQL (on premises or on any cloud VM that you fully control) 9.4, 9.5, 9.6, 10, 11, 12, 13, 14, 15, 16, and 17
- Amazon RDS 9.6.10+, 10.5+, 11.1+, 12, 13, 14, 15, 16, and 17
- Amazon Aurora 10.11+, 11.6+, 12.4+, 13.3+, 14.6+, 15.2+, 16, and 17
- Microsoft Azure Database for PostgreSQL Flexible Server 11 and later
- Cloud SQL 9.6, 10, 11, 12, 13, 14, 15, 16, and 17
Configuring your source requires configuring both the source instance and underlying source databases.
External server checklist
If the Cloud SQL replica is enabled with a private IP address because the outgoing private IP address isn't static, configure the external server's firewall to allow the internal IP range allocated for the private services access of the VPC network that the Cloud SQL replica uses as its private network.
The source database server's firewall must be configured to allow the entire internal IP range allocated for the private service connection of the VPC network that the Cloud SQL destination instance uses as the privateNetwork field of its ipConfiguration settings.
To find the internal IP range:
In the Google Cloud console, go to the VPC networks page.
Select the VPC network that you want to use.
Click the Private service connection tab.
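If you prefer the command line, a command like the following sketch lists the ranges allocated for private services access (VPC peering). The output fields shown here are an assumption; adjust them to your needs.

gcloud compute addresses list \
    --global \
    --filter="purpose=VPC_PEERING" \
    --format="table(name, address, prefixLength, network)"

The firewall rule on the external server should then allow the address/prefixLength range reported for the network that your Cloud SQL replica uses.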
Configure your source instance
To configure your source instance, follow these steps:
- If your source instance does not include the postgres database, create it.
- Install the pglogical package on the source instance.
Set the following parameters, as needed.
If the source PostgreSQL instance is Amazon RDS, then include these parameters in a new parameter group and attach the parameter group to the instance.
If the source is Cloud SQL, set the cloudsql.logical_decoding and cloudsql.enable_pglogical flags to on. To enable flags in Cloud SQL, see Configuring database flags.
Note: The pglogical extension can log credentials in plain text on the source instance. This behavior is caused by the extension, and is unrelated to Cloud SQL.
Set shared_preload_libraries to include pglogical by using the following command:
ALTER SYSTEM SET shared_preload_libraries = 'pglogical';
Note: To view a list of the existing libraries for your instance, run the show shared_preload_libraries command.
Set wal_level to logical by using the following command:
ALTER SYSTEM SET wal_level = 'logical';
If the source PostgreSQL instance is Amazon RDS, to enable WAL logs at the logical level, set the rds.logical_replication parameter to 1.
Set wal_sender_timeout to 0 by using the following command:
ALTER SYSTEM SET wal_sender_timeout = 0;
The value 0 disables the timeout mechanism that's used to terminate inactive replication connections.
Set max_replication_slots to the maximum number of replication slots that the source instance can support. Use the following command, after replacing MAX_REPLICATION_SLOTS with the number:
ALTER SYSTEM SET max_replication_slots = MAX_REPLICATION_SLOTS;
Cloud SQL requires one slot for each database that's migrated. Specify at least the number of subscriptions expected to connect, with some reserves for table synchronization.
For example, if the source instance has 5 databases and 2 migration jobs are created for the source, then the number of replication slots must be at least 5 * 2 = 10, in addition to the number of replication slots that you already use.
Set max_wal_senders to at least the same as max_replication_slots, in addition to the number of senders already used on your instance. Use the following command, replacing MAX_WAL_SENDERS with the total number of WAL sender processes running simultaneously:
ALTER SYSTEM SET max_wal_senders = MAX_WAL_SENDERS;
For example, if the max_replication_slots parameter is set to 10, and you're already using 2 senders, then the number of WAL sender processes running at the same time would be 10 + 2 = 12.
Set max_worker_processes to at least the number of databases in the source instance, in addition to the number of worker processes already used on your instance. Use the following command, after replacing MAX_WORKER_PROCESSES with the total number:
ALTER SYSTEM SET max_worker_processes = MAX_WORKER_PROCESSES;
- If the source PostgreSQL instance is Microsoft Azure Database for PostgreSQL Flexible Server, then perform the following actions to support migrating data from an external server to a Cloud SQL instance:
Set shared_preload_libraries to include pglogical by using the following command:
ALTER SYSTEM SET shared_preload_libraries = 'pglogical';
Note: To view a list of the existing libraries for your instance, run the show shared_preload_libraries command.
- Set wal_level to logical. For more information, see Logical replication and logical decoding in Azure Database for PostgreSQL - Flexible Server.
Set max_replication_slots to the maximum number of replication slots that the source instance can support. Use the following command, after replacing MAX_REPLICATION_SLOTS with the number:
ALTER SYSTEM SET max_replication_slots = MAX_REPLICATION_SLOTS;
Cloud SQL requires one slot for each database that's migrated. Specify at least the number of subscriptions expected to connect, with some reserves for table synchronization.
For example, if the source instance has 5 databases and 2 migration jobs are created for the source, then the number of replication slots must be at least 5 * 2 = 10, in addition to the number of replication slots that you already use.
Set max_wal_senders to at least the same as max_replication_slots, in addition to the number of senders already used on your instance. Use the following command, replacing MAX_WAL_SENDERS with the total number of WAL sender processes running simultaneously:
ALTER SYSTEM SET max_wal_senders = MAX_WAL_SENDERS;
For example, if the max_replication_slots parameter is set to 10, and you're already using 2 senders, then the number of WAL sender processes running at the same time would be 10 + 2 = 12.
Set max_worker_processes to at least the number of databases in the source instance, in addition to the number of worker processes already used on your instance. Use the following command, after replacing MAX_WORKER_PROCESSES with the total number:
ALTER SYSTEM SET max_worker_processes = MAX_WORKER_PROCESSES;
- Set azure.extensions to include pglogical. For more information, see Server parameters in Azure Database for PostgreSQL - Flexible Server.
Note: By default, Microsoft Azure has the require_secure_transport parameter set to on. In this case, when you use Database Migration Service to create your source connection profile for Microsoft Azure, specify either Server-only authentication or Server-client authentication for the SSL/TLS configuration. If a connection is made over a public network (by using IP allowlists), then SSL/TLS encryption can be used to connect between Microsoft Azure and Cloud SQL securely. If the require_secure_transport parameter is set to off, then specify None for the SSL/TLS configuration. The Cloud SQL destination instance connects to the source Microsoft Azure database without encryption.
- If the Cloud SQL replica will use a private IP, configure the external server's firewall to allow the internal IP range allocated for the private services access of the VPC network of the replica.
- To apply the configuration changes, restart the source instance.
The parameters that you're setting in this step apply to a PostgreSQL database server that's running. You can also make these changes persistent by including them in the postgresql.conf file.
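After the restart, you can optionally confirm that the values took effect. This is a sketch that assumes you can reach the source with psql; SOURCE_HOST and replicationUser are placeholders.

psql "host=SOURCE_HOST user=replicationUser dbname=postgres" -c "SHOW wal_level;"
psql "host=SOURCE_HOST user=replicationUser dbname=postgres" -c "SHOW shared_preload_libraries;"
psql "host=SOURCE_HOST user=replicationUser dbname=postgres" -c "SHOW max_replication_slots;"
psql "host=SOURCE_HOST user=replicationUser dbname=postgres" -c "SHOW max_wal_senders;"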
Enable replication delay monitoring for PostgreSQL versions preceding 9.6
If you're migrating from a PostgreSQL version lower than 9.6, then the replication delay metric isn't available by default. You can use one of three alternatives to track this metric and ensure minimal downtime when you promote the database:
Option 1: Enable the Cloud SQL external server to track the replication delay by granting access to a specific query. Using a user with the SUPERUSER privilege, perform the following:
Define the following function to allow the external server to query for the replication delay.
CREATE OR REPLACE FUNCTION pg_stat_replication_user()
RETURNS TABLE (
  pid integer,
  usesysid oid,
  username name,
  application_name text,
  client_addr inet,
  client_hostname text,
  client_port integer,
  backend_start timestamp with time zone,
  backend_xmin xid,
  state text,
  sent_location pg_lsn,
  write_location pg_lsn,
  flush_location pg_lsn,
  replay_location pg_lsn,
  sync_priority integer,
  sync_state text
)
LANGUAGE SQL SECURITY DEFINER
AS $$
  SELECT * FROM pg_catalog.pg_stat_replication;
$$;
Grant the EXECUTE permission to the user by running the following commands:
REVOKE EXECUTE ON FUNCTION pg_stat_replication_user() FROM public;
GRANT EXECUTE ON FUNCTION pg_stat_replication_user() to {replication_user};
Option 2: Grant the SUPERUSER privilege directly to the user used to connect to the source instance. This allows the external server to read the replication delay directly.
Option 3: Track the replication delay independently by using the following query:
Note: For PostgreSQL versions earlier than 10, run this command as a superuser.
SELECT
  current_timestamp,
  application_name,
  pg_xlog_location_diff(pg_current_xlog_location(), pg_stat_replication.sent_location) AS sent_location_lag,
  pg_xlog_location_diff(pg_current_xlog_location(), pg_stat_replication.write_location) AS write_location_lag,
  pg_xlog_location_diff(pg_current_xlog_location(), pg_stat_replication.flush_location) AS flush_location_lag,
  pg_xlog_location_diff(pg_current_xlog_location(), pg_stat_replication.replay_location) AS replay_location_lag
FROM pg_stat_replication
WHERE application_name LIKE 'cloudsql%';
In this option, Cloud SQL doesn't reflect the replication delay metric in the graphs or API responses.
Configure your source databases
The Cloud SQL external server migrates all databases under your source instance other than the following:
- For on-premises sources: template databases template0 and template1
- For Amazon RDS sources: template0, template1, and rdsadmin
- For Cloud SQL sources: template databases template0 and template1
Do the following on each database in your source instance that isn't included in the preceding list:
To install the pglogical extension, run the following command on every database on your source instance:
CREATE EXTENSION IF NOT EXISTS pglogical;
For tables that don't have primary keys, Cloud SQL supports the migration of the initial snapshot and INSERT statements during the change data capture (CDC) phase. Migrate UPDATE and DELETE statements manually.
Note: See Debugging and other tools to learn how to generate a query to list tables in a PostgreSQL database without primary keys (an illustrative sketch of such a query also appears after the next note). You must alter these tables so that they each have a primary key. For tables that don't have primary keys, only the initial snapshot and INSERT statements are migrated.
Note: For PostgreSQL 9.4, install the pglogical_origin extension on every database by running the following commands:
CREATE EXTENSION IF NOT EXISTS pglogical_origin;
CREATE EXTENSION IF NOT EXISTS pglogical;
Connect to the instance and run the following commands to set privileges for the user on each of the migrated databases, as well as the default postgres database. The user that you use to connect to the source instance is configured as the user in the Connection Profiles page. You can create a new user or reuse an existing one.
On all schemas on each database to migrate, except for the information schema and schemas starting with pg_, run the following commands:
GRANT USAGE on SCHEMA schema to username;
GRANT SELECT on ALL SEQUENCES in SCHEMA schema to username;
GRANT SELECT on ALL TABLES in SCHEMA schema to username;
On each database to migrate, run the following command:
GRANT USAGE on SCHEMA pglogical to PUBLIC;
To get replication information from source databases, run the following command on all databases:
GRANT SELECT on ALL TABLES in SCHEMA pglogical to username;
Note: If the following error message appears, then you can ignore it:
postgres=> GRANT SELECT on ALL TABLES in SCHEMA pglogical to demo;
ERROR: permission denied for table node_interface
If your source is Amazon RDS, then run the following command:
GRANT rds_replication to username;
If your source is Microsoft Azure Database for PostgreSQL version earlier than 14, then run the following command:
GRANT SELECT on ALL TABLES in SCHEMA pglogical to username;
You can ignore an error that appears on the pglogical.node interface.
If your source is Microsoft Azure Database for PostgreSQL version 14 and later, then run the following command:
GRANT pg_read_all_data to username;
Otherwise, run the following command:
ALTER USER username with REPLICATION;
Set up a source representation instance
The source representation instance references the external server. It contains only the request data from the external server. Create the request data and use it in a curl command that creates the source representation instance in Cloud SQL.
Create the request data
The request data contains basic information about your external server in JSON format. The request data can be configured for a Cloud SQL replica on a public or private network and should contain this information.
If you want to migrate a subset of databases from the source representation instance to the destination Cloud SQL instance, then use the selectedObjects parameter to specify the database names. If you don't use this parameter or provide an empty list as the value for the parameter, then all databases are migrated from the source to the destination.
source.json
{
  "name": "SOURCE_NAME",
  "region": "REGION",
  "databaseVersion": "DATABASE_VERSION",
  "onPremisesConfiguration": {
    "selectedObjects": "SELECTED_OBJECTS",
    "hostPort": "SOURCE_HOST",
    "username": "USERNAME",
    "password": "PASSWORD",
    "caCertificate": "SOURCE_CERT",
    "clientCertificate": "CLIENT_CERT",
    "clientKey": "CLIENT_KEY"
  }
}
managed import example
// example of source.json for an external server that
// - initiates replication from a Cloud SQL managed import
// - doesn't use SSL/TLS
{
  "name": "cloudsql-source-instance",
  "region": "us-central1",
  "databaseVersion": "POSTGRES_9_6",
  "onPremisesConfiguration": {
    "selectedObjects": [{"database": "db1"}, {"database": "db2"}],
    "hostPort": "192.0.2.0:5432",
    "username": "replicationUser",
    "password": "486#@%*@"
  }
}
Property | Description |
---|---|
SOURCE_NAME | The name of the source representation instance to create. |
REGION | The region where you want the source representation instance to reside. |
DATABASE_VERSION | The database version running on your external server. The options are POSTGRES_9_6, POSTGRES_10, POSTGRES_11, POSTGRES_12, POSTGRES_13, POSTGRES_14, POSTGRES_15, POSTGRES_16, or POSTGRES_17. |
SELECTED_OBJECTS | A comma-separated list of databases that you're migrating from the source representation instance to the destination Cloud SQL instance. |
SOURCE_HOST | The IPv4 address and port for the external server, or the DNS address for the external server. If the external server is hosted on Cloud SQL, then the port is 5432. |
USERNAME | The replication user account on the external server. |
PASSWORD | The password for the replication user account. |
BUCKET | The name of the bucket that contains the dump file. Include only if you're setting up replication with a dump file that exists in a Cloud Storage bucket. |
DUMP_FILE | A file in the bucket that contains the data from the external server. |
CLIENT_CA_CERT | The CA certificate on the external server. Include only if SSL/TLS is used on the external server. |
CLIENT_CERT | The client certificate on the external server. Required only for server-client authentication. Include only if SSL/TLS is used on the external server. |
CLIENT_KEY | The private key file for the client certificate on the external server. Required only for server-client authentication. Include only if SSL/TLS is used on the external server. |
SSL_OPTION | Whether SSL/TLS encryption is used to establish a connection between the source Microsoft Azure database and the destination Cloud SQL database. |
Create a source representation instance
Before you start this step, create a JSON file that contains your source request data.
Then, to create the source representation instance in Cloud SQL, open a terminal and run the following commands:
curl
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
     --header 'Content-Type: application/json' \
     --data @JSON_PATH \
     -X POST \
     https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances
example
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
     --header 'Content-Type: application/json' \
     --data @./source.json \
     -X POST \
     https://sqladmin.googleapis.com/sql/v1beta4/projects/MyProject/instances
Property | Description |
---|---|
PROJECT_ID | The ID for your project in Google Cloud. |
JSON_PATH | The path to the JSON file that contains the request data for the external server. |
Update a source representation instance
If you update the request data from the external server, you can update the existing source representation instance to use the modified values.
Modify the request data
Update the request data to include any fields that have changed. This includes the hostPort, username, password, caCertificate, clientCertificate, and clientKey fields. After updating the request data, use it in a curl command to update the instance in Cloud SQL.
To change which databases are migrated, update the selectedObjects parameter. If you don't use this parameter or provide an empty list as the value for the parameter, then all databases are migrated from the source to the destination.
The following example shows updating the username and password fields with a different username and password:
source.json
{
  "name": "SOURCE_NAME",
  "region": "REGION",
  "databaseVersion": "DATABASE_VERSION",
  "onPremisesConfiguration": {
    "selectedObjects": "SELECTED_OBJECTS",
    "username": "NEW_USERNAME",
    "password": "NEW_PASSWORD"
  }
}
managed import example
// example of source.json for an external server that
// - initiates replication from a Cloud SQL managed import
// - doesn't use SSL/TLS
{
  "name": "cloudsql-source-instance",
  "region": "us-central1",
  "databaseVersion": "POSTGRES_9_6",
  "onPremisesConfiguration": {
    "selectedObjects": [{"database": "db1"}, {"database": "db3"}],
    "username": "newReplicationUser",
    "password": "525#@%*@"
  }
}
Property | Description |
---|---|
SOURCE_NAME | The name of the source representation instance. |
REGION | The region where the source representation instance resides. |
DATABASE_VERSION | The database version running on your external server. The options are POSTGRES_9_6, POSTGRES_10, POSTGRES_11, POSTGRES_12, POSTGRES_13, POSTGRES_14, POSTGRES_15, POSTGRES_16, or POSTGRES_17. |
SELECTED_OBJECTS | An updated comma-separated list of databases that you're migrating from the source representation instance to the destination Cloud SQL instance. |
NEW_USERNAME | The new replication user account on the external server. |
NEW_PASSWORD | The password for the new account. |
Modify a source representation instance
Before you start this step, create a JSON file that contains your modified request data.
Then, to modify the source representation instance in Cloud SQL, open a terminal and run the following commands:
curl
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
     --header 'Content-Type: application/json' \
     --data @JSON_PATH \
     -X PATCH \
     https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/SOURCE_NAME
example
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
     --header 'Content-Type: application/json' \
     --data @./source.json \
     -X PATCH \
     https://sqladmin.googleapis.com/sql/v1beta4/projects/MyProject/instances/cloudsql-source-instance
Property | Description |
---|---|
PROJECT_ID | The ID for your project in Google Cloud. |
JSON_PATH | The path to the JSON file that contains the request data for the external server. |
SOURCE_NAME | The name of the source representation instance. |
Set up a Cloud SQL replica
The Cloud SQL replica eventually contains the data from the external server. In this step, you create the request data and use it in a curl command that creates the Cloud SQL replica in Cloud SQL.
Create the request data
The request data contains basic information about your external server and Cloud SQL replica in JSON format. The request data can be configured for a Cloud SQL replica on a public or private network and should contain this information:
replica.json
{"settings":{"tier":"TIER","dataDiskSizeGb":"DISK_SIZE","ipConfiguration":{"ipv4Enabled":"PUBLIC_IP_STATUS","privateNetwork":"projects/PROJECT_ID/global/networks/NETWORK_NAME"},"availabilityType":"AVAILABILITY_TYPE"},"masterInstanceName":"SOURCE_REPRESENTATION_INSTANCE_NAME","region":"SOURCE_REGION","databaseVersion":"DATABASE_VERSION","name":"REPLICA_NAME"}
example
{"settings":{"tier":"db-custom-4-15360","dataDiskSizeGb":"100"},"masterInstanceName":"source-instance","region":"us-central1","databaseVersion":"POSTGRES_16","name":"replica-instance"}
Property | Description |
---|---|
TIER | The type of machine to host your replica instance. If you don't know which machine type to use, start with db-custom-2-7680. You can change its size and other supported values later if needed. |
DISK_SIZE | The storage size for the Cloud SQL replica, in GB. |
PUBLIC_IP_STATUS | Determines whether the instance is assigned a public IP address. By default, the value of this property is true. To turn off the assignment of a public IP address for the replica, set the value to false. If your project has the constraints/sql.restrictPublicIp organization policy enabled, then to create the Cloud SQL replica, you must set the value of the ipv4Enabled property to false. For more information about turning off public IP address assignment, see Disable public IP. |
PROJECT_ID | If the Cloud SQL replica is on a private network, then include the privateNetwork property in the replica.json file. For PROJECT_ID, specify the ID of your project in Google Cloud. |
NETWORK_NAME | The name of the private network to use with the Cloud SQL replica. |
AVAILABILITY_TYPE | The availability type of the Cloud SQL replica. By default, the value is ZONAL. To make the replica HA, set the value to REGIONAL. To learn about the allowed values, see SqlAvailabilityType. After you create an external server HA replica, you cannot change it to a non-HA replica, and you cannot change an external server non-HA replica to an HA replica. Manual failover might lead to an unrecoverable migration if attempted while the instance is still loading the initial data, or to temporary downtime if the instance is already replicating from the source. Check the replication status. |
SOURCE_REPRESENTATION_INSTANCE_NAME | The name of the source representation instance. |
SOURCE_REGION | The region assigned to the source representation instance. |
DATABASE_VERSION | The database version to use with the Cloud SQL replica. The options for this version are POSTGRES_9_6, POSTGRES_10, POSTGRES_11, POSTGRES_12, POSTGRES_13, POSTGRES_14, POSTGRES_15, POSTGRES_16, or POSTGRES_17. Match the database version running on your external server, or set the value to no more than one version later. |
REPLICA_NAME | The name of the Cloud SQL replica to create. |
Create the Cloud SQL replica
Before you start this step, create a JSON file that contains your replica request data. Then, to create a Cloud SQL replica, open a Cloud Shell terminal and run these commands:
curl
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
     --header 'Content-Type: application/json' \
     --data @JSON_PATH \
     -X POST \
     https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances
example
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
     --header 'Content-Type: application/json' \
     --data @./replica.json \
     -X POST \
     https://sqladmin.googleapis.com/sql/v1beta4/projects/MyProject/instances
Property | Description |
---|---|
PROJECT_ID | The ID of your project in Google Cloud, which must be the same as that of the source instance. |
JSON_PATH | The path to the JSON file that contains the request data for the Cloud SQL replica. |
Verify your setup
To ensure your instances were set up correctly, go to the Cloud SQL Instances page.
You should see your source representation instance and the Cloud SQL replica, in a listing similar to the following:
Instance ID | Type | Public IP |
---|---|---|
source-representation-instance | Database external primary | 10.68.48.3:5432 |
replica-instance | Database read replica | 34.66.48.59 |
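You can run the same check from the command line. This is a sketch that uses the example instance names from the listing above.

gcloud sql instances list \
    --filter="name:(source-representation-instance OR replica-instance)"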
Also make sure that you have the cloudsql.instances.migrate permission on the Cloud SQL replica. This permission is included in the cloudsql.admin or cloudsql.editor IAM roles.
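One way to check which project-level roles your account holds is the following sketch; replace PROJECT_ID and YOUR_EMAIL, and note that roles granted through groups or at the folder or organization level won't appear in this output.

gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:user:YOUR_EMAIL" \
    --format="value(bindings.role)"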
Add users to the Cloud SQL replica
Note: You can skip this step if you do not need to add any additional database user accounts to the Cloud SQL replica.
You cannot import database user accounts from the external server, but you can create them on a Cloud SQL replica. Do this before you replicate from the external server.
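For example, you can create a database user on the replica with gcloud. This is a minimal sketch; the instance name, user name, and password are placeholders.

gcloud sql users create APP_USER \
    --instance=replica-instance \
    --password=APP_USER_PASSWORD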
Get the Cloud SQL replica's outgoing IP address
You can use the outgoing IP address of the Cloud SQL replica to create a secure connection between the external server and the Cloud SQL replica. You won't be charged for this IP address.
Console
To get the outgoing IP address for the replica, do the following:
In the Google Cloud console, go to the Cloud SQL Instances page.
Next to the Cloud SQL replica's public IP address, hold the pointer over the More info tooltip and retrieve the outgoing IP address.
Note that the outgoing IP address is not the IP address displayed in the main listing for the replica in the Google Cloud console.
gcloud
To get the outgoing IP address for the replica, run the following command:
gcloud sql instances describe REPLICA_NAME --format="default(ipAddresses)"
Property | Description |
---|---|
REPLICA_NAME | The name of the Cloud SQL replica whose outgoing public IP address you want to retrieve. |
Allow incoming connections on the external server
The Cloud SQL replica needs to connect to the external server for replication to succeed. You must configure the network firewall for your external server to accept connections from the Cloud SQL replica's outgoing IP address if the following conditions apply:
- The external server is behind a firewall or some other network restriction.
- Your Cloud SQL replica is using a public IP.
To connect to the Cloud SQL replica, you use the replica's primary IP address. This IP address is displayed in the Google Cloud console.
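What this configuration looks like depends on your environment. As one hedged example, on a self-managed Linux source you might open the PostgreSQL port for the replica's outgoing address and add a pg_hba.conf entry for the replication user; the 203.0.113.10 address, the file path, and the ufw firewall are placeholders for your own setup.

# Allow the replica's outgoing IP address through the host firewall (ufw shown as an example)
sudo ufw allow from 203.0.113.10 to any port 5432 proto tcp

# Allow the replication user to connect over SSL from that address (path varies by distribution and version)
echo "hostssl all replicationUser 203.0.113.10/32 md5" | sudo tee -a /etc/postgresql/16/main/pg_hba.conf

# Reload PostgreSQL so that the pg_hba.conf change takes effect
sudo systemctl reload postgresql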
Update the source representation instance to allow replication to the Cloud SQL replica
After you set up the source representation instance for the Cloud SQL replica, you might need to update the source representation instance. For example, these scenarios require an update to your configurations:
- The host, port, or IP of the external server changes.
- You want to use a different PostgreSQL replication user.
- The password of the PostgreSQL replication user changes.
- The SSL certificates used to securely connect to the external server change.
Seed the Cloud SQL replica
For the initial loading of data from the external server into the Cloud SQL replica, use a managed import. It uses a service that extracts data from the external server and imports it into the Cloud SQL instance directly. For more information, see Using a managed import to set up replication from external databases.
Monitor replication
When the Cloud SQL replica finishes the initial data load, it connects to the external server and applies all updates that were made after the export operation. Confirm your replication status.
It's important to check the replication status before promoting the replica to a standalone instance. If the replication process isn't successfully completed, a promoted replica doesn't have all the changes from your external server.
If replication delay is not trending toward 0, take steps to address it. You might want to check these metrics: postgresql/external_sync/initial_sync_complete, postgresql/external_sync/max_replica_byte_lag, and database/replication/state. View the list of Cloud SQL metrics.
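You can also watch the lag from the source side. The following sketch, for PostgreSQL 10 and later, reports how far each logical replication slot is behind the current WAL position; on 9.x the function names differ (pg_xlog_location_diff and pg_current_xlog_location, as shown earlier on this page), and the connection parameters are placeholders.

psql "host=SOURCE_HOST user=replicationUser dbname=postgres" -c "
SELECT slot_name,
       active,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes
FROM pg_replication_slots
WHERE slot_type = 'logical';"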
If you want to migrate a subset of databases from the source representation instance to the destination Cloud SQL instance, then check the following per-database metrics:
Metric | Description |
---|---|
per_database/postgresql/external_sync/initial_sync_complete | Understand the migration phase of a database. If the value for this metric is 0, then the database is still part of the initial data dump. If the value is 1, then the database has completed the initial data dump and is in the change data capture (CDC) phase. |
per_database/postgresql/external_sync/replication_byte_lag | Know the replication lag for a database (in bytes). |
After the Cloud SQL replica has caught up with the external server and there's no replication delay on the Cloud SQL replica, connect to your database. Run the appropriate database commands to make sure that the contents are as expected when compared with the external server. Retain your external server until the necessary validations are done.
Set up a cascading replica
After migration, you can create cascading read replicas under your Cloud SQL replica before promoting the Cloud SQL replica.
To create a cascading replica, complete the following steps:
Console
In the Google Cloud console, go to the Cloud SQL Instances page.
- Click the Replicas tab for the replica that will act as a parent for the replica that you want to create.
- Click Create replica.
- On the Create read replica page, update the instance ID, and any other configuration options, including the name, region, and zone.
- Click Create.
Cloud SQL creates a replica. You're returned to the instance page for the parent replica.
- Follow steps 4-6 for each new cascading replica that you want to create.
gcloud
- Create the new replica by specifying your Cloud SQL replica as the primary instance using the --master-instance-name flag:
gcloud sql instances create REPLICA_NAME \
  --master-instance-name=PARENT_REPLICA_NAME
- REPLICA_NAME: the unique ID for the replica that you are creating
- PARENT_REPLICA_NAME: the name of the Cloud SQL replica
After the replica is created, you can see that the changes made to the primary instance are replicated through all the replicas in the cascading replicas chain.
curl
- To create a replica under the parent replica, edit the following JSON code sample and save it to a file called request.json:
{
  "masterInstanceName": "EXTERNAL_SERVER_REPLICA_NAME",
  "project": "PROJECT_ID",
  "name": "REPLICA_NAME",
  "region": "REPLICA_REGION",
  "settings": {
    "tier": "MACHINE_TYPE"
  }
}
- Run the following command:
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances"
Promote the replica
Promote your replica by following these steps:
- Promote the replica to a primary instance.
- Add read replicas to your instance.
- Optional: Configure your instance for high availability (HA). To prevent additional downtime, you can enable HA while setting up a replica by setting AVAILABILITY_TYPE to REGIONAL.
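As a hedged example, promoting the replica from the command line looks like the following; replica-instance is a placeholder, and you should confirm there is no replication delay before running it.

gcloud sql instances promote-replica replica-instance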
Limitations
If you install extensions on your external source databases that Cloud SQL doesn't support, then when you migrate the databases to a destination instance, Cloud SQL won't migrate these extensions. To ensure a smooth migration, verify that no objects or applications reference the extensions. Before proceeding with the migration, we recommend removing the extensions along with any references from the source databases.
For more information about the extensions that Cloud SQL supports, see Configure PostgreSQL extensions.
If you install the pg_cron extension on your external source databases, then when you migrate the databases to a destination instance, Cloud SQL doesn't migrate the extension or any cron settings associated with the extension. After you migrate the databases and promote the replica, Google recommends that you re-enable the pg_cron extension on each migrated database (a sketch appears after these limitations).
You can't migrate data from version 11 of a source Microsoft Azure server to version 11 of a destination Cloud SQL for PostgreSQL instance. To resolve this, migrate the data to a Cloud SQL instance that has a version of 12 or later.
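As a sketch of re-enabling pg_cron after promotion, under the assumption that the promoted instance is named replica-instance: turn on the cloudsql.enable_pg_cron flag and recreate the extension in each migrated database. Confirm the flag name and any additional requirements on the Configure PostgreSQL extensions page before relying on this.

# Enable the pg_cron flag on the promoted instance (patching flags replaces the whole
# flag list, so include any flags you already have set)
gcloud sql instances patch replica-instance \
    --database-flags=cloudsql.enable_pg_cron=on

# Recreate the extension in each migrated database
psql "host=REPLICA_IP user=postgres dbname=DATABASE_NAME" -c "CREATE EXTENSION IF NOT EXISTS pg_cron;"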
What's next
- Learn how to use a managed import to set up replication from external databases.