Known issues
This page lists known issues with Cloud SQL for PostgreSQL, along with ways you can avoid or recover from these issues.
If you are experiencing issues with your instance, make sure you also review the information in Diagnosing Issues.
Instance connection issues
Expired SSL/TLS certificates
If your instance is configured to use SSL, go to the Cloud SQL Instances page in the Google Cloud console and open the instance. Open its Connections page, select the Security tab, and make sure that your server certificate is valid. If it has expired, you must add a new certificate and rotate to it.
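If you prefer the gcloud CLI over the console, a rotation sketch might look like the following; the instance name is a placeholder, and you should confirm that these server CA certificate commands apply to your instance's configuration:
gcloud sql ssl server-ca-certs create --instance=INSTANCE_NAME
gcloud sql ssl server-ca-certs list --instance=INSTANCE_NAME
gcloud sql ssl server-ca-certs rotate --instance=INSTANCE_NAME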
Cloud SQL Auth Proxy version
If you are connecting using the Cloud SQL Auth Proxy, make sure you are using the most recent version. For more information, see Keeping the Cloud SQL Auth Proxy up to date.
Not authorized to connect
If you try to connect to an instance that does not exist in that project, the error message only says that you are not authorized to access that instance.
Can't create a Cloud SQL instance
If you see the following error message, try to create the Cloud SQL instance again:
Failed to create subnetwork. Router status is temporarily unavailable. Please try again later. Help Token: [token-ID]
The following command only works with the default user ('postgres'):
gcloud sql connect --user
If you try to connect using this command with any other user, the error message says FATAL: database 'user' does not exist. The workaround is to connect using the default user ('postgres'), and then use the "\c" psql command to reconnect as the different user.
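For example, assuming a hypothetical instance named myinstance, a database named mydb, and a user named myuser, the workaround might look like this:
gcloud sql connect myinstance --user=postgres
postgres=> \c mydb myuser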
PostgreSQL connections hang when IAM db proxy authentication is enabled.
When the Cloud SQL Auth Proxy is started using TCP sockets and with the -enable_iam_login flag, a PostgreSQL client hangs during the TCP connection. One workaround is to use sslmode=disable in the PostgreSQL connection string. For example:
psql "host=127.0.0.1 dbname=postgres user=me@google.com sslmode=disable"
Another workaround is to start the Cloud SQL Auth Proxy using Unix sockets. This turns off PostgreSQL SSL encryption and lets the Cloud SQL Auth Proxy do the SSL encryption instead.
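For example, with the version 1 Cloud SQL Auth Proxy binary, the Unix socket workaround might look like the following sketch; the project, region, instance, and user values are placeholders:
./cloud_sql_proxy -dir=/cloudsql -instances=myproject:us-central1:myinstance -enable_iam_login
psql "host=/cloudsql/myproject:us-central1:myinstance dbname=postgres user=me@example.com"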
Administrative issues
Only one long-running Cloud SQL import or export operation can run at a time on an instance. When you start an operation, make sure you don't need to perform other operations on the instance. Also, when you start an operation, you can cancel it.
PostgreSQL imports data in a single transaction. Therefore, if you cancel the import operation, then Cloud SQL doesn't persist data from the import.
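For example, a cancellation sketch using gcloud might look like the following; the instance name is a placeholder, and the operation ID comes from the list command:
gcloud sql operations list --instance=myinstance
gcloud sql operations cancel OPERATION_ID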
Issues with importing and exporting data
If your Cloud SQL instance uses PostgreSQL 17, but your databases use PostgreSQL 16 or earlier, then you can't use Cloud SQL to import these databases into your instance. To do this, use Database Migration Service.
If you use Database Migration Service to import a PostgreSQL 17 database into Cloud SQL, then it's imported as a PostgreSQL 16 database.
For PostgreSQL versions 15 and later, if the target database is created from template0, then importing data might fail and you might see a permission denied for schema public error message. To resolve this issue, provide public schema privileges to the cloudsqlsuperuser user by running the GRANT ALL ON SCHEMA public TO cloudsqlsuperuser SQL command.
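For example, connecting as the default postgres user to a hypothetical target database named mydb, the command might look like this:
psql "host=127.0.0.1 dbname=mydb user=postgres" -c "GRANT ALL ON SCHEMA public TO cloudsqlsuperuser;"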
Exporting many large objects causes the instance to become unresponsive
If your database contains many large objects (blobs), exporting the database can consume so much memory that the instance becomes unresponsive. This can happen even if the blobs are empty.
Cloud SQL doesn't support customized tablespaces, but it does support data migration from customized tablespaces to the default tablespace, pg_default, in the destination instance. For example, if you have a tablespace named dbspace located at /home/data, then after migration, all the data inside dbspace is migrated to pg_default. However, Cloud SQL doesn't create a tablespace named "dbspace" on its disk.
If you're trying to import and export data from a large database (for example, a database that has 500 GB of data or greater), then the import and export operations might take a long time to complete. In addition, other operations (for example, the backup operation) aren't available for you to perform while the import or export is occurring. A potential option to improve the performance of the import and export process is to restore a previous backup using gcloud or the API.
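For example, a restore sketch using gcloud might look like the following; the backup ID and instance names are placeholders:
gcloud sql backups list --instance=source-instance
gcloud sql backups restore BACKUP_ID --restore-instance=target-instance --backup-instance=source-instance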
- Cloud Storage supports a maximum single-object size of up to five terabytes. If you have databases larger than 5 TB, the export operation to Cloud Storage fails. In this case, you need to break down your export files into smaller segments.
Transaction logs and disk growth
Logs are purged once daily, not continuously. When the number of days of log retention is configured to be the same as the number of backups, a day of logging might be lost, depending on when the backup occurs. For example, setting log retention to seven days and backup retention to seven backups means that between six and seven days of logs will be retained.
We recommend setting the number of backups to at least one more than the days of log retention to guarantee at least the specified number of days of log retention.
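For example, to keep seven days of transaction logs and eight backups (one more than the log retention), a gcloud sketch might look like this; the instance name is a placeholder:
gcloud sql instances patch myinstance --retained-transaction-log-days=7 --retained-backups-count=8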
Note: Replica instances see a storage increase when replication is suspended and then resumed later. The increase is caused when the primary instance sends the replica the transaction logs for the period of time when replication was suspended. The transaction logs update the replica to the current state of the primary instance.
Issues related to Cloud Monitoring or Cloud Logging
Instances with the following region names are displayed incorrectly in certain contexts, as follows:
- us-central1 is displayed as us-central
- europe-west1 is displayed as europe
- asia-east1 is displayed as asia
This issue occurs in the following contexts:
- Alerting in Cloud Monitoring
- Metrics Explorer
- Cloud Logging
You can mitigate the issue for Alerting in Cloud Monitoring, and for Metrics Explorer, by using Resource metadata labels. Use the system metadata label region instead of the cloudsql_database monitored resource label region.
Issue related to deleting a PostgreSQL database
When you delete a database created in the Google Cloud console using your psql client, you may encounter the following error:
ERROR: must be owner of database [DATABASE_NAME]
This is a permission error since the owner of a database created using a psql client doesn't have Cloud SQL superuser attributes. Databases created using the Google Cloud console are owned by cloudsqlsuperuser, and databases created using a psql client are owned by the users connected to that database. Since Cloud SQL is a managed service, customers cannot create or have access to users with superuser attributes. For more information, see Superuser restrictions and privileges.
Due to this limitation, databases created using the Google Cloud console can only be deleted using the Google Cloud console, and databases created using a psql client can only be deleted by connecting as the owner of the database.
To find the owner of a database, use the following command:
SELECT d.datname AS Name, pg_catalog.pg_get_userbyid(d.datdba) AS Owner
FROM pg_catalog.pg_database d
WHERE d.datname = 'DATABASE_NAME';
Replace the following:
- DATABASE_NAME: the name of the database that you want to find owner information for.
If the owner of your database is cloudsqlsuperuser, then use the Google Cloud console to delete your database. If the owner of the database is a psql client database user, then connect as the database owner and run the DROP DATABASE command.
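For example, assuming the query above shows a hypothetical owner named myuser, you might connect as that user and drop the database:
psql "host=127.0.0.1 dbname=postgres user=myuser" -c 'DROP DATABASE "DATABASE_NAME";'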