Use point-in-time recovery (PITR)
This page describes how to use point-in-time recovery (PITR) to restore your primary Cloud SQL instance.
To learn more about PITR, see Point-in-time recovery (PITR).
Note: This page contains features related to Cloud SQL editions. For more information about Cloud SQL editions, see Introduction to Cloud SQL editions. If you create a Cloud SQL Enterprise Plus edition instance, then PITR is enabled by default, regardless of the method used for creation. If you want to disable the feature, then you must do so manually.
If you create a Cloud SQL Enterprise edition instance in the Google Cloud console, then PITR is enabled by default. Otherwise, if you create the instance by using the gcloud CLI, Terraform, or the Cloud SQL Admin API, then PITR is disabled by default. In this case, if you want to enable the feature, then you must do so manually.
Log storage for PITR
Cloud SQL uses write-ahead logging (WAL) archiving for PITR. On January 9, 2023, we launched storing write-ahead logs for PITR in Cloud Storage. Since this launch, the following conditions apply:
- All Cloud SQL Enterprise Plus edition instances store their write-ahead logs in Cloud Storage. Only Cloud SQL Enterprise Plus edition instances that you upgraded from Cloud SQL Enterprise edition and that had PITR enabled before January 9, 2023 continue to store their logs on disk.
- Cloud SQL Enterprise edition instances created with PITR enabled before January 9, 2023 continue to store their logs on disk.
- If you upgrade a Cloud SQL Enterprise edition instance that stores transaction logs for PITR on disk to Cloud SQL Enterprise Plus edition after August 15, 2024, then the upgrade process switches the storage location of the transaction logs used for PITR to Cloud Storage for you. For more information, see Upgrade an instance to Cloud SQL Enterprise Plus edition by using in-place upgrade.
- All Cloud SQL Enterprise edition instances that you create with PITR enabled after January 9, 2023 store logs in Cloud Storage.
For instances that store write-ahead logs only on disk, you can switch the storage location of the transaction logs used for PITR from disk to Cloud Storage by using the gcloud CLI or the Cloud SQL Admin API without incurring any downtime. For more information, see Switch transaction log storage to Cloud Storage.
Note: For instances that were created before January 9, 2023, Cloud SQL switches the location of the transaction logs used for PITR automatically for you. You can use the gcloud CLI or the Cloud SQL Admin API to check where transaction logs are stored for your instance.
Log retention period
To see whether an instance stores the logs used for PITR in Cloud Storage, see Check the storage location of transaction logs used for PITR.
Alternatively, after you use a PostgreSQL client such as psql or pgAdmin to connect to a database on the instance, run the following command: show archive_command. If any write-ahead logs are archived in Cloud Storage, then the output includes -async_archive -remote_storage.
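For example, a quick check might look like the following, where my-instance is a placeholder instance name and you connect as the postgres user:
gcloud sql connect my-instance --user=postgres
postgres=> show archive_command;
If the output includes -async_archive -remote_storage, then the instance archives its write-ahead logs in Cloud Storage.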
All other existing instances that have PITR enabled continue to have their logs stored on disk.
If the logs are stored in Cloud Storage, then Cloud SQL uploads logs every five minutes or less. As a result, if a Cloud SQL instance is available, then the instance can be recovered to the latest time. However, if the instance isn't available, then the recovery point objective is typically five minutes or less. Use the gcloud CLI or Admin API to check for the latest time to which you can restore the instance, and perform the recovery to that time.
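For example, the following command (covered in Get the latest recovery time later on this page) returns the latest recoverable time for a hypothetical instance named my-instance:
gcloud sql instances get-latest-recovery-time my-instance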
The write-ahead logs used with PITR are deleted automatically with their associated automatic backup, which generally happens after the value set for transactionLogRetentionDays is met. This value is the number of days of transaction logs that Cloud SQL retains for PITR. For Cloud SQL Enterprise Plus edition, you can set the value from 1 to 35, and for Cloud SQL Enterprise edition, you can set the value from 1 to 7.
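As a quick check, you can read the currently configured retention value from the instance settings. The following sketch assumes an instance named my-instance and uses the backupConfiguration field shown in the API examples later on this page:
gcloud sql instances describe my-instance \
    --format="value(settings.backupConfiguration.transactionLogRetentionDays)"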
When you restore a backup on a Cloud SQL instance before enabling PITR, you lose the write-ahead logs that enable PITR.
For customer-managed encryption key (CMEK)-enabled instances, write-ahead logs are encrypted using the latest version of the CMEK. To perform a restore, all versions of the key that were the latest for the number of days that you configured for the retained-transaction-log-days parameter should be available.
For instances having write-ahead logs stored in Cloud Storage,the logs are stored in the same region as the primary instance. This log storage(up to 35 days for Cloud SQL Enterprise Plus edition and seven days for Cloud SQL Enterprise edition, the maximumlength for PITR) generates no additional cost per instance.
Logs and disk usage
If your instance has PITR enabled and the size of your write-ahead logs on disk is causing an issue for your instance, consider the following options:
- You can switch the storage location of the logs used for PITR from disk to Cloud Storage without downtime by using the gcloud CLI or the Cloud SQL Admin API.
- You can upgrade your instance to Cloud SQL Enterprise Plus edition.
- You can increase the instance storage size, but the increase in disk usage from write-ahead logs might be temporary.
- We recommend enabling automatic storage increase to avoid unexpected storage issues; see the example after this list. This recommendation applies only if your instance has PITR enabled and your logs are stored on disk.
- You can deactivate PITR if you want to delete logs and recover storage. Decreasing the write-ahead logs used doesn't shrink the size of the disk provisioned for the instance.
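For example, automatic storage increase can be enabled with a single patch command; my-instance is a placeholder instance name:
gcloud sql instances patch my-instance --storage-auto-increase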
Logs are purged once daily, not continuously. Setting log retention to two days means that at least two days of logs, and at most three days of logs, are retained. We recommend setting the number of backups to one more than the days of log retention.
For example, if you specify 7 for the value of the transactionLogRetentionDays parameter, then for the backupRetentionSettings parameter, set the number of retainedBackups to 8.
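For example, a sketch of setting both values together with the gcloud CLI, assuming an instance named my-instance:
gcloud sql instances patch my-instance \
    --retained-transaction-log-days=7 \
    --retained-backups-count=8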
Enable PITR
When you create a new instance in the Google Cloud console, both Automated backups and Enable point-in-time recovery are automatically enabled. The following procedure enables PITR on an existing primary instance.
Console
In the Google Cloud console, go to the Cloud SQL Instances page.
- Open the more actions menu for the instance that you want to enable PITR on and click Edit.
- Under Customize your instance, expand the Data Protection section.
- Select the Enable point-in-time recovery checkbox.
- In the Days of logs field, enter the number of days to retain logs, from 1-35 for Cloud SQL Enterprise Plus edition, or 1-7 for Cloud SQL Enterprise edition.
- Click Save.
gcloud
- Display the instance overview:
gcloud sql instances describe INSTANCE_NAME
- If you see enabled: false in the backupConfiguration section, enable scheduled backups:
gcloud sql instances patch INSTANCE_NAME \
--backup-start-time=HH:MM
Specify the backup-start-time parameter using 24-hour time in the UTC±00 time zone.
- Enable PITR:
gcloud sql instances patch INSTANCE_NAME \
--enable-point-in-time-recovery
If you're enabling PITR on a primary instance, you can also configure the number of days for which you want to retain transaction logs by adding the following parameter:
--retained-transaction-log-days=RETAINED_TRANSACTION_LOG_DAYS
- Confirm your change:
gcloud sql instances describe INSTANCE_NAME
In the backupConfiguration section, you see pointInTimeRecoveryEnabled: true if the change was successful.
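For example, to enable PITR and set a seven-day transaction log retention period in a single patch command, assuming an instance named my-instance:
gcloud sql instances patch my-instance \
    --enable-point-in-time-recovery \
    --retained-transaction-log-days=7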
Terraform
To enable PITR, use a Terraform resource.
Note: The following default behavior applies:
- If you create a Cloud SQL Enterprise Plus edition instance, then PITR is enabled by default, regardless of the method used for creation. If you want to disable the feature, then you must do so manually.
- If you create a Cloud SQL Enterprise edition instance in the Google Cloud console, then PITR is enabled by default. Otherwise, if you create the instance by using the gcloud CLI, Terraform, or the Cloud SQL Admin API, then PITR is disabled by default. In this case, if you want to enable the feature, then you must do so manually.
resource "google_sql_database_instance" "postgres_instance_pitr" {
  name             = ""
  region           = "us-central1"
  database_version = "POSTGRES_14"
  settings {
    tier = "db-custom-2-7680"
    backup_configuration {
      enabled                        = true
      point_in_time_recovery_enabled = true
      start_time                     = "20:55"
      transaction_log_retention_days = "3"
    }
  }
  # Setting `deletion_protection` to true ensures that you can't accidentally delete this instance by
  # using Terraform, whereas the `deletion_protection_enabled` flag protects the instance at the GCP level.
  deletion_protection = false
}
Apply the changes
To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell
- Launch Cloud Shell.
Set the default Google Cloud project where you want to apply your Terraform configurations.
You only need to run this command once per project, and you can run it in any directory.
export GOOGLE_CLOUD_PROJECT=PROJECT_ID
Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).
- In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension, for example main.tf. In this tutorial, the file is referred to as main.tf.
mkdir DIRECTORY && cd DIRECTORY && touch main.tf
If you are following a tutorial, you can copy the sample code in each section or step.
- Copy the sample code into the newly created main.tf. Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
- Review and modify the sample parameters to apply to your environment.
- Save your changes.
- Initialize Terraform. You only need to do this once per directory.
terraform init
Optionally, to use the latest Google provider version, include the -upgrade option:
terraform init -upgrade
Apply the changes
- Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
terraform plan
Make corrections to the configuration as necessary.
- Apply the Terraform configuration by running the following command and entering yes at the prompt:
terraform apply
Wait until Terraform displays the "Apply complete!" message.
- Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Delete the changes
To delete your changes, do the following:
- To disable deletion protection, in your Terraform configuration file, set the deletion_protection argument to false.
deletion_protection = "false"
- Apply the updated Terraform configuration by running the following command and entering yes at the prompt:
terraform apply
- Remove resources previously applied with your Terraform configuration by running the following command and entering yes at the prompt:
terraform destroy
REST v1
Before using any of the request data, make the following replacements:
- PROJECT_ID: the ID or project number of the Google Cloud project that contains the instance
- INSTANCE_NAME: the name of the primary instance for which you're enabling PITR
- START_TIME: the time (in hours and minutes)
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME
Request JSON body:
{ "settings": { "backupConfiguration": { "startTime": "START_TIME", "enabled": true, "pointInTimeRecoveryEnabled": true } }}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "OPERATION_ID", "targetId": "INSTANCE_NAME", "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID"}
REST v1beta4
Before using any of the request data, make the following replacements:
- PROJECT_ID: the ID or project number of the Google Cloud project that contains the instance
- INSTANCE_NAME: the name of the primary instance for which you're enabling PITR
- START_TIME: the time (in hours and minutes)
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME
Request JSON body:
{ "settings": { "backupConfiguration": { "startTime": "START_TIME", "enabled": true, "pointInTimeRecoveryEnabled": true } }}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "OPERATION_ID", "targetId": "INSTANCE_NAME", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID"}
Perform PITR on an unavailable instance
Console
You might want to recover an instance that isn't available to a different zone for the following reasons:
- The zone in which the instance is configured isn't accessible. This instance has a FAILED state.
- The instance is undergoing maintenance. This instance has a MAINTENANCE state.
To recover an unavailable instance, complete the following steps:
In the Google Cloud console, go to the Cloud SQL Instances page.
- Find the row of the instance to clone.
- In the Actions column, click the More Actions menu.
- Click Create clone.
- On the Create a clone page, complete the following actions:
- In the Instance ID field, update the instance ID, if needed.
- Click Clone from an earlier point in time.
- In the Point in time field, select a date and time from which you want to clone data. This recovers the state of the instance from that point in time.
- Click Create clone.
While the clone initializes, you're returned to the instance listing page.
gcloud
You might want to recover an instance that isn't available to a different zone because the zone in which the instance is configured isn't accessible.
gcloud sql instances clone SOURCE_INSTANCE_NAME TARGET_INSTANCE_NAME \
--point-in-time DATE_AND_TIME_STAMP \
--preferred-zone ZONE_NAME \
--preferred-secondary-zone SECONDARY_ZONE_NAME
The user or service account that's running the gcloud sql instances clone command must have the cloudsql.instances.clone permission. For more information about required permissions to run gcloud CLI commands, see Cloud SQL permissions.
REST v1
You might want to recover an instance that isn't available to a different zone because the zone in which the instance is configured isn't accessible.
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID.
- SOURCE_INSTANCE_NAME: the name of the source instance.
- TARGET_INSTANCE_NAME: the name of the target (cloned) instance.
- DATE_AND_TIME_STAMP: a date-and-time stamp for the source instance in the UTC time zone and in RFC 3339 format (for example, 2012-11-15T16:19:00.094Z).
- ZONE_NAME: Optional. The name of the primary zone for the target instance. This is used to specify a different primary zone for the Cloud SQL instance that you want to clone. For a regional instance, this zone replaces the primary zone, but the secondary zone remains the same as that of the source instance.
- SECONDARY_ZONE_NAME: Optional. The name of the secondary zone for the target instance. This is used to specify a different secondary zone for the regional Cloud SQL instance that you want to clone.
HTTP method and URL:
POST https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/SOURCE_INSTANCE_NAME/clone
Request JSON body:
{ "cloneContext": { "destinationInstanceName": "TARGET_INSTANCE_NAME", "pointInTime": "DATE_AND_TIME_STAMP", "preferredZone": "ZONE_NAME", "preferredSecondaryZone": "SECONDARY_ZONE_NAME" }}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/SOURCE_INSTANCE_NAME/clone"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/SOURCE_INSTANCE_NAME/clone" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/TARGET_INSTANCE_NAME", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "CLONE", "name": "OPERATION_ID", "targetId": "TARGET_INSTANCE_NAME", "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID", "instanceUid": "INSTANCE_ID"}
The user or service account that's using the instances.clone API method must have the cloudsql.instances.clone permission. For more information about required permissions to use API methods, see Cloud SQL permissions.
REST v1beta4
You might want to recover an instance that isn't available to a different zone because the zone in which the instance is configured isn't accessible.
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID.
- SOURCE_INSTANCE_NAME: the name of the source instance.
- TARGET_INSTANCE_NAME: the name of the target (cloned) instance.
- DATE_AND_TIME_STAMP: a date-and-time stamp for the source instance in the UTC time zone and in RFC 3339 format (for example, 2012-11-15T16:19:00.094Z).
- ZONE_NAME: Optional. The name of the primary zone for the target instance. This is used to specify a different primary zone for the Cloud SQL instance that you want to clone. For a regional instance, this zone replaces the primary zone, but the secondary zone remains the same as that of the source instance.
- SECONDARY_ZONE_NAME: Optional. The name of the secondary zone for the target instance. This is used to specify a different secondary zone for the regional Cloud SQL instance that you want to clone.
HTTP method and URL:
POST https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/SOURCE_INSTANCE_NAME/clone
Request JSON body:
{ "cloneContext": { "destinationInstanceName": "TARGET_INSTANCE_NAME", "pointInTime": "DATE_AND_TIME_STAMP", "preferredZone": "ZONE_NAME", "preferredSecondaryZone": "SECONDARY_ZONE_NAME" }}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/SOURCE_INSTANCE_NAME/clone"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/SOURCE_INSTANCE_NAME/clone" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/TARGET_INSTANCE_NAME", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "CLONE", "name": "OPERATION_ID", "targetId": "TARGET_INSTANCE_ID", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID", "instanceUid": "INSTANCE_ID"}
The user or service account that's using the instances.clone API method must have the cloudsql.instances.clone permission. For more information about required permissions to use API methods, see Cloud SQL permissions.
If you try to create a PITR clone at a time after the latest recoverable time, then the following error message appears:
The timestamp for point-in-time recovery is after the latest recovery time of [timestamp of latest recovery time]. Clone the instance with a time that's earlier than this recovery time.
Get the latest recovery time
For an available instance, you can perform PITR to the latest time. If the instance is unavailable and the instance logs are stored in Cloud Storage, then you can retrieve the latest recovery time and perform the PITR to that time. In both cases, you can restore the instance to a different primary or secondary zone by providing values for the preferred zones.
gcloud
Get the latest time to which you can recover a Cloud SQL instance that's not available.
Replace INSTANCE_NAME with the name of the instance that you're querying.
gcloud sql instances get-latest-recovery-time INSTANCE_NAME
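You can then pass the returned timestamp to a clone operation, as described in Perform PITR on an unavailable instance. For example, with placeholder instance names and the timestamp value shown in the sample responses later in this section:
gcloud sql instances clone my-instance my-instance-recovered \
    --point-in-time '2023-06-20T17:23:59.648821586Z'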
REST v1
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID
- INSTANCE_NAME: the name of the instance for which you're querying for the latest recovery time
HTTP method and URL:
GET https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "kind": "sql#getLatestRecoveryTime", "latestRecoveryTime": "2023-06-20T17:23:59.648821586Z"}
REST v1beta4
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID
- INSTANCE_NAME: the name of the instance for which you're querying for the latest recovery time
HTTP method and URL:
GET https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "kind": "sql#getLatestRecoveryTime", "latestRecoveryTime": "2023-06-20T17:23:59.648821586Z"}
Perform PITR
Console
In the Google Cloud console, go to the Cloud SQL Instances page.
- Open the more actions menu for the instance that you want to recover and click Create clone.
- Optionally, on the Create a clone page, update the ID of the new clone.
- Select Clone from an earlier point in time.
- Enter a PITR time.
- Click Create clone.
gcloud
Create a clone using PITR.
Replace the following:
- SOURCE_INSTANCE_NAME: the name of the instance that you're restoring from.
- NEW_INSTANCE_NAME: the name for the clone.
- TIMESTAMP: the timestamp for the source instance, in the UTC time zone and in RFC 3339 format. For example, 2012-11-15T16:19:00.094Z.
gcloud sql instances clone SOURCE_INSTANCE_NAME \
NEW_INSTANCE_NAME \
--point-in-time 'TIMESTAMP'
REST v1
Before using any of the request data, make the following replacements:
- project-id: The project ID
- target-instance-id: The target instance ID
- source-instance-id: The source instance ID
- restore-timestamp: The point in time to restore up to
HTTP method and URL:
POST https://sqladmin.googleapis.com/v1/projects/project-id/instances/source-instance-id/clone
Request JSON body:
{ "cloneContext": { "kind": "sql#cloneContext", "destinationInstanceName": "target-instance-id", "pointInTime": "restore-timestamp" }}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/project-id/instances/source-instance-id/clone"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/project-id/instances/source-instance-id/clone" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/project-id/instances/target-instance-id", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "CREATE", "name": "operation-id", "targetId": "target-instance-id", "selfLink": "https://sqladmin.googleapis.com/v1/projects/project-id/operations/operation-id", "targetProject": "project-id"}
REST v1beta4
Before using any of the request data, make the following replacements:
- project-id: The project ID
- target-instance-id: The target instance ID
- source-instance-id: The source instance ID
- restore-timestamp: The point in time to restore up to
HTTP method and URL:
POST https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/source-instance-id/clone
Request JSON body:
{ "cloneContext": { "kind": "sql#cloneContext", "destinationInstanceName": "target-instance-id", "pointInTime": "restore-timestamp" }}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/source-instance-id/clone"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/source-instance-id/clone" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/target-instance-id", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "CREATE", "name": "operation-id", "targetId": "target-instance-id", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/operations/operation-id", "targetProject": "project-id"}
Deactivate PITR
Console
In the Google Cloud console, go to the Cloud SQL Instances page.
- Open the more actions menu for the instance that you want to deactivate PITR on and select Edit.
- Under Customize your instance, expand the Data Protection section.
- Clear Enable point-in-time recovery.
- Click Save.
gcloud
- Deactivate point-in-time recovery:
gcloud sql instances patch INSTANCE_NAME \
--no-enable-point-in-time-recovery
- Confirm your change:
gcloud sql instances describe INSTANCE_NAME
In the backupConfiguration section, you see pointInTimeRecoveryEnabled: false if the change was successful.
REST v1
Before using any of the request data, make the following replacements:
- project-id: The project ID
- instance-id: The instance ID
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id
Request JSON body:
{ "settings": { "backupConfiguration": { "enabled": false, "pointInTimeRecoveryEnabled": false } }}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "operation-id", "targetId": "instance-id", "selfLink": "https://sqladmin.googleapis.com/v1/projects/project-id/operations/operation-id", "targetProject": "project-id"}
REST v1beta4
Before using any of the request data, make the following replacements:
- project-id: The project ID
- instance-id: The instance ID
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id
Request JSON body:
{ "settings": { "backupConfiguration": { "enabled": false, "pointInTimeRecoveryEnabled": false } }}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "operation-id", "targetId": "instance-id", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/operations/operation-id", "targetProject": "project-id"}
Check the storage location of transaction logs used for PITR
You can check where your Cloud SQL instance is storing the transaction logs used for PITR.
gcloud
To determine whether your instance stores logs for PITR on disk or Cloud Storage, use the following command:
gcloud sql instances describe INSTANCE_NAME
Replace INSTANCE_NAME with the name of the instance.
For multiple instances in the same project, you can also check the storage location of the transaction logs. To determine the location for multiple instances, use the following command:
gcloud sql instances list --show-transactional-log-storage-state
Example response:
NAME   DATABASE_VERSION  LOCATION      TRANSACTIONAL_LOG_STORAGE_STATE
my_01  POSTGRES_12       us-central-1  DISK
my_02  POSTGRES_12       us-central-1  CLOUD_STORAGE
...
In the output of the command, the transactionalLogStorageState field or the TRANSACTIONAL_LOG_STORAGE_STATE column provides information about where the transaction logs for PITR are stored for the instance. The possible transaction log storage states are the following:
- DISK: the instance stores the transaction logs used for PITR on disk. If you upgrade a Cloud SQL Enterprise edition instance to Cloud SQL Enterprise Plus edition, then the upgrade process switches the log storage location to Cloud Storage automatically. For more information, see Upgrade an instance to Cloud SQL Enterprise Plus edition by using in-place upgrade. You can also choose to switch the storage location by using the gcloud CLI or the Cloud SQL Admin API without upgrading the edition of your instance and without incurring any downtime. For more information, see Switch transaction log storage to Cloud Storage.
- SWITCHING_TO_CLOUD_STORAGE: the instance is switching the storage location for the PITR transaction logs to Cloud Storage.
- SWITCHED_TO_CLOUD_STORAGE: the instance has completed switching the storage location for the PITR transaction logs from disk to Cloud Storage.
- CLOUD_STORAGE: the instance stores the transaction logs used for PITR in Cloud Storage.
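For a single instance, one quick way to see this state is to filter the describe output; my-instance is a placeholder instance name:
gcloud sql instances describe my-instance | grep -i transactionalLogStorageState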
Switch transaction log storage to Cloud Storage
If your instance stores its transaction logs used for PITR on disk, then you can switch the storage location to Cloud Storage without incurring any downtime. The overall process of switching the storage location takes approximately the duration of the transaction log retention period (in days) to complete. As soon as you start the switch, transaction logs start accruing in Cloud Storage. During the operation, you can check the status of the overall process by using the command in Check the storage location of transaction logs used for PITR.
After the overall process of switching to Cloud Storage is complete,Cloud SQL uses transaction logs from Cloud Storage for PITR.
gcloud
To switch the storage location to Cloud Storage, use the following command:
gcloud sql instances patch INSTANCE_NAME \
--switch-transaction-logs-to-cloud-storage
Replace INSTANCE_NAME with the name of the instance. The instance must be a primary instance and not a replica instance. The response is similar to the following:
The following message is used for the patch API method.
{"name": "INSTANCE_NAME", "project": "PROJECT_NAME", "switchTransactionalLogsToCloudStorageEnabled": "true"}
Patching Cloud SQL instance...done.
Updated [https://sqladmin.prod.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME].
If the command returns an error, then see Troubleshoot the switch to Cloud Storage for possible next steps.
REST v1
Important: You can't make additional updates to the instance in the same request.
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID.
- INSTANCE_ID: the instance ID. The instance must be a primary instance and not a replica instance.
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID
Request JSON body:
{ "switchTransactionLogsToCloudStorageEnabled": true}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID", "status": "PENDING", "user": "user@example.com", "insertTime": "2024-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "OPERATION_ID", "targetId": "INSTANCE_ID", "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID"}
If the request returns an error, then see Troubleshoot the switch to Cloud Storage for possible next steps.
REST v1beta4
Important: You can't make additional updates to the instance in the same request.
Before using any of the request data, make the following replacements:
- PROJECT_ID: the project ID.
- INSTANCE_ID: the instance ID. The instance must be a primary instance and not a replica instance.
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID
Request JSON body:
{ "switchTransactionLogsToCloudStorageEnabled": true}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID", "status": "PENDING", "user": "user@example.com", "insertTime": "2024-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "OPERATION_ID", "targetId": "INSTANCE_ID", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID"}
If the request returns an error, then see Troubleshoot the switch to Cloud Storage for possible next steps.
Set transaction log retention
To set the number of days to retain write-ahead logs:
Console
In the Google Cloud console, go to the Cloud SQL Instances page.
- Open the more actions menu for the instance that you want to set transaction log retention on and select Edit.
- Under Customize your instance, expand the Data Protection section.
- In the Enable point-in-time recovery section, expand Advanced options.
- Enter the number of days to retain logs, from 1-35 for Cloud SQL Enterprise Plus edition or 1-7 for Cloud SQL Enterprise edition.
- Click Save.
gcloud
Edit the instance to set the number of days to retain write-ahead logs.
Replace the following:
- DAYS_TO_RETAIN: the number of days of transaction logs to keep. For Cloud SQL Enterprise Plus edition, the valid range is between 1 and 35 days, with a default of 14 days. For Cloud SQL Enterprise edition, the valid range is between 1 and 7 days, with a default of 7 days. If you don't specify a value, then Cloud SQL uses the default value. This is valid only when PITR is enabled. Keeping more days of transaction logs requires a bigger storage size.
gcloud sql instances patch INSTANCE_NAME \
--retained-transaction-log-days=DAYS_TO_RETAIN
REST v1
Before using any of the request data, make the following replacements:
- DAYS_TO_RETAIN: the number of days to retain transaction logs. For Cloud SQL Enterprise Plus edition, the valid range is between 1 and 35 days, with a default of 14 days. For Cloud SQL Enterprise edition, the valid range is between 1 and 7 days, with a default of 7 days. If no value is specified, then the default value is used. This is valid only when PITR is enabled. Keeping more days of transaction logs requires a bigger storage size.
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID
Request JSON body:
{ "settings": { "backupConfiguration": { "transactionLogRetentionDays": "DAYS_TO_RETAIN" } }}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "OPERATION_ID", "targetId": "INSTANCE_ID", "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID"}
REST v1beta4
Before using any of the request data, make the following replacements:
- DAYS_TO_RETAIN: the number of days to retain transaction logs. For Cloud SQL Enterprise Plus edition, the valid range is between 1 and 35 days, with a default of 14 days. For Cloud SQL Enterprise edition, the valid range is between 1 and 7 days, with a default of 7 days. If no value is specified, then the default value is used. This is valid only when PITR is enabled. Keeping more days of transaction logs requires a bigger storage size.
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID
Request JSON body:
{ "settings": { "backupConfiguration": { "transactionLogRetentionDays": "DAYS_TO_RETAIN" } }}
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response
{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "OPERATION_ID", "targetId": "INSTANCE_ID", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID"}
Troubleshoot
Issue | Troubleshooting |
---|---|
 | The timestamp that you provided is invalid. |
 | The timestamp that you provided is for a time when backups or binlog coordinates couldn't be found. |
Troubleshoot the switch to Cloud Storage
The following table lists possible errors that might be returned with the INVALID REQUEST code when you switch the storage location of the transaction logs from disk to Cloud Storage.
Issue | Troubleshooting |
---|---|
Switching the storage location of the transaction logs used for PITR is not supported for instances with database type %s. | Make sure that you're running the gcloud CLI command or making the API request on a Cloud SQL for MySQL or Cloud SQL for PostgreSQL instance. Switching the storage location for transaction logs by using the gcloud CLI or the Cloud SQL Admin API is not supported for Cloud SQL for SQL Server. |
PostgreSQL transactional logging is not enabled on this instance. | PostgreSQL uses write-ahead logging as the transaction logs for point-in-time recovery (PITR). To support PITR, PostgreSQL requires that you enable write-ahead logging on the instance. For more information about how to enable write-ahead logging, see Enable PITR. |
This instance is already storing transaction logs used for PITR in Cloud Storage. | To verify the storage location of the transaction logs, run the command in Check the storage location of transaction logs used for PITR. |
The instance is already switching transaction logs used for PITR from disk to Cloud Storage. | Wait for the switch operation to complete. To verify the status of the operation and the storage location of the transaction logs, run the command in Check the storage location of transaction logs used for PITR. |
What's next
- Configure flags on your clone