Export and import files in parallel


This page describes exporting and importing files into Cloud SQL instances in parallel.

Note: If you're exporting because you want to create a new instance from the exported file, consider restoring from a backup to a different instance or cloning the instance.

You can verify that the import or export operation for multiple files in parallel completed successfully by checking the operation's status. You can also cancel the import of data into Cloud SQL instances and the export of data from the instances. For more information about cancelling an import or export operation, see Cancel the import and export of data.
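For example, assuming that the gcloud CLI is installed and authenticated, the following command lists recent operations for a hypothetical instance so that you can check whether an import or export finished:

    # List the most recent operations (imports, exports, and others) for an instance.
    # The instance name is a placeholder.
    gcloud sql operations list --instance=my-instance --limit=10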

Before you begin

Before you begin an export or import operation, make sure that you have the required roles and permissions, which are described in the following sections.

Export data from Cloud SQL for PostgreSQL to multiple files in parallel

The following sections contain information about exporting data from Cloud SQL for PostgreSQL to multiple files in parallel.

Required roles and permissions for exporting data from Cloud SQL for PostgreSQL to multiple files in parallel

To export data from Cloud SQL into Cloud Storage, the user initiating the export must have one of the following roles:

Additionally, the service account for the Cloud SQL instance must have one of the following roles:

  • The storage.objectAdmin Identity and Access Management (IAM) role
  • A custom role, including the following permissions:
    • storage.objects.create
    • storage.objects.list (for exporting files in parallel only)
    • storage.objects.delete (for exporting files in parallel only)

For help with IAM roles, see Identity and Access Management.
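If you use a custom role instead of storage.objectAdmin, the following is a minimal sketch of creating one with the gcloud CLI; the role ID and project ID are hypothetical, and the permission list matches the export case above:

    # Create a custom role carrying only the permissions needed for parallel exports.
    gcloud iam roles create cloudSqlParallelExport \
      --project=my-project \
      --title="Cloud SQL parallel export" \
      --permissions=storage.objects.create,storage.objects.list,storage.objects.delete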

Note: The changes that you make to the IAM permissions and roles might take a few minutes to take effect. For more information, see Access change propagation.

Export data to multiple files in parallel

You can export data from Cloud SQL to multiple files in Cloud Storage in parallel. To do this, Cloud SQL uses the pg_dump utility with the --jobs option.

If you plan to import your data into Cloud SQL, then follow the instructions provided in Exporting data from an external database server so that your files are formatted correctly for Cloud SQL.

Note: If your data contains large objects (blobs), then the export might consume a large amount of memory, impacting instance performance. For help, see Issues with importing and exporting data.
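The export output uses the pg_dump directory format. For reference, a roughly equivalent hand-run pg_dump invocation against a reachable PostgreSQL server might look like the following sketch; the host, user, database, and output directory are placeholders:

    # Directory-format dump written by three parallel jobs.
    pg_dump \
      --host=203.0.113.10 \
      --username=exportuser \
      --format=directory \
      --jobs=3 \
      --file=/tmp/mydb_dump \
      mydb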

gcloud

To export data from Cloud SQL to multiple files in parallel, complete the following steps:

  1. Create a Cloud Storage bucket.

    Note: You don't have to create a folder in the bucket. If the folder doesn't exist, then Cloud SQL creates it for you as a part of the process of exporting multiple files in parallel. However, if the folder exists, then it must be empty or the export operation fails.

  2. To find the service account for the Cloud SQL instance that you're exporting files from, use the gcloud sql instances describe command. Replace INSTANCE_NAME with the name of your Cloud SQL instance.

    gcloud sql instances describe INSTANCE_NAME

    In the output, look for the value that's associated with the serviceAccountEmailAddress field.

  3. To grant the storage.objectAdmin IAM role to the service account, use the gcloud storage buckets add-iam-policy-binding command. For help with setting IAM permissions, see Use IAM permissions.
  4. To export data from Cloud SQL to multiple files in parallel, use the gcloud sql export sql command:

    gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME \
      --offload \
      --parallel \
      --threads=THREAD_NUMBER \
      --database=DATABASE_NAME \
      --table=TABLE_EXPRESSION

    Make the following replacements:

    • INSTANCE_NAME: the name of the Cloud SQL instance from which you're exporting files in parallel.
    • BUCKET_NAME: the name of the Cloud Storage bucket.
    • BUCKET_PATH: the path to the bucket where the export files are stored.
    • FOLDER_NAME: the folder where the export files are stored.
    • THREAD_NUMBER: the number of threads that Cloud SQL uses to export files in parallel. For example, if you want to export three files at a time in parallel, then specify 3 as the value for this parameter.
    • DATABASE_NAME (optional): the name of the database inside of the Cloud SQL instance from which the export is made. If you don't specify a database, then Cloud SQL exports all user databases from the instance. For more information, see Export an entire instance.
    • TABLE_EXPRESSION: the tables to export from the specified database.
    Note: If you want to use serverless exports for up to 2 threads, then use the offload parameter. If you want to export multiple files in parallel, then use the parallel parameter. Otherwise, remove these parameters from the command.

    The export sql command doesn't include triggers or stored procedures, but does include views. To export triggers or stored procedures, use a single thread for the export. This thread uses the pg_dump tool.

    After the export completes, you should have files in a folder in the Cloud Storage bucket in the pg_dump directory format.

  5. If you don't need the IAM role that you set in Required roles and permissions for exporting from Cloud SQL for PostgreSQL, then revoke it. A combined sketch of these gcloud commands follows these steps.
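The following is a minimal end-to-end sketch of the preceding steps; the instance, bucket, folder, and database names are hypothetical, and SERVICE_ACCOUNT_EMAIL stands for the address reported by gcloud sql instances describe:

    # 1. Look up the instance's service account.
    gcloud sql instances describe my-instance \
      --format="value(serviceAccountEmailAddress)"

    # 2. Grant the service account write access to the bucket.
    gcloud storage buckets add-iam-policy-binding gs://my-bucket \
      --member=serviceAccount:SERVICE_ACCOUNT_EMAIL \
      --role=roles/storage.objectAdmin

    # 3. Run the parallel export with three threads.
    gcloud sql export sql my-instance gs://my-bucket/exports/mydb-dump \
      --offload \
      --parallel \
      --threads=3 \
      --database=mydb

    # 4. Revoke the role when the export is done.
    gcloud storage buckets remove-iam-policy-binding gs://my-bucket \
      --member=serviceAccount:SERVICE_ACCOUNT_EMAIL \
      --role=roles/storage.objectAdmin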

REST v1

To export data from Cloud SQL to multiple files in parallel, complete the following steps:

  1. Create a Cloud Storage bucket:
    gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME
    Make the following replacements:
    • BUCKET_NAME: the name of the bucket, subject to naming requirements. For example, my-bucket.
    • PROJECT_NAME: the name of the Google Cloud project that contains the Cloud Storage bucket you're creating.
    • LOCATION_NAME: the location of the bucket where you want to store the files you're exporting. For example, us-east1.

    Note: You don't have to create a folder in the bucket. If the folder doesn't exist, then Cloud SQL creates it for you as a part of the process of exporting multiple files in parallel. However, if the folder exists, then it must be empty or the export operation fails.

  2. Provide your instance with the legacyBucketWriter IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.
  3. Export data from Cloud SQL to multiple files in parallel:

    Before using any of the request data, make the following replacements:

    • PROJECT_NAME: the name of the Google Cloud project that contains the Cloud Storage bucket you created.
    • INSTANCE_NAME: the name of the Cloud SQL instance from which you're exporting files in parallel.
    • BUCKET_NAME: the name of the Cloud Storage bucket.
    • BUCKET_PATH: the path to the bucket where the export files are stored.
    • FOLDER_NAME: the folder where the export files are stored.
    • DATABASE_NAME: the name of the database inside of the Cloud SQL instance from which the export is made. If you want to export all user databases from the instance, remove this field from the request body.
    • THREAD_NUMBER: the number of threads that Cloud SQL uses to export files in parallel. For example, if you want to export three files at a time in parallel, then specify 3 as the value for this parameter.
    Note: The offload parameter enables you to use serverless exports for up to 2 threads. The parallel parameter enables you to export multiple files in parallel. To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.

    HTTP method and URL:

    POST https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/export

    Request JSON body:

    {
      "exportContext": {
        "fileType": "SQL",
        "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME",
        "databases": ["DATABASE_NAME"],
        "offload": [TRUE|FALSE],
        "sqlExportOptions": {
          "parallel": [TRUE|FALSE],
          "threads": [THREAD_NUMBER]
        }
      }
    }

    To send your request, use one of the following options:

    curl (Linux, macOS, or Cloud Shell)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/export"

    PowerShell (Windows)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/export" | Select-Object -Expand Content

    You should receive a JSON response similar to the following:

    Response

    {
      "kind": "sql#operation",
      "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
      "status": "PENDING",
      "user": "user@example.com",
      "insertTime": "2020-01-21T22:43:37.981Z",
      "operationType": "UPDATE",
      "name": "OPERATION_ID",
      "targetId": "INSTANCE_NAME",
      "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/operations/OPERATION_ID",
      "targetProject": "PROJECT_NAME"
    }
  4. After the export completes, you should have files in a folder in the Cloud Storage bucket in the pg_dump directory format.

  5. If you don't need the IAM role that you set in Required roles and permissions for exporting from Cloud SQL for PostgreSQL, then revoke it.
For the complete list of parameters for the request, see the Cloud SQL Admin API page.
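The response describes a long-running operation. One way to track it, assuming the gcloud CLI, is to use the value of the response's name field as the operation ID:

    # Show the current status of the export operation.
    gcloud sql operations describe OPERATION_ID

    # Or block until the operation finishes (timeout in seconds).
    gcloud sql operations wait OPERATION_ID --timeout=1800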

REST v1beta4

To export data from Cloud SQL to multiple files in parallel, complete the following steps:

  1. Create a Cloud Storage bucket:
    gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME
    Make the following replacements:
    • BUCKET_NAME: the name of the bucket, subject to naming requirements. For example, my-bucket.
    • PROJECT_NAME: the name of the Google Cloud project that contains the Cloud Storage bucket you're creating.
    • LOCATION_NAME: the location of the bucket where you want to store the files you're exporting. For example, us-east1.

    Note: You don't have to create a folder in the bucket. If the folder doesn't exist, then Cloud SQL creates it for you as a part of the process of exporting multiple files in parallel. However, if the folder exists, then it must be empty or the export operation fails.

  2. Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.
  3. Export data from Cloud SQL to multiple files in parallel:

    Before using any of the request data, make the following replacements:

    • PROJECT_NAME: the name of the Google Cloud project that contains the Cloud Storage bucket you created.
    • INSTANCE_NAME: the name of the Cloud SQL instance from which you're exporting files in parallel.
    • BUCKET_NAME: the name of the Cloud Storage bucket.
    • BUCKET_PATH: the path to the bucket where the export files are stored.
    • FOLDER_NAME: the folder where the export files are stored.
    • DATABASE_NAME: the name of the database inside of the Cloud SQL instance from which the export is made. If you want to export all user databases from the instance, remove this field from the request body.
    • THREAD_NUMBER: the number of threads that Cloud SQL uses to export files in parallel. For example, if you want to export three files at a time in parallel, then specify 3 as the value for this parameter.
    Note: The offload parameter enables you to use serverless exports for up to 2 threads. The parallel parameter enables you to export multiple files in parallel. To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.

    HTTP method and URL:

    POST https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/export

    Request JSON body:

    {
      "exportContext": {
        "fileType": "SQL",
        "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME",
        "databases": ["DATABASE_NAME"],
        "offload": [TRUE|FALSE],
        "sqlExportOptions": {
          "parallel": [TRUE|FALSE],
          "threads": [THREAD_NUMBER]
        }
      }
    }

    To send your request, use one of the following options:

    curl (Linux, macOS, or Cloud Shell)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/export"

    PowerShell (Windows)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/export" | Select-Object -Expand Content

    You should receive a JSON response similar to the following:

    Response

    {
      "kind": "sql#operation",
      "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
      "status": "PENDING",
      "user": "user@example.com",
      "insertTime": "2020-01-21T22:43:37.981Z",
      "operationType": "UPDATE",
      "name": "OPERATION_ID",
      "targetId": "INSTANCE_NAME",
      "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/operations/OPERATION_ID",
      "targetProject": "PROJECT_NAME"
    }
  4. After the export completes, you should have files in a folder in the Cloud Storage bucket in the pg_dump directory format.

  5. If you don't need the IAM role that you set in Required roles and permissions for exporting from Cloud SQL for PostgreSQL, then revoke it.
For the complete list of parameters for the request, see the Cloud SQL Admin API page.

Export and import an entire instance

You can export or import all user databases in an entire instance using a directory-formatted, parallel export.

To export or import an entire instance, use the same commands as shown in the parallel export and parallel import sections, removing the databases or database field, respectively. If you don't specify a database, Cloud SQL runs a parallel export or import for all user databases in the instance. This excludes system databases and Cloud SQL databases used to manage internal operations.
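For example, a parallel export of every user database in an instance simply omits the --database flag; the instance and bucket names below are hypothetical:

    # Parallel export of all user databases in the instance.
    gcloud sql export sql my-instance gs://my-bucket/exports/whole-instance \
      --parallel \
      --threads=3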

In a directory-formatted parallel export to Cloud Storage, after the export successfully completes, the data associated with each database is stored in a subdirectory named for each database, similar to the following:

gs://example-bucket/test-folder/
|---- postgres/
|     |---- 3929.dat.gz (table data file)
|     |---- toc.dat (metadata file)
|---- second_database/
|     |---- 3930.dat.gz
|     |---- 3931.dat.gz
|     |---- toc.dat

If you want to run a parallel import for an entire instance, and the instance's files were created outside of Cloud SQL, this subdirectory structure is required in order for the operation to successfully complete.

When the entire instance dump structure is detected, the import database specified in the API is ignored. The operation detects the entire instance's structure from the directory name.

You can't run an entire instance export or import for other file formats.

You can't export or import an entire instance as a single SQL file or CSV file.

Import data from multiple files in parallel to Cloud SQL for PostgreSQL

The following sections contain information about importing data from multiple files in parallel to Cloud SQL for PostgreSQL.

Required roles and permissions for importing data from multiple files in parallel to Cloud SQL for PostgreSQL

To import data from Cloud Storage into Cloud SQL, the user initiating the import must have one of the following roles:

Additionally, the service account for the Cloud SQL instance must have one of the following roles:

  • The storage.objectAdmin IAM role
  • A custom role, including the following permissions:
    • storage.objects.get
    • storage.objects.list (for importing files in parallel only)

For help with IAM roles, see Identity and Access Management.

Note: The changes that you make to the IAM permissions and roles might take a few minutes to take effect. For more information, see Access change propagation.

Import data to Cloud SQL for PostgreSQL

You can import data in parallel from multiple files that reside in Cloud Storage to your database. To do this, Cloud SQL uses the pg_restore utility with the --jobs option.

Note: If your data contains large objects (blobs), then the import might consume a large amount of memory, impacting instance performance. For help, see Issues with importing and exporting data.
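For reference, restoring a directory-format dump in parallel relies on the pg_restore --jobs flag. A hand-run equivalent against a reachable server might look like the following sketch, with placeholder connection values and dump path:

    # Restore a directory-format dump with three parallel jobs.
    pg_restore \
      --host=203.0.113.10 \
      --username=importuser \
      --dbname=mydb \
      --format=directory \
      --jobs=3 \
      /tmp/mydb_dump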

gcloud

To import data from multiple files in parallel into Cloud SQL, complete the following steps:

  1. Create a Cloud Storage bucket.
  2. Upload the files to your bucket.

    Note: Make sure that the files that you're uploading are in the pg_dump directory format. For more information, see Export data from multiple files in parallel.

    For help with uploading files to buckets, see Upload objects from files.

  3. To find the service account for the Cloud SQL instance that you're importing files to, use the gcloud sql instances describe command. Replace INSTANCE_NAME with the name of your Cloud SQL instance.

    gcloud sql instances describe INSTANCE_NAME

    In the output, look for the value that's associated with the serviceAccountEmailAddress field.

  4. To grant the storage.objectAdmin IAM role to the service account, use the gcloud storage buckets add-iam-policy-binding command. For help with setting IAM permissions, see Use IAM permissions.
  5. To import data from multiple files in parallel into Cloud SQL, use the gcloud sql import sql command:

    gcloud sql import sql INSTANCE_NAME gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME \
      --parallel \
      --threads=THREAD_NUMBER \
      --database=DATABASE_NAME

    Make the following replacements:

    • INSTANCE_NAME: the name of the Cloud SQL instance to which you're importing files in parallel.
    • BUCKET_NAME: the name of the Cloud Storage bucket.
    • BUCKET_PATH: the path to the bucket where the import files are stored.
    • FOLDER_NAME: the folder where the import files are stored.
    • THREAD_NUMBER: the number of threads that Cloud SQL uses to import files in parallel. For example, if you want to import three files at a time in parallel, then specify 3 as the value for this parameter.
    • DATABASE_NAME: the name of the database inside of the Cloud SQL instance into which the import is made. You must specify only one database.

    Note: If you want to import multiple files in parallel, then use the parallel parameter.

    If you use the parallel parameter, and you want to drop (clean) database objects before you recreate them, then use the clean parameter. If you use the parallel parameter, and you want to include the IF EXISTS SQL statement with each DROP statement that's produced by the clean parameter, then use the if-exists parameter.

    Otherwise, remove these parameters from the command.

    If the command returns an error like ERROR_RDBMS, then review the permissions; this error is often caused by permissions issues.

  6. If you don't need the IAM permissions that you set in Required roles and permissions for importing to Cloud SQL for PostgreSQL, then use gcloud storage buckets remove-iam-policy-binding to remove them. A combined sketch of these gcloud commands follows these steps.
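The following is a minimal sketch of the import and cleanup commands, reusing the hypothetical instance and bucket names from the export example and assuming the bucket binding is already in place:

    # Parallel import with three threads, dropping existing objects before recreating them.
    gcloud sql import sql my-instance gs://my-bucket/exports/mydb-dump \
      --parallel \
      --threads=3 \
      --database=mydb \
      --clean \
      --if-exists

    # Remove the bucket binding once the import is done.
    gcloud storage buckets remove-iam-policy-binding gs://my-bucket \
      --member=serviceAccount:SERVICE_ACCOUNT_EMAIL \
      --role=roles/storage.objectAdmin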

REST v1

To import data from multiple files in parallel into Cloud SQL, complete the following steps:

  1. Create a Cloud Storage bucket:
    gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME
    Make the following replacements:
    • BUCKET_NAME: the name of the bucket, subject to naming requirements. For example, my-bucket.
    • PROJECT_NAME: the name of the Google Cloud project that contains the Cloud Storage bucket you're creating.
    • LOCATION_NAME: the location of the bucket where you want to store the files you're importing. For example, us-east1.
  2. Upload the files to your bucket.

    Note: Make sure that the files that you're uploading are in the pg_dump directory format. For more information, see Export data from multiple files in parallel.

    For help with uploading files to buckets, see Upload objects from files.

  3. Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.
  4. Import data from multiple files in parallel into Cloud SQL:

    Before using any of the request data, make the following replacements:

    • PROJECT_NAME: the name of the Google Cloud project that contains the Cloud Storage bucket you created.
    • INSTANCE_NAME: the name of the Cloud SQL instance to which you're importing files in parallel.
    • BUCKET_NAME: the name of the Cloud Storage bucket.
    • BUCKET_PATH: the path to the bucket where the import files are stored.
    • FOLDER_NAME: the folder where the import files are stored.
    • DATABASE_NAME: the name of the database inside of the Cloud SQL instance into which the import is made. You must specify only one database.
    • THREAD_NUMBER: the number of threads that Cloud SQL uses to import files in parallel. For example, if you want to import three files at a time in parallel, then specify 3 as the value for this parameter.

    Note: The offload parameter enables you to use serverless imports for up to 2 threads. The parallel parameter enables you to import multiple files in parallel.

    If you use the parallel parameter, then the clean parameter enables you to drop (clean) database objects before you recreate them. If you use the parallel parameter, then the ifExists parameter enables you to include the IF EXISTS SQL statement with each DROP statement that's produced by the clean parameter.

    To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.

    HTTP method and URL:

    POST https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/import

    Request JSON body:

    {
      "importContext": {
        "fileType": "SQL",
        "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME",
        "databases": ["DATABASE_NAME"],
        "offload": [TRUE|FALSE],
        "sqlImportOptions": {
          "parallel": [TRUE|FALSE],
          "clean": [TRUE|FALSE],
          "ifExists": [TRUE|FALSE],
          "threads": [THREAD_NUMBER]
        }
      }
    }

    To send your request, use one of the following options:

    curl (Linux, macOS, or Cloud Shell)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/import"

    PowerShell (Windows)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/import" | Select-Object -Expand Content

    You should receive a JSON response similar to the following:

    Response

    {
      "kind": "sql#operation",
      "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
      "status": "PENDING",
      "user": "user@example.com",
      "insertTime": "2020-01-21T22:43:37.981Z",
      "operationType": "UPDATE",
      "name": "OPERATION_ID",
      "targetId": "INSTANCE_NAME",
      "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/operations/OPERATION_ID",
      "targetProject": "PROJECT_NAME"
    }

    To use a different user for the import, specify the importContext.importUser property.

    For the complete list of parameters for the request, see the Cloud SQL Admin API page.
  5. If you don't need the IAM permissions that you set in Required roles and permissions for importing to Cloud SQL for PostgreSQL, then use gcloud storage buckets remove-iam-policy-binding to remove them.

REST v1beta4

To import data from multiple files in parallel into Cloud SQL, complete the following steps:

  1. Create a Cloud Storage bucket:
    gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME
    Make the following replacements:
    • BUCKET_NAME: the name of the bucket, subject to naming requirements. For example, my-bucket.
    • PROJECT_NAME: the name of the Google Cloud project that contains the Cloud Storage bucket you're creating.
    • LOCATION_NAME: the location of the bucket where you want to store the files you're importing. For example, us-east1.
  2. Upload the files to your bucket.

    Note: Make sure that the files that you're uploading are in the pg_dump directory format. For more information, see Export data from multiple files in parallel.

    For help with uploading files to buckets, see Upload objects from files.

  3. Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.
  4. Import data from multiple files in parallel into Cloud SQL:

    Before using any of the request data, make the following replacements:

    • PROJECT_NAME: the name of the Google Cloud project that contains the Cloud Storage bucket you created.
    • INSTANCE_NAME: the name of the Cloud SQL instance to which you're importing files in parallel.
    • BUCKET_NAME: the name of the Cloud Storage bucket.
    • BUCKET_PATH: the path to the bucket where the import files are stored.
    • FOLDER_NAME: the folder where the import files are stored.
    • DATABASE_NAME: the name of the database inside of the Cloud SQL instance into which the import is made. You must specify only one database.
    • THREAD_NUMBER: the number of threads that Cloud SQL uses to import files in parallel. For example, if you want to import three files at a time in parallel, then specify 3 as the value for this parameter.

    Note: The offload parameter enables you to use serverless imports for up to 2 threads. The parallel parameter enables you to import multiple files in parallel.

    If you use the parallel parameter, then the clean parameter enables you to drop (clean) database objects before you recreate them. If you use the parallel parameter, then the ifExists parameter enables you to include the IF EXISTS SQL statement with each DROP statement that's produced by the clean parameter.

    To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.

    HTTP method and URL:

    POST https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/import

    Request JSON body:

    {
      "importContext": {
        "fileType": "SQL",
        "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME",
        "databases": ["DATABASE_NAME"],
        "offload": [TRUE|FALSE],
        "sqlImportOptions": {
          "parallel": [TRUE|FALSE],
          "clean": [TRUE|FALSE],
          "ifExists": [TRUE|FALSE],
          "threads": [THREAD_NUMBER]
        }
      }
    }

    To send your request, use one of the following options:

    curl (Linux, macOS, or Cloud Shell)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/import"

    PowerShell (Windows)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/import" | Select-Object -Expand Content

    You should receive a JSON response similar to the following:

    Response

    {
      "kind": "sql#operation",
      "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
      "status": "PENDING",
      "user": "user@example.com",
      "insertTime": "2020-01-21T22:43:37.981Z",
      "operationType": "UPDATE",
      "name": "OPERATION_ID",
      "targetId": "INSTANCE_NAME",
      "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/operations/OPERATION_ID",
      "targetProject": "PROJECT_NAME"
    }

    To use a different user for the import, specify the importContext.importUser property.

    For the complete list of parameters for the request, see the Cloud SQL Admin API page.
  5. If you don't need the IAM permissions that you set in Required roles and permissions for importing to Cloud SQL for PostgreSQL, then use gcloud storage buckets remove-iam-policy-binding to remove them.

Limitations

  • If you specify too many threads when you import or export data from multiple files in parallel, then you might use more memory than your Cloud SQL instance has. If this occurs, then an internal error message appears. Check the memory usage of your instance and increase the instance's size, as needed. For more information, see About instance settings.

  • When performing an export, commas in database names or table names in the databases or tables fields aren't supported.

  • Make sure that you have enough disk space for the initial dump file download. Otherwise, a no space left on disk error appears.

  • If your instance has only one virtual CPU (vCPU), then you can't import or export multiple files in parallel. The number of vCPUs for your instance can't be smaller than the number of threads that you're using for the import or export operation, and the number of threads must be at least two.

  • Multi-threaded (parallel) imports and exports aren't compatible with single-threaded imports and exports. For example, dump files generated by a single-threaded export can only be imported by single-threaded imports. Similarly, dump files generated by parallel exports can only be imported by parallel imports.

  • The pg_dump utility can't chunk any tables that you export. Therefore, if you have one very large table, then it can become a bottleneck for the speed of the export operation.

What's next
