Manage tables
This document describes how to manage tables in BigQuery. You can manage your BigQuery tables in the following ways:
- Update table properties, such as the description, expiration time, rounding mode, and schema definition.
- Rename a table.
- Copy a table.
For information about how to restore (or undelete) a deleted table, see Restore deleted tables.
For more information about creating and using tables, including getting table information, listing tables, and controlling access to table data, see Creating and using tables.
Before you begin
Grant Identity and Access Management (IAM) roles that give users the necessary permissions to perform each task in this document. The permissions required to perform a task (if any) are listed in the "Required permissions" section of the task.
Update table properties
You can update the following elements of a table:
- Description
- Expiration time
- Default rounding mode
- Schema definition
Required permissions
To get the permissions that you need to update table properties, ask your administrator to grant you the Data Editor (roles/bigquery.dataEditor) IAM role on a table. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to update table properties. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to update table properties:
- bigquery.tables.update
- bigquery.tables.get
You might also be able to get these permissions with custom roles or other predefined roles.
Additionally, if you have the bigquery.datasets.create permission, you can update the properties of the tables of the datasets that you create.
Update a table's description
You can update a table's description in the following ways:
- Using the Google Cloud console.
- Using a data definition language (DDL) ALTER TABLE statement.
- Using the bq command-line tool's bq update command.
- Calling the tables.patch API method.
- Using the client libraries.
- Generating a description with Gemini in BigQuery.
To update a table's description:
Console
You can't add a description when you create a table using the Google Cloud console. After the table is created, you can add a description on the Details page.
In the left pane, click Explorer.
If you don't see the left pane, click Expand left pane to open the pane.
In the Explorer pane, expand your project, click Datasets, and then select a dataset.
Click Overview > Tables, and then select a table.
Click the Details tab, and then click Edit details.
In the Description section, add a new description or edit an existing description.
Click Save.
SQL
Use the ALTER TABLE SET OPTIONS statement. The following example updates the description of a table named mytable:
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER TABLE mydataset.mytable
  SET OPTIONS (
    description = 'Description of mytable');
Click Run.
For more information about how to run queries, see Run an interactive query.
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Issue the bq update command with the --description flag. If you are updating a table in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.

bq update \
--description "description" \
project_id:dataset.table
Replace the following:
- description: the text describing the table in quotes
- project_id: your project ID
- dataset: the name of the dataset that contains the table you're updating
- table: the name of the table you're updating
Examples:
To change the description of the mytable table in the mydataset dataset to "Description of mytable", enter the following command. The mydataset dataset is in your default project.

bq update --description "Description of mytable" mydataset.mytable
To change the description of the mytable table in the mydataset dataset to "Description of mytable", enter the following command. The mydataset dataset is in the myotherproject project, not your default project.

bq update \
--description "Description of mytable" \
myotherproject:mydataset.mytable
API
Call the tables.patch method and use the description property in the table resource to update the table's description. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
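As a minimal sketch of that call made directly against the REST endpoint with an authorized session (the dataset and table names are placeholders, and the google-auth package is assumed to be installed):

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Application Default Credentials with the BigQuery scope.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/bigquery"]
)
session = AuthorizedSession(credentials)

# PATCH updates only the fields named in the request body, unlike
# tables.update, which replaces the whole table resource.
url = (
    "https://bigquery.googleapis.com/bigquery/v2/"
    f"projects/{project_id}/datasets/mydataset/tables/mytable"
)
response = session.patch(url, json={"description": "Description of mytable"})
response.raise_for_status()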
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// updateTableDescription demonstrates how to fetch a table's metadata and updates the Description metadata.
func updateTableDescription(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	tableRef := client.Dataset(datasetID).Table(tableID)
	meta, err := tableRef.Metadata(ctx)
	if err != nil {
		return err
	}
	update := bigquery.TableMetadataToUpdate{
		Description: "Updated description.",
	}
	if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
		return err
	}
	return nil
}
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Table;

public class UpdateTableDescription {

  public static void runUpdateTableDescription() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    String newDescription = "this is the new table description";
    updateTableDescription(datasetName, tableName, newDescription);
  }

  public static void updateTableDescription(
      String datasetName, String tableName, String newDescription) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      Table table = bigquery.getTable(datasetName, tableName);
      bigquery.update(table.toBuilder().setDescription(newDescription).build());
      System.out.println("Table description updated successfully to " + newDescription);
    } catch (BigQueryException e) {
      System.out.println("Table description was not updated \n" + e.toString());
    }
  }
}
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

# from google.cloud import bigquery
# client = bigquery.Client()
# project = client.project
# dataset_ref = bigquery.DatasetReference(project, dataset_id)
# table_ref = dataset_ref.table('my_table')
# table = client.get_table(table_ref)  # API request

assert table.description == "Original description."
table.description = "Updated description."

table = client.update_table(table, ["description"])  # API request

assert table.description == "Updated description."
Gemini
You can generate a table description with Gemini in BigQuery by using data insights. Data insights is an automated way to explore, understand, and curate your data.
For more information about data insights, including setup steps, required IAM roles, and best practices to improve the accuracy of the generated insights, see Generate data insights in BigQuery.
In the Google Cloud console, go to the BigQuery page.
In the left pane, click Explorer.
In the Explorer pane, expand your project and dataset, then select the table.
In the details panel, click the Schema tab.
Click Generate.
Note: If you don't see the Generate button, click Describe data. You might need to scroll to see this button.
Gemini generates a table description and insights about the table. It takes a few minutes for the information to be populated. You can view the generated insights on the table's Insights tab.
To edit and save the generated table description, do the following:
Click View column descriptions.
The current table description and the generated description are displayed.
In the Table description section, click Save to details.
To replace the current description with the generated description, click Copy suggested description.
Edit the table description as necessary, and then click Save to details.
The table description is updated immediately.
To close the Preview descriptions panel, click Close.
Update a table's expiration time
You can set a default table expiration time at the dataset level, or you can set a table's expiration time when the table is created. A table's expiration time is often referred to as "time to live" or TTL.
When a table expires, it is deleted along with all of the data it contains. If necessary, you can undelete the expired table within the time travel window specified for the dataset. For more information, see Restore deleted tables.
If you set the expiration when the table is created, the dataset's default table expiration is ignored. If you do not set a default table expiration at the dataset level, and you do not set a table expiration when the table is created, the table never expires and you must delete the table manually.
At any point after the table is created, you can update the table's expiration time in the following ways:
- Using the Google Cloud console.
- Using a data definition language (DDL) ALTER TABLE statement.
- Using the bq command-line tool's bq update command.
- Calling the tables.patch API method.
- Using the client libraries.
To update a table's expiration time:
Console
You can't add an expiration time when you create a table using the Google Cloud console. After a table is created, you can add or update a table expiration on the Table Details page.
In the left pane, click Explorer.
In the Explorer pane, expand your project, click Datasets, and then select a dataset.
Click Overview > Tables, and then select a table.
Click the Details tab, and then click Edit details.
For Expiration time, select Specify date. Then select the expiration date using the calendar widget.
Click Save. The updated expiration time appears in the Table info section.
SQL
Use the ALTER TABLE SET OPTIONS statement. The following example updates the expiration time of a table named mytable:
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
ALTER TABLE mydataset.mytable
  SET OPTIONS (
    -- Sets table expiration to timestamp 2025-02-03 12:34:56
    expiration_timestamp = TIMESTAMP '2025-02-03 12:34:56');
Click Run.
For more information about how to run queries, see Run an interactive query.
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Issue the bq update command with the --expiration flag. If you are updating a table in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.

bq update \
--expiration integer \
project_id:dataset.table

Replace the following:
- integer: the default lifetime (in seconds) for the table. The minimum value is 3600 seconds (one hour). The expiration time evaluates to the current time plus the integer value. If you specify 0, the table expiration is removed, and the table never expires. Tables with no expiration must be manually deleted.
- project_id: your project ID.
- dataset: the name of the dataset that contains the table you're updating.
- table: the name of the table you're updating.
Examples:
To update the expiration time of the mytable table in the mydataset dataset to 5 days (432000 seconds), enter the following command. The mydataset dataset is in your default project.

bq update --expiration 432000 mydataset.mytable
To update the expiration time of the mytable table in the mydataset dataset to 5 days (432000 seconds), enter the following command. The mydataset dataset is in the myotherproject project, not your default project.

bq update --expiration 432000 myotherproject:mydataset.mytable
API
Call the tables.patch method and use the expirationTime property in the table resource to update the table expiration in milliseconds. Because the tables.update method replaces the entire table resource, the tables.patch method is preferred.
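A sketch of that patch request, using the same authorized-session pattern as the description example (placeholder dataset and table names; epoch milliseconds computed inline):

import time

import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/bigquery"]
)
session = AuthorizedSession(credentials)

# expirationTime is expressed as epoch milliseconds, serialized as a
# string. This sets the table to expire 5 days from now.
expiration_ms = str(int((time.time() + 5 * 24 * 3600) * 1000))
url = (
    "https://bigquery.googleapis.com/bigquery/v2/"
    f"projects/{project_id}/datasets/mydataset/tables/mytable"
)
response = session.patch(url, json={"expirationTime": expiration_ms})
response.raise_for_status()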
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/bigquery"
)

// updateTableExpiration demonstrates setting the table expiration of a table to a specific point in time
// in the future, at which time it will be deleted.
func updateTableExpiration(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	tableRef := client.Dataset(datasetID).Table(tableID)
	meta, err := tableRef.Metadata(ctx)
	if err != nil {
		return err
	}
	update := bigquery.TableMetadataToUpdate{
		ExpirationTime: time.Now().Add(time.Duration(5*24) * time.Hour), // table expiration in 5 days.
	}
	if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
		return err
	}
	return nil
}
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Table;
import java.util.concurrent.TimeUnit;

public class UpdateTableExpiration {

  public static void runUpdateTableExpiration() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    // Update table expiration to one day.
    Long newExpiration = TimeUnit.MILLISECONDS.convert(1, TimeUnit.DAYS);
    updateTableExpiration(datasetName, tableName, newExpiration);
  }

  public static void updateTableExpiration(
      String datasetName, String tableName, Long newExpiration) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      Table table = bigquery.getTable(datasetName, tableName);
      bigquery.update(table.toBuilder().setExpirationTime(newExpiration).build());

      System.out.println("Table expiration updated successfully to " + newExpiration);
    } catch (BigQueryException e) {
      System.out.println("Table expiration was not updated \n" + e.toString());
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function updateTableExpiration() {
  // Updates a table's expiration.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = 'my_dataset', // Existing dataset
  // const tableId = 'my_table', // Existing table
  // const expirationTime = Date.now() + 1000 * 60 * 60 * 24 * 5 // 5 days from current time in ms

  // Retrieve current table metadata
  const table = bigquery.dataset(datasetId).table(tableId);
  const [metadata] = await table.getMetadata();

  // Set new table expiration to 5 days from current time
  metadata.expirationTime = expirationTime.toString();
  const [apiResponse] = await table.setMetadata(metadata);

  const newExpirationTime = apiResponse.expirationTime;
  console.log(`${tableId} expiration: ${newExpirationTime}`);
}

Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import datetime

from google.cloud import bigquery

client = bigquery.Client()

# TODO(dev): Change table_id to the full name of the table you want to update.
table_id = "your-project.your_dataset.your_table_name"

# TODO(dev): Set table to expire for desired days from now.
expiration = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(
    days=5
)

table = client.get_table(table_id)  # Make an API request.
table.expires = expiration
table = client.update_table(table, ["expires"])  # API request

print(f"Updated {table_id}, expires {table.expires}.")
To update the default dataset partition expiration time:
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;
import java.util.concurrent.TimeUnit;

// Sample to update partition expiration on a dataset.
public class UpdateDatasetPartitionExpiration {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    // Set the default partition expiration (applies to new tables, only) in
    // milliseconds. This example sets the default expiration to 90 days.
    Long newExpiration = TimeUnit.MILLISECONDS.convert(90, TimeUnit.DAYS);
    updateDatasetPartitionExpiration(datasetName, newExpiration);
  }

  public static void updateDatasetPartitionExpiration(String datasetName, Long newExpiration) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      Dataset dataset = bigquery.getDataset(datasetName);
      bigquery.update(dataset.toBuilder().setDefaultPartitionExpirationMs(newExpiration).build());
      System.out.println(
          "Dataset default partition expiration updated successfully to " + newExpiration);
    } catch (BigQueryException e) {
      System.out.println("Dataset partition expiration was not updated \n" + e.toString());
    }
  }
}
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

def update_dataset_default_partition_expiration(dataset_id: str) -> None:
    from google.cloud import bigquery

    # Construct a BigQuery client object.
    client = bigquery.Client()

    # TODO(developer): Set dataset_id to the ID of the dataset to fetch.
    # dataset_id = 'your-project.your_dataset'

    dataset = client.get_dataset(dataset_id)  # Make an API request.

    # Set the default partition expiration (applies to new tables, only) in
    # milliseconds. This example sets the default expiration to 90 days.
    dataset.default_partition_expiration_ms = 90 * 24 * 60 * 60 * 1000

    dataset = client.update_dataset(
        dataset, ["default_partition_expiration_ms"]
    )  # Make an API request.

    print(
        "Updated dataset {}.{} with new default partition expiration {}".format(
            dataset.project, dataset.dataset_id, dataset.default_partition_expiration_ms
        )
    )
Update a table's rounding mode
You can update a table's default rounding mode by using the ALTER TABLE SET OPTIONS DDL statement. The following example updates the default rounding mode for mytable to ROUND_HALF_EVEN:
ALTER TABLE mydataset.mytable
SET OPTIONS (
  default_rounding_mode = "ROUND_HALF_EVEN");
When you add a NUMERIC or BIGNUMERIC field to a table and do not specify a rounding mode, then the rounding mode is automatically set to the table's default rounding mode. Changing a table's default rounding mode doesn't alter the rounding mode of existing fields.
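For example, the following sketch (hypothetical table and column names) sets the default rounding mode and then adds a NUMERIC column without an explicit rounding mode; the new column inherits the default, while previously existing NUMERIC columns keep whatever mode they already had:

from google.cloud import bigquery

client = bigquery.Client()

# Set the table's default rounding mode, then add a NUMERIC column.
# The new "price" column (a placeholder name) picks up ROUND_HALF_EVEN
# because no rounding mode is specified for it; existing NUMERIC columns
# are unaffected by the change to the default.
client.query(
    """
    ALTER TABLE mydataset.mytable
    SET OPTIONS (default_rounding_mode = "ROUND_HALF_EVEN");
    ALTER TABLE mydataset.mytable
    ADD COLUMN price NUMERIC;
    """
).result()  # Wait for the script to finish.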
Update a table's schema definition
For more information about updating a table's schema definition, see Modifying table schemas.
Rename a table
You can rename a table after it has been created by using the ALTER TABLE RENAME TO statement. The following example renames mytable to mynewtable:
ALTER TABLE mydataset.mytable
RENAME TO mynewtable;
The ALTER TABLE RENAME TO statement recreates the table in the destination dataset with the creation timestamp of the original table. If you have configured dataset-level table expiration, the renamed table might be immediately deleted if its original creation timestamp falls outside of the expiration window.
Limitations on renaming tables
- If you want to rename a table that has data streaming into it, you must stop the streaming, commit any pending streams, and wait for BigQuery to indicate that streaming is not in use.
- While a table can usually be renamed 5 hours after the last streaming operation, it might take longer.
- Existing table ACLs and row access policies are preserved, but table ACL and row access policy updates made during the table rename are not preserved.
- You can't concurrently rename a table and run a DML statement on that table.
- Renaming a table removes all Data Catalog tags (deprecated) and Dataplex Universal Catalog aspects on the table.
- Any search index or vector index created on the table is dropped when the table is renamed.
- You can't rename external tables.
Copy a table
This section describes how to create a full copy of a table. For information about other types of table copies, see table clones and table snapshots.
You can copy a table in the following ways:
- Use the Google Cloud console.
- Use the bq cp command.
- Use a data definition language (DDL) CREATE TABLE COPY statement.
- Call the jobs.insert API method and configure a copy job.
- Use the client libraries.
Limitations on copying tables
Table copy jobs are subject to the following limitations:
- You can't stop a table copy operation after you start it. A table copy operation runs asynchronously and doesn't stop even when you cancel the job. You are also charged for data transfer for a cross-region table copy and for storage in the destination region.
- When you copy a table, the name of the destination table must adhere to the same naming conventions as when you create a table.
- Table copies are subject to BigQuery limits on copy jobs.
- The Google Cloud console supports copying only one table at a time. You can't overwrite an existing table in the destination dataset. The table must have a unique name in the destination dataset.
- Copying multiple source tables into a destination table is not supported by the Google Cloud console.
- When copying multiple source tables to a destination table using the API, bq command-line tool, or the client libraries, all source tables must have identical schemas, including any partitioning or clustering.
- Certain table schema updates, such as dropping or renaming columns, can cause tables to have apparently identical schemas but different internal representations. This might cause a table copy job to fail with the error Maximum limit on diverging physical schemas reached. In this case, you can use the CREATE TABLE LIKE statement to ensure that your source table's schema matches the destination table's schema exactly.
- The time that BigQuery takes to copy tables might vary significantly across different runs because the underlying storage is managed dynamically.
- You can't copy and append a source table to a destination table that has more columns than the source table, and the additional columns have default values. Instead, you can run INSERT destination_table SELECT * FROM source_table to copy over the data, as shown in the sketch after this list.
- If the copy operation overwrites an existing table, then the table-level access for the existing table is maintained. Tags from the source table aren't copied to the overwritten table, while tags on the existing table are retained. However, when you copy tables across regions, tags on the existing table are removed.
- If the copy operation creates a new table, then the table-level access for the new table is determined by the access policies of the dataset in which the new table is created. Additionally, tags are copied from the source table to the new table.
- When you copy multiple source tables to a destination table, all source tables must have identical tags.
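The INSERT workaround can be sketched as follows, assuming hypothetical tables where the destination has one extra column with a default value; naming only the shared columns lets the extra column fall back to its default:

from google.cloud import bigquery

client = bigquery.Client()

# Append every row of the narrower source table to the wider destination
# table. Columns that exist only in the destination (for example, a
# defaulted audit column) keep their default values because the column
# list names only the shared columns. Table and column names are
# placeholders.
client.query(
    """
    INSERT INTO mydataset.destination_table (id, name)
    SELECT id, name FROM mydataset.source_table
    """
).result()  # Wait for the job to finish.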
Required roles
To perform the tasks in this document, you need the following permissions.
Roles to copy tables and partitions
To get the permissions that you need to copy tables and partitions, ask your administrator to grant you the Data Editor (roles/bigquery.dataEditor) IAM role on the source and destination datasets. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to copy tables and partitions. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to copy tables and partitions:
- bigquery.tables.getData on the source and destination datasets
- bigquery.tables.get on the source and destination datasets
- bigquery.tables.create on the destination dataset
- bigquery.tables.update on the destination dataset
You might also be able to get these permissions with custom roles or other predefined roles.
Permission to run a copy job
To get the permission that you need to run a copy job, ask your administrator to grant you the Job User (roles/bigquery.jobUser) IAM role on the source and destination datasets. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the bigquery.jobs.create permission, which is required to run a copy job.
You might also be able to get this permission with custom roles or other predefined roles.
Copy a single source table
You can copy a single table in the following ways:
- Using the Google Cloud console.
- Using the bq command-line tool's bq cp command.
- Using a data definition language (DDL) CREATE TABLE COPY statement.
- Calling the jobs.insert API method, configuring a copy job, and specifying the sourceTable property.
- Using the client libraries.
The Google Cloud console and the CREATE TABLE COPY statement support only one source table and one destination table in a copy job. To copy multiple source tables to a destination table, you must use the bq command-line tool or the API.
To copy a single source table:
Console
In the left pane, click Explorer.
In the Explorer pane, expand your project, click Datasets, and then select a dataset.
Click Overview > Tables, and then select a table.
In the details pane, click Copy.
In the Copy table dialog, under Destination:
- For Project, choose the project that will store the copied table.
- For Dataset, select the dataset where you want to store the copied table. The source and destination datasets must be in the same location.
- For Table, enter a name for the new table. The name must be unique in the destination dataset. You can't overwrite an existing table in the destination dataset using the Google Cloud console. For more information about table name requirements, see Table naming.
Click Copy to start the copy job.
SQL
Use the CREATE TABLE COPY statement to copy a table named table1 to a new table named table1copy:
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
CREATE TABLE myproject.mydataset.table1copy
COPY myproject.mydataset.table1;

Click Run.
For more information about how to run queries, see Run an interactive query.
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Issue the bq cp command. Optional flags can be used to control the write disposition of the destination table:

- -a or --append_table appends the data from the source table to an existing table in the destination dataset.
- -f or --force overwrites an existing table in the destination dataset and doesn't prompt you for confirmation.
- -n or --no_clobber returns the following error message if the table exists in the destination dataset: Table 'project_id:dataset.table' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table.
- --destination_kms_key is the customer-managed Cloud KMS key used to encrypt the destination table. --destination_kms_key is not demonstrated here. See Protecting data with Cloud Key Management Service keys for more information.

If the source or destination dataset is in a project other than your default project, add the project ID to the dataset names in the following format: project_id:dataset.

(Optional) Supply the --location flag and set the value to your location.

bq --location=location cp \
-a -f -n \
project_id:dataset.source_table \
project_id:dataset.destination_table

Replace the following:
- location: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
- project_id: your project ID.
- dataset: the name of the source or destination dataset.
- source_table: the table you're copying.
- destination_table: the name of the table in the destination dataset.
Examples:
To copy the mydataset.mytable table to the mydataset2.mytable2 table, enter the following command. Both datasets are in your default project.

bq cp mydataset.mytable mydataset2.mytable2
To copy the mydataset.mytable table and to overwrite a destination table with the same name, enter the following command. The source dataset is in your default project. The destination dataset is in the myotherproject project. The -f shortcut is used to overwrite the destination table without a prompt.

bq cp -f \
mydataset.mytable \
myotherproject:myotherdataset.mytable
To copy the mydataset.mytable table and to return an error if the destination dataset contains a table with the same name, enter the following command. The source dataset is in your default project. The destination dataset is in the myotherproject project. The -n shortcut is used to prevent overwriting a table with the same name.

bq cp -n \
mydataset.mytable \
myotherproject:myotherdataset.mytable
To copy the mydataset.mytable table and to append the data to a destination table with the same name, enter the following command. The source dataset is in your default project. The destination dataset is in the myotherproject project. The -a shortcut is used to append to the destination table.

bq cp -a mydataset.mytable myotherproject:myotherdataset.mytable
API
You can copy an existing table through the API by calling the bigquery.jobs.insert method and configuring a copy job. Specify your location in the location property in the jobReference section of the job resource.
You must specify the following values in your job configuration:
"copy": { "sourceTable": { // Required "projectId": string, // Required "datasetId": string, // Required "tableId": string // Required }, "destinationTable": { // Required "projectId": string, // Required "datasetId": string, // Required "tableId": string // Required }, "createDisposition": string, // Optional "writeDisposition": string, // Optional },WheresourceTable provides information about the table to becopied,destinationTable provides information about the newtable,createDisposition specifies whether to create thetable if it doesn't exist, andwriteDisposition specifieswhether to overwrite or append to an existing table.
C#
Before trying this sample, follow the C# setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery C# API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

using Google.Apis.Bigquery.v2.Data;
using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryCopyTable
{
    public void CopyTable(
        string projectId = "your-project-id",
        string destinationDatasetId = "your_dataset_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        TableReference sourceTableRef = new TableReference()
        {
            TableId = "shakespeare",
            DatasetId = "samples",
            ProjectId = "bigquery-public-data"
        };
        TableReference destinationTableRef = client.GetTableReference(
            destinationDatasetId, "destination_table");
        BigQueryJob job = client.CreateCopyJob(sourceTableRef, destinationTableRef)
            .PollUntilCompleted() // Wait for the job to complete.
            .ThrowOnAnyError();

        // Retrieve destination table
        BigQueryTable destinationTable = client.GetTable(destinationTableRef);
        Console.WriteLine(
            $"Copied {destinationTable.Resource.NumRows} rows from table "
            + $"{sourceTableRef.DatasetId}.{sourceTableRef.TableId} "
            + $"to {destinationTable.FullyQualifiedId}."
        );
    }
}
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// copyTable demonstrates copying a table from a source to a destination, and
// allowing the copy to overwrite existing data by using truncation.
func copyTable(projectID, datasetID, srcID, dstID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// srcID := "sourcetable"
	// dstID := "destinationtable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	dataset := client.Dataset(datasetID)
	copier := dataset.Table(dstID).CopierFrom(dataset.Table(srcID))
	copier.WriteDisposition = bigquery.WriteTruncate
	job, err := copier.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}
	return nil
}
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;

public class CopyTable {

  public static void runCopyTable() {
    // TODO(developer): Replace these variables before running the sample.
    String destinationDatasetName = "MY_DESTINATION_DATASET_NAME";
    String destinationTableId = "MY_DESTINATION_TABLE_NAME";
    String sourceDatasetName = "MY_SOURCE_DATASET_NAME";
    String sourceTableId = "MY_SOURCE_TABLE_NAME";
    copyTable(sourceDatasetName, sourceTableId, destinationDatasetName, destinationTableId);
  }

  public static void copyTable(
      String sourceDatasetName,
      String sourceTableId,
      String destinationDatasetName,
      String destinationTableId) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId sourceTable = TableId.of(sourceDatasetName, sourceTableId);
      TableId destinationTable = TableId.of(destinationDatasetName, destinationTableId);

      // For more information on CopyJobConfiguration see:
      // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigquery/JobConfiguration.html
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(destinationTable, sourceTable).build();

      // For more information on Job see:
      // https://googleapis.dev/java/google-cloud-clients/latest/index.html?com/google/cloud/bigquery/package-summary.html
      Job job = bigquery.create(JobInfo.of(configuration));

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to copy table due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table copied successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table copying job was interrupted. \n" + e.toString());
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

// Import the Google Cloud client library and create a client
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTable() {
  // Copies src_dataset:src_table to dest_dataset:dest_table.

  /**
   * TODO(developer): Uncomment the following lines before running the sample
   */
  // const srcDatasetId = "my_src_dataset";
  // const srcTableId = "my_src_table";
  // const destDatasetId = "my_dest_dataset";
  // const destTableId = "my_dest_table";

  // Copy the table contents into another table
  const [job] = await bigquery
    .dataset(srcDatasetId)
    .table(srcTableId)
    .copy(bigquery.dataset(destDatasetId).table(destTableId));

  console.log(`Job ${job.id} completed.`);

  // Check the job's status for errors
  const errors = job.status.errors;
  if (errors && errors.length > 0) {
    throw errors;
  }
}
PHP
Before trying this sample, follow the PHP setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery PHP API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\BigQuery\BigQueryClient;
use Google\Cloud\Core\ExponentialBackoff;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $sourceTableId = 'The BigQuery table ID to copy from';
// $destinationTableId = 'The BigQuery table ID to copy to';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$sourceTable = $dataset->table($sourceTableId);
$destinationTable = $dataset->table($destinationTableId);
$copyConfig = $sourceTable->copy($destinationTable);
$job = $sourceTable->runJob($copyConfig);

// poll the job until it is complete
$backoff = new ExponentialBackoff(10);
$backoff->execute(function () use ($job) {
    print('Waiting for job to complete' . PHP_EOL);
    $job->reload();
    if (!$job->isComplete()) {
        throw new Exception('Job has not yet completed', 500);
    }
});

// check if the job has errors
if (isset($job->info()['status']['errorResult'])) {
    $error = $job->info()['status']['errorResult']['message'];
    printf('Error running job: %s' . PHP_EOL, $error);
} else {
    print('Table copied successfully' . PHP_EOL);
}
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set source_table_id to the ID of the original table.
# source_table_id = "your-project.source_dataset.source_table"

# TODO(developer): Set destination_table_id to the ID of the destination table.
# destination_table_id = "your-project.destination_dataset.destination_table"

job = client.copy_table(source_table_id, destination_table_id)
job.result()  # Wait for the job to complete.

print("A copy of the table created.")
Copy multiple source tables
You can copy multiple source tables to a destination table in the followingways:
- Using the bq command-line tool's bq cp command.
- Calling the jobs.insert method, configuring a copy job, and specifying the sourceTables property.
- Using the client libraries.
All source tables must have identical schemas and tags, and only one destination table is allowed.
Source tables must be specified as a comma-separated list. You can't use wildcards when you copy multiple source tables.
To copy multiple source tables, select one of the following choices:
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Issue the bq cp command and include multiple source tables as a comma-separated list. Optional flags can be used to control the write disposition of the destination table:

- -a or --append_table appends the data from the source tables to an existing table in the destination dataset.
- -f or --force overwrites an existing destination table in the destination dataset and doesn't prompt you for confirmation.
- -n or --no_clobber returns the following error message if the table exists in the destination dataset: Table 'project_id:dataset.table' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose whether to replace the destination table.
- --destination_kms_key is the customer-managed Cloud Key Management Service key used to encrypt the destination table. --destination_kms_key is not demonstrated here. See Protecting data with Cloud Key Management Service keys for more information.

If the source or destination dataset is in a project other than your default project, add the project ID to the dataset names in the following format: project_id:dataset.

(Optional) Supply the --location flag and set the value to your location.

bq --location=location cp \
-a -f -n \
project_id:dataset.source_table,project_id:dataset.source_table \
project_id:dataset.destination_table

Replace the following:
- location: the name of your location. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1. You can set a default value for the location using the .bigqueryrc file.
- project_id: your project ID.
- dataset: the name of the source or destination dataset.
- source_table: the table that you're copying.
- destination_table: the name of the table in the destination dataset.
Examples:
To copy the mydataset.mytable table and the mydataset.mytable2 table to the mydataset2.tablecopy table, enter the following command. All datasets are in your default project.

bq cp \
mydataset.mytable,mydataset.mytable2 \
mydataset2.tablecopy
To copy the mydataset.mytable table and the mydataset.mytable2 table to the myotherdataset.mytable table and to overwrite a destination table with the same name, enter the following command. The destination dataset is in the myotherproject project, not your default project. The -f shortcut is used to overwrite the destination table without a prompt.

bq cp -f \
mydataset.mytable,mydataset.mytable2 \
myotherproject:myotherdataset.mytable
To copy the myproject:mydataset.mytable table and the myproject:mydataset.mytable2 table and to return an error if the destination dataset contains a table with the same name, enter the following command. The destination dataset is in the myotherproject project. The -n shortcut is used to prevent overwriting a table with the same name.

bq cp -n \
myproject:mydataset.mytable,myproject:mydataset.mytable2 \
myotherproject:myotherdataset.mytable
To copy the mydataset.mytable table and the mydataset.mytable2 table and to append the data to a destination table with the same name, enter the following command. The source dataset is in your default project. The destination dataset is in the myotherproject project. The -a shortcut is used to append to the destination table.

bq cp -a \
mydataset.mytable,mydataset.mytable2 \
myotherproject:myotherdataset.mytable
API
To copy multiple tables using the API, call the jobs.insert method, configure a table copy job, and specify the sourceTables property.
Specify your region in the location property in the jobReference section of the job resource.
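A sketch of that request body, using the same hypothetical authorized-session approach as the single-table example earlier (all dataset and table names are placeholders):

import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/bigquery"]
)
session = AuthorizedSession(credentials)

# sourceTables takes a list of table references; the destination receives
# the concatenation of all sources, which must have identical schemas.
job = {
    "jobReference": {"projectId": project_id, "location": "US"},
    "configuration": {
        "copy": {
            "sourceTables": [
                {"projectId": project_id, "datasetId": "mydataset", "tableId": "mytable"},
                {"projectId": project_id, "datasetId": "mydataset", "tableId": "mytable2"},
            ],
            "destinationTable": {
                "projectId": project_id,
                "datasetId": "mydataset2",
                "tableId": "tablecopy",
            },
        }
    }
}

response = session.post(
    f"https://bigquery.googleapis.com/bigquery/v2/projects/{project_id}/jobs",
    json=job,
)
response.raise_for_status()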
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// copyMultiTable demonstrates using a copy job to copy multiple source tables into a single destination table.
func copyMultiTable(projectID, srcDatasetID string, srcTableIDs []string, dstDatasetID, dstTableID string) error {
	// projectID := "my-project-id"
	// srcDatasetID := "sourcedataset"
	// srcTableIDs := []string{"table1", "table2"}
	// dstDatasetID = "destinationdataset"
	// dstTableID = "destinationtable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	srcDataset := client.Dataset(srcDatasetID)
	dstDataset := client.Dataset(dstDatasetID)
	var tableRefs []*bigquery.Table
	for _, v := range srcTableIDs {
		tableRefs = append(tableRefs, srcDataset.Table(v))
	}
	copier := dstDataset.Table(dstTableID).CopierFrom(tableRefs...)
	copier.WriteDisposition = bigquery.WriteTruncate
	job, err := copier.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}
	return nil
}
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;
import java.util.Arrays;

public class CopyMultipleTables {

  public static void runCopyMultipleTables() {
    // TODO(developer): Replace these variables before running the sample.
    String destinationDatasetName = "MY_DATASET_NAME";
    String destinationTableId = "MY_TABLE_NAME";
    copyMultipleTables(destinationDatasetName, destinationTableId);
  }

  public static void copyMultipleTables(String destinationDatasetName, String destinationTableId) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId destinationTable = TableId.of(destinationDatasetName, destinationTableId);

      // For more information on CopyJobConfiguration see:
      // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigquery/JobConfiguration.html
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(
                  destinationTable,
                  Arrays.asList(
                      TableId.of(destinationDatasetName, "table1"),
                      TableId.of(destinationDatasetName, "table2")))
              .build();

      // For more information on Job see:
      // https://googleapis.dev/java/google-cloud-clients/latest/index.html?com/google/cloud/bigquery/package-summary.html
      Job job = bigquery.create(JobInfo.of(configuration));

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to copy tables due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table copied successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table copying job was interrupted. \n" + e.toString());
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTableMultipleSource() {
  // Copy multiple source tables to a given destination.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // sourceTable = 'my_table';
  // destinationTable = 'testing';

  // Create a client
  const dataset = bigquery.dataset(datasetId);

  const metadata = {
    createDisposition: 'CREATE_NEVER',
    writeDisposition: 'WRITE_TRUNCATE',
  };

  // Create table references
  const table = dataset.table(sourceTable);
  const yourTable = dataset.table(destinationTable);

  // Copy table
  const [apiResponse] = await table.copy(yourTable, metadata);
  console.log(apiResponse.configuration.copy);
}
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dest_table_id to the ID of the destination table.
# dest_table_id = "your-project.your_dataset.your_table_name"

# TODO(developer): Set table_ids to the list of the IDs of the original tables.
# table_ids = ["your-project.your_dataset.your_table_name", ...]

job = client.copy_table(table_ids, dest_table_id)  # Make an API request.
job.result()  # Wait for the job to complete.

print("The tables {} have been appended to {}".format(table_ids, dest_table_id))
Copy tables across regions
Preview
This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
You can copy a table, table snapshot, or table clone from one BigQuery region or multi-region to another. This includes any tables that have customer-managed Cloud KMS keys (CMEK) applied.
Copying a table across regions incurs additional data transfer charges according to BigQuery pricing. Additional charges are incurred even if you cancel the cross-region table copy job before it has been completed.
To copy a table across regions, select one of the following options:
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Run the bq cp command:
bq cp \
-f -n \
SOURCE_PROJECT:SOURCE_DATASET.SOURCE_TABLE \
DESTINATION_PROJECT:DESTINATION_DATASET.DESTINATION_TABLE
Replace the following:
- SOURCE_PROJECT: the source project ID. If the source dataset is in a project other than your default project, add the project ID to the source dataset name.
- DESTINATION_PROJECT: the destination project ID. If the destination dataset is in a project other than your default project, add the project ID to the destination dataset name.
- SOURCE_DATASET: the name of the source dataset.
- DESTINATION_DATASET: the name of the destination dataset.
- SOURCE_TABLE: the table that you are copying.
- DESTINATION_TABLE: the name of the table in the destination dataset.
The following command copies the mydataset_us.mytable table from the us multi-region to the mydataset_eu.mytable2 table in the eu multi-region. Both datasets are in the default project.
bq cp --sync=false mydataset_us.mytable mydataset_eu.mytable2
To copy a table across regions into a CMEK-enabled destination dataset, you must enable CMEK on the table with a key from the table's region. The CMEK on the table doesn't have to be the same CMEK in use by the destination dataset. The following example copies a CMEK-enabled table to a destination dataset using the bq cp command.
bq cp source-project-id:source-dataset-id.source-table-id destination-project-id:destination-dataset-id.destination-table-id
Conversely, to copy a CMEK-enabled table across regions into a destination dataset, you can enable CMEK on the destination dataset with a key from the destination dataset's region. You can also use the --destination_kms_key flag in the bq cp command, as shown in the following example:
bq cp --destination_kms_key=projects/project_id/locations/eu/keyRings/eu_key/cryptoKeys/eu_region mydataset_us.mytable mydataset_eu.mytable2
API
To copy a table across regions using the API, call the jobs.insert method and configure a table copy job.
Specify your region in the location property in the jobReference section of the job resource.
C#
Before trying this sample, follow the C# setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery C# API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

using Google.Apis.Bigquery.v2.Data;
using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryCopyTable
{
    public void CopyTable(
        string projectId = "your-project-id",
        string destinationDatasetId = "your_dataset_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        TableReference sourceTableRef = new TableReference()
        {
            TableId = "shakespeare",
            DatasetId = "samples",
            ProjectId = "bigquery-public-data"
        };
        TableReference destinationTableRef = client.GetTableReference(
            destinationDatasetId, "destination_table");
        BigQueryJob job = client.CreateCopyJob(sourceTableRef, destinationTableRef)
            .PollUntilCompleted() // Wait for the job to complete.
            .ThrowOnAnyError();

        // Retrieve destination table
        BigQueryTable destinationTable = client.GetTable(destinationTableRef);
        Console.WriteLine(
            $"Copied {destinationTable.Resource.NumRows} rows from table "
            + $"{sourceTableRef.DatasetId}.{sourceTableRef.TableId} "
            + $"to {destinationTable.FullyQualifiedId}."
        );
    }
}
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// copyTable demonstrates copying a table from a source to a destination, and
// allowing the copy to overwrite existing data by using truncation.
func copyTable(projectID, datasetID, srcID, dstID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// srcID := "sourcetable"
	// dstID := "destinationtable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	dataset := client.Dataset(datasetID)
	copier := dataset.Table(dstID).CopierFrom(dataset.Table(srcID))
	copier.WriteDisposition = bigquery.WriteTruncate
	job, err := copier.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}
	return nil
}
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;

public class CopyTable {

  public static void runCopyTable() {
    // TODO(developer): Replace these variables before running the sample.
    String destinationDatasetName = "MY_DESTINATION_DATASET_NAME";
    String destinationTableId = "MY_DESTINATION_TABLE_NAME";
    String sourceDatasetName = "MY_SOURCE_DATASET_NAME";
    String sourceTableId = "MY_SOURCE_TABLE_NAME";
    copyTable(sourceDatasetName, sourceTableId, destinationDatasetName, destinationTableId);
  }

  public static void copyTable(
      String sourceDatasetName,
      String sourceTableId,
      String destinationDatasetName,
      String destinationTableId) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId sourceTable = TableId.of(sourceDatasetName, sourceTableId);
      TableId destinationTable = TableId.of(destinationDatasetName, destinationTableId);

      // For more information on CopyJobConfiguration see:
      // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigquery/JobConfiguration.html
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(destinationTable, sourceTable).build();

      // For more information on Job see:
      // https://googleapis.dev/java/google-cloud-clients/latest/index.html?com/google/cloud/bigquery/package-summary.html
      Job job = bigquery.create(JobInfo.of(configuration));

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to copy table due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table copied successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table copying job was interrupted. \n" + e.toString());
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

// Import the Google Cloud client library and create a client
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTable() {
  // Copies src_dataset:src_table to dest_dataset:dest_table.

  /**
   * TODO(developer): Uncomment the following lines before running the sample
   */
  // const srcDatasetId = "my_src_dataset";
  // const srcTableId = "my_src_table";
  // const destDatasetId = "my_dest_dataset";
  // const destTableId = "my_dest_table";

  // Copy the table contents into another table
  const [job] = await bigquery
    .dataset(srcDatasetId)
    .table(srcTableId)
    .copy(bigquery.dataset(destDatasetId).table(destTableId));

  console.log(`Job ${job.id} completed.`);

  // Check the job's status for errors
  const errors = job.status.errors;
  if (errors && errors.length > 0) {
    throw errors;
  }
}
PHP
Before trying this sample, follow the PHP setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery PHP API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\BigQuery\BigQueryClient;
use Google\Cloud\Core\ExponentialBackoff;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $sourceTableId = 'The BigQuery table ID to copy from';
// $destinationTableId = 'The BigQuery table ID to copy to';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$sourceTable = $dataset->table($sourceTableId);
$destinationTable = $dataset->table($destinationTableId);
$copyConfig = $sourceTable->copy($destinationTable);
$job = $sourceTable->runJob($copyConfig);

// poll the job until it is complete
$backoff = new ExponentialBackoff(10);
$backoff->execute(function () use ($job) {
    print('Waiting for job to complete' . PHP_EOL);
    $job->reload();
    if (!$job->isComplete()) {
        throw new Exception('Job has not yet completed', 500);
    }
});

// check if the job has errors
if (isset($job->info()['status']['errorResult'])) {
    $error = $job->info()['status']['errorResult']['message'];
    printf('Error running job: %s' . PHP_EOL, $error);
} else {
    print('Table copied successfully' . PHP_EOL);
}
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set source_table_id to the ID of the original table.
# source_table_id = "your-project.source_dataset.source_table"

# TODO(developer): Set destination_table_id to the ID of the destination table.
# destination_table_id = "your-project.destination_dataset.destination_table"

job = client.copy_table(source_table_id, destination_table_id)
job.result()  # Wait for the job to complete.

print("A copy of the table created.")
Limitations
Copying a table across regions is subject to the following limitations:
- You can't copy a table by using the Google Cloud console or the TABLE COPY DDL statement.
- You can't copy a table if there are any policy tags on the source table.
- You can't copy a table if the source table is larger than 20 physical TiB. See get information about tables for the source table's physical size. Additionally, copying source tables that are larger than 1 physical TiB across regions might need multiple retries to copy successfully.
- You can't copy IAM policies associated with the tables. You can apply the same policies to the destination after the copy is completed.
- If the copy operation overwrites an existing table, tags on the existing table are removed.
- You can't copy multiple source tables into a single destination table.
- You can't copy tables in append mode. If you use write_empty mode, the destination table must not exist.
- Time travel information is not copied to the destination region.
- When you copy a table clone or snapshot to a new region, a full copy of the table is created. This incurs additional storage costs.
View current quota usage
You can view your current usage of query, load, extract, or copy jobs by running an INFORMATION_SCHEMA query to view metadata about the jobs run over a specified time period. You can compare your current usage against the quota limit to determine your quota usage for a particular type of job. The following example query uses the INFORMATION_SCHEMA.JOBS_BY_PROJECT view to list the number of query, load, extract, and copy jobs by project:
SELECT
  sum(case when job_type = "QUERY" then 1 else 0 end) as QRY_CNT,
  sum(case when job_type = "LOAD" then 1 else 0 end) as LOAD_CNT,
  sum(case when job_type = "EXTRACT" then 1 else 0 end) as EXT_CNT,
  sum(case when job_type = "COPY" then 1 else 0 end) as CPY_CNT
FROM `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE date(creation_time) = CURRENT_DATE()
The INFORMATION_SCHEMA view does not display cross-region copy jobs.

Maximum number of copy jobs per day per project quota errors
BigQuery returns this error when the number of copy jobs running in a project has exceeded the daily limit. To learn more about the limit for copy jobs per day, see Copy jobs.
Error message
Your project exceeded quota for copies per project
Diagnosis
If you'd like to gather more data about where the copy jobs are coming from, you can try the following:
If your copy jobs are located in a single region or only a few regions, you can try querying the INFORMATION_SCHEMA.JOBS table for those specific regions. For example:

SELECT
  creation_time,
  job_id,
  user_email,
  destination_table.project_id,
  destination_table.dataset_id,
  destination_table.table_id
FROM `PROJECT_ID`.`region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE creation_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 2 DAY)
  AND CURRENT_TIMESTAMP()
  AND job_type = "COPY"
ORDER BY creation_time DESC
You can also adjust the time interval depending on the time range you're interested in.
To see all copy jobs in all regions, you can use the following filter in Cloud Logging:

resource.type="bigquery_resource"
protoPayload.methodName="jobservice.insert"
protoPayload.serviceData.jobInsertRequest.resource.jobConfiguration.tableCopy:*
Resolution
- If the goal of the frequent copy operations is to create a snapshot of data, consider using table snapshots instead (a brief sketch follows this list). Table snapshots are a cheaper and faster alternative to copying full tables.
- You can request a quota increase by contacting support or sales. It might take several days to review and process the request. We recommend stating the priority, use case, and the project ID in the request.
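As a hedged illustration of the snapshot alternative, the following Python sketch creates a table snapshot with the google-cloud-bigquery client by setting a copy job's operation type to SNAPSHOT; the table IDs are placeholders, not values from this guide:

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder IDs; replace with your own.
source_table_id = "your-project.mydataset.mytable"
snapshot_table_id = "your-project.mydataset.mytable_snapshot"

# A snapshot is a lightweight, read-only copy produced by a copy job
# whose operation type is SNAPSHOT instead of a full COPY.
job_config = bigquery.CopyJobConfig(
    operation_type=bigquery.job.OperationType.SNAPSHOT
)
job = client.copy_table(source_table_id, snapshot_table_id, job_config=job_config)
job.result()  # Wait for the snapshot job to complete.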
Delete tables
You can delete a table in the following ways:
- Using the Google Cloud console.
- Using a data definition language (DDL) DROP TABLE statement.
- Using the bq command-line tool's bq rm command.
- Calling the tables.delete API method.
- Using the client libraries.
To delete all of the tables in the dataset, delete the dataset.
When you delete a table, any data in the table is also deleted. To automatically delete tables after a specified period of time, set the default table expiration for the dataset or set the expiration time when you create the table.
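For example, the following Python sketch sets an expiration time on an existing table with the google-cloud-bigquery client; the table ID is a placeholder:

import datetime

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table ID; replace with your own.
table = client.get_table("your-project.mydataset.mytable")

# Expire the table seven days from now; BigQuery deletes it automatically.
table.expires = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=7)
client.update_table(table, ["expires"])  # Make an API request.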
Deleting a table also deletes any permissions associated with this table. When you recreate a deleted table, you must also manually reconfigure any access permissions previously associated with it.
Required roles
To get the permissions that you need to delete a table, ask your administrator to grant you the Data Editor (roles/bigquery.dataEditor) IAM role on the dataset. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to delete a table. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to delete a table:
bigquery.tables.delete
bigquery.tables.get
You might also be able to get these permissions with custom roles or other predefined roles.
Delete a table
To delete a table:
Console
In the left pane, click Explorer:

In the Explorer pane, expand your project, click Datasets, and then select a dataset.

Click Overview > Tables, and then select a table.

In the details pane, click Delete.
Type "delete" in the dialog, and then click Delete to confirm.
SQL
Use the DROP TABLE statement. The following example deletes a table named mytable:
In the Google Cloud console, go to the BigQuery page.
In the query editor, enter the following statement:
DROP TABLE mydataset.mytable;
Click Run.
For more information about how to run queries, see Run an interactive query.
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Use the bq rm command with the --table flag (or -t shortcut) to delete a table. When you use the bq command-line tool to remove a table, you must confirm the action. You can use the --force flag (or -f shortcut) to skip confirmation.

If the table is in a dataset in a project other than your default project, add the project ID to the dataset name in the following format: project_id:dataset.

bq rm \
-f \
-t \
project_id:dataset.table
Replace the following:
project_id: your project ID
dataset: the name of the dataset that contains the table
table: the name of the table that you're deleting
Examples:
To delete the mytable table from the mydataset dataset, enter the following command. The mydataset dataset is in your default project.

bq rm -t mydataset.mytable

To delete the mytable table from the mydataset dataset, enter the following command. The mydataset dataset is in the myotherproject project, not your default project.

bq rm -t myotherproject:mydataset.mytable

To delete the mytable table from the mydataset dataset, enter the following command. The mydataset dataset is in your default project. The command uses the -f shortcut to bypass confirmation.

bq rm -f -t mydataset.mytable

Note: You can enter the bq ls dataset command in the bq command-line tool to confirm that a table was removed from a dataset.
API
Call the tables.delete API method and specify the table to delete using the tableId parameter.
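For illustration, the following minimal Python sketch calls tables.delete through the REST API using the google-api-python-client library (an assumption; this guide doesn't prescribe a client). The project, dataset, and table IDs are placeholders:

from googleapiclient.discovery import build

# Build a BigQuery v2 API client (authenticates with Application Default Credentials).
service = build("bigquery", "v2")

# tables.delete returns an empty response body on success.
service.tables().delete(
    projectId="my-project", datasetId="mydataset", tableId="mytable"
).execute()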
C#
Before trying this sample, follow the C# setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery C# API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryDeleteTable
{
    public void DeleteTable(
        string projectId = "your-project-id",
        string datasetId = "your_dataset_id",
        string tableId = "your_table_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        client.DeleteTable(datasetId, tableId);
        Console.WriteLine($"Table {tableId} deleted.");
    }
}
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// deleteTable demonstrates deletion of a BigQuery table.
func deleteTable(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	table := client.Dataset(datasetID).Table(tableID)
	if err := table.Delete(ctx); err != nil {
		return err
	}
	return nil
}
Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.TableId;

public class DeleteTable {

  public static void runDeleteTable() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    deleteTable(datasetName, tableName);
  }

  public static void deleteTable(String datasetName, String tableName) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      boolean success = bigquery.delete(TableId.of(datasetName, tableName));
      if (success) {
        System.out.println("Table deleted successfully");
      } else {
        System.out.println("Table was not found");
      }
    } catch (BigQueryException e) {
      System.out.println("Table was not deleted. \n" + e.toString());
    }
  }
}
Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function deleteTable() {
  // Deletes "my_table" from "my_dataset".

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // const tableId = "my_table";

  // Delete the table
  await bigquery.dataset(datasetId).table(tableId).delete();

  console.log(`Table ${tableId} deleted.`);
}
PHP
Before trying this sample, follow the PHP setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery PHP API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\BigQuery\BigQueryClient;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $tableId = 'The BigQuery table ID';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$table = $dataset->table($tableId);
$table->delete();
printf('Deleted table %s.%s' . PHP_EOL, $datasetId, $tableId);
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the table to fetch.
# table_id = 'your-project.your_dataset.your_table'

# If the table does not exist, delete_table raises
# google.api_core.exceptions.NotFound unless not_found_ok is True.
client.delete_table(table_id, not_found_ok=True)  # Make an API request.
print("Deleted table '{}'.".format(table_id))
Ruby
Before trying this sample, follow the Ruby setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Ruby API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
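The Ruby sample is missing from this page; as a minimal sketch, assuming the google-cloud-bigquery gem's Dataset#table and Table#delete methods, deleting a table looks like the following (the dataset and table IDs are placeholders):

require "google/cloud/bigquery"

def delete_table dataset_id = "my_dataset_id", table_id = "my_table_id"
  bigquery = Google::Cloud::Bigquery.new
  dataset  = bigquery.dataset dataset_id
  table    = dataset.table table_id

  # Deletes the table; raises if the table does not exist.
  table.delete

  puts "Table #{table_id} deleted."
end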
Restore deleted tables
To learn how to restore or undelete deleted tables, see Restore deleted tables.
Table security
To control access to tables in BigQuery, see Control access to resources with IAM.
What's next
- For more information about creating and using tables, see Creating and using tables.
- For more information about handling data, see Working with table data.
- For more information about specifying table schemas, see Specifying a schema.
- For more information about modifying table schemas, see Modifying table schemas.
- For more information about datasets, see Introduction to datasets.
- For more information about views, see Introduction to views.