bq command-line tool reference

This document describes the syntax, commands, flags, and arguments for bq, the Python-based command-line tool for BigQuery.

For a tutorial on using the bq command-line tool, see Load and query data with the bq tool.

Ways to use the bq command-line tool

You can enter bq command-line tool commands in Cloud Shell, either from the Google Cloud console or from a local installation of the Google Cloud CLI.

Command format

The bq command-line tool uses the following format:

bq COMMAND [FLAGS] [ARGUMENTS]

Some flags can be used with multiple bq command-line tool commands; these flags are described in the Global flags section.

Other flags are command-specific; they can only be used with a particular bq command-line tool command. The command-specific flags are described in the command sections.

Specifying values for flags

When you specify a value for a flag, the equals sign (=) is optional. For example, the following two commands are equivalent:

bq ls --format prettyjson myDataset
bq ls --format=prettyjson myDataset

This document uses the equals sign for clarity.

Some bq command-line tool flags are boolean; you can set the flag's value to either true or false. The bq command-line tool accepts the following formats for setting boolean flags.

Value   Format              Example
true    --FLAGNAME=true     --debug_mode=true
true    --FLAGNAME          --debug_mode
false   --FLAGNAME=false    --debug_mode=false
false   --noFLAGNAME        --nodebug_mode

This document uses the --FLAGNAME=VALUE format for boolean flags.

All boolean flags are optional; if a boolean flag is not present, then BigQuery uses the flag's default value.

Specifying BigQuery resources in arguments

The format for specifying a resource depends on the context; in some cases the separator between the project and dataset is a colon (:) and in some cases, it is a period (.). The following table describes how to specify a BigQuery table in different contexts.

Context                Format                   Example
bq command-line tool   PROJECT:DATASET.TABLE    myProject:myDataset.myTable
GoogleSQL query        PROJECT.DATASET.TABLE    myProject.myDataset.myTable
Legacy SQL query       PROJECT:DATASET.TABLE    myProject:myDataset.myTable

If you don't specify a project, then BigQuery uses the current project. For example, if the current project is myProject, then BigQuery interprets myDataset.myTable as myProject:myDataset.myTable (or myProject.myDataset.myTable).
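When scripting around bq, the colon and period separators can be handled with ordinary shell string operations. The following sketch is independent of bq itself (the identifier is a made-up example) and splits a bq-style table identifier into its parts:

```shell
# Split a bq-style table identifier (PROJECT:DATASET.TABLE) into parts
# using POSIX parameter expansion.
id="myProject:myDataset.myTable"

project="${id%%:*}"    # text before the first colon
rest="${id#*:}"        # text after the first colon
dataset="${rest%%.*}"  # text before the first period
table="${rest#*.}"     # text after the first period

echo "$project $dataset $table"    # prints: myProject myDataset myTable
```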

Some resource identifiers must be quoted using backticks (`). If your resource identifier begins with a letter or underscore character, and contains only characters that are letters, numbers, and underscores, then you don't need to quote it. However, if your resource identifier contains other types of characters, or reserved keywords, you need to surround the identifier (or the part of the identifier with the special characters or reserved keywords) with backticks. For more information, see Identifiers.
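As a rough sketch of the character rule above (this helper is an illustration, not part of bq, and it does not check for reserved keywords):

```shell
# Returns 0 if the identifier starts with a letter or underscore and
# contains only letters, digits, and underscores (no backticks needed);
# returns 1 otherwise. Reserved keywords would still require backticks.
needs_no_quoting() {
  printf '%s\n' "$1" | grep -Eq '^[A-Za-z_][A-Za-z0-9_]*$'
}

needs_no_quoting "my_table1" && echo "bare identifier is fine"
needs_no_quoting "my-table"  || echo "needs backticks"
```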

How to run commands

Place any global flags before the bq command, and then include command-specific flags. You can include multiple global or command-specific flags. For example:

bq --location=us mk --reservation --project_id=project reservation_name

You can specify command arguments in the following ways:

  • --FLAG ARGUMENT (as shown in the previous examples)
  • --FLAG=ARGUMENT
  • --FLAG='ARGUMENT'
  • --FLAG="ARGUMENT"
  • --FLAG 'ARGUMENT'
  • --FLAG "ARGUMENT"

Replace the following:

  • FLAG: a global or command-specific flag
  • ARGUMENT: the flag's argument

Some commands require the use of quotes around arguments. If quotes are required, either single or double quotes are acceptable. Arguments that require quotes are typically values that contain spaces, commas, or other special characters. If your argument contains a BigQuery resource, be sure to follow the rules for specifying resource names in commands.
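The quoting behavior described above comes from the shell, not from bq. This sketch uses a plain shell function in place of a bq command to show that single and double quotes both keep a value with spaces together as one argument:

```shell
# Print the number of arguments received; quoting determines how the
# shell splits a value that contains spaces.
count_args() { echo "$#"; }

count_args 'hello world'    # single quotes: one argument
count_args "hello world"    # double quotes: one argument
count_args hello world      # unquoted: two arguments
```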

Note: The following examples use the --nouse_legacy_sql flag. This flag is required to run a GoogleSQL query from the command line unless you set a default value in .bigqueryrc or set the configuration setting default_sql_dialect_option to 'default_legacy_sql' at the project or organization level.

This example shows how to run a GoogleSQL query on the command line:

bq query --nouse_legacy_sql \
  'SELECT COUNT(*) FROM `bigquery-public-data`.samples.shakespeare'

Flags with boolean values can be specified without an argument. If you specify true or false, then you must use the format FLAG=ARGUMENT.

For example, this command specifies false for the boolean flag --use_legacy_sql by placing no at the front of the flag:

bq query --nouse_legacy_sql \
  'SELECT COUNT(*) FROM `bigquery-public-data`.samples.shakespeare'

Alternatively, to specify false as the flag's argument, you can enter the following:

bq query --use_legacy_sql=false \
  'SELECT COUNT(*) FROM `bigquery-public-data`.samples.shakespeare'

Run commands in a script

You can run the bq command-line tool in a script, as you would run a Google Cloud CLI command. The following is an example of gcloud and bq commands in a bash script:

#!/bin/bash
gcloud config set project myProject
bq query --use_legacy_sql=false --destination_table=myDataset.myTable \
  'SELECT word, SUM(word_count) AS count
   FROM `bigquery-public-data`.samples.shakespeare
   WHERE word LIKE "%raisin%"
   GROUP BY word'

Use a service account

You can use a service account to make authorized API calls or run query jobs on your behalf. To use a service account in the bq command-line tool, authorize access to Google Cloud from the service account. For more information, see gcloud auth activate-service-account.

To start running bq commands using service account impersonation, run the following command:

gcloud config set auth/impersonate_service_account SERVICE_ACCOUNT_NAME

Replace SERVICE_ACCOUNT_NAME with your service account name.

bq commands that you run now use the service account credentials.

To stop running bq commands from a service account, run the following command:

gcloud config unset auth/impersonate_service_account

Set default values for command-line flags

You can set default values for command-line flags by including them in the bq command-line tool's configuration file, .bigqueryrc. Before you configure your default options, you must first create a .bigqueryrc file. You can use your preferred text editor to create the file. After you create the .bigqueryrc file, you can specify the path to the file using the --bigqueryrc global flag.

If the --bigqueryrc flag is not specified, then the BIGQUERYRC environment variable is used. If that is not specified, then the default path ~/.bigqueryrc (that is, $HOME/.bigqueryrc) is used.
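As a sketch of this lookup order (the file contents and paths here are examples only), you could create a flags file anywhere and point the BIGQUERYRC environment variable at it:

```shell
# Create a flags file in a temporary location and point BIGQUERYRC at it.
# With BIGQUERYRC set, bq would read this file instead of ~/.bigqueryrc.
tmpdir="$(mktemp -d)"
cat > "$tmpdir/bigqueryrc" <<'EOF'
--format=prettyjson
--location=US
EOF

export BIGQUERYRC="$tmpdir/bigqueryrc"
grep -c '^--' "$BIGQUERYRC"    # prints: 2
```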

Note: Creating a .bigqueryrc file using the bq init command is not recommended.

How to add flags to .bigqueryrc

To add default values for command-line flags to .bigqueryrc:

  • Place global flags at the top of the file without a header.
  • For command-specific flags, enter the command name (in brackets) and add the command-specific flags (one per line) after the command name.

For example:

--apilog=stdout
--format=prettyjson
--location=US

[query]
--use_legacy_sql=false
--max_rows=100
--maximum_bytes_billed=10000000

[load]
--destination_kms_key=projects/myproject/locations/mylocation/keyRings/myRing/cryptoKeys/myKey

The preceding example sets default values for the global flags --apilog, --format, and --location; for the query command flags --use_legacy_sql, --max_rows, and --maximum_bytes_billed; and for the load command flag --destination_kms_key.

CLI help

You can get help with the bq command-line tool by running the following commands:

Description                           Help command format   Example
Installed version                     bq version            bq version
List of all commands with examples    bq help               bq help
Description of global flags           bq --help             bq --help
Description of a particular command   bq help COMMAND       bq help mk

Troubleshooting CLI commands

To log requests sent and received:

Add the --apilog=PATH_TO_FILE flag to save a log of operations to a local file. Replace PATH_TO_FILE with the location where you want to save the log. The bq command-line tool works by making standard REST-based API calls, which can be useful to see when troubleshooting. It's also useful to attach this log when you're reporting issues to Cloud Customer Care.

Using - or stdout instead of a path prints the log to the Google Cloud console. Setting --apilog to stderr outputs to standard error. To log more requests, use the --httplib2_debuglevel=LOG_LEVEL flag. A higher LOG_LEVEL logs more information about the HTTP requests.

To troubleshoot errors:

Enter the --format=prettyjson flag when you get a job's status or when you view detailed information about resources such as tables and datasets. Using this flag outputs the response in JSON format, including the reason property. You can use the reason property to find error messages.

For more information about errors when you run a command, use the --debug_mode flag.

Global flags

You can use the following flags with any bq command, where applicable:

--api=ENDPOINT
Specifies the API endpoint to call. The default value is https://www.googleapis.com.
--api_version=VERSION
Specifies the API version to use. The default is v2.
--apilog=FILE

Logs all API requests and responses to the file specified by FILE. Possible values are the following:

  • the path to a file - logs to the specified file
  • stdout - logs to standard output
  • stderr - logs to standard error
  • false - API requests and responses are not logged (default)
--use_google_auth={true|false}

If set to true, enables authentication using the Google Auth libraries. The default value is true.

--bigqueryrc=PATH

Specifies the path to the bq command-line tool configuration file. If you don't specify the --bigqueryrc flag, then the command uses the BIGQUERYRC environment variable. If the environment variable is not set, then the default path ~/.bigqueryrc is used. For more information, see Setting default values for command-line flags.

--ca_certificates_file=PATH

Specifies the location of your Certificate Authority Service (CA) file.

--dataset_id=DATASET_ID

Specifies the default dataset to use with the command. This flag is ignored when not applicable. You can specify the DATASET_ID argument using the format PROJECT:DATASET or DATASET. If the PROJECT part is missing, then the default project is used. You can override the default project setting by specifying the --project_id flag.

--debug_mode={true|false}

If set to true, shows tracebacks on Python exceptions. The default value is false.

--disable_ssl_validation={true|false}

If set to true, disables HTTPS certificate validation. The default value is false.

--discovery_file=PATH

Specifies the JSON file to read for discovery.

--enable_gdrive={true|false}

If set to false, requests a new OAuth token without the Google Drive scope. The default value is true; requests a new OAuth token with the Drive scope. To set this flag to false when authenticated using a user account, the --use_google_auth flag must be set to false.

--fingerprint_job_id={true|false}

To use a job ID that is derived from a fingerprint of the job configuration, set to true. This prevents the same job from running multiple times accidentally. The default value is false.

--format=FORMAT

Specifies the format of the command's output. Use one of the following values:

  • pretty: formatted table output
  • sparse: simpler table output
  • prettyjson: easy-to-read JSON format
  • json: maximally compact JSON
  • csv: csv format with header

pretty, sparse, and prettyjson are intended to be human-readable. json and csv are intended to be used by another program. If none is specified, then the command produces no output. If the --format flag is absent, then an appropriate output format is chosen based on the command.

--headless={true|false}

To run the bq session without user interaction, set to true. For example, debug_mode does not break into the debugger, and the frequency of informational printing is lowered. The default value is false.

--httplib2_debuglevel=DEBUG_LEVEL

Specifies whether to show HTTP debugging information. If DEBUG_LEVEL is greater than 0, then the command logs HTTP server requests and responses to stderr, in addition to error messages. If DEBUG_LEVEL is not greater than 0, or if the --httplib2_debuglevel flag is not used, then only error messages are provided.

For example:

--httplib2_debuglevel=1

Note: Multi-level debugging is not supported for this flag, so you can set DEBUG_LEVEL to any positive number.
--job_id=JOB_ID

Specifies a job identifier for a new job. This flag applies only to commands that create jobs: cp, extract, load, and query. If you don't use the --job_id flag, then the commands generate a unique job identifier. For more information, see Running jobs programmatically.

--job_property=KEY:VALUE

A key-value pair to include in the properties field of the job configuration. Repeat this flag to specify additional properties.

--location=LOCATION

A string corresponding to a region or multi-region location. The --location flag is required for the bq cancel command and for the bq show command when you use the --jobs flag to show information about jobs. The --location flag is optional for the following commands:

  • query
  • cp
  • load
  • extract
  • partition
  • update
  • wait
  • mk when you use the --dataset, --reservation, --capacity_commitment, or --reservation_assignment flags
  • ls when you use the --reservation, --capacity_commitment, or --reservation_assignment flags

All other commands ignore the--location flag.

Note: The --location flag was introduced in bq version 2.0.29. To verify the version of the bq command-line tool, enter bq version.
--max_rows_per_request=MAX_ROWS

An integer that specifies the maximum number of rows to return per read.

--project_id=PROJECT

Specifies the project to use for commands.

--proxy_address=PROXY

Specifies the name or IP address of the proxy host to use for connecting to Google Cloud.

--proxy_password=PASSWORD

Specifies the password to use when authenticating with the proxy host.

--proxy_port=PORT

Specifies the port number to use to connect to the proxy host.

--proxy_username=USERNAME

Specifies the username to use when authenticating with the proxy host.

--quiet={true|false} or -q={true|false}

To suppress status updates while jobs are running, set to true. The default value is false.

--synchronous_mode={true|false} or -sync={true|false}

To create the job and immediately return, with a successful completion status as the error code, set to false. If set to true, then the command waits for the job to complete before returning, and returns the job completion status as the error code. The default value is true.

--trace=token:TOKEN

Specifies a tracing token to include in API requests.

--use_regional_endpoints={true|false}

In preview. To connect to a regional endpoint, set the --use_regional_endpoints flag to true and the --location flag to the region that you want to connect to. The default value is false.

Deprecated global flags

The following global flag for specifying bq command-line tool flags from a file is deprecated. To specify flags from a file, use the --bigqueryrc flag.

--flagfile=PATH

When specified, flag definitions from the supplied file are inserted into the bq command-line tool. The default value is ''. For more information, see Setting default values for command-line flags.

Commands

The following sections describe the bq command-line tool commands, along with their command-specific flags and arguments.

bq add-iam-policy-binding

Use the bq add-iam-policy-binding command to retrieve the Identity and Access Management (IAM) policy for a table or view and add a binding to the policy, in one step.

This command is an alternative to the following three-step process:

  1. Using the bq get-iam-policy command to retrieve the policy file (in JSON format).
  2. Editing the policy file.
  3. Using the bq set-iam-policy command to update the policy with a new binding.

Note: bq add-iam-policy-binding does not support datasets. To modify access to a dataset, see Grant access to a dataset.

Synopsis

bq add-iam-policy-binding [FLAGS] --member=MEMBER_TYPE:MEMBER --role=ROLE [--table] RESOURCE

Example

bq add-iam-policy-binding --member=user:myAccount@gmail.com \
  --role=roles/bigquery.dataViewer myDataset.myTable

Flags and arguments

The bq add-iam-policy-binding command uses the following flags and arguments:

--member=MEMBER_TYPE:MEMBER

Required. Use the --member flag to specify the member part of the IAM policy binding. The --member flag is required along with the --role flag. One combination of --member and --role flags equals one binding.

The MEMBER_TYPE value specifies the type of member in the IAM policy binding. Use one of the following values:

  • user
  • serviceAccount
  • group
  • domain

The MEMBER value specifies the email address or domain of the member in the IAM policy binding.

--role=ROLE

Required. Specifies the role part of the IAM policy binding. The --role flag is required along with the --member flag. One combination of --member and --role flags equals one binding.

--table={true|false}

To return an error if the RESOURCE argument is not a table or view identifier, set the --table flag to true. The default value is false. This flag is supported for consistency with other commands.

RESOURCE

The table or view whose policy you want to add to.

For more information, see the IAM policy reference.

bq cancel

Use the bq cancel command to cancel BigQuery jobs.

Synopsis

bq [--synchronous_mode=false] cancel JOB_ID

Examples

bq cancel bqjob_12345
bq --synchronous_mode=false cancel bqjob_12345

Flags and arguments

Thebq cancel command uses the following flags and arguments:

--synchronous_mode=false
If you don't want to wait for the bq cancel command to complete, set the global --synchronous_mode flag to false. The default is true.
JOB_ID
The job you want to cancel.

For more information about using the bq cancel command, see Managing jobs.

bq cp

Use the bq cp command to copy a table, to create a table clone, or to create a table snapshot.

Synopsis

bq cp [FLAGS] SOURCE_TABLE DESTINATION_TABLE

Example

bq cp myDataset.myTable myDataset.myTableCopy

Flags and arguments

The bq cp command uses the following flags and arguments:

--append_table={true|false} or -a={true|false}

To append a table to an existing table, set to true. The default value is false.

You can't use the flag settings--append_table=true and--clone=true at the same time.

--clone={true|false}

To create a table clone, set to true. The base table can be a standard table, a table clone, or a table snapshot. The destination table is a table clone. The default is false; if neither --clone=true nor --snapshot=true is specified, then the destination table is the same type of table as the base table. Requires the --no_clobber flag.

You can't use the flag settings--append_table=true and--clone=true at the same time.

--destination_kms_key=KEY

Specifies a Cloud KMSkey resource IDfor encrypting the destination table data.

For example:

--destination_kms_key=projects/myProject/locations/global/keyRings/myKeyRing/cryptoKeys/myKey

--expiration=SECONDS

The number of seconds until a table snapshot expires. If not included, the table snapshot expiration is set to the default expiration of the dataset containing the new table snapshot. Use with the --snapshot flag.

--force={true|false} or-f={true|false}

To overwrite the destination table, if it exists, without prompting, set to true. The default value is false; if the destination table exists, then the command prompts for confirmation before overwriting.

--no_clobber={true|false} or-n={true|false}

To disallow overwriting the destination table, if it exists, set to true. The default value is false; if the destination table exists, then it is overwritten.

--restore={true|false}

This flag is being deprecated. To create a writeable table from a table snapshot, use the bq cp command or the bq cp --clone command.

--snapshot={true|false}

To create a table snapshot of the table that's specified in the SOURCE_TABLE argument, set to true. The base table can be a standard table, a table clone, or another table snapshot. The default is false; if neither --clone=true nor --snapshot=true is specified, then the destination table is the same type of table as the base table. Requires the --no_clobber flag.

SOURCE_TABLE

The table that you want to copy.

DESTINATION_TABLE

The table that you want to copy to.

For more information about using thecp command, see the following:

bq extract

Use thebq extract command to export table data to Cloud Storage.

Synopsis

bq extract [FLAGS] RESOURCE DESTINATION

Examples

bq extract --compression=GZIP --destination_format=CSV --field_delimiter=tab \
  --print_header=false myDataset.myTable gs://my-bucket/myFile.csv.gzip
bq extract --destination_format=CSV --field_delimiter='|' myDataset.myTable \
  gs://myBucket/myFile.csv

Flags and arguments

The bq extract command uses the following flags and arguments:

--compression=COMPRESSION_TYPE

Specifies the type of compression to use for exported files. Possible valuesare the following:

  • GZIP
  • DEFLATE
  • SNAPPY
  • NONE

The default value is NONE.

For information about which formats are supported for each compression type, see Export formats and compression types.

--destination_format=FORMAT

Specifies the format for the exported data. Possible values are the following:

  • CSV
  • NEWLINE_DELIMITED_JSON
  • AVRO
  • PARQUET

The default value is CSV.

--field_delimiter=DELIMITER

For CSV exports, specifies the character that marks the boundary between columns in the output file. The delimiter can be any ISO-8859-1 single-byte character. You can use \t or tab to specify tab delimiters.

--print_header={true|false}

To suppress printing header rows for formats that have headers, set to false. The default is true; header rows are included.

RESOURCE

The table that you are exporting from.

DESTINATION

The storage location that receives the exported data.

For more information about using the bq extract command, see Exporting table data.

bq get-iam-policy

Use the bq get-iam-policy command to retrieve the IAM policy for a resource and print it to stdout. The resource can be a table, a view, or a slot reservation. The policy is in JSON format.

Note: bq get-iam-policy does not support datasets. To get the IAM policy of a dataset, see View the access policy of a dataset.

Synopsis

bq get-iam-policy [FLAGS] RESOURCE

Examples

bq get-iam-policy myDataset.myTable
bq get-iam-policy --reservation myReservation

Flags and arguments

The bq get-iam-policy command uses the following flags and arguments:

--table={true|false} or -t={true|false}
To return an error if RESOURCE is not a table or view identifier, set the --table flag to true. The default value is false. This flag is supported for consistency with other commands.
--reservation={true|false}
To get the IAM policy of a reservation, set to true (Preview). The default value is false. When this flag is used, RESOURCE is treated as a reservation identifier. The reservation can have optional project and location prefixes: myProject:myLocation.myReservation.
RESOURCE
The table or view whose policy you want to get.

For more information about the bq get-iam-policy command, see Control access to resources with IAM.

bq head

Use the bq head command to display the specified rows and columns of a table. By default, it displays all columns of the first 100 rows.

Synopsis

bq head [FLAGS] [TABLE]

Example

bq head --max_rows=10 --start_row=50 --selected_fields=field1,field3 \
  myDataset.myTable

Flags and arguments

The bq head command uses the following flags and arguments:

--job=JOB or -j=JOB
To read the results of a query job, specify this flag with a valid job ID.
--max_rows=MAX or -n=MAX
An integer that indicates the maximum number of rows to print when showing table data. The default value is 100.
--selected_fields=COLUMN_NAMES or -c=COLUMN_NAMES
A comma-separated list that specifies a subset of fields (including nested and repeated fields) to return when showing table data. If this flag is not specified, then all columns are returned.
--start_row=START_ROW or -s=START_ROW
An integer that specifies the number of rows to skip before showing table data. The default value is 0; the table data starts at the first row.
--table={true|false} or -t={true|false}
To return an error if the command argument is not a table or view, set to true. The default value is false. This flag is supported for consistency with other commands.
TABLE
The table whose data you want to retrieve.

For more information about using the bq head command, see Managing table data.

bq help

Use the bq help command to display bq command-line tool documentation within the tool.

Synopsis

bq help [COMMAND]

Flags and arguments

The bq help command uses the following flags and arguments:

COMMAND
Specifies a particular bq command-line tool command that you want to get online help for.

bq insert

Use the bq insert command to insert rows of newline-delimited, JSON-formatted data into a table from a file using the streaming insert. Data types are converted to match the column types of the destination table. This command is intended for testing purposes only. To stream data into BigQuery, use the insertAll API method.

Synopsis

bq insert [FLAGS] TABLE FILE

Examples

bq insert --ignore_unknown_values --template_suffix=_insert myDataset.myTable /tmp/myData.json
echo '{"a":1, "b":2}' | bq insert myDataset.myTable

Flags and arguments

The bq insert command uses the following flags and arguments:

--ignore_unknown_values={true|false} or -i={true|false}
When set to true, BigQuery ignores any key-value pairs that don't match the table's schema, and inserts the row with the data that does match the schema. When set to false, rows with data that doesn't match the table's schema are not inserted. The default is false.
--skip_invalid_rows={true|false} or -s={true|false}
When set to true, BigQuery attempts to insert any valid rows, even if invalid rows are present. When set to false, the command fails if any invalid rows are present. The default is false.
--template_suffix=SUFFIX or -x=SUFFIX
When specified, treat the destination table TABLE as a base template, and insert the rows into an instance table named {destination}{templateSuffix}. BigQuery creates the instance table using the schema of the base template.
TABLE
The table that you want to insert data into.
FILE
The file containing the data that you want to insert.

For more information about using the bq insert command, see Streaming data into BigQuery.

bq load

Use the bq load command to load data into a table.

Synopsis

bq load [FLAGS] DESTINATION_TABLE SOURCE_DATA [SCHEMA]

Example

bq load myDataset.newTable gs://mybucket/info.csv ./info_schema.json

Flags and arguments

The bq load command uses the following flags and arguments:

--allow_jagged_rows={true|false}
To allow missing trailing optional columns in CSV data, set to true.
--preserve_ascii_control_characters={true|false}
To allow embedded ASCII control characters in CSV data, set to true.
--allow_quoted_newlines={true|false}
To allow quoted newlines in CSV data, set to true.
--autodetect={true|false}
To enable schema auto-detection for CSV and JSON data, set to true. The default is false. If --autodetect is false, and no schema is specified by using the --schema flag, and the destination table exists, then the schema of the destination table is used.
--clustering_fields=COLUMNS
A comma-separated list of up to four column names that specifies the fields to use for table clustering.
--column_name_character_map=SCOPE
Defines the scope and handling of characters in column names, with the option of enabling flexible column names. Requires the --autodetect option for CSV files. For a list of possible values, see load_option_list.
--destination_kms_key=KEY
Specifies a Cloud KMS key resource ID for encrypting the destination table data.
--encoding=ENCODING_TYPE or -E=ENCODING_TYPE
The character encoding used in the data. Use one of the following values:
  • ISO-8859-1 (also known as Latin-1)
  • UTF-8
--field_delimiter=DELIMITER or -F=DELIMITER
Specifies the character that marks the boundary between columns in the data. The delimiter can be any ISO-8859-1 single-byte character. You can use either \t or tab to specify tab delimiters.
--ignore_unknown_values={true|false}
When set to true for CSV and JSON files, rows with extra column values that don't match the table schema are loaded, but the extra columns are ignored. When set to true for Avro, Parquet, and ORC files, fields in the file schema that don't exist in the table schema are ignored and aren't loaded.
--json_extension=JSON_TYPE

Specifies the type of JSON file to load. Applies only to JSON files. Possible values are the following:

  • GEOJSON - newline-delimited GeoJSON file

To use this flag, the --source_format flag must be set to NEWLINE_DELIMITED_JSON.

For more information, see Loading newline-delimited GeoJSON files.

--max_bad_records=MAX

An integer that specifies the maximum number of bad records allowed before the entire job fails. The default value is 0. At most, five errors of any type are returned regardless of the --max_bad_records value. This flag applies for loading CSV, JSON, and Google Sheets data only.

--null_marker=STRING

An optional custom string that represents a NULL value in CSV data.

--projection_fields=PROPERTY_NAMES

If you set --source_format to DATASTORE_BACKUP, then this flag indicates which entity properties to load from a Datastore export. Specify the property names in a comma-separated list. Property names are case sensitive and must refer to top-level properties. You can also use this flag with Firestore exports.

--quote=CHARACTER

Specifies a quote character to surround fields in CSV data. The CHARACTER argument can be any one-byte character. The default value is double quote ("). To specify that there is no quote character, use an empty string "".

--replace={true|false}

To erase any existing data and schema when new data is loaded, set to true. Any Cloud KMS key is also removed, unless you specify the --destination_kms_key flag. The default value is false.

Equivalent to the WRITE_TRUNCATE value for JobConfigurationLoad.writeDisposition.

Note: You can use the TRUNCATE TABLE statement to remove all rows from a table without deleting the schema.
--schema={SCHEMA_FILE|SCHEMA}

Specifies either the path to a local JSON schema file or a comma-separated list of column definitions in the form FIELD:DATA_TYPE,FIELD:DATA_TYPE. If you use a schema file, don't add an extension to the filename.

For example:

--schema=/tmp/tabledef
--schema=Region:STRING,Quarter:STRING,Total_sales:INTEGER

If no schema is specified, and --autodetect is false, and the destination table exists, then the schema of the destination table is used.
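For reference, a schema file equivalent to the inline column definitions shown in the preceding examples might look like the following. This is a minimal sketch; the mode entries shown here are optional additions, not part of the inline form:

```json
[
  {"name": "Region", "type": "STRING", "mode": "NULLABLE"},
  {"name": "Quarter", "type": "STRING", "mode": "NULLABLE"},
  {"name": "Total_sales", "type": "INTEGER", "mode": "NULLABLE"}
]
```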

--schema_update_option=OPTION

When appending data to a table (in a load job or a query job), or when overwriting a table partition, specifies how to update the schema of the destination table. Use one of the following values:

  • ALLOW_FIELD_ADDITION: Allow new fields to be added
  • ALLOW_FIELD_RELAXATION: Allow relaxing REQUIRED fields to NULLABLE

Repeat this flag to specify multiple schema update options.

--skip_leading_rows=NUMBER_OF_ROWS

An integer that specifies the number of rows to skip at the beginning of the source file. The default is 0.

--file_set_spec_type=FILE_SET_SPEC_TYPE

Specifies how to interpret source URIs.

  • FILE_SYSTEM_MATCH: Expands source URIs by listing files from the object store. This is the default behavior if FileSetSpecType is not set.
  • NEW_LINE_DELIMITED_MANIFEST: Indicates that the provided URIs are newline-delimited manifest files, with one URI per line. Wildcard URIs are not supported in the manifest files, and all referenced data files must be in the same bucket as the manifest.

For example, if you have a source URI of "gs://bucket/path/file" and the file_set_spec_type is FILE_SYSTEM_MATCH, then the file is used directly as a data file. If the file_set_spec_type is NEW_LINE_DELIMITED_MANIFEST, then each line in the file is interpreted as a URI that points to a data file.
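A manifest file for NEW_LINE_DELIMITED_MANIFEST is plain text with one Cloud Storage URI per line. For example (the bucket and file names here are hypothetical, and note that wildcards are not allowed):

```
gs://my-bucket/data/part-0001.csv
gs://my-bucket/data/part-0002.csv
gs://my-bucket/data/part-0003.csv
```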

--source_format=FORMAT

The format of the source data. Use one of the following values:

  • CSV
  • NEWLINE_DELIMITED_JSON
  • AVRO
  • DATASTORE_BACKUP (use this value for Firestore exports)
  • PARQUET
  • ORC
--time_partitioning_expiration=SECONDS

An integer that specifies (in seconds) when a time-based partition should be deleted. The expiration time evaluates to the partition's UTC date plus the integer value. A negative number indicates no expiration.

--time_partitioning_field=COLUMN_NAME

Specifies the field that determines how to create a time-based partition. If time-based partitioning is enabled without this value, then the table is partitioned based on the load time.

--time_partitioning_type=INTERVAL

Enables time-based partitioning on a table and sets the partition type.Use one of the following values:

  • DAY
  • HOUR
  • MONTH
  • YEAR

The default partition type for time-based partitioning is DAY.
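
For example, the following command (all names are placeholders) loads CSV data into a table that is day-partitioned on the ts column, with each partition expiring after 30 days (2592000 seconds):

bq load --source_format=CSV --time_partitioning_type=DAY --time_partitioning_field=ts --time_partitioning_expiration=2592000 myDataset.myTable gs://mybucket/data.csv ts:TIMESTAMP,amount:NUMERIC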

--use_avro_logical_types={true|false}

If the --source_format flag is set to AVRO, then set this flag to true to convert logical types into their corresponding types (such as TIMESTAMP) instead of only using their raw types (such as INTEGER).

--decimal_target_types=DECIMAL_TYPE

Determines how to convert a Decimal logical type. Equivalent to JobConfigurationLoad.decimalTargetTypes. Repeat this flag to specify multiple target types.

--parquet_enum_as_string={true|false}

If the --source_format flag is set to PARQUET, and you want BigQuery to infer Parquet ENUM logical types as STRING values, then set this flag to true. The default is false.

--parquet_enable_list_inference={true|false}

If the --source_format flag is set to PARQUET, then this flag indicates whether to use schema inference for Parquet LIST logical types.

--reference_file_schema_uri=URI

Specifies the path to a reference file with the expected table schema for creating external tables. Equivalent to ExternalDataConfiguration.referenceFileSchemaUri. This flag is enabled for the Avro, ORC, and PARQUET formats.

DESTINATION_TABLE

The table that you want to load data into.

SOURCE_DATA

The Cloud Storage URI of the file that contains the data that you want to load.

SCHEMA

The schema for the destination table.

For more information about loading data from Cloud Storage using the bq load command, see the following:

For more information about loading data from a local source using the bq load command, see the following:

bq ls

Use the bq ls command to list objects in a collection.

Synopsis

bq ls [FLAGS] [RESOURCE]

Example

bq ls myDataset

Flags and arguments

The bq ls command uses the following flags and arguments:

--all={true|false} or -a={true|false}
To show all results, set to true. Shows jobs from all users or all datasets, including hidden ones. This flag is not needed when listing transfer configurations or transfer runs. The default value is false.
--capacity_commitment={true|false}

To list capacity commitments, set to true and use the --location flag to specify the location. For more information, see View purchased commitments.

For example:

bq ls --capacity_commitment=true --location='us'

--datasets={true|false} or -d={true|false}

To list datasets, set to true. The default value is false.

--filter="FILTER"

Filters the listed resources to match the FILTER argument.

For datasets, FILTER consists of one or more space-separated triples in the format labels.KEY:VALUE. If more than one triple is provided, then the command only returns datasets matching all of the triples (that is, the command uses the AND logical operator, not OR). If you want to specify more than one triple, then surround the FILTER value with quotation marks.

To filter based on dataset labels, use the keys and values that you appliedto your datasets.

For example:

 --filter "labels.department:marketing labels.team:sales"

For transfer configurations, use dataSourceIds as the key, and one of the following data sources as the value:

For example:

   --filter labels.dataSourceIds:dcm_dt

For transfer runs, use states as the key and one or more of the following transfer states as the value:

  • SUCCEEDED
  • FAILED
  • PENDING
  • RUNNING
  • CANCELLED

For example:

--filter="states:FAILED"

For jobs, use states as the key and one or more of the following job states as the value:

  • RUNNING
  • PENDING
  • DONE

For example:

bq ls --jobs --filter="states:RUNNING"
bq ls --jobs --filter="states:RUNNING,PENDING"
--jobs={true|false} or -j={true|false}
To list jobs, set to true. The default value is false. By default, you are limited to 100,000 results.
--max_creation_time=MAX_CREATION_TIME_MS
An integer that represents a Unix epoch timestamp in milliseconds. When specified with the --jobs flag, this flag lists only the jobs created before the timestamp.
--max_results=MAX_RESULTS or -n=MAX_RESULTS
An integer indicating the maximum number of results. The default value is 50 and the maximum value is 1000. If you have more than 1000 jobs, then you can use the --page_token flag to list all jobs using pagination.
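
For example, the following sketch pages through jobs 1000 at a time; the token value is a placeholder for the page token returned by the previous call:

bq ls --jobs=true --max_results=1000
bq ls --jobs=true --max_results=1000 --page_token=NEXT_PAGE_TOKEN
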
--min_creation_time=MIN_CREATION_TIME_MS
An integer that represents a Unix epoch timestamp in milliseconds. When specified with the --jobs flag, this flag lists only the jobs created after the timestamp.
--message_type=messageTypes:MESSAGE_TYPE

To list only transfer run log messages of a particular type, specify messageTypes:MESSAGE_TYPE. Possible values are the following:

  • INFO
  • WARNING
  • ERROR
--models={true|false} or -m={true|false}

To list BigQuery ML models, set to true. The default value is false.

--page_token=TOKEN or -k=TOKEN

Lists items starting from the specified page token.

--projects={true|false} or -p={true|false}

To show all projects, set to true. The default value is false.

--reservation={true|false}

To list all reservations for a given project and location, set to true. The default value is false. Use with the --project_id and --location flags.

For example:

bq ls --reservation=true --project_id=myProject --location=us

--reservation_assignment={true|false}

To list all reservation assignments for a given project and location, set to true. The default value is false. Use with the --project_id and --location flags.

--routines={true|false}

To list all routines in the specified dataset, set to true. The default value is false. Routines include persistent user-defined functions, table functions (Preview), and stored procedures.

--row_access_policies

When specified, lists all the row-level access policies on a table. Row-level access policies are used for row-level security. You must supply the table name in the format dataset.table.

--run_attempt=RUN_ATTEMPT

Use with the --transfer_run flag. To list all run attempts for the specified transfer run, set to RUN_ATTEMPT_UNSPECIFIED. To list only the latest run attempt, set to LATEST. The default is LATEST.

--transfer_config={true|false}

To list transfer configurations in the specified project and location, set to true. Use with the --transfer_location and --project_id flags. The default value is false.

--transfer_location=LOCATION

Lists transfer configurations in the specified location. You set the transfer location when the transfer is created.

--transfer_log={true|false}

Use with the --transfer_run flag. To list transfer log messages for the specified transfer run, set to true. The default value is false.

--transfer_run={true|false}

Lists the transfer runs for the specified transfer configuration.

For example:

bq ls --transfer_run=true projects/myProject/locations/us/transferConfigs/12345

RESOURCE

The collection whose objects you want to list. The resource can be a dataset, project, reservation, or transfer configuration.

For more information about using the bq ls command, see the following:

bq mk

Use the bq mk command to create a BigQuery resource.

Synopsis

bq mk TYPE_FLAG [OTHER FLAGS] [ARGS]

Flags and arguments

The bq mk command takes a type flag that specifies the type of resource to create, and additional flags that depend on the resource type.

TYPE_FLAG: Set one of the following flags to true. Your selection specifies the type of resource to create.

The bq mk command supports the following flag for all types of resources:

--force={true|false} or -f={true|false}
To ignore errors if a resource with the same name already exists, set to true. If the resource already exists, then the exit code is 0, but setting this flag to true does not cause the bq mk command to overwrite the resource. The default value is false.

The bq mk command supports additional flags, depending on the type of resource you are creating, as described in the following sections.

bq mk --capacity_commitment

To purchase a capacity commitment, set --capacity_commitment to true and use the following flags:

--location=LOCATION
Specifies the location of the commitment.
--plan=PLAN_TYPE

Specifies the commitment plan type. Must be one of the following values:

  • ANNUAL
  • THREE_YEAR

Customers using legacy flat-rate pricing may also use one of the following values:

  • FLEX
  • MONTHLY
  • ANNUAL
--renewal_plan=RENEWAL_TYPE

Specifies the renewal plan type. Required for ANNUAL or THREE_YEAR commitment plans. Must be one of the following:

  • ANNUAL
  • THREE_YEAR
  • NONE

Customers using legacy flat-rate pricing may also use one of the following values:

  • FLEX
  • MONTHLY
  • ANNUAL
--project_id=PROJECT_ID

Specifies the project that administers the slots.

--slots=NUMBER_OF_BASELINE_SLOTS

Specifies the number of baseline slots to purchase.

--edition=EDITION

The edition associated with the capacity commitment. Must be one of the following:

  • ENTERPRISE
  • ENTERPRISE_PLUS
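
For example, the following sketch (the project name is a placeholder) purchases an annual Enterprise commitment of 100 baseline slots in the us location:

bq mk --capacity_commitment=true --project_id=myProject --location=us --plan=ANNUAL --renewal_plan=NONE --slots=100 --edition=ENTERPRISE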

For more information, see Purchase commitments.

bq mk --connection

Creates a connection. The following flags are supported:

--connection_type=CONNECTION_TYPE
The type of the connection, for example CLOUD_SQL for Cloud SQL connections.
--properties=PROPERTIES

Connection-specific parameters in JSON format. The instanceId, database, and type properties must be specified.

If you create a Spanner connection and want to use Data Boost, include the "useParallelism":true and "useDataBoost":true pairs.

--connection_credential=CONNECTION_CREDENTIAL

The credentials of the connection in JSON format. The username and password properties must be specified.

--project_id=PROJECT_ID

Specifies the ID of the project that the connection belongs to.

--location=LOCATION

Specifies the location where the connection is stored.

--display_name=DISPLAY_NAME

Specifies an optional friendly name for the connection.

--description=DESCRIPTION

Specifies an optional description of the connection.

--iam_role_id=ROLE_ID

For BigQuery Omni on AWS, specifies an IAM role that allows access to the resource.

Use the following format: "arn:aws:iam::AWS_ACCOUNT_ID:role/POLICY_NAME", where:

  • AWS_ACCOUNT_ID is the ID number of the connection's AWS IAM user.
  • POLICY_NAME is the policy name.

Example:"arn:aws:iam::0123456789AB:policy/s3-read-role"

--tenant_id=TENANT_ID

For BigQuery Omni on Microsoft Azure, specifies the tenant ID of the Microsoft Azure directory that contains the Microsoft Azure Storage account.

CONNECTION_ID

Specifies an optional connection ID for the connection. If a connection ID is not provided, then a unique ID is automatically generated. The connection ID can contain letters, numbers, and underscores.
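
For example, the following sketch creates a Cloud SQL connection; all identifiers, property values, and credentials shown are placeholders:

bq mk --connection --connection_type=CLOUD_SQL --properties='{"instanceId":"myProject:us:myInstance","database":"mydb","type":"MYSQL"}' --connection_credential='{"username":"myUser","password":"myPassword"}' --project_id=myProject --location=us myConnection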

For more information, see Introduction to connections.

bq mk --dataset

Creates a dataset. The following flags are supported:

--add_tags=TAGS
Specifies the tags that you are attaching to the new dataset, separated by commas, for example 556741164180/env:prod,myProject/department:sales. Each tag must have the namespaced key name and value short name.
--default_kms_key=KEY
Specifies the default Cloud KMS key resource ID for encrypting the table data in a dataset if no explicit key is provided during table creation or query.
--default_partition_expiration=SECONDS
An integer that specifies the default expiration time, in seconds, for all partitions in newly created partitioned tables in the dataset. A partition's expiration time is set to the partition's UTC date plus the integer value. If this property is set, then its value overrides the dataset-level default table expiration if it exists. If you supply the --time_partitioning_expiration flag when you create or update a partitioned table, then the table-level partition expiration takes precedence over the dataset-level default partition expiration.
--default_table_expiration=SECONDS
An integer that specifies the default lifetime, in seconds, for newly created tables in a dataset. The expiration time is set to the current UTC time plus this value.
--description=DESCRIPTION
Specifies the description of the dataset.
--external_source=EXTERNAL_SOURCE
Specifies the external data source when you create a federated dataset.
--label=KEY:VALUE
Specifies a label for the dataset. Repeat this flag to specify multiple labels.
--location=LOCATION or --data_location=LOCATION
Specifies the location of the dataset. Prefer the --location flag; the --data_location flag is a legacy flag.
--max_time_travel_hours=HOURS
Specifies the duration in hours of the time travel window for the dataset. The --max_time_travel_hours value must be an integer expressed in multiples of 24 (48, 72, 96, 120, 144, 168) between 48 (2 days) and 168 (7 days). 168 hours is the default if this flag isn't specified.
--storage_billing_model=BILLING_MODEL

Specifies the storage billing model for a dataset. Set the --storage_billing_model value to PHYSICAL to use physical bytes when calculating storage charges, or to LOGICAL to use logical bytes. LOGICAL is the default.

When you change a dataset's billing model, it takes 24 hours for the change to take effect.

After you change a dataset's storage billing model, you must wait 14 days before you can change the storage billing model again.
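
For example, the following command (all names are placeholders) creates a dataset in the US multi-region with a description, a label, and a default table lifetime of 36 hours (129600 seconds):

bq mk --dataset --location=US --description="Marketing datasets" --label=department:marketing --default_table_expiration=129600 myProject:myDataset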

For more information, see Creating datasets.

bq mk --materialized_view

Creates a materialized view. The following flags are supported:

--enable_refresh={true|false}
To disable automatic refresh for a materialized view, set to false. The default when creating a materialized view is true.
--refresh_interval_ms=MILLISECONDS
Specifies the number of milliseconds for the refresh interval of a materialized view. If this flag is not specified, then the default refresh interval for a materialized view that has refresh enabled is 1,800,000 milliseconds, which is 30 minutes.
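
For example, the following sketch creates a materialized view that refreshes every 10 minutes; the view name and defining query are placeholders, and one common pattern is to pass them as the final arguments:

bq mk --materialized_view --enable_refresh=true --refresh_interval_ms=600000 myDataset.myMaterializedView 'SELECT product, SUM(sales) AS total_sales FROM myDataset.orders GROUP BY product'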

For more information, see Creating and using materialized views.

bq mk --reservation

Creates a reservation with dedicated slots. The following flags are supported:

--target_job_concurrency=CONCURRENCY
Specifies the target number of queries that execute simultaneously. The default value is 0, which means that the concurrency is automatically computed based on the reservation size. For more information, see Use query queues.
--ignore_idle_slots={true|false}
To restrict jobs running in this reservation to only use slots allocated to the reservation, set to true. The default value is false; jobs in this reservation can use idle slots from other reservations, or slots that are not allocated to any reservation. For more information, see Idle slots.
--location=LOCATION
Specifies the location of the reservation.
--project_id=PROJECT_ID
Specifies the project that owns the reservation.
--slots=NUMBER_OF_BASELINE_SLOTS
Specifies the number of baseline slots to allocate to this reservation.
--edition=EDITION
The edition associated with the reservation. Must be one of the following:
  • STANDARD
  • ENTERPRISE
  • ENTERPRISE_PLUS
--autoscale_max_slots=NUMBER_OF_AUTOSCALING_SLOTS
The number of autoscaling slots assigned to the reservation. This is equal to the value of the max reservation size minus the number of baseline slots.
--max_slots=MAXIMUM_NUMBER_OF_SLOTS
The maximum number of slots that the reservation consumes. Must be configured with the --scaling_mode flag (Preview).
--scaling_mode=SCALING_MODE

The scaling mode of the reservation. Must be one of the following:

  • IDLE_SLOTS_ONLY
  • ALL_SLOTS
  • AUTOSCALE_ONLY
  • SCALING_MODE_UNSPECIFIED

Must be configured with the --max_slots flag (Preview).
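
For example, the following command (placeholder names) creates an Enterprise reservation with 100 baseline slots and 200 autoscaling slots, for a maximum reservation size of 300 slots:

bq mk --reservation=true --project_id=myProject --location=us --slots=100 --autoscale_max_slots=200 --edition=ENTERPRISE myReservation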

For more information, see Create a reservation with dedicated slots.

bq mk --reservation_assignment

Assigns a project, folder, or organization to a reservation. The following flags are supported:

--assignee_id=ASSIGNEE_ID
Specifies the ID of the folder, organization, or project.
--assignee_type=ASSIGNEE_TYPE
Specifies the type of entity to assign to the reservation. One of the following:
  • FOLDER
  • ORGANIZATION
  • PROJECT
--job_type=JOB_TYPE
Specifies the type of job to assign to the reservation. One of the following:
  • QUERY
  • PIPELINE
  • ML_EXTERNAL
  • BACKGROUND
--location=LOCATION
Specifies the location of the reservation.
--project_id=PROJECT_ID
Specifies the project that owns the reservation.
--reservation_id=RESERVATION_ID
Specifies the ID of the reservation.

For more information, see Work with reservation assignments.

bq mk --alpha=reservation_groups --reservation-group

Creates a reservation group. The following flags are supported:

--location=LOCATION
Specifies the location of the reservation.
--project_id=PROJECT_ID
Specifies the project that owns the reservation.

Must be configured with the --alpha=reservation_groups flag (Preview).

For more information, see Work with reservation groups.

bq mk --table

Creates a table. The following flags are supported:

--add_tags=TAGS
Specifies the tags that you are attaching to the new table, separated by commas, for example 556741164180/env:prod,myProject/department:sales. Each tag must have the namespaced key name and value short name.
--clustering_fields=COLUMNS
A comma-separated list of up to four column names that specifies the fields to use for table clustering. If specified with partitioning, then the table is first partitioned, and then each partition is clustered using the supplied columns.
--description=DESCRIPTION
Specifies the description of the table.
--destination_kms_key=KEY
Specifies a Cloud KMSkey resource IDfor encrypting the destination table data.
--expiration=SECONDS
Specifies the lifetime for the table. If you don't specify the --expiration flag, then BigQuery creates the table with the dataset's default table lifetime, or the table doesn't expire.
--external_table_definition=STRING

Specifies a table definition for creating an external table.

For Cloud Storage and Google Drive external tables:

--external_table_definition={PATH_TO_FILE|DEFINITION}
The value can be either a path to a file containing a table definition (PATH_TO_FILE) or an inline table definition (DEFINITION).
  • The format for the DEFINITION field is SCHEMA@FORMAT=URI.
  • The format for the SCHEMA value is a comma-separated list of column definitions in the form FIELD:DATA_TYPE,FIELD:DATA_TYPE. You can omit the SCHEMA value if the data format is self-describing (such as Avro) or if you are using schema auto-detection.

  • The FORMAT value specifies the data format; one of the following:

    • AVRO
    • CSV
    • DATASTORE_BACKUP (use this value for Firestore)
    • ICEBERG
    • NEWLINE_DELIMITED_JSON
    • ORC
    • PARQUET

If you specify a table definition file, don't add an extension to the filename.

For example:

--external_table_definition=/tmp/tabledef
--external_table_definition=Region:STRING,Quarter:STRING,Total_sales:INTEGER@CSV=gs://mybucket/sales.csv

For Bigtable external tables, and BigLake tables based on AWS and Azure:

--external_table_definition=PATH_TO_FILE
The value must be a path to a file containing a table definition.

For BigLake tables based on Cloud Storage:

--external_table_definition=FORMAT=BUCKET_PATH@REGION.CONNECTION_NAME

For object tables:

--external_table_definition=BUCKET_PATH@REGION.CONNECTION_NAME

  • BUCKET_PATH is the path to the Cloud Storage bucket that contains the objects represented by the object table, in the format gs://bucket_name/[folder_name/]*. You can specify multiple buckets by providing multiple paths, for example gs://mybucket1/*,gs://mybucket2/folder5/*.

    You can use wildcards to limit the objects included in the object table. For example, if the bucket contains several types of unstructured data, you could create the object table over only PDF objects by specifying gs://bucket_name/*.pdf. For more information about using wildcards, see URI wildcards.

  • The REGION value specifies the region or multi-region that contains the connection.

  • The CONNECTION_NAME value specifies the name of the cloud resource connection to use with this external table. The connection determines which service account is used to read data from Cloud Storage.

--file_set_spec_type=FILE_SET_SPEC_TYPE

Specifies how to interpret source URIs.

  • FILE_SYSTEM_MATCH: Expands source URIs by listing files from the object store. This is the default behavior if FileSetSpecType is not set.
  • NEW_LINE_DELIMITED_MANIFEST: Indicates that the provided URIs are newline-delimited manifest files, with one URI per line. Wildcard URIs are not supported in the manifest files, and all referenced data files must be in the same bucket as the manifest.

For example, if you have a source URI of "gs://bucket/path/file" and the file_set_spec_type is FILE_SYSTEM_MATCH, then the file is used directly as a data file. If the file_set_spec_type is NEW_LINE_DELIMITED_MANIFEST, then each line in the file is interpreted as a URI that points to a data file.

--reference_file_schema_uri=URI

Specifies the path to a reference file with the expected table schema for creating external tables. Equivalent to ExternalDataConfiguration.referenceFileSchemaUri. This flag is enabled for the Avro, ORC, and PARQUET formats.

--label=KEY:VALUE

Specifies a label for the table. Repeat this flag to specify multiple labels.

--max_staleness=INTERVAL

Specifies whether cached metadata is used by operations against the table, and how fresh the cached metadata must be in order for the operation to use it.

Applicable for BigLake tables and object tables.

To disable metadata caching, specify 0. This is the default.

To enable metadata caching, specify an interval value between 30 minutes and 7 days, using the Y-M D H:M:S format described in the INTERVAL data type documentation. For example, specify 0-0 0 4:0:0 for a 4 hour staleness interval. With this value, operations against the table use cached metadata if it has been refreshed within the past 4 hours. If the cached metadata is older than that, the operation falls back to retrieving metadata from Cloud Storage instead.

--object_metadata=STRING

Set the value of this flag to SIMPLE when creating an object table.

Only required when creating an object table.

--range_partitioning=COLUMN_NAME,START,END,INTERVAL

Specifies options for an integer-range partition, as follows:

  • column_name is the column used to create the integer range partitions.
  • start is the start of range partitioning, inclusive.
  • end is the end of range partitioning, exclusive.
  • interval is the width of each range within the partition.

For example:

--range_partitioning=customer_id,0,10000,100

--require_partition_filter={true|false}

To require a partition filter for queries over the supplied table, set to true. This flag only applies to partitioned tables. The default value is false.

--schema={SCHEMA_FILE|SCHEMA}

Specifies either the path to a local JSON schema file or a comma-separated list of column definitions in the form FIELD:DATA_TYPE,FIELD:DATA_TYPE. If you use a schema file, don't add an extension to the filename.

Examples:

--schema=/tmp/tabledef
--schema=Region:STRING,Quarter:STRING,Total_sales:INTEGER

--time_partitioning_expiration=SECONDS

An integer that specifies (in seconds) when a time-based partition should be deleted. The expiration time evaluates to the partition's UTC date plus the integer value. A negative number indicates no expiration.

--time_partitioning_field=COLUMN_NAME

Specifies the field used to determine how to create a time-based partition. If time-based partitioning is enabled without this value, then the table is partitioned based on the load time.

--time_partitioning_type=INTERVAL

Enables time-based partitioning on a table and sets the partition type.Use one of the following values:

  • DAY
  • HOUR
  • MONTH
  • YEAR

--use_avro_logical_types={true|false}

If the FORMAT part of the --external_table_definition flag is set to AVRO, then this flag specifies whether to convert logical types into their corresponding types (such as TIMESTAMP) instead of only using their raw types (such as INTEGER).

--parquet_enable_list_inference={true|false}

If the FORMAT part of the --external_table_definition flag is set to PARQUET, then this flag specifies whether to use schema inference for Parquet LIST logical types.

--parquet_enum_as_string={true|false}

If the FORMAT part of the --external_table_definition flag is set to PARQUET, then this flag specifies whether to infer Parquet ENUM logical types as STRING values.
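
For example, the following command (all names are placeholders) creates a table with an inline schema, day-based time partitioning on the ts column, and clustering on customer_id:

bq mk --table --schema=customer_id:INTEGER,ts:TIMESTAMP,amount:NUMERIC --time_partitioning_type=DAY --time_partitioning_field=ts --clustering_fields=customer_id myDataset.mySalesTable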

For more information, see Creating and using tables.

bq mk --transfer_config

Creates a transfer configuration. The following flags are supported:

--data_source=DATA_SOURCE
Specifies the data source. Required when creating a transfer configuration. Use one of the following values:
Note: The redshift and on_premises values are for data migrations; before you use the bq mk --transfer_config command with these values, consult the linked documentation from the preceding list.
--display_name=DISPLAY_NAME
Specifies the display name for the transfer configuration.
--no_auto_scheduling={true|false}
Disables automatic scheduling of data transfer runs for this configuration. The default value is false.
--params={"PARAMETER":"VALUE"} or -p={"PARAMETER":"VALUE"}
Specifies the parameters for the transfer configuration in JSON format. The parameters vary depending on the data source.
--refresh_window_days=DAYS
An integer that specifies the refresh window for the transfer configuration in days. The default value is 0.
--service_account_name=SERVICE_ACCOUNT
Specifies a service account to be used as the credential for the transfer configuration.
--target_dataset=DATASET
Specifies the target dataset for the transfer configuration.
--table_filter=TABLES
Only used with the google_ads data source. The TABLES parameter is a comma-separated list of tables to include in the transfer. To exclude a table, prefix it with a hyphen (-). The default value includes all tables in the transfer.
--destination_kms_key=KEY
Specifies a Cloud KMS key resource ID for encrypting the destination table data.
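
For example, the following sketch creates a scheduled-query transfer configuration; the display name, dataset, and query parameter are placeholders:

bq mk --transfer_config --data_source=scheduled_query --display_name='Daily rollup' --target_dataset=myDataset --params='{"query":"SELECT CURRENT_DATE() AS run_date"}'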

For information about using the bq mk command with the BigQuery Data Transfer Service, see the following:

bq mk --transfer_run

Creates a data transfer run at the specified time or time range using the specified data transfer configuration.

Synopsis
bq mk --transfer_run [--run_time=RUN_TIME | --start_time=START_TIME --end_time=END_TIME] CONFIG

The following flags are supported:

--run_time=RUN_TIME
A timestamp that specifies the time to schedule the data transfer run.
--start_time=START_TIME
A timestamp that specifies the start time for a range of data transfer runs.
--end_time=END_TIME
A timestamp that specifies the end time for a range of data transfer runs.

The format for the timestamps is RFC 3339 UTC "Zulu".

The CONFIG argument specifies a pre-existing data transfer configuration.

Examples
bq mk --transfer_run --run_time=2021-01-20T17:00:00.00Z projects/p/locations/l/transferConfigs/c
bq mk --transfer_run --start_time=2020-12-19T16:39:57-08:00 --end_time=2020-12-19T20:39:57-08:00 projects/p/locations/l/transferConfigs/c

bq mk --view

Creates a view. The following flags are supported:

--add_tags=TAGS
Specifies the tags that you are attaching to the new view, separated by commas, for example 556741164180/env:prod,myProject/department:sales. Each tag must have the namespaced key name and value short name.
--description=DESCRIPTION
Specifies the description of the view.
--expiration=SECONDS
Specifies the lifetime for the view. If SECONDS is 0, then the view doesn't expire. If you don't specify the --expiration flag, then BigQuery creates the view with the dataset's default table lifetime.
--label=KEY:VALUE
Specifies a label for the view. Repeat this flag to specify multiple labels.
--use_legacy_sql={true|false}
Set to false to use a GoogleSQL query to create a view. The default value is determined by your configuration settings. If the configuration setting is not specified, then the default value is true, which uses legacy SQL.
--view_udf_resource=FILE
Specifies the Cloud Storage URI or the path to a local code file that is loaded and evaluated immediately as a user-defined function resource used by a view's SQL query. Repeat this flag to specify multiple files.
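
For example, the following sketch (the view name and query are placeholders) creates a GoogleSQL view; one common pattern supplies the defining query as the value of the --view flag:

bq mk --view='SELECT order_id, amount FROM myDataset.orders WHERE amount > 100' --use_legacy_sql=false --description='Large orders' myDataset.myLargeOrdersView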

For more information, see Creating views.

bq mkdef

Use the bq mkdef command to create a table definition in JSON format for data stored in Cloud Storage or Google Drive.

Synopsis

bq mkdef [FLAGS] URI [ > FILE ]

Flags and arguments

The bq mkdef command uses the following flags and arguments:

--autodetect={true|false}
Specifies whether to use schema auto-detection for CSV and JSON data. The default is false.
--connection_id=CONNECTION_ID
The ID of a connection resource to use for authentication.
--hive_partitioning_mode

Specifies how to determine the partitioning schema when BigQuery reads data. The following modes are supported:

  • AUTO: Automatically infer partition key names and types.
  • STRINGS: Automatically infer partition key names. All types are treated as strings.
  • CUSTOM: Specify the partitioning schema in the source URI prefix.

The default value is AUTO.

--hive_partitioning_source_uri_prefix

Specifies the common prefix for the source URIs. The common prefix value is the portion of the URI that immediately precedes the partition key encoding. If you specified CUSTOM for the mode, then you must also identify the partitioning schema.

For example, consider files with the following structure:

  • gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro
  • gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro

If you use the AUTO or STRINGS modes, then the following values are acceptable:

  • gs://bucket/path_to_table
  • gs://bucket/path_to_table/

If you use CUSTOM mode, then the following values are acceptable:

  • gs://bucket/path_to_table/{dt:DATE}/{country:STRING}/{id:INTEGER}
  • gs://bucket/path_to_table/{dt:STRING}/{country:STRING}/{id:INTEGER}
  • gs://bucket/path_to_table/{dt:DATE}/{country:STRING}/{id:STRING}

For more information about using the bq mkdef command, see Creating a table definition file for an external data source.

--ignore_unknown_values={true|false} or -i={true|false}
Specifies whether to ignore any values in a row that are not present in the schema. The default is false.
--metadata_cache_mode=STRING

Specifies whether the metadata cache for the table is refreshedautomatically or manually.

Set to AUTOMATIC for the metadata cache to be refreshed at a system-defined interval, usually somewhere between 30 and 60 minutes.

Set to MANUAL if you want to refresh the metadata cache on a schedule that you determine. In this case, you can call the BQ.REFRESH_EXTERNAL_METADATA_CACHE system procedure to refresh the cache.

You must set the --metadata_cache_mode flag if you set the --max_staleness flag with the bq mk command.

--parquet_enable_list_inference={true|false}

If source_format is set to PARQUET, then this flag specifies whether to use schema inference for Parquet LIST logical types. The default is false.

--parquet_enum_as_string={true|false}

If source_format is set to PARQUET, then this flag specifies whether to infer Parquet ENUM logical types as STRING values. The default is false.

--file_set_spec_type=FILE_SET_SPEC_TYPE

Specifies how to interpret source URIs.

  • FILE_SYSTEM_MATCH: Expands source URIs by listing files from the object store. This is the default behavior if FileSetSpecType is not set.
  • NEW_LINE_DELIMITED_MANIFEST: Indicates that the provided URIs are newline-delimited manifest files, with one URI per line. Wildcard URIs are not supported in the manifest files, and all referenced data files must be in the same bucket as the manifest.

For example, if you have a source URI of "gs://bucket/path/file" and the file_set_spec_type is FILE_SYSTEM_MATCH, then the file is used directly as a data file. If the file_set_spec_type is NEW_LINE_DELIMITED_MANIFEST, then each line in the file is interpreted as a URI that points to a data file.

--source_format=FORMAT

Specifies the format of the source data. Use one of the following values:

  • AVRO
  • CSV
  • DATASTORE_BACKUP (use this value for Firestore)
  • GOOGLE_SHEETS
  • NEWLINE_DELIMITED_JSON
  • ORC
  • PARQUET

The default value is CSV.

--use_avro_logical_types={true|false}

If the --source_format flag is set to AVRO, then this flag specifies whether to convert logical types into their corresponding types (such as TIMESTAMP) instead of only using their raw types (such as INTEGER). The default is false.

bq partition

Use the bq partition command to convert a group of tables with time-unit suffixes, such as tables ending in YYYYMMDD for date partitioning, into partitioned tables.

Synopsis

bq partition [FLAGS] SOURCE_TABLE_BASE_NAME PARTITION_TABLE

Flags and arguments

The bq partition command uses the following flags and arguments:

--no_clobber={true|false} or -n={true|false}
To disallow overwriting an existing partition, set to true. The default value is false; if the partition exists, then it is overwritten.
--time_partitioning_expiration=SECONDS
An integer that specifies (in seconds) when a time-based partition should be deleted. The expiration time evaluates to the partition's UTC date plus the integer value. A negative number indicates no expiration.
--time_partitioning_type=INTERVAL

Specifies the partition type. The following list shows the possible values for the INTERVAL flag and the expected time-unit-suffix format for each:

  • HOUR: YYYYMMDDHH
  • DAY: YYYYMMDD
  • MONTH: YYYYMM
  • YEAR: YYYY
SOURCE_TABLE_BASE_NAME

The base name of the group of tables with time-unit suffixes.

PARTITION_TABLE

The name of the destination partitioned table.
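For example, assuming hypothetical date-sharded tables named myDataset.sales_YYYYMMDD, converting them into a single day-partitioned table might look like the following sketch:

```shell
# Convert date-sharded tables (myDataset.sales_20240101, ...) into one
# ingestion-time partitioned table. All names here are hypothetical;
# 7776000 seconds is a 90-day partition expiration.
bq partition \
    --time_partitioning_type=DAY \
    --time_partitioning_expiration=7776000 \
    myDataset.sales_ \
    myDataset.sales_partitioned
```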

For more information about using the bq partition command, see Converting date-sharded tables into ingestion-time partitioned tables.

bq query

Use the bq query command to create a query job that runs the specified SQL query.

Synopsis

bq query [FLAGS] 'QUERY'

Flags and arguments

The bq query command uses the following flags and arguments:

--allow_large_results={true|false}
To enable large destination table sizes for legacy SQL queries, set to true. The default value is false.
--append_table={true|false}
To append data to a destination table, set to true. The default value is false.
--batch={true|false}
To run the query in batch mode, set to true. The default value is false.
--clustering_fields=COLUMNS
A comma-separated list of up to four column names that specifies fields to use to cluster the destination table in a query. If specified with partitioning, then the table is first partitioned, and then each partition is clustered using the supplied columns.
--connection_property=KEY=VALUE

A key-value pair that lets you specify connection-level properties to customize query behavior. Repeat this flag to specify additional properties.

Supported connection properties are as follows:

  • dataset_project_id: represents the default project for datasets that are used in the query, similar to the @@dataset_project_id system variable.
  • query_label: associates the query with a given job label. If set, all subsequent queries in a script or session have this label. For details on query label formatting requirements, see the labels field in the JobConfiguration resource.
  • service_account: specifies a service account to use to run the query. For example, --connection_property=service_account=myserviceaccount@project.iam.gserviceaccount.com.
  • session_id: associates the query with a given session.
  • time_zone: represents the default time zone to use to run the query.
--continuous={true|false}

To run a continuous query, set to true. The default value is false.

--destination_kms_key=KEY

Specifies a Cloud KMS key resource ID for encrypting the destination table data.

--destination_schema={PATH_TO_FILE|SCHEMA}

The path to a local JSON schema file or a comma-separated list of column definitions in the form FIELD:DATA_TYPE,FIELD:DATA_TYPE.

Schema changes happen in a separate operation from query execution. If you write query results to a table by specifying the --destination_table flag, and the query subsequently raises an exception, it is possible that any schema changes will be skipped. If this occurs, check the destination table schema and manually update it if necessary.

--destination_table=TABLE

When specified, the query results are saved to TABLE. Specify TABLE in the following format: PROJECT:DATASET.TABLE. If PROJECT is not specified, then the current project is assumed. If the --destination_table flag is not specified, then the query results are saved to a temporary table.

Examples:

--destination_table myProject:myDataset.myTable
--destination_table myDataset.myTable
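For instance, a sketch of a query job that appends its results to a destination table (dataset, table, and column names are hypothetical):

```shell
# Run a GoogleSQL query and append the results to myDataset.myResults.
# All names here are hypothetical.
bq query \
    --use_legacy_sql=false \
    --destination_table=myDataset.myResults \
    --append_table=true \
    'SELECT name, number FROM myDataset.myTable WHERE number > 100'
```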

--dry_run={true|false}

When specified, the query is validated but not run.

--external_table_definition={TABLE::PATH_TO_FILE|TABLE::DEFINITION}

Specifies the table name and table definition for an external table query. The table definition can be a path to a local JSON schema file or an inline table definition. The format for supplying the inline table definition is SCHEMA@SOURCE_FORMAT=CLOUD_STORAGE_URI. The format for the SCHEMA value is a comma-separated list of column definitions in the form FIELD:DATA_TYPE,FIELD:DATA_TYPE. If you use a table definition file, don't add an extension to the filename.

For example:

--external_table_definition=myTable::/tmp/tabledef
--external_table_definition=myTable::Region:STRING,Quarter:STRING,Total_sales:INTEGER@CSV=gs://mybucket/sales.csv

Repeat this flag to query multiple tables.

--flatten_results={true|false}

To disallow flattening nested and repeated fields in the results for legacy SQL queries, set to false. The default value is true.

--label=KEY:VALUE

Specifies a label for the query job. Repeat this flag to specify multiple labels.

--max_rows=MAX_ROWS or -n=MAX_ROWS

An integer specifying the number of rows to return in the query results. The default value is 100.

--maximum_bytes_billed=MAX_BYTES

An integer that limits the bytes billed for the query. If the query goes beyond the limit, then the query fails (without incurring a charge). If this flag is not specified, then the bytes billed is set to the project default.

--max_statement_results=VALUE

An integer specifying the maximum number of script statements displayed for the query results. The default value is 100.

--min_completion_ratio=RATIO

(Experimental) A number from 0 through 1.0 that specifies the minimum fraction of data that must be scanned before a query returns. If the flag is not specified, then the default server value 1.0 is used.

--parameter={PATH_TO_FILE|PARAMETER}

Either a JSON file containing a list of query parameters, or a query parameter in the form NAME:TYPE:VALUE. An empty name creates a positional parameter. If TYPE is omitted, then the STRING type is assumed. NULL specifies a null value. Repeat this flag to specify multiple parameters.

For example:

--parameter=/tmp/queryParams
--parameter=Name::Oscar
--parameter=Count:INTEGER:42
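As a fuller sketch, the following runs a parameterized query against the public Shakespeare sample table; the parameter names and thresholds are illustrative:

```shell
# @corpus and @min_count in the SQL are bound from the --parameter flags.
bq query \
    --use_legacy_sql=false \
    --parameter=corpus::romeoandjuliet \
    --parameter=min_count:INT64:250 \
    'SELECT word, word_count
     FROM `bigquery-public-data.samples.shakespeare`
     WHERE corpus = @corpus AND word_count >= @min_count'
```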

--range_partitioning=COLUMN_NAME,START,END,INTERVAL

Use with the --destination_table flag. Specifies options for integer-range partitioning in the destination table. The value is a comma-separated list of the form column_name,start,end,interval, where

  • column_name is the column used to create the integer range partitions.
  • start is the start of range partitioning, inclusive.
  • end is the end of range partitioning, exclusive.
  • interval is the width of each range within the partition.

For example:

--range_partitioning=customer_id,0,10000,100

--replace={true|false}

To overwrite the destination table with the query results, set to true. Any existing data and schema are erased. Any Cloud KMS key is also removed, unless you specify the --destination_kms_key flag. The default value is false.

Note: To remove all rows from a table without deleting the schema, use the TRUNCATE TABLE statement.
--require_cache={true|false}

If specified, then run the query only if results can be retrieved from the cache.

--require_partition_filter={true|false}

If specified, then a partition filter is required for queries over the supplied table. This flag can only be used with a partitioned table.

--reservation_id=RESERVATION

Preview. If specified, the reservation in which the query runs.

--rpc={true|false}

To use the RPC-style query API instead of the REST API jobs.insert method, set to true. The default value is false.

--schedule="SCHEDULE"

Makes a query a recurring scheduled query. A schedule for how often the query should run is required.

Examples:

--schedule="every 24 hours"
--schedule="every 3 hours"
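A sketch of creating a scheduled query that rewrites a destination table once a day (project, dataset, and table names are hypothetical):

```shell
# Schedule the query to run every 24 hours, replacing the destination
# table on each run. Names are hypothetical.
bq query \
    --use_legacy_sql=false \
    --schedule="every 24 hours" \
    --replace=true \
    --destination_table=myDataset.dailyRollup \
    'SELECT CURRENT_DATE() AS day, COUNT(*) AS n FROM myDataset.events'
```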

For a description of the schedule syntax, see Formatting the schedule.

--schema_update_option=OPTION

When appending data to a table in a load job or a query job, or when overwriting a table partition, specifies how to update the schema of the destination table. Use one of the following values:

  • ALLOW_FIELD_ADDITION: Allow new fields to be added.
  • ALLOW_FIELD_RELAXATION: Allow relaxing REQUIRED fields to NULLABLE.

Repeat this flag to specify multiple schema update options.

--start_row=ROW_NUMBER or -s=ROW_NUMBER

An integer that specifies the first row to return in the query result. The default value is 0.

--target_dataset=DATASET

When specified with --schedule, updates the target dataset for a scheduled query. The query must be DDL or DML.

--time_partitioning_expiration=SECONDS

Use with the --destination_table flag. An integer that specifies (in seconds) when a time-based partition should be deleted. The expiration time evaluates to the partition's UTC date plus the integer value. A negative number indicates no expiration.

--time_partitioning_field=COLUMN_NAME

Use with the --destination_table flag. Specifies the partitioning column for time-based partitioning. If time-based partitioning is enabled without this value, then the table is partitioned based on the ingestion time.

--time_partitioning_type=INTERVAL

Use with the --destination_table flag. Specifies the partition type for the destination table. Use one of the following values:

  • DAY
  • HOUR
  • MONTH
  • YEAR
--udf_resource=FILE

This flag applies only to legacy SQL queries. Specifies the Cloud Storage URI or the path to a local file containing a user-defined function resource to be used by a legacy SQL query. Repeat this flag to specify multiple files.

--use_cache={true|false}

To disallow caching query results, set to false. The default value is true.

--use_legacy_sql={true|false}

To run a GoogleSQL query, set to false. The default value is determined by your configuration settings. If the configuration setting is not specified, the default value is true; the command uses legacy SQL.

--job_timeout_ms={string (Int64Value)}

Specifies the maximum time to run a query in milliseconds. If this time limit is exceeded, BigQuery attempts to stop the job.

QUERY

The query that you want to run. You can specify the query by using one of the following methods:

  • Specify a string that contains the query.

    If you need to use additional string literals within the query, you must follow the quoting rules for the shell that you are using, such as Bash or PowerShell.

    The following example shows a typical approach in Bash, which is to use double quotes to denote the string literals in the query, and then enclose the query itself in single quotes:

    'SELECT * FROM mydataset.mytable WHERE column1 = "value";'

    If you are copying the query from another location, you must also remove any comments in the query.

  • Pass in a SQL script that contains the query. The following example shows how to pass in a SQL script in the Bash shell:

    bq query --use_legacy_sql=false < query.sql
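Combining the quoting approach above with the command itself, a complete Bash invocation might look like this (table and column names are hypothetical):

```shell
# The outer single quotes protect the query from Bash; the inner double
# quotes remain part of the SQL string literal.
bq query \
    --use_legacy_sql=false \
    'SELECT * FROM myDataset.myTable WHERE column1 = "value";'
```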

For more information about using the bq query command, see Run a query.

bq remove-iam-policy-binding

Use the bq remove-iam-policy-binding command to retrieve the IAM policy for a resource and remove a binding from the policy, in one step. The resource can be a table or a view.

This command is an alternative to the following three-step process:

  1. Using the bq get-iam-policy command to retrieve the policy file (in JSON format).
  2. Editing the policy file.
  3. Using the bq set-iam-policy command to update the policy without the binding.

Note: bq remove-iam-policy-binding does not support datasets. To remove access to a dataset, see Revoke access to a dataset.

Synopsis

bq remove-iam-policy-binding FLAGS --member=MEMBER_TYPE:MEMBER --role=ROLE RESOURCE

Flags and arguments

The bq remove-iam-policy-binding command uses the following flags and arguments:

--member=MEMBER_TYPE:MEMBER

Required. Use the --member flag to specify the member part of the IAM policy binding. The --member flag is required along with the --role flag. One combination of --member and --role flags equals one binding.

The MEMBER_TYPE value specifies the type of member in the IAM policy binding. Use one of the following values:

  • user
  • serviceAccount
  • group
  • domain

The MEMBER value specifies the email address or domain of the member in the IAM policy binding.

--role=ROLE

Required. Specifies the role part of the IAM policy binding. The --role flag is required along with the --member flag. One combination of --member and --role flags equals one binding.

--table={true|false} or -t={true|false}

Optional. To remove a binding from the IAM policy of a table or view, set to true. The default value is false.

RESOURCE is the table or view whose policy binding you want to remove.
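For example, a sketch of removing a single binding from a table's policy (the user and table names are hypothetical):

```shell
# Remove the BigQuery Data Viewer role for one user from a table's
# IAM policy. Member and resource names are hypothetical.
bq remove-iam-policy-binding \
    --member=user:alice@example.com \
    --role=roles/bigquery.dataViewer \
    --table=true \
    myDataset.myTable
```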

For more information, see the IAM policy reference.

bq rm

Use the bq rm command to delete a BigQuery resource.

Synopsis

bq rm [FLAGS] RESOURCE

Flags and arguments

The bq rm command uses the following flags and arguments:

--capacity_commitment={true|false}
To delete a capacity commitment, set to true, specify the location of the commitment you want to remove by using the --location flag, and replace RESOURCE with the ID of the commitment you want to remove.
--dataset={true|false} or -d={true|false}
To delete a dataset, set to true. The default value is false.
--force={true|false} or -f={true|false}
To delete a resource without prompting, set to true. The default value is false.
--job={true|false} or -j={true|false}
To delete a job, set to true. The default value is false.
--model={true|false} or -m={true|false}
To delete a BigQuery ML model, set to true. The default is false.
--recursive={true|false} or -r={true|false}
To delete a dataset and any tables, table data, or models in it, set to true. The default value is false.
--reservation={true|false}
To delete a reservation, set to true. The default value is false.
--reservation_assignment={true|false}
To delete a reservation assignment, set to true. The default value is false.
--routine={true|false}
To delete a routine, set to true. The default value is false. A routine can be a persistent user-defined function, a table function (Preview), or a stored procedure.
--table={true|false} or -t={true|false}
To delete a table or view, set to true. The default value is false.
--transfer_config={true|false}
To delete a transfer configuration, set to true. The default value is false.
RESOURCE
The resource that you want to remove.
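For example (resource names are hypothetical):

```shell
# Delete a table; --force skips the confirmation prompt.
bq rm --table=true --force=true myDataset.myTable

# Delete a dataset along with all tables and models it contains.
bq rm --dataset=true --recursive=true myDataset
```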

For more information about using the bq rm command, see the BigQuery documentation.

bq set-iam-policy

Use the bq set-iam-policy command to specify or update the IAM policy for a resource. The resource can be a table, a view, or a slot reservation. After setting the policy, the new policy is printed to stdout. The policy is in JSON format.

The etag field in the updated policy must match the etag value of the current policy; otherwise, the update fails. This feature prevents concurrent updates.

You can obtain the current policy and etag value for a resource by using the bq get-iam-policy command.

Note: bq set-iam-policy does not support datasets. To modify access to a dataset, see Grant access to a dataset or Revoke access to a dataset.

Synopsis

bq set-iam-policy [FLAGS] RESOURCE FILE_NAME

Examples

bq set-iam-policy myDataset.myTable policy.json
bq set-iam-policy --reservation myReservation policy.json

Flags and arguments

The bq set-iam-policy command uses the following flags and arguments:

--table={true|false} or -t={true|false}
Optional. To set the IAM policy of a table or view, set to true. The default value is false.
--reservation={true|false}
To set the IAM policy of a reservation, set to true (Preview). The default value is false. When this flag is used, RESOURCE is treated as a reservation identifier. The reservation can have optional project and location prefixes: myProject:myLocation.myReservation.

RESOURCE is the table or view whose policy you want to update.

FILE_NAME is the name of a file containing the policy in JSON format.

For more information about the bq set-iam-policy command, see Control access to resources with IAM.

bq show

Use the bq show command to display information about a resource.

Synopsis

bq show [FLAGS] [RESOURCE]

Flags and arguments

The bq show command uses the following flags and arguments:

--assignee_id=ASSIGNEE
When used with the --reservation_assignment flag, specifies the ID of a folder, organization, or project. Use the --assignee_type flag to specify which type of assignee to show.
--assignee_type=TYPE
When used with the --reservation_assignment flag, specifies the type of entity to show. Use one of the following values:
  • FOLDER
  • ORGANIZATION
  • PROJECT
--connection={true|false}
To show information about a connection, set to true. The default value is false. For more information, see Viewing a connection resource.
--dataset={true|false} or -d={true|false}
To show information about a dataset, set to true. The default value is false.
--encryption_service_account={true|false}
To show the encryption service account for a project, if it exists, or create one if it doesn't exist, set to true. The default value is false. Use with the --project_id flag.
--job={true|false} or -j={true|false}
To show information about a job, set to true. The default value is false.
--job_type=JOB_TYPE
When used with the --reservation_assignment flag, specifies the job type of the reservation assignments you want to show. Use one of the following values:
  • QUERY
  • PIPELINE
  • ML_EXTERNAL
--model={true|false} or -m={true|false}
To show information about a BigQuery ML model, set to true. The default value is false.
--reservation={true|false}
To show information about a reservation, set to true. The default value is false.
--reservation_assignment={true|false}
When set to true, the command displays reservation assignments for a specified folder, organization, or project. The command displays the target resource's explicit assignments, if any; otherwise, it displays assignments inherited from parent resources. For example, a project might inherit assignments from its parent folder. When using this flag, the --job_type, --assignee_type, and --assignee_id flags apply. The default value is false.
--routine={true|false}
To show information about a routine, set to true. The default value is false. A routine can be a persistent user-defined function, a table function (Preview), or a stored procedure.
--schema={true|false}
To display only the table's schema, set to true. The default value is false.
--transfer_config={true|false}
To display information about a transfer configuration, set to true. The default value is false.
--transfer_run={true|false}
To display information about a transfer run, set to true. The default value is false.
--view={true|false}
To display information about a view, set to true. The default value is false.
RESOURCE
The resource whose information you want to show.
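For example (the table name is hypothetical; --format is a global flag):

```shell
# Show full metadata for a table as formatted JSON.
bq show --format=prettyjson myDataset.myTable

# Show only the table's schema.
bq show --schema=true myDataset.myTable
```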

For more information about using the bq show command, see the BigQuery documentation.

bq update

Use the bq update command to change a resource.

Synopsis

bq update [FLAGS] [RESOURCE]

Flags and arguments

The bq update command uses the following flags and arguments:

--add_tags=TAGS
Only available on datasets and tables. Specifies the tags that you are attaching to the resource, separated by commas. For example, 556741164180/env:prod,myProject/department:sales. Each tag must have the namespaced key name and value short name.
--autoscale_max_slots=NUMBER_OF_AUTOSCALING_SLOTS
The number of autoscaling slots assigned to the reservation. This is equal to the value of the max reservation size minus the number of baseline slots. Only available with the --reservation flag and if the reservation was created with an edition.
--capacity_commitment={true|false}
To update a capacity commitment, set to true. Use this flag with the --merge, --plan, --renewal_plan, --split, and --slots flags.
--clear_all_tags={true|false}
Only available on datasets and tables. To clear all tags from a resource, set to true. The default value is false.
--clear_label=KEY:VALUE
Removes a label from the resource. Use the format KEY:VALUE to specify the label to remove. Repeat this flag to remove multiple labels.
--clustering_fields=COLUMNS
Updates a table's clustering specification. The COLUMNS value is a comma-separated list of column names to use for clustering. To remove the clustering, set COLUMNS to "" (the empty string). For more information, see Modify the clustering specification.
--target_job_concurrency=CONCURRENCY
When used with the --reservation flag, specifies the target number of queries that execute simultaneously. The default value is 0, which means that the concurrency is automatically set based on the reservation size. For more information, see Use query queues.
--dataset={true|false} or -d={true|false}
To update a dataset, set to true. The default value is false.
--default_kms_key=KEY
Specifies the default Cloud KMS key resource ID for encrypting table data in a dataset. The default key is used if no explicit key is provided for a table creation or a query.
--default_partition_expiration=SECONDS

An integer that specifies the default expiration time, in seconds, for all partitions in newly created partitioned tables in the dataset. This flag has no minimum value.

A partition's expiration time is set to the partition's UTC date plus the integer value. If this property is set, then it overrides the dataset-level default table expiration if it exists. If you supply the --time_partitioning_expiration flag when you create or update a partitioned table, then the table-level partition expiration takes precedence over the dataset-level default partition expiration. Specify 0 to remove an existing expiration.

--default_table_expiration=SECONDS

An integer that updates the default lifetime, in seconds, for newly created tables in a dataset. The expiration time is set to the current UTC time plus this value. Specify 0 to remove the existing expiration.

--description=DESCRIPTION

Updates the description of a dataset, table, table snapshot, model, or view.

--destination_reservation_id=RESERVATION_ID

When used with the --reservation_assignment flag, moves an existing reservation assignment to the specified reservation. The value is the ID of the destination reservation. For more information, see Move an assignment to a different reservation.

--display_name=DISPLAY_NAME

Updates the display name for a transfer configuration.

--etag=ETAG

Acts as a filter; updates the resource only if the resource has an ETag that matches the string specified in the ETAG argument.

--expiration=SECONDS

To update the expiration for the table, model, table snapshot, or view, include this flag. Replace SECONDS with the number of seconds from the update time to the expiration time. To remove the expiration for a table, model, table snapshot, or view, set the SECONDS argument to 0.

--external_table_definition={TABLE::PATH_TO_FILE|TABLE::DEFINITION}

Updates an external table with the specified table definition. The table definition can be a path to a local JSON table definition file or an inline table definition in the format SCHEMA@SOURCE_FORMAT=CLOUD_STORAGE_URI. The SCHEMA value is a comma-separated list of column definitions in the form FIELD:DATA_TYPE,FIELD:DATA_TYPE. If you use a table definition file, don't add an extension to the filename.

For example:

--external_table_definition=myTable::/tmp/tabledef
--external_table_definition=myTable::Region:STRING,Quarter:STRING,Total_sales:INTEGER@CSV=gs://mybucket/sales.csv

--ignore_idle_slots={true|false}

Use with the --reservation flag. To restrict jobs running in the specified reservation to only use slots allocated to that reservation, set to true. The default value is false; jobs in the specified reservation can use idle slots from other reservations, or slots that are not allocated to any reservation. For more information, see Idle slots.

--max_staleness=INTERVAL

Specifies an INTERVAL value that determines the maximum staleness allowed when querying a materialized view or an external table. The default value is 0-0 0 0:0:0.

For example:

  • 1 day: 0-0 1 0:0:0
  • 1 hour: 0-0 0 1:0:0

To use this flag, you must specify a table definition with the --external_table_definition flag.

--max_time_travel_hours=HOURS

Specifies the duration in hours of the time travel window for the dataset. The --max_time_travel_hours value must be an integer expressed in multiples of 24 (48, 72, 96, 120, 144, 168) between 48 (2 days) and 168 (7 days).

--merge={true|false}

To merge two capacity commitments, set --merge to true. Set the --capacity_commitment flag to true, specify the location of the commitments you want to merge by using the --location flag, and replace RESOURCE with the IDs of the two commitments you want to merge, separated by a comma. For more information, see Merge two commitments.

--metadata_cache_mode=METADATA_CACHE_MODE

Enables the metadata cache for an external table with a connection. Use one of the following values:

  • AUTOMATIC
  • MANUAL

Specify AUTOMATIC to automatically refresh the cached metadata. Specify MANUAL to stop the automatic refresh. To use this flag, you must specify a table definition with the --external_table_definition flag.

--model={true|false} or-m={true|false}

To update metadata for a BigQuery ML model, set to true. The default value is false.

--params={"PARAMETER":"VALUE"} or -p={"PARAMETER":"VALUE"}

Updates parameters for a transfer configuration. The parameters vary depending on the data source. For more information, see Introduction to BigQuery Data Transfer Service.

--plan=PLAN

When used with the --capacity_commitment flag, converts a capacity commitment to the specified longer-duration commitment plan. Replace PLAN with one of the following:

  • ANNUAL
  • THREE_YEAR
--refresh_window_days=DAYS

An integer that specifies an updated refresh window (in days) for a transfer configuration.

--remove_tags=TAG_KEYS

Only available on datasets and tables. Specifies the tags that you are removing from the resource, separated by commas, for example, 556741164180/env,myProject/department. Each tag key must have the namespaced key name.

--renewal_plan=PLAN

When used with the --capacity_commitment flag, updates the renewal plan for an annual capacity commitment. Replace PLAN with one of the following:

  • ANNUAL
  • THREE_YEAR
  • NONE

Customers using legacy flat-rate pricing may also use one of the following values:

  • FLEX
  • MONTHLY
  • ANNUAL
--reservation={true|false}

Specifies whether to update a reservation. The default value is false.

--reservation_assignment={true|false}

Specifies whether to update a reservation assignment. The default value is false.

--schema={SCHEMA_FILE|SCHEMA}

Specifies either the path to a local JSON schema file or a comma-separated list of column definitions in the form FIELD:DATA_TYPE,FIELD:DATA_TYPE. If you use a schema file, don't add an extension to the filename.

For example:

--schema=/tmp/tabledef
--schema=Region:STRING,Quarter:STRING,Total_sales:INTEGER

--service_account_name=SERVICE_ACCOUNT

Specifies a service account to be used as the credential for a transfer configuration.

--set_label=KEY:VALUE

Specifies a label to update. To update multiple labels, repeat this flag.

--slots=NUMBER_OF_BASELINE_SLOTS

When used with the --capacity_commitment and --split flags, specifies the number of baseline slots to split from an existing capacity commitment into a new commitment. Replace RESOURCE with the ID of the commitment you want to split from.

When used with the --reservation flag, updates the number of slots in a reservation.

--source=FILE

The path to a local JSON file containing a payload used to update a resource. For example, you can use this flag to specify a JSON file that contains a dataset resource with an updated access property. The file is used to overwrite the dataset's access controls. The JSON file must not include a byte order mark (BOM).

--split={true|false}

When set to true and used with the --capacity_commitment flag, specifies that you want to split an existing capacity commitment. Use the --location flag to specify the location of the commitment you want to split from, and use the --slots flag to specify the number of slots you want to split off. Replace RESOURCE with the ID of the commitment you want to split from. For more information, see Split a commitment.

--storage_billing_model=BILLING_MODEL

Specifies the storage billing model for a dataset. Set the --storage_billing_model value to PHYSICAL to use physical bytes when calculating storage charges, or to LOGICAL to use logical bytes.

When you change a dataset's billing model, it takes 24 hours for the change to take effect.

Once you change a dataset's storage billing model, you must wait 14 days before you can change the storage billing model again.

--table={true|false} or-t={true|false}

Specifies whether to update a table. The default value is false.

--target_dataset=DATASET

When specified, updates the target dataset for a transfer configuration.

--time_partitioning_expiration=SECONDS

An integer that updates (in seconds) when a time-based partition should be deleted. The expiration time evaluates to the partition's UTC date plus the integer value. A negative number indicates no expiration.

--time_partitioning_field=COLUMN_NAME

Updates the field used to determine how to create a time-based partition. If time-based partitioning is enabled without this value, then the table is partitioned based on the load time.

--time_partitioning_type=INTERVAL

Specifies the partitioning type. Use one of the following values:

  • DAY
  • HOUR
  • MONTH
  • YEAR

You can't change the partitioning type of an existing table.

--transfer_config={true|false}

Specifies whether to update a transfer configuration. The default value is false.

--update_credentials={true|false}

Specifies whether to update the transfer configuration credentials. The default value is false.

--use_legacy_sql={true|false}

Set to false to update the SQL query for a view from legacy SQL to GoogleSQL. The default value is determined by your configuration settings. If the configuration setting is not specified, the default value is true; the query uses legacy SQL.

--vertex_ai_model_id=VERTEX_AI_MODEL_ID

When specified, updates the model ID for a BigQuery ML model that is registered in the Vertex AI Model Registry.

--view=QUERY

When specified, updates the SQL query for a view.

--view_udf_resource=FILE

Updates the Cloud Storage URI or the path to a local code file that is loaded and evaluated immediately as a user-defined function resource in a view's SQL query. Repeat this flag to specify multiple files.

RESOURCE

The resource that you want to update.
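For example, a sketch of updating a table's description and expiration (the names are hypothetical; 86400 seconds is one day after the update):

```shell
# Update the description and make the table expire one day from now.
bq update \
    --description="Daily sales snapshot" \
    --expiration=86400 \
    myDataset.myTable
```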

For more information about using the bq update command, see the BigQuery documentation.

bq version

Use the bq version command to display the version number of your bq command-line tool.

Synopsis

bq version
Note: You can see the version number of all components in your Google Cloud CLI installation by using the gcloud version command.

bq wait

Use the bq wait command to wait a specified number of seconds for a job to finish. If a job isn't specified, then the command waits for the current job to finish.

Synopsis

bq wait [FLAGS] [JOB] [SECONDS]

Examples

bq wait
bq wait --wait_for_status=RUNNING 12345 100

Flags and arguments

The bq wait command uses the following flags and arguments:

--fail_on_error={true|false}
To return success if the job completed during the wait time, even if the job failed, set to false. The default value is true; after the wait time elapses, the command exits with an error if the job is still running, or if the job completed but failed.
--wait_for_status=STATUS

When specified, waits for a particular job status before exiting. Use one of the following values:

  • PENDING
  • RUNNING
  • DONE

The default value is DONE.

JOB

Specifies the job to wait for. You can use the bq ls --jobs myProject command to find a job identifier.

SECONDS

Specifies the maximum number of seconds to wait until the job is finished. If you enter 0, then the command polls for job completion and returns immediately. If you don't specify an integer value, then the command waits until the job is finished.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-15 UTC.