Teradata SQL translation guide

This document details the similarities and differences in SQL syntax between Teradata and BigQuery to help you plan your migration. Use batch SQL translation to migrate your SQL scripts in bulk, or interactive SQL translation to translate ad hoc queries.

Data types

This section shows equivalents between data types in Teradata and in BigQuery.

Note: Teradata supports DEFAULT and other constraints; these are not used in BigQuery.
| Teradata | BigQuery | Notes |
|---|---|---|
| INTEGER | INT64 | |
| SMALLINT | INT64 | |
| BYTEINT | INT64 | |
| BIGINT | INT64 | |
| DECIMAL, NUMERIC, NUMBER | NUMERIC, DECIMAL, BIGNUMERIC, BIGDECIMAL | Use BigQuery's NUMERIC (alias DECIMAL) when the scale (digits after the decimal point) is 9 or less, and BIGNUMERIC (alias BIGDECIMAL) when the scale is greater than 9. Use BigQuery's parameterized decimal data types if you need to enforce custom digit or scale bounds (constraints). Teradata allows you to insert values of higher precision by rounding the stored value; however, it keeps the high precision in calculations. This can lead to unexpected rounding behavior compared to the ANSI standard. |
| FLOAT | FLOAT64 | |
| REAL | FLOAT64 | |
| CHAR/CHARACTER | STRING | Use BigQuery's parameterized STRING data type if you need to enforce a maximum character length. |
| VARCHAR | STRING | Use BigQuery's parameterized STRING data type if you need to enforce a maximum character length. |
| CLOB | STRING | |
| JSON | JSON | |
| BLOB | BYTES | |
| BYTE | BYTES | |
| VARBYTE | BYTES | |
| DATE | DATE | BigQuery does not support custom formatting similar to what Teradata supports with DataForm in SDF. |
| TIME | TIME | |
| TIME WITH TIME ZONE | TIME | Teradata stores the TIME data type in UTC and allows you to pass an offset from UTC using the WITH TIME ZONE syntax. The TIME data type in BigQuery represents a time that's independent of any date or time zone. |
| TIMESTAMP | TIMESTAMP | Both the Teradata and BigQuery TIMESTAMP data types have microsecond precision (but Teradata supports leap seconds, while BigQuery does not). Both data types are usually associated with a UTC time zone (details). |
| TIMESTAMP WITH TIME ZONE | TIMESTAMP | The Teradata TIMESTAMP can be set to a different time zone system-wide, per user, or per column (using WITH TIME ZONE). The BigQuery TIMESTAMP type assumes UTC if you don't explicitly specify a time zone. Make sure you either export time zone information correctly (do not concatenate a DATE and a TIME value without time zone information) so that BigQuery can convert it on import, or convert time zone information to UTC before exporting. BigQuery has DATETIME for an abstraction between civil time, which does not show a time zone when it is output, and TIMESTAMP, which is a precise point in time that always shows the UTC time zone. |
| ARRAY | ARRAY | |
| MULTI-DIMENSIONAL ARRAY | ARRAY | In BigQuery, use an array of structs, with each struct containing a field of type ARRAY (for details, see the BigQuery documentation). |
| INTERVAL HOUR | INT64 | |
| INTERVAL MINUTE | INT64 | |
| INTERVAL SECOND | INT64 | |
| INTERVAL DAY | INT64 | |
| INTERVAL MONTH | INT64 | |
| INTERVAL YEAR | INT64 | |
| PERIOD(DATE) | DATE, DATE | PERIOD(DATE) should be converted to two DATE columns containing the start date and end date so that they can be used with window functions. |
| PERIOD(TIMESTAMP WITH TIME ZONE) | TIMESTAMP, TIMESTAMP | |
| PERIOD(TIMESTAMP) | TIMESTAMP, TIMESTAMP | |
| PERIOD(TIME) | TIME, TIME | |
| PERIOD(TIME WITH TIME ZONE) | TIME, TIME | |
| UDT | STRING | |
| XML | STRING | |
| TD_ANYTYPE | STRING | |
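As a quick illustration of the scale rule above, the following Python sketch picks a BigQuery decimal type for a literal based on its scale. The function name and the use of Python's decimal module are our illustration, not part of either SQL dialect:

```python
from decimal import Decimal

def bigquery_decimal_type(literal: str) -> str:
    """Pick a BigQuery decimal type for a Teradata DECIMAL/NUMBER literal,
    based on its scale (digits after the decimal point)."""
    exponent = Decimal(literal).as_tuple().exponent
    scale = -exponent if exponent < 0 else 0
    # NUMERIC (alias DECIMAL) covers a scale of up to 9 digits;
    # beyond that, fall back to BIGNUMERIC (alias BIGDECIMAL).
    return "NUMERIC" if scale <= 9 else "BIGNUMERIC"
```

For example, a column holding values like 12.3456789012 (scale 10) needs BIGNUMERIC, while 12.34 fits in NUMERIC.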

For more information on type casting, see the next section.

Teradata type formatting

Teradata SQL uses a set of default formats for displaying expressions and column data, and for conversions between data types. For example, a PERIOD(DATE) data type in INTEGERDATE mode is formatted as YY/MM/DD by default. We suggest that you use ANSIDATE mode whenever possible to ensure ANSI SQL compliance, and use this chance to clean up legacy formats.

Teradata allows automatic application of custom formats using the FORMAT clause, without changing the underlying storage, either as a data type attribute when you create a table using DDL, or in a derived expression. For example, a FORMAT specification of 9.99 rounds any FLOAT value to two digits. In BigQuery, this functionality has to be converted by using the ROUND() function.

This functionality requires handling of intricate edge cases. For instance, when the FORMAT clause is applied to a NUMERIC column, you must take into account special rounding and formatting rules. A FORMAT clause can be used to implicitly cast an INTEGER epoch value to a DATE format, and a FORMAT specification of X(6) on a VARCHAR column truncates the column value, which you therefore have to convert to a SUBSTR() function. This behavior is not ANSI SQL compliant. Therefore, we suggest not migrating column formats to BigQuery.

If column formats are absolutely required, use views or user-defined functions (UDFs).

For information about the default formats that Teradata SQL uses for each data type, see the Teradata default formatting documentation.

Timestamp and date type formatting

The following table summarizes the differences in timestamp and date formatting elements between Teradata SQL and GoogleSQL.

Note: There are no parentheses in the Teradata formats, because the formats (CURRENT_*) are keywords, not functions.
| Teradata format | Teradata description | BigQuery |
|---|---|---|
| CURRENT_TIMESTAMP, CURRENT_TIME | TIME and TIMESTAMP information in Teradata can have different time zone information, which is defined using WITH TIME ZONE. | If possible, use CURRENT_TIMESTAMP(), which is formatted in ISO format. However, the output format always shows the UTC time zone. (Internally, BigQuery does not have a time zone.) Note the following details on differences in the ISO format: DATETIME is formatted based on output channel conventions. In the BigQuery command-line tool and the BigQuery console, it's formatted using a T separator according to RFC 3339; however, in Python and Java JDBC, a space is used as a separator. If you want to use an explicit format, use FORMAT_DATETIME(), which makes an explicit cast to a string. For example, the expression CAST(CURRENT_DATETIME() AS STRING) always returns a space separator. Teradata supports a DEFAULT keyword in TIME columns to set the current time (timestamp); this is not used in BigQuery. |
| CURRENT_DATE | Dates are stored in Teradata as INT64 values using the following formula: (YEAR - 1900) * 10000 + (MONTH * 100) + DAY. Dates can be formatted as integers. | BigQuery has a separate DATE format that always returns a date in ISO 8601 format. DATE_FROM_UNIX_DATE can't be used, because it is 1970-based. Teradata supports a DEFAULT keyword in DATE columns to set the current date; this is not used in BigQuery. |
| CURRENT_DATE-3 | Date values are represented as integers. Teradata supports arithmetic operators for date types. | For date types, use DATE_ADD() or DATE_SUB(). BigQuery uses arithmetic operators for these data types: INT64, NUMERIC, and FLOAT64. |
| SYS_CALENDAR.CALENDAR | Teradata provides a view for calendar operations to go beyond integer operations. | Not used in BigQuery. |
| SET SESSION DATEFORM=ANSIDATE | Sets the session or system date format to ANSI (ISO 8601). | BigQuery always uses ISO 8601, so make sure you convert Teradata dates and times. |
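The integer date formula in the CURRENT_DATE row can be checked outside SQL. This Python sketch encodes and decodes Teradata's internal date integers; the helper names are ours, not part of either system:

```python
from datetime import date

def teradata_int_to_date(n: int) -> date:
    """Decode a Teradata internal date integer:
    (YEAR - 1900) * 10000 + (MONTH * 100) + DAY."""
    year = n // 10000 + 1900
    month = (n % 10000) // 100
    day = n % 100
    return date(year, month, day)

def date_to_teradata_int(d: date) -> int:
    """Encode a date the way Teradata stores it internally."""
    return (d.year - 1900) * 10000 + d.month * 100 + d.day
```

For example, 2023-01-01 is stored as (2023 - 1900) * 10000 + 100 + 1 = 1230101.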

Query syntax

This section addresses differences in query syntax between Teradata and BigQuery.

SELECT statement

Most Teradata SELECT statements are compatible with BigQuery. The following table contains a list of minor differences.

| Teradata | BigQuery |
|---|---|
| SEL | Convert to SELECT. BigQuery does not use the SEL abbreviation. |
| SELECT (subquery) AS flag, CASE WHEN flag = 1 THEN ... | In BigQuery, columns cannot reference the output of other columns defined within the same select list. Prefer moving the subquery into a WITH clause: WITH flags AS (subquery) SELECT CASE WHEN flags.flag = 1 THEN ... |
| SELECT * FROM table WHERE col LIKE ANY ('string1', 'string2') | BigQuery does not use the ANY logical predicate. The same functionality can be achieved using multiple OR operators: SELECT * FROM table WHERE col LIKE 'string1' OR col LIKE 'string2'. In this case, string comparison also differs; see Comparison operators. |
| SELECT TOP 10 * FROM table | BigQuery uses LIMIT at the end of a query instead of TOP n following the SELECT keyword. |

Comparison operators

The following table shows Teradata comparison operators that are specific to Teradata and must be converted to the ANSI SQL:2011-compliant operators used in BigQuery.

For information about operators in BigQuery, see the Operators section of the BigQuery documentation.

| Teradata | BigQuery | Notes |
|---|---|---|
| exp EQ exp2, exp IN (exp2, exp3) | exp = exp2, exp IN (exp2, exp3). To keep non-ANSI semantics for NOT CASESPECIFIC, you can use RTRIM(UPPER(exp)) = RTRIM(UPPER(exp2)) | When comparing strings for equality, Teradata might ignore trailing whitespace, while BigQuery considers it part of the string. For example, 'xyz' = 'xyz ' is TRUE in Teradata but FALSE in BigQuery. Teradata also provides a NOT CASESPECIFIC column attribute that instructs Teradata to ignore case when comparing two strings. BigQuery is always case specific when comparing strings. For example, 'xYz' = 'xyz' is TRUE in Teradata but FALSE in BigQuery. |
| exp LE exp2 | exp <= exp2 | |
| exp LT exp2 | exp < exp2 | |
| exp NE exp2 | exp <> exp2, exp != exp2 | |
| exp GE exp2 | exp >= exp2 | |
| exp GT exp2 | exp > exp2 | |
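To see why the RTRIM(UPPER(...)) rewrite preserves Teradata's equality semantics, here is a Python sketch of the comparison. The helper and its casespecific flag are our illustration, not an API of either system:

```python
def teradata_equal(a: str, b: str, casespecific: bool = False) -> bool:
    """Approximate Teradata string equality: trailing whitespace is
    ignored, and case is ignored unless the column is CASESPECIFIC --
    mirroring RTRIM(UPPER(exp)) = RTRIM(UPPER(exp2))."""
    if not casespecific:
        a, b = a.upper(), b.upper()
    # Teradata ignores trailing (but not leading) whitespace.
    return a.rstrip() == b.rstrip()
```

BigQuery's plain = operator behaves like Python's ==: 'xyz' and 'xyz ' compare unequal.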

JOIN conditions

BigQuery and Teradata support the same JOIN, ON, and USING conditions. The following table contains a list of minor differences.

| Teradata | BigQuery | Notes |
|---|---|---|
| FROM A JOIN B ON A.date > B.start_date AND A.date < B.end_date | FROM A LEFT OUTER JOIN B ON A.date > B.start_date AND A.date < B.end_date | BigQuery supports inequality JOIN clauses for all inner joins, and in an OUTER JOIN only if at least one equality condition (=) is also given, but not an OUTER JOIN with only inequality conditions. Such constructs are sometimes used to query date or integer ranges. BigQuery prevents users from inadvertently creating large cross joins. |
| FROM A, B ON A.id = B.id | FROM A JOIN B ON A.id = B.id | Using a comma between tables in Teradata is equal to an INNER JOIN, while in BigQuery it equals a CROSS JOIN (Cartesian product). Because the comma in BigQuery legacy SQL is treated as UNION, we recommend making the operation explicit to avoid confusion. |
| FROM A JOIN B ON (COALESCE(A.id, 0) = COALESCE(B.id, 0)) | FROM A JOIN B ON (COALESCE(A.id, 0) = COALESCE(B.id, 0)) | No difference for scalar (constant) functions. |
| FROM A JOIN B ON A.id = (SELECT MAX(B.id) FROM B) | FROM A JOIN (SELECT MAX(B.id) FROM B) B1 ON A.id = B1.id | BigQuery prevents users from using subqueries, correlated subqueries, or aggregations in join predicates. This lets BigQuery parallelize queries. |

Type conversion and casting

BigQuery has fewer but wider data types than Teradata, which requires BigQuery to be stricter in casting.

| Teradata | BigQuery | Notes |
|---|---|---|
| CAST(long_varchar_column AS CHAR(6)) | LPAD(long_varchar_column, 6) | Casting a character column in Teradata is sometimes used as a non-standard and non-optimal way to create a padded substring. |
| CAST(92617 AS TIME) and 92617 (FORMAT '99:99:99') | PARSE_TIME("%k%M%S", CAST(92617 AS STRING)) | Teradata performs many more implicit type conversions and roundings than BigQuery, which is generally stricter and enforces ANSI standards. (This example returns 09:26:17.) |
| CAST(48.5 (FORMAT 'zz') AS FLOAT) | CAST(SUBSTR(CAST(48.5 AS STRING), 0, 2) AS FLOAT64) | Floating point and numeric data types can require special rounding rules when applied with formats such as currencies. (This example returns 48.) |

Cast FLOAT/DECIMAL to INT

Where Teradata uses Gaussian (banker's) rounding for numerics, use the ROUND_HALF_EVEN rounding mode in BigQuery:

ROUND(CAST(2.5 AS NUMERIC), 0, 'ROUND_HALF_EVEN')

Cast STRING to NUMERIC or BIGNUMERIC

When converting from STRING to numeric values, use the correct data type, either NUMERIC or BIGNUMERIC, based on the number of decimal places in your STRING value.

For more information about the supported numeric precision and scale in BigQuery, see Decimal types.

See also Comparison operators and column formats. Both comparisons and column formatting can behave like type casts.

QUALIFY and ROWS clauses

The QUALIFY clause in Teradata allows you to filter results for window functions. Alternatively, a ROWS phrase can be used for the same task. These work similarly to a HAVING condition for a GROUP BY clause, limiting the output of what BigQuery calls window functions.

TeradataBigQuery
SELECTcol1,col2
FROMtable
QUALIFY ROW_NUMBER() OVER (PARTITION BYcol1 ORDER BYcol2) = 1;
The TeradataQUALIFY clause with a windowfunction likeROW_NUMBER(),SUM(),COUNT() and withOVER PARTITION BY is expressedin BigQuery as aWHERE clause on a subquerythat contains an analytics value.

UsingROW_NUMBER():

SELECTcol1,col2
FROM (
  SELECTcol1,col2,
  ROW_NUMBER() OVER (PARTITION BYcol1 ORDER BYcol2) RN
  FROMtable
)WHERE RN = 1;


UsingARRAY_AGG, which supports larger partitions:

SELECT
  result.*
FROM (
  SELECT
    ARRAY_AGG(table ORDER BYtable.col2
      DESC LIMIT 1)[OFFSET(0)]
  FROMtable
  GROUP BYcol1
) ASresult;
SELECTcol1,col2
FROMtable
AVG(col1) OVER (PARTITION BYcol1 ORDER BYcol2 ROWS BETWEEN 2 PRECEDING AND CURRENT ROW);
SELECTcol1,col2
FROMtable
AVG(col1) OVER (PARTITION BYcol1 ORDER BYcol2 RANGE BETWEEN 2 PRECEDING AND CURRENT ROW);


In BigQuery, bothRANGE andROWScan be used in the window frame clause. However, window clauses can only beused with window functions likeAVG(), not with numberingfunctions likeROW_NUMBER().
SELECTcol1,col2
FROMtable
QUALIFY ROW_NUMBER() OVER (PARTITION BYcol1 ORDER BYcol2) = 1;
SELECTcol1,
col2 FROM
Dataset-name.table
QUALIFY row_number() OVER (PARTITION BY upper(a.col1) ORDER BY upper(a.col2)) = 1
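The QUALIFY ROW_NUMBER() ... = 1 pattern simply keeps the first row per partition. The logic can be sketched in a few lines of Python; the function and parameter names are our illustration:

```python
from itertools import groupby

def qualify_row_number_1(rows, partition_key, order_key):
    """Keep the first row of each partition, ordered by order_key --
    the analogue of QUALIFY ROW_NUMBER() OVER
    (PARTITION BY ... ORDER BY ...) = 1."""
    # Sort by (partition, order) so each partition's rows are adjacent
    # and its first row is the one with the smallest order key.
    ordered = sorted(rows, key=lambda r: (partition_key(r), order_key(r)))
    return [next(group) for _, group in groupby(ordered, key=partition_key)]
```

For example, grouping (partition, value) tuples by their first element keeps the tuple with the smallest value per group.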

NORMALIZE keyword

Teradata provides the NORMALIZE keyword for SELECT clauses to coalesce overlapping periods or intervals into a single period or interval that encompasses all individual period values.

BigQuery does not support the PERIOD type, so any PERIOD type column in Teradata has to be inserted into BigQuery as two separate DATE or DATETIME fields that correspond to the start and end of the period.

Teradata:

SELECT NORMALIZE
    client_id,
    item_sid,
    BEGIN(period) AS min_date,
    END(period) AS max_date
  FROM
    table;
BigQuery:

SELECT
  t.client_id,
  t.item_sid,
  t.min_date,
  MAX(t.dwh_valid_to) AS max_date
FROM (
  SELECT
    d1.client_id,
    d1.item_sid,
    d1.dwh_valid_to AS dwh_valid_to,
    MIN(d2.dwh_valid_from) AS min_date
  FROM
    table d1
  LEFT JOIN
    table d2
  ON
    d1.client_id = d2.client_id
    AND d1.item_sid = d2.item_sid
    AND d1.dwh_valid_to >= d2.dwh_valid_from
    AND d1.dwh_valid_from <= d2.dwh_valid_to
  GROUP BY
    d1.client_id,
    d1.item_sid,
    d1.dwh_valid_to ) t
GROUP BY
  t.client_id,
  t.item_sid,
  t.min_date;
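The BigQuery rewrite above implements interval coalescing with a self-join. The underlying operation is easier to see in Python; this sketch (the helper name is ours) merges overlapping start/end pairs the way NORMALIZE merges PERIOD values:

```python
def normalize_periods(periods):
    """Coalesce overlapping (start, end) periods into the smallest
    covering set -- the effect of Teradata's NORMALIZE on PERIOD
    columns represented as start/end pairs."""
    merged = []
    for start, end in sorted(periods):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous period: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

For example, the periods (1, 5) and (4, 8) overlap and collapse into (1, 8), while (10, 12) stays separate.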

Functions

The following sections list mappings between Teradata functions and their BigQuery equivalents.

Aggregate functions

The following table maps common Teradata aggregate, statistical aggregate, and approximate aggregate functions to their BigQuery equivalents. BigQuery also offers additional aggregate functions that have no direct Teradata counterpart.

| Teradata | BigQuery |
|---|---|
| AVG | AVG. Note: BigQuery provides approximate results when calculating the average of INT values. |
| BITAND | BIT_AND |
| BITNOT | Bitwise not operator (~) |
| BITOR | BIT_OR |
| BITXOR | BIT_XOR |
| CORR | CORR |
| COUNT | COUNT |
| COVAR_POP | COVAR_POP |
| COVAR_SAMP | COVAR_SAMP |
| MAX | MAX |
| MIN | MIN |
| REGR_AVGX | AVG(IF(dep_var_expression IS NULL OR ind_var_expression IS NULL, NULL, ind_var_expression)) |
| REGR_AVGY | AVG(IF(dep_var_expression IS NULL OR ind_var_expression IS NULL, NULL, dep_var_expression)) |
| REGR_COUNT | SUM(IF(dep_var_expression IS NULL OR ind_var_expression IS NULL, NULL, 1)) |
| REGR_INTERCEPT | AVG(dep_var_expression) - AVG(ind_var_expression) * (COVAR_SAMP(ind_var_expression, dep_var_expression) / VARIANCE(ind_var_expression)) |
| REGR_R2 | (COUNT(dep_var_expression) * SUM(ind_var_expression * dep_var_expression) - SUM(dep_var_expression) * SUM(ind_var_expression)) / SQRT((COUNT(ind_var_expression) * SUM(POWER(ind_var_expression, 2)) - POWER(SUM(ind_var_expression), 2)) * (COUNT(dep_var_expression) * SUM(POWER(dep_var_expression, 2)) - POWER(SUM(dep_var_expression), 2))) |
| REGR_SLOPE | COVAR_SAMP(ind_var_expression, dep_var_expression) / VARIANCE(ind_var_expression) |
| REGR_SXX | SUM(POWER(ind_var_expression, 2)) - COUNT(ind_var_expression) * POWER(AVG(ind_var_expression), 2) |
| REGR_SXY | SUM(ind_var_expression * dep_var_expression) - COUNT(ind_var_expression) * AVG(ind_var_expression) * AVG(dep_var_expression) |
| REGR_SYY | SUM(POWER(dep_var_expression, 2)) - COUNT(dep_var_expression) * POWER(AVG(dep_var_expression), 2) |
| SKEW | Custom user-defined function. |
| STDDEV_POP | STDDEV_POP |
| STDDEV_SAMP | STDDEV_SAMP, STDDEV |
| SUM | SUM |
| VAR_POP | VAR_POP |
| VAR_SAMP | VAR_SAMP, VARIANCE |

Analytical functions and window functions

The following table maps common Teradata analytic and aggregate analytic functions to their BigQuery window function equivalents. BigQuery also offers additional window functions that have no direct Teradata counterpart.

| Teradata | BigQuery |
|---|---|
| ARRAY_AGG | ARRAY_AGG |
| ARRAY_CONCAT, (|| operator) | ARRAY_CONCAT_AGG, (|| operator) |
| BITAND | BIT_AND |
| BITNOT | Bitwise not operator (~) |
| BITOR | BIT_OR |
| BITXOR | BIT_XOR |
| CORR | CORR |
| COUNT | COUNT |
| COVAR_POP | COVAR_POP |
| COVAR_SAMP | COVAR_SAMP |
| CUME_DIST | CUME_DIST |
| DENSE_RANK (ANSI) | DENSE_RANK |
| FIRST_VALUE | FIRST_VALUE |
| LAST_VALUE | LAST_VALUE |
| MAX | MAX |
| MIN | MIN |
| PERCENT_RANK | PERCENT_RANK |
| PERCENTILE_CONT, PERCENTILE_DISC | PERCENTILE_CONT, PERCENTILE_DISC |
| QUANTILE | NTILE |
| RANK (ANSI) | RANK |
| ROW_NUMBER | ROW_NUMBER |
| STDDEV_POP | STDDEV_POP |
| STDDEV_SAMP | STDDEV_SAMP, STDDEV |
| SUM | SUM |
| VAR_POP | VAR_POP |
| VAR_SAMP | VAR_SAMP, VARIANCE |

Date/time functions

The following table maps common Teradata date/time functions to their BigQuery equivalents. BigQuery also offers additional date/time functions that have no direct Teradata counterpart.

| Teradata | BigQuery |
|---|---|
| ADD_MONTHS | DATE_ADD, TIMESTAMP_ADD |
| CURRENT_DATE | CURRENT_DATE |
| CURRENT_TIME | CURRENT_TIME |
| CURRENT_TIMESTAMP | CURRENT_TIMESTAMP |
| DATE + k | DATE_ADD(date_expression, INTERVAL k DAY) |
| DATE - k | DATE_SUB(date_expression, INTERVAL k DAY) |
| EXTRACT | EXTRACT (DATE), EXTRACT (TIMESTAMP) |
| FORMAT | FORMAT_DATE, FORMAT_DATETIME, FORMAT_TIME, FORMAT_TIMESTAMP |
| LAST_DAY | LAST_DAY. Note: This function supports both DATE and DATETIME input expressions. |
| MONTHS_BETWEEN | DATE_DIFF(date_expression, date_expression, MONTH) |
| NEXT_DAY | DATE_ADD(DATE_TRUNC(date_expression, WEEK(day_value)), INTERVAL 1 WEEK) |
| OADD_MONTHS | DATE_SUB(DATE_TRUNC(DATE_ADD(date_expression, INTERVAL num_months MONTH), MONTH), INTERVAL 1 DAY) |
| td_day_of_month | EXTRACT(DAY FROM date_expression), EXTRACT(DAY FROM timestamp_expression) |
| td_day_of_week | EXTRACT(DAYOFWEEK FROM date_expression), EXTRACT(DAYOFWEEK FROM timestamp_expression) |
| td_day_of_year | EXTRACT(DAYOFYEAR FROM date_expression), EXTRACT(DAYOFYEAR FROM timestamp_expression) |
| td_friday | DATE_TRUNC(date_expression, WEEK(FRIDAY)) |
| td_monday | DATE_TRUNC(date_expression, WEEK(MONDAY)) |
| td_month_begin | DATE_TRUNC(date_expression, MONTH) |
| td_month_end | DATE_SUB(DATE_TRUNC(DATE_ADD(date_expression, INTERVAL 1 MONTH), MONTH), INTERVAL 1 DAY) |
| td_month_of_calendar | (EXTRACT(YEAR FROM date_expression) - 1900) * 12 + EXTRACT(MONTH FROM date_expression) |
| td_month_of_quarter | EXTRACT(MONTH FROM date_expression) - ((EXTRACT(QUARTER FROM date_expression) - 1) * 3) |
| td_month_of_year | EXTRACT(MONTH FROM date_expression), EXTRACT(MONTH FROM timestamp_expression) |
| td_quarter_begin | DATE_TRUNC(date_expression, QUARTER) |
| td_quarter_end | DATE_SUB(DATE_TRUNC(DATE_ADD(date_expression, INTERVAL 1 QUARTER), QUARTER), INTERVAL 1 DAY) |
| td_quarter_of_calendar | (EXTRACT(YEAR FROM date_expression) - 1900) * 4 + EXTRACT(QUARTER FROM date_expression) |
| td_quarter_of_year | EXTRACT(QUARTER FROM date_expression), EXTRACT(QUARTER FROM timestamp_expression) |
| td_saturday | DATE_TRUNC(date_expression, WEEK(SATURDAY)) |
| td_sunday | DATE_TRUNC(date_expression, WEEK(SUNDAY)) |
| td_thursday | DATE_TRUNC(date_expression, WEEK(THURSDAY)) |
| td_tuesday | DATE_TRUNC(date_expression, WEEK(TUESDAY)) |
| td_wednesday | DATE_TRUNC(date_expression, WEEK(WEDNESDAY)) |
| td_week_begin | DATE_TRUNC(date_expression, WEEK) |
| td_week_end | DATE_SUB(DATE_TRUNC(DATE_ADD(date_expression, INTERVAL 1 WEEK), WEEK), INTERVAL 1 DAY) |
| td_week_of_calendar | (EXTRACT(YEAR FROM date_expression) - 1900) * 52 + EXTRACT(WEEK FROM date_expression) |
| td_week_of_month | EXTRACT(WEEK FROM date_expression) - EXTRACT(WEEK FROM DATE_TRUNC(date_expression, MONTH)) |
| td_week_of_year | EXTRACT(WEEK FROM date_expression), EXTRACT(WEEK FROM timestamp_expression) |
| td_weekday_of_month | CAST(CEIL(EXTRACT(DAY FROM date_expression) / 7) AS INT64) |
| td_year_begin | DATE_TRUNC(date_expression, YEAR) |
| td_year_end | DATE_SUB(DATE_TRUNC(DATE_ADD(date_expression, INTERVAL 1 YEAR), YEAR), INTERVAL 1 DAY) |
| td_year_of_calendar | EXTRACT(YEAR FROM date_expression) |
| TO_DATE | PARSE_DATE |
| TO_TIMESTAMP | PARSE_TIMESTAMP |
| TO_TIMESTAMP_TZ | PARSE_TIMESTAMP |
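Two of the td_* rewrites above are plain arithmetic and can be sanity-checked outside SQL. The Python helper names mirror the Teradata functions but are our illustration:

```python
from datetime import date
from math import ceil

def td_month_of_calendar(d: date) -> int:
    """Months since the Teradata calendar origin (1900):
    (YEAR - 1900) * 12 + MONTH."""
    return (d.year - 1900) * 12 + d.month

def td_weekday_of_month(d: date) -> int:
    """Which occurrence of this weekday within the month (1-5):
    CEIL(DAY / 7)."""
    return ceil(d.day / 7)
```

For example, January 15 is always the third occurrence of its weekday in the month, because ceil(15 / 7) = 3.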

String functions

The following table maps Teradata string functions to their BigQuery equivalents. BigQuery also offers additional string functions that have no direct Teradata counterpart.

| Teradata | BigQuery |
|---|---|
| ASCII | TO_CODE_POINTS(string_expression)[OFFSET(0)] |
| CHAR2HEXINT | TO_HEX |
| CHARACTER LENGTH | CHAR_LENGTH, CHARACTER_LENGTH |
| CHR | CODE_POINTS_TO_STRING([MOD(numeric_expression, 256)]) |
| CONCAT, (|| operator) | CONCAT, (|| operator) |
| CSV | Custom user-defined function. |
| CSVLD | Custom user-defined function. |
| FORMAT | FORMAT |
| INDEX | STRPOS(string, substring) |
| INITCAP | INITCAP |
| INSTR | Custom user-defined function. |
| LEFT | SUBSTR(source_string, 1, length) |
| LENGTH | LENGTH |
| LOWER | LOWER |
| LPAD | LPAD |
| LTRIM | LTRIM |
| NGRAM | Custom user-defined function. |
| NVP | Custom user-defined function. |
| OREPLACE | REPLACE |
| OTRANSLATE | Custom user-defined function. |
| POSITION | STRPOS(string, substring) |
| REGEXP_INSTR | STRPOS(source_string, REGEXP_EXTRACT(source_string, regexp_string)). Note: Returns the first occurrence. |
| REGEXP_REPLACE | REGEXP_REPLACE |
| REGEXP_SIMILAR | IF(REGEXP_CONTAINS, 1, 0) |
| REGEXP_SUBSTR | REGEXP_EXTRACT, REGEXP_EXTRACT_ALL |
| REGEXP_SPLIT_TO_TABLE | Custom user-defined function. |
| REVERSE | REVERSE |
| RIGHT | SUBSTR(source_string, -1, length) |
| RPAD | RPAD |
| RTRIM | RTRIM |
| STRTOK. Note: Each character in the delimiter string argument is considered a separate delimiter character; the default delimiter is a space character. | SPLIT(instring, delimiter)[ORDINAL(tokennum)]. Note: The entire delimiter string argument is used as a single delimiter; the default delimiter is a comma. |
| STRTOK_SPLIT_TO_TABLE | Custom user-defined function. |
| SUBSTRING, SUBSTR | SUBSTR |
| TRIM | TRIM |
| UPPER | UPPER |
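The STRTOK versus SPLIT note above is a common migration pitfall. This Python sketch contrasts the two delimiter semantics; the helper names are our illustration, not either system's API:

```python
import re

def teradata_strtok(s: str, delimiters: str, tokennum: int) -> str:
    """Teradata STRTOK: every character of `delimiters` is a separate
    delimiter, and empty tokens are skipped."""
    tokens = [t for t in re.split("[" + re.escape(delimiters) + "]", s) if t]
    return tokens[tokennum - 1]

def bigquery_split_ordinal(s: str, delimiter: str, tokennum: int) -> str:
    """BigQuery SPLIT(...)[ORDINAL(n)]: the whole delimiter string is
    one delimiter, and empty tokens are kept."""
    return s.split(delimiter)[tokennum - 1]
```

The difference shows up with multi-character delimiter arguments and with adjacent delimiters: STRTOK splits 'a,b;c' on both ',' and ';' and skips empty tokens, while SPLIT does neither.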

Math functions

The following table maps Teradata math functions to their BigQuery equivalents. BigQuery also offers additional math functions that have no direct Teradata counterpart.

| Teradata | BigQuery |
|---|---|
| ABS | ABS |
| ACOS | ACOS |
| ACOSH | ACOSH |
| ASIN | ASIN |
| ASINH | ASINH |
| ATAN | ATAN |
| ATAN2 | ATAN2 |
| ATANH | ATANH |
| CEILING | CEIL, CEILING |
| COS | COS |
| COSH | COSH |
| EXP | EXP |
| FLOOR | FLOOR |
| GREATEST | GREATEST |
| LEAST | LEAST |
| LN | LN |
| LOG | LOG |
| MOD (% operator) | MOD |
| NULLIFZERO | NULLIF(expression, 0) |
| POWER (** operator) | POWER, POW |
| RANDOM | RAND |
| ROUND | ROUND |
| SIGN | SIGN |
| SIN | SIN |
| SINH | SINH |
| SQRT | SQRT |
| TAN | TAN |
| TANH | TANH |
| TRUNC | TRUNC |
| ZEROIFNULL | IFNULL(expression, 0), COALESCE(expression, 0) |

Rounding functions

Where Teradata uses Gaussian (banker's) rounding for numerics, use the ROUND_HALF_EVEN rounding mode in BigQuery:

-- Teradata syntax
ROUND(3.45, 1)

-- BigQuery syntax
ROUND(CAST(3.45 AS NUMERIC), 1, 'ROUND_HALF_EVEN')
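The same banker's rounding is available in Python's decimal module, which makes the behavior easy to verify; the helper name is our illustration:

```python
from decimal import ROUND_HALF_EVEN, Decimal

def round_half_even(value: str, digits: int) -> Decimal:
    """Teradata-style rounding: ties go to the nearest even digit,
    matching BigQuery's ROUND(x, n, 'ROUND_HALF_EVEN')."""
    # Build the target quantum, e.g. digits=1 -> Decimal('0.1').
    quantum = Decimal(1).scaleb(-digits)
    return Decimal(value).quantize(quantum, rounding=ROUND_HALF_EVEN)
```

Note the tie cases: 3.45 rounds down to 3.4 (4 is even), while 3.55 rounds up to 3.6 (6 is even).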

DML syntax

This section addresses differences in data manipulation language (DML) syntax between Teradata and BigQuery.

INSERT statement

Most Teradata INSERT statements are compatible with BigQuery. The following table shows exceptions.

DML scripts in BigQuery have slightly different consistency semantics than the equivalent statements in Teradata. For an overview of snapshot isolation and session and transaction handling, see the CREATE INDEX section elsewhere in this document.

Teradata:

INSERT INTO table VALUES (...);

BigQuery:

INSERT INTO table (...) VALUES (...);

Teradata offers a DEFAULT keyword for non-nullable columns.

Note: In BigQuery, omitting column names in the INSERT statement only works if values for all columns in the target table are included in ascending order based on their ordinal positions.

Teradata:

INSERT INTO table VALUES (1,2,3);
INSERT INTO table VALUES (4,5,6);
INSERT INTO table VALUES (7,8,9);

BigQuery:

INSERT INTO table VALUES (1,2,3),
                         (4,5,6),
                         (7,8,9);

Teradata has a concept of a multi-statement request (MSR), which sends multiple INSERT statements at a time. In BigQuery, this is not recommended because of the implicit transaction boundary between statements. Use multi-value INSERT instead.

BigQuery allows concurrent INSERT statements but might queue UPDATE. To improve performance, consider the following approaches:
  • Combine multiple rows in a single INSERT statement, instead of one row per INSERT operation.
  • Combine multiple DML statements (including INSERT) using a MERGE statement.
  • Use CREATE TABLE ... AS SELECT to create and populate new tables instead of UPDATE or DELETE, in particular when querying partitioned fields, or for rollback or restore.
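The first recommendation, collapsing per-row INSERT statements into one multi-value statement, can be sketched as a small SQL-building helper. The function is our illustration; it assumes numeric literal values, and real code should use query parameters rather than string building:

```python
def multivalue_insert(table: str, rows: list) -> str:
    """Collapse one-row-per-INSERT loads into a single multi-value
    INSERT statement, as recommended for BigQuery."""
    # Render each row as a parenthesized tuple of numeric literals.
    values = ", ".join(
        "(" + ", ".join(str(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} VALUES {values};"
```

For example, three single-row statements for (1,2,3), (4,5,6), and (7,8,9) become one statement with three value tuples.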

UPDATE statement

Most Teradata UPDATE statements are compatible with BigQuery, except for the following items:

  • When you use a FROM clause, the ordering of the FROM and SET clauses is reversed between Teradata and BigQuery.
  • In GoogleSQL, each UPDATE statement must include the WHERE keyword, followed by a condition. To update all rows in the table, use WHERE true.

As a best practice, you should group multiple DML mutations instead of issuing single UPDATE and INSERT statements. DML scripts in BigQuery have slightly different consistency semantics than equivalent statements in Teradata. For an overview of snapshot isolation and session and transaction handling, see the CREATE INDEX section elsewhere in this document.

The following table shows Teradata UPDATE statements and BigQuery statements that accomplish the same tasks.

For more information about UPDATE in BigQuery, see the BigQuery UPDATE examples in the DML documentation.

Teradata:

UPDATE table_A
FROM table_A, table_B
SET
  y = table_B.y,
  z = table_B.z + 1
WHERE table_A.x = table_B.x
  AND table_A.y IS NULL;

BigQuery:

UPDATE table_A
SET
  y = table_B.y,
  z = table_B.z + 1
FROM table_B
WHERE table_A.x = table_B.x
  AND table_A.y IS NULL;

Teradata:

UPDATE table alias
SET x = x + 1
WHERE f(x) IN (0, 1);

BigQuery:

UPDATE table
SET x = x + 1
WHERE f(x) IN (0, 1);

Teradata:

UPDATE table_A
FROM table_A, table_B
SET z = table_B.z
WHERE table_A.x = table_B.x
  AND table_A.y = table_B.y;

BigQuery:

UPDATE table_A
SET z = table_B.z
FROM table_B
WHERE table_A.x = table_B.x
  AND table_A.y = table_B.y;

DELETE and TRUNCATE statements

Both the DELETE and TRUNCATE statements are supported ways to remove rows from a table without affecting the table schema or indexes. TRUNCATE deletes all the data, while DELETE removes only the selected rows from the table.

In BigQuery, the DELETE statement must have a WHERE clause. To delete all rows in the table (truncate), use WHERE true. To speed up truncate operations for very large tables, we recommend using the CREATE OR REPLACE TABLE ... AS SELECT statement with a LIMIT 0 on the same table to replace itself. However, make sure to manually add partitioning and clustering information when using it.

Teradata vacuums deleted rows later. This means that DELETE operations are initially faster than in BigQuery, but they require resources later, especially for large-scale DELETE operations that impact the majority of a table. To use a similar approach in BigQuery, we suggest reducing the number of DELETE operations, such as by copying the rows not to be deleted into a new table. Alternatively, you can remove entire partitions. Both of these options are designed to be faster operations than atomic DML mutations.

For more information about DELETE in BigQuery, see the DELETE examples in the DML documentation.

Teradata:

BEGIN TRANSACTION;
LOCKING TABLE table_A FOR EXCLUSIVE;
DELETE FROM table_A;
INSERT INTO table_A SELECT * FROM table_B;
END TRANSACTION;

BigQuery: Replacing the contents of a table with query output is the equivalent of a transaction. You can do this with either a query operation or a copy operation.

Using a query operation:

bq query --replace --destination_table table_A 'SELECT * FROM table_B';

Using a copy operation:

bq cp -f table_B table_A

Teradata:

DELETE database.table ALL;

BigQuery:

DELETE FROM table WHERE TRUE;

Or, for very large tables, a faster way:

CREATE OR REPLACE TABLE table AS SELECT * FROM table LIMIT 0;

MERGE statement

The MERGE statement can combine INSERT, UPDATE, and DELETE operations into a single "upsert" statement and perform the operations atomically. The MERGE operation must match at most one source row for each target row. BigQuery and Teradata both follow ANSI syntax.

Teradata's MERGE operation is limited to matching primary keys within one access module processor (AMP). In contrast, BigQuery has no size or column limitation for MERGE operations, so using MERGE is a useful optimization. However, if the MERGE is primarily a large delete, see the optimizations for DELETE elsewhere in this document.

DML scripts in BigQuery have slightly different consistency semantics than equivalent statements in Teradata. For example, Teradata's SET tables in session mode might ignore duplicates during a MERGE operation. For an overview of handling MULTISET and SET tables, snapshot isolation, and session and transaction handling, see the CREATE INDEX section elsewhere in this document.

Rows-affected variables

In Teradata, the ACTIVITY_COUNT variable is a Teradata ANSI SQL extension populated with the number of rows affected by a DML statement.

The @@row_count system variable in the Scripting feature has similar functionality. In BigQuery, it is more common to check the numDmlAffectedRows return value in the audit logs or the INFORMATION_SCHEMA views.

DDL syntax

This section addresses differences in data definition language (DDL) syntax between Teradata and BigQuery.

CREATE TABLE statement

Most Teradata CREATE TABLE statements are compatible with BigQuery, except for the following syntax elements, which are not used in BigQuery:

For more information about CREATE TABLE in BigQuery, see the BigQuery CREATE examples in the DDL documentation.

Column options and attributes

The following column specifications for the CREATE TABLE statement are not used in BigQuery:

Teradata extends the ANSI standard with a column TITLE option. This feature can be similarly implemented in BigQuery using the column description, as shown in the following table. Note that this option is not available for views.

Teradata:

CREATE TABLE table (
  col1 VARCHAR(30) TITLE 'column desc'
);

BigQuery:

CREATE TABLE dataset.table (
  col1 STRING OPTIONS(description="column desc")
);

Temporary tables

Teradata supportsvolatile tables, which are often used to store intermediate results in scripts. There areseveral ways to achieve something similar to volatile tables in BigQuery:

  • CREATE TEMPORARY TABLE can be used in Scripting, and is valid during the lifetime of the script. If the table has to exist beyond a script, you can use the other options in this list.

  • Dataset TTL: Create a dataset that has a short time to live (for example, 1 hour) so that any tables created in the dataset are effectively temporary, since they won't persist longer than the dataset's time to live. You can prefix all the table names in this dataset with temp to clearly denote that the tables are temporary.

  • Table TTL: Create a table that has a table-specific short time to live using DDL statements similar to the following:

    CREATE TABLE temp.name (col1, col2, ...)
    OPTIONS(expiration_timestamp=TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR));
  • WITH clause: If a temporary table is needed only within the same block, use a temporary result via a WITH statement or subquery. This is the most efficient option.

An often-used pattern in Teradata scripts (BTEQ) is to create a permanent table, insert a value into it, use it like a temporary table in subsequent statements, and then delete or truncate the table afterwards. In effect, this uses the table as a constant variable (a semaphore). This approach is not efficient in BigQuery, and we recommend using real variables in Scripting instead, or using CREATE OR REPLACE TABLE with AS SELECT query syntax to create a table that already has values in it.
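The two recommended alternatives can be sketched as follows (table and column names are hypothetical):

```sql
-- Alternative 1: a script variable replaces the Teradata semaphore table.
DECLARE max_order_date DATE;
SET max_order_date = (SELECT MAX(order_date) FROM mydataset.orders);

-- Alternative 2: create a table that already has values in one statement.
CREATE OR REPLACE TABLE mydataset.recent_orders AS
SELECT *
FROM mydataset.orders
WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY);
```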

CREATE VIEW statement

The following table shows equivalents between Teradata and BigQuery for the CREATE VIEW statement. The clauses for table locking such as LOCKING ROW FOR ACCESS are not needed in BigQuery.

Note: Teradata does not directly support materialized views like BigQuery; it only offers join indexes.
Teradata | BigQuery | Notes
CREATE VIEW view_name AS SELECT ... | CREATE VIEW view_name AS SELECT ... |
REPLACE VIEW view_name AS SELECT ... | CREATE OR REPLACE VIEW view_name AS SELECT ... |
Not supported | CREATE VIEW IF NOT EXISTS OPTIONS(view_option_list) AS SELECT ... | Creates a new view only if the view does not currently exist in the specified dataset.

CREATE [UNIQUE] INDEX statement

Teradata requires indices for all tables and requires special workarounds like MULTISET tables and NoPI tables to work with non-unique or non-indexed data.

BigQuery does not require indices. This section describes approaches in BigQuery for creating functionality similar to how indexes are used in Teradata where there is an actual business-logic need.

Indexing for performance

Because it's a column-oriented database with query and storage optimization, BigQuery doesn't need explicit indexes. BigQuery provides functionality such as partitioning and clustering as well as nested fields, which can increase query efficiency and performance by optimizing how data is stored.

Teradata does not support materialized views. However, it offers join indexes using the CREATE JOIN INDEX statement, which essentially materializes data that's needed for a join. BigQuery does not need materialized indexes to speed up performance, just as it doesn't need dedicated spool space for joins.

For other optimization cases, materialized views can be used.

Indexing for consistency (UNIQUE, PRIMARY INDEX)

In Teradata, a unique index can be used to prevent rows with non-unique keys in a table. If a process tries to insert or update data with a value that's already in the index, the operation either fails with an index violation (MULTISET tables) or is silently ignored (SET tables).

Because BigQuery doesn't provide explicit indexes, a MERGE statement can be used instead to insert only unique records into a target table from a staging table while discarding duplicate records. However, there is no way to prevent a user with edit permissions from inserting a duplicate record, because BigQuery never locks during INSERT operations. To generate an error for duplicate records in BigQuery, you can use a MERGE statement from a staging table, as shown in the following example.

Teradata:

  CREATE [UNIQUE] INDEX name;

BigQuery:

  MERGE `prototype.FIN_MERGE` t
  USING `prototype.FIN_TEMP_IMPORT` m
  ON t.col1 = m.col1
    AND t.col2 = m.col2
  WHEN MATCHED THEN
    UPDATE SET t.col1 = ERROR(CONCAT('Encountered error for ', m.col1, ' ', m.col2))
  WHEN NOT MATCHED THEN
    INSERT (col1, col2, col3, col4, col5, col6, col7, col8)
    VALUES (col1, col2, col3, col4, col5, col6, CURRENT_TIMESTAMP(), CURRENT_TIMESTAMP());

More often, users prefer to remove duplicates independently in order to find errors in downstream systems.
BigQuery does not support DEFAULT and IDENTITY (sequences) columns.
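One common way to remove duplicates independently, sketched here with hypothetical table and key columns, is to keep only the newest row per key using a window function:

```sql
-- Hypothetical table mydataset.orders with key columns col1, col2
-- and a timestamp column col7 used to pick the newest row.
CREATE OR REPLACE TABLE mydataset.orders_dedup AS
SELECT * EXCEPT(rn)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (PARTITION BY col1, col2 ORDER BY col7 DESC) AS rn
  FROM mydataset.orders
)
WHERE rn = 1;  -- Keep exactly one row per (col1, col2) key.
```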

Indexing to achieve locking

Teradata provides resources in the access module processor (AMP); queries can consume all-AMP, single-AMP, or group-AMP resources. DDL statements are all-AMP and therefore similar to a global DDL lock. BigQuery doesn't have a lock mechanism like this and can run concurrent queries and INSERT statements up to your quota; only concurrent UPDATE DML statements have certain concurrency implications: UPDATE operations against the same partition are queued to ensure snapshot isolation, so you don't have to lock to prevent phantom reads or lost updates.

Because of these differences, the following Teradata elements are not used inBigQuery:

  • ON COMMIT DELETE ROWS;
  • ON COMMIT PRESERVE ROWS;

Procedural SQL statements

This section describes how to convert procedural SQL statements that are used in stored procedures, functions, and triggers from Teradata to BigQuery Scripting, procedures, or user-defined functions (UDFs). All of these are available for system administrators to check using the INFORMATION_SCHEMA views.

CREATE PROCEDURE statement

Stored procedures are supported as part of BigQuery Scripting.

In BigQuery, Scripting refers to any use of control statements, whereas procedures are named scripts (with arguments if needed) that can be called from other scripts and stored permanently, if needed. A user-defined function (UDF) can also be written in JavaScript.

Teradata | BigQuery
CREATE PROCEDURE | CREATE PROCEDURE if a name is required; otherwise, use inline with BEGIN or in a single line with CREATE TEMP FUNCTION.
REPLACE PROCEDURE | CREATE OR REPLACE PROCEDURE
CALL | CALL
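As a minimal sketch, a named BigQuery procedure and its invocation could look like the following (dataset, procedure, and parameter names are hypothetical):

```sql
-- Hypothetical procedure that archives old rows from mydataset.orders.
CREATE OR REPLACE PROCEDURE mydataset.archive_orders(cutoff DATE)
BEGIN
  DELETE FROM mydataset.orders
  WHERE order_date < cutoff;
END;

-- Invoke it, as with Teradata's CALL.
CALL mydataset.archive_orders(DATE '2024-01-01');
```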

The sections that follow describe ways to convert existing Teradata proceduralstatements to BigQuery Scripting statements that have similarfunctionality.

Variable declaration and assignment

BigQuery variables are valid during the lifetime of the script.

Teradata | BigQuery
DECLARE | DECLARE
SET | SET
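A minimal sketch of declaring and assigning a script variable in BigQuery (the variable and table names are hypothetical):

```sql
-- Variables are valid for the lifetime of the script.
DECLARE row_total INT64 DEFAULT 0;
SET row_total = (SELECT COUNT(*) FROM mydataset.orders);
```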

Error condition handlers

Teradata uses handlers on status codes in procedures for error control. In BigQuery, error handling is a core feature of the main control flow, similar to what other languages provide with TRY ... CATCH blocks.

Teradata | BigQuery
DECLARE EXIT HANDLER FOR SQLEXCEPTION | BEGIN ... EXCEPTION WHEN ERROR THEN
SIGNAL sqlstate | RAISE message
DECLARE CONTINUE HANDLER FOR SQLSTATE VALUE 23505; | Exception handlers that trigger for certain error conditions are not used by BigQuery.

We recommend using ASSERT statements where exit conditions are used for pre-checks or debugging, because this is ANSI SQL:2011 compliant.

The SQLSTATE variable in Teradata is similar to the @@error system variable in BigQuery. In BigQuery, it is more common to investigate errors using audit logging or the INFORMATION_SCHEMA views.
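A minimal sketch of the TRY/CATCH-style pattern in a BigQuery script:

```sql
BEGIN
  -- This statement raises a division-by-zero error.
  SELECT 1 / 0;
EXCEPTION WHEN ERROR THEN
  -- @@error is populated inside the exception handler.
  SELECT @@error.message AS error_message;
END;
```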

Cursor declarations and operations

Because BigQuery doesn't support cursors or sessions, the following statements aren't used in BigQuery:

Dynamic SQL statements

The Scripting feature in BigQuery supports dynamic SQL statements like those shown in the following table.

Teradata | BigQuery
EXECUTE IMMEDIATE sql_str; | EXECUTE IMMEDIATE sql_str;
EXECUTE stmt_id [USING var, ...]; | EXECUTE IMMEDIATE stmt_id USING var;
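A minimal sketch of dynamic SQL with a bound parameter in BigQuery (the table name and filter value are hypothetical):

```sql
DECLARE table_name STRING DEFAULT 'mydataset.orders';

-- FORMAT builds the statement text; USING binds the ? placeholder.
EXECUTE IMMEDIATE
  FORMAT("SELECT COUNT(*) AS n FROM `%s` WHERE status = ?", table_name)
  USING 'SHIPPED';
```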

The following dynamic SQL statements are not used in BigQuery:

Flow-of-control statements

The Scripting feature in BigQuery supports flow-of-control statements like those shown in the following table.

Teradata | BigQuery
IF condition THEN stmts ELSE stmts END IF | IF condition THEN stmts ELSE stmts END IF
label_name: LOOP stmts END LOOP label_name; | GOTO-style block constructs are not used in BigQuery. We recommend rewriting them as user-defined functions (UDFs) or using ASSERT statements where they are used for error handling.
REPEAT stmts UNTIL condition END REPEAT; | WHILE condition DO stmts END WHILE
LEAVE outer_proc_label; | LEAVE is not used for GOTO-style blocks; it is used as a synonym for BREAK to leave a WHILE loop.
LEAVE label; | LEAVE is not used for GOTO-style blocks; it is used as a synonym for BREAK to leave a WHILE loop.
WITH RECURSIVE temp_table AS ( ... ); | Recursive queries (also known as recursive common table expressions, or CTEs) are not used in BigQuery. They can be rewritten using arrays of UNION ALL.
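For example, a Teradata REPEAT ... UNTIL loop can be rewritten as a BigQuery WHILE loop, sketched here with a hypothetical counter:

```sql
DECLARE i INT64 DEFAULT 1;

-- Runs the body while the condition holds, replacing REPEAT ... UNTIL.
WHILE i <= 3 DO
  SELECT i;
  SET i = i + 1;
END WHILE;
```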

The following flow-of-control statements are not used in BigQuery because BigQuery doesn't use cursors or sessions:

Metadata and transaction SQL statements

Teradata:

  HELP TABLE table_name;
  HELP VIEW view_name;

BigQuery:

  SELECT
    * EXCEPT(is_generated, generation_expression, is_stored, is_updatable)
  FROM
    mydataset.INFORMATION_SCHEMA.COLUMNS
  WHERE
    table_name = 'table_name';

The same query is valid to get column information for views. For more information, see the Columns view in the BigQuery INFORMATION_SCHEMA.

Teradata:

  SELECT * FROM dbc.tables WHERE tablekind = 'T';

  (Teradata DBC view)

BigQuery:

  SELECT
    * EXCEPT(is_typed)
  FROM
    mydataset.INFORMATION_SCHEMA.TABLES;

For more information, see Introduction to BigQuery INFORMATION_SCHEMA.

Teradata:

  HELP STATISTICS table_name;

BigQuery:

  APPROX_COUNT_DISTINCT(col)

Teradata:

  COLLECT STATS USING SAMPLE ON table_name column (...);

BigQuery: Not used in BigQuery.

Teradata:

  LOCKING TABLE table_name FOR EXCLUSIVE;

BigQuery: BigQuery always uses snapshot isolation. For details, see Consistency guarantees elsewhere in this document.

Teradata:

  SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL ...

BigQuery: BigQuery always uses snapshot isolation. For details, see Consistency guarantees elsewhere in this document.

Teradata:

  BEGIN TRANSACTION;
  SELECT ...
  END TRANSACTION;

BigQuery: BigQuery always uses snapshot isolation. For details, see Consistency guarantees elsewhere in this document.

Teradata:

  EXPLAIN ...

BigQuery: Not used in BigQuery. Similar features are the query plan explanation in the BigQuery web UI and the slot allocation visible in the INFORMATION_SCHEMA views and in audit logging in Cloud Monitoring.

Teradata:

  BEGIN TRANSACTION;
  SELECT ...
  END TRANSACTION;

BigQuery:

  BEGIN
    BEGIN TRANSACTION;
    COMMIT TRANSACTION;
  EXCEPTION WHEN ERROR THEN
    -- Roll back the transaction inside the exception handler.
    SELECT @@error.message;
    ROLLBACK TRANSACTION;
  END;

Multi-statement and multi-line SQL statements

Both Teradata and BigQuery support transactions (sessions) and therefore support statements separated by semicolons that are consistently executed together. For more information, see Multi-statement transactions.

Error codes and messages

Teradata error codes and BigQuery error codes are different. Because it provides a REST API, BigQuery relies primarily on HTTP status codes plus detailed error messages.

If your application logic is currently catching the following errors, try to eliminate the source of the error, because BigQuery will not return the same error codes.

  • SQLSTATE = '02000'—"Row not found"
  • SQLSTATE = '21000'—"Cardinality violation (Unique Index)"
  • SQLSTATE = '22000'—"Data violation (Data Type)"
  • SQLSTATE = '23000'—"Constraint Violation"

In BigQuery, it would be more common to use the INFORMATION_SCHEMA views or audit logging to drill down into errors.

For information about how to handle errors in Scripting, see the sections that follow.

Consistency guarantees and transaction isolation

Both Teradata and BigQuery are atomic, that is, ACID-compliant on a per-mutation level across many rows. For example, a MERGE operation is completely atomic, even with multiple inserted and updated values.

Transactions

Teradata provides either Read Uncommitted (allowing dirty reads) or Serializable transaction isolation when running in session mode (instead of auto-commit mode). In the best case, Teradata achieves strictly serializable isolation by using pessimistic locking against a row hash across all columns of rows across all partitions. Deadlocks are possible. DDL always forces a transaction boundary. Teradata FastLoad jobs run independently, but only on empty tables.

BigQuery also supports transactions. BigQuery helps ensure optimistic concurrency control (first to commit wins) with snapshot isolation, in which a query reads the last committed data before the query starts. This approach guarantees the same level of consistency on a per-row, per-mutation basis and across rows within the same DML statement, yet avoids deadlocks. In the case of multiple UPDATE statements against the same table, BigQuery switches to pessimistic concurrency control and queues multiple UPDATE statements, automatically retrying in case of conflicts. INSERT DML statements and load jobs can run concurrently and independently to append to tables.

Rollback

Teradata supports two session rollback modes: ANSI session mode and Teradata session mode (SET SESSION CHARACTERISTICS and SET SESSION TRANSACTION), depending on which rollback mode you want. In failure cases, the transaction might not be rolled back.

BigQuery supports the ROLLBACK TRANSACTION statement. There is no ABORT statement in BigQuery.

Database limits

Always check the BigQuery public documentation for the latest quotas and limits. Many quotas for large-volume users can be raised by contacting the Cloud Support team. The following table shows a comparison of the Teradata and BigQuery database limits.

Limit | Teradata | BigQuery
Tables per database | Unrestricted | Unrestricted
Columns per table | 2,048 | 10,000
Maximum row size | 1 MB | 100 MB
Column name length | 128 Unicode characters | 300 Unicode characters
Table description length | 128 Unicode characters | 16,384 Unicode characters
Rows per table | Unlimited | Unlimited
Maximum SQL request length | 1 MB | 1 MB (maximum unresolved GoogleSQL query length); 12 MB (maximum resolved legacy and GoogleSQL query length). Streaming: 10 MB (HTTP request size limit); 10,000 (maximum rows per request)
Maximum request and response size | 7 MB (request), 16 MB (response) | 10 MB (request) and 10 GB (response), or virtually unlimited if you use pagination or the Cloud Storage API
Maximum number of concurrent sessions | 120 per parsing engine (PE) | 1,000 concurrent multi-statement queries (can be raised with a slot reservation); 300 concurrent API requests per user
Maximum number of concurrent (fast) loads | 30 (default 5) | No concurrency limit; jobs are queued. 100,000 load jobs per project per day


Last updated 2025-12-15 UTC.