
Snowflake SQL translation guide

This document details the similarities and differences in SQL syntax between Snowflake and BigQuery to help accelerate the planning and execution of moving your EDW (Enterprise Data Warehouse) to BigQuery. Snowflake data warehousing is designed to work with Snowflake-specific SQL syntax. Scripts written for Snowflake might need to be altered before you can use them in BigQuery, because the SQL dialects vary between the services. Use batch SQL translation to migrate your SQL scripts in bulk, or interactive SQL translation to translate ad hoc queries. Snowflake SQL is supported by both tools in preview.

Note: In some cases, there is no direct mapping between a SQL element in Snowflake and BigQuery. However, in most cases, you can achieve the same functionality in BigQuery that you can in Snowflake using an alternative means, as shown in the examples in this document.

Data types

This section shows equivalents between data types in Snowflake and in BigQuery.



| Snowflake | BigQuery | Notes |
|---|---|---|
| NUMBER/DECIMAL/NUMERIC | NUMERIC | The NUMBER data type in Snowflake supports 38 digits of precision and 37 digits of scale. Precision and scale can be specified by the user. BigQuery supports NUMERIC and BIGNUMERIC with optionally specified precision and scale within certain bounds. |
| INT/INTEGER | BIGNUMERIC | INT/INTEGER and all other INT-like data types, such as BIGINT, TINYINT, SMALLINT, and BYTEINT, are aliases for the NUMBER data type where the precision and scale cannot be specified and are always NUMBER(38, 0). |
| BIGINT | BIGNUMERIC | |
| SMALLINT | BIGNUMERIC | |
| TINYINT | BIGNUMERIC | |
| BYTEINT | BIGNUMERIC | |
| FLOAT/FLOAT4/FLOAT8 | FLOAT64 | The FLOAT data type in Snowflake establishes 'NaN' as > X, where X is any FLOAT value (other than 'NaN' itself). The FLOAT data type in BigQuery establishes 'NaN' as < X, where X is any FLOAT value (other than 'NaN' itself). |
| DOUBLE/DOUBLE PRECISION/REAL | FLOAT64 | The DOUBLE data type in Snowflake is synonymous with the FLOAT data type in Snowflake, but is commonly incorrectly displayed as FLOAT. It is properly stored as DOUBLE. |
| VARCHAR | STRING | The VARCHAR data type in Snowflake has a maximum length of 16 MB (uncompressed). If length is not specified, the default is the maximum length. The STRING data type in BigQuery is stored as variable-length UTF-8 encoded Unicode. The maximum length is 16,000 characters. |
| CHAR/CHARACTER | STRING | The CHAR data type in Snowflake has a maximum length of 1. |
| STRING/TEXT | STRING | The STRING data type in Snowflake is synonymous with Snowflake's VARCHAR. |
| BINARY | BYTES | |
| VARBINARY | BYTES | |
| BOOLEAN | BOOL | The BOOL data type in BigQuery can only accept TRUE/FALSE, unlike the BOOLEAN data type in Snowflake, which can accept TRUE/FALSE/NULL. |
| DATE | DATE | The DATE type in Snowflake accepts most common date formats, unlike the DATE type in BigQuery, which only accepts dates in the format 'YYYY-[M]M-[D]D'. |
| TIME | TIME | The TIME type in Snowflake supports 0 to 9 digits of fractional-second precision (nanoseconds), whereas the TIME type in BigQuery supports 0 to 6 digits (microseconds). |
| TIMESTAMP | DATETIME | TIMESTAMP is a user-configurable alias which defaults to TIMESTAMP_NTZ, which maps to DATETIME in BigQuery. |
| TIMESTAMP_LTZ | TIMESTAMP | |
| TIMESTAMP_NTZ/DATETIME | DATETIME | |
| TIMESTAMP_TZ | TIMESTAMP | |
| OBJECT | JSON | The OBJECT type in Snowflake does not support explicitly-typed values. Values are of the VARIANT type. |
| VARIANT | JSON | The VARIANT type in Snowflake does not support explicitly-typed values. Values are of the VARIANT type. |
| ARRAY | ARRAY<JSON> | The ARRAY type in Snowflake can only support VARIANT types, whereas the ARRAY type in BigQuery can support all data types with the exception of an array itself. |
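Because of the DATE difference noted above, date literals sometimes need reformatting during migration. The following is a minimal sketch; the input format string is an assumption for illustration:

-- Snowflake accepts many common date formats
SELECT '01/05/2023'::DATE;

-- BigQuery only accepts 'YYYY-[M]M-[D]D' literals; use PARSE_DATE for other formats
SELECT PARSE_DATE('%m/%d/%Y', '01/05/2023');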

BigQuery also has the following data types, which do not have a direct Snowflake analogue:

Query syntax and query operators

This section addresses differences in query syntax between Snowflake and BigQuery.

SELECT statement

Most Snowflake SELECT statements are compatible with BigQuery. The following table contains a list of minor differences.

| Snowflake | BigQuery |
|---|---|
| SELECT TOP ... FROM table | SELECT expression FROM table ORDER BY expression DESC LIMIT number |
| SELECT x/total AS probability, ROUND(100 * probability, 1) AS pct FROM raw_data (Note: Snowflake supports creating and referencing an alias in the same SELECT statement.) | SELECT x/total AS probability, ROUND(100 * (x/total), 1) AS pct FROM raw_data |
| SELECT * FROM (VALUES (1), (2), (3)) | SELECT AS VALUE STRUCT(1, 2, 3) |

Snowflake aliases and identifiers are case-insensitive by default. To preserve case, enclose aliases and identifiers with double quotes (").

FROM clause

A FROM clause in a query specifies the possible tables, views, subqueries, or table functions to use in a SELECT statement. All of these table references are supported in BigQuery.

The following table contains a list of minor differences.

| Snowflake | BigQuery |
|---|---|
| SELECT $1, $2 FROM (VALUES (1, 'one'), (2, 'two')); | WITH table1 AS (SELECT STRUCT(1 as number, 'one' as spelling) UNION ALL SELECT STRUCT(2 as number, 'two' as spelling)) SELECT * FROM table1 |
| SELECT * FROM table SAMPLE(10) | SELECT * FROM table TABLESAMPLE BERNOULLI (0.1 PERCENT) |
| SELECT * FROM table1 AT(TIMESTAMP => timestamp) or SELECT * FROM table1 BEFORE(STATEMENT => statementID) | SELECT * FROM table FOR SYSTEM_TIME AS OF timestamp (Note: BigQuery does not have a direct alternative to Snowflake's BEFORE using a statement ID. The value of timestamp cannot be more than 7 days before the current timestamp.) |
| @[namespace]<stage_name>[/path] | BigQuery does not support the concept of staged files. |
| SELECT * FROM table START WITH predicate CONNECT BY [PRIOR] col1 = [PRIOR] col2 [, ...] ... | BigQuery does not offer a direct alternative to Snowflake's CONNECT BY. |

BigQuery tables can be referenced in the FROM clause using:

  • [project_id].[dataset_id].[table_name]
  • [dataset_id].[table_name]
  • [table_name]

BigQuery also supports additional table references:

  • Historical versions of the table definition and rows using FOR SYSTEM_TIME AS OF
  • Field paths, or any path that resolves to a field within a data type (that is, a STRUCT)
  • Flattened arrays
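For example, a Snowflake time-travel query maps onto FOR SYSTEM_TIME AS OF. This is a minimal sketch; mydataset.mytable is a hypothetical table, and the BigQuery time travel window is limited as noted above:

-- Snowflake: table contents one hour ago
SELECT * FROM mytable AT(OFFSET => -3600);

-- BigQuery
SELECT *
FROM mydataset.mytable
  FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR);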

WHERE clause

The Snowflake WHERE clause and BigQuery WHERE clause are identical, except for the following:

| Snowflake | BigQuery |
|---|---|
| SELECT col1, col2 FROM table1, table2 WHERE col1 = col2(+) | SELECT col1, col2 FROM table1 INNER JOIN table2 ON col1 = col2 (Note: BigQuery does not support the (+) syntax for JOINs.) |

JOIN types

Both Snowflake and BigQuery support the following types of join:

Both Snowflake and BigQuery support the ON and USING clauses.

The following table contains a list of minor differences.

| Snowflake | BigQuery |
|---|---|
| SELECT col1 FROM table1 NATURAL JOIN table2 | SELECT col1 FROM table1 INNER JOIN table2 USING (col1, col2 [, ...]) (Note: In BigQuery, JOIN clauses require a JOIN condition unless the join is a CROSS JOIN or one of the joined tables is a field within a data type or an array.) |
| SELECT ... FROM table1 AS t1, LATERAL (SELECT * FROM table2 AS t2 WHERE t1.col = t2.col) (Note: Unlike the output of a non-lateral join, the output from a lateral join includes only the rows generated from the in-line view. The rows on the left-hand side do not need to be joined to the right-hand side because the rows on the left-hand side have already been taken into account by being passed into the in-line view.) | SELECT ... FROM table1 AS t1 LEFT JOIN table2 AS t2 ON t1.col = t2.col (Note: BigQuery does not support a direct alternative for LATERAL JOINs.) |

WITH clause

A BigQuery WITH clause contains one or more named subqueries which execute every time a subsequent SELECT statement references them. Snowflake WITH clauses behave the same as BigQuery's, with the exception that BigQuery does not support WITH RECURSIVE.
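A minimal sketch of a WITH clause that runs unchanged in both dialects (orders is a hypothetical table):

WITH totals AS (
  SELECT customer_id, SUM(amount) AS total
  FROM orders
  GROUP BY customer_id
)
SELECT * FROM totals WHERE total > 100;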

GROUP BY clause

Snowflake GROUP BY clauses support GROUP BY, GROUP BY ROLLUP, GROUP BY GROUPING SETS, and GROUP BY CUBE, while BigQuery GROUP BY clauses support GROUP BY, GROUP BY ALL, GROUP BY ROLLUP, GROUP BY GROUPING SETS, and GROUP BY CUBE.

Snowflake HAVING and BigQuery HAVING are synonymous. Note that HAVING occurs after GROUP BY and aggregation, and before ORDER BY.

| Snowflake | BigQuery |
|---|---|
| SELECT col1 as one, col2 as two FROM table GROUP BY (one, 2) | SELECT col1 as one, col2 as two FROM table GROUP BY (one, 2) |
| SELECT col1 as one, col2 as two FROM table GROUP BY ROLLUP (one, 2) | SELECT col1 as one, col2 as two FROM table GROUP BY ROLLUP (one, 2) |
| SELECT col1 as one, col2 as two FROM table GROUP BY GROUPING SETS (one, 2) (Note: Snowflake allows up to 128 grouping sets in the same query block.) | SELECT col1 as one, col2 as two FROM table GROUP BY GROUPING SETS (one, 2) |
| SELECT col1 as one, col2 as two FROM table GROUP BY CUBE (one, 2) (Note: Snowflake allows up to 7 elements (128 grouping sets) in each cube.) | SELECT col1 as one, col2 as two FROM table GROUP BY CUBE (one, 2) |

ORDER BY clause

There are some minor differences between Snowflake ORDER BY clauses and BigQuery ORDER BY clauses.

| Snowflake | BigQuery |
|---|---|
| In Snowflake, NULLs are ranked last by default (ascending order). | In BigQuery, NULLs are ranked first by default (ascending order). |
| You can specify whether NULL values should be ordered first or last using NULLS FIRST or NULLS LAST, respectively. | BigQuery likewise supports NULLS FIRST and NULLS LAST to specify whether NULL values are ordered first or last. |
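To get identical ordering in both systems, state the NULL placement explicitly. A minimal sketch (col1 is a hypothetical column):

-- Both Snowflake and BigQuery accept explicit NULL placement
SELECT col1 FROM table1 ORDER BY col1 ASC NULLS LAST;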

LIMIT/FETCH clause

The LIMIT/FETCH clause in Snowflake constrains the maximum number of rows returned by a statement or subquery. LIMIT (Postgres syntax) and FETCH (ANSI syntax) produce the same result.

In Snowflake and BigQuery, applying aLIMIT clause to a query doesnot affect the amount of data that is read.

| Snowflake | BigQuery |
|---|---|
| SELECT col1, col2 FROM table ORDER BY col1 LIMIT count OFFSET start, or SELECT ... FROM ... ORDER BY ... OFFSET start {ROW | ROWS} FETCH {FIRST | NEXT} count {ROW | ROWS} [ONLY] (Note: NULL, empty string (''), and $$$$ values are accepted and are treated as "unlimited". Primary use is for connectors and drivers.) | SELECT col1, col2 FROM table ORDER BY col1 LIMIT count OFFSET start (Note: BigQuery does not support FETCH; LIMIT replaces FETCH. In BigQuery, OFFSET must be used together with a LIMIT count. Make sure to set the count INT64 value to the minimum necessary ordered rows for best performance, because ordering all result rows unnecessarily leads to worse query execution performance.) |

QUALIFY clause

The QUALIFY clause in Snowflake allows you to filter results for window functions, similar to what HAVING does with aggregate functions and GROUP BY clauses.

Snowflake | BigQuery

SELECT col1, col2 FROM table QUALIFY ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col2) = 1;

The Snowflake QUALIFY clause with an analytics function like ROW_NUMBER(), COUNT(), and with OVER PARTITION BY is expressed in BigQuery as a WHERE clause on a subquery that contains the analytics value.

Using ROW_NUMBER():

SELECT col1, col2
FROM (
  SELECT col1, col2,
    ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col2) RN
  FROM table)
WHERE RN = 1;

Using ARRAY_AGG(), which supports larger partitions:

SELECT result.*
FROM (
  SELECT ARRAY_AGG(table ORDER BY table.col2 DESC LIMIT 1)[OFFSET(0)]
  FROM table
  GROUP BY col1) AS result;

Functions

The following sections list Snowflake functions and their BigQuery equivalents.

Aggregate functions

The following table shows mappings between common Snowflake aggregate, aggregate analytic, and approximate aggregate functions and their BigQuery equivalents.

Snowflake | BigQuery

ANY_VALUE([DISTINCT] expression) [OVER ...]


Note: DISTINCT does not have any effect

ANY_VALUE(expression) [OVER ...]

APPROX_COUNT_DISTINCT([DISTINCT] expression) [OVER ...]


Note: DISTINCT does not have any effect

APPROX_COUNT_DISTINCT(expression)


Note: BigQuery does not support APPROX_COUNT_DISTINCT with window functions

APPROX_PERCENTILE(expression, percentile) [OVER ...]


Note: Snowflake does not have the option to RESPECT NULLS

APPROX_QUANTILES([DISTINCT] expression, 100)[OFFSET(CAST(TRUNC(percentile * 100) AS INT64))]


Note: BigQuery does not support APPROX_QUANTILES with window functions

APPROX_PERCENTILE_ACCUMULATE(expression)

BigQuery does not support the ability to store intermediate state when predicting approximate values.

APPROX_PERCENTILE_COMBINE(state)

BigQuery does not support the ability to store intermediate state when predicting approximate values.

APPROX_PERCENTILE_ESTIMATE(state, percentile)

BigQuery does not support the ability to store intermediate state when predicting approximate values.

APPROX_TOP_K(expression [, number [, counters]])


Note: If no number parameter is specified, the default is 1. Counters should be significantly larger than number.

APPROX_TOP_COUNT(expression, number)


Note: BigQuery does not support APPROX_TOP_COUNT with window functions.

APPROX_TOP_K_ACCUMULATE(expression,counters)

BigQuery does not support the ability to store intermediate state when predicting approximate values.

APPROX_TOP_K_COMBINE(state, [counters])

BigQuery does not support the ability to store intermediate state when predicting approximate values.

APPROX_TOP_K_ESTIMATE(state, [k])

BigQuery does not support the ability to store intermediate state when predicting approximate values.

APPROXIMATE_JACCARD_INDEX([DISTINCT] expression)


You can use a custom UDF to implement MINHASH with k distinct hash functions. Another approach to reduce the variance in MINHASH is to keep k of the minimum values of one hash function. In this case, the Jaccard index can be approximated as follows:

WITH
  minhash_A AS (
    SELECT DISTINCT FARM_FINGERPRINT(TO_JSON_STRING(t)) AS h
    FROM TA AS t
    ORDER BY h
    LIMIT k),
  minhash_B AS (
    SELECT DISTINCT FARM_FINGERPRINT(TO_JSON_STRING(t)) AS h
    FROM TB AS t
    ORDER BY h
    LIMIT k)
SELECT COUNT(*) / k AS APPROXIMATE_JACCARD_INDEX
FROM minhash_A
INNER JOIN minhash_B
ON minhash_A.h = minhash_B.h

APPROXIMATE_SIMILARITY([DISTINCT] expression)


It is a synonym for APPROXIMATE_JACCARD_INDEX and can be implemented in the same way.

ARRAY_AGG([DISTINCT] expression1) [WITHIN GROUP (ORDER BY ...)]

[OVER ([PARTITION BY expression2])]

Note: Snowflake does not support the ability to IGNORE|RESPECT NULLS or to LIMIT directly in ARRAY_AGG.

ARRAY_AGG([DISTINCT] expression1

[{IGNORE|RESPECT} NULLS] [ORDER BY ...] [LIMIT ...])

[OVER (...)]

AVG([DISTINCT] expression) [OVER ...]

AVG([DISTINCT] expression) [OVER ...]


Note: BigQuery's AVG does not perform automatic casting on STRINGs.

BITAND_AGG(expression)

[OVER ...]

BIT_AND(expression) [OVER ...]

Note: BigQuery does not implicitly cast character/text columns to the nearest INTEGER.

BITOR_AGG(expression)

[OVER ...]

BIT_OR(expression)

[OVER ...]


Note: BigQuery does not implicitly cast character/text columns to the nearest INTEGER.

BITXOR_AGG([DISTINCT] expression) [OVER ...]

BIT_XOR([DISTINCT] expression) [OVER ...]


Note: BigQuery does not implicitly cast character/text columns to the nearest INTEGER.

BOOLAND_AGG(expression) [OVER ...]


Note: Snowflake allows numeric, decimal, and floating point values to be treated as TRUE if not zero.

LOGICAL_AND(expression)

[OVER ...]

BOOLOR_AGG(expression)

[OVER ...]


Note: Snowflake allows numeric, decimal, and floating point values to be treated as TRUE if not zero.

LOGICAL_OR(expression)

[OVER ...]

BOOLXOR_AGG(expression)

[OVER ([PARTITION BY <partition_expr>])]


Note: Snowflake allows numeric, decimal, and floating point values to be treated as TRUE if not zero.
For a numeric expression:

SELECT
  CASE COUNT(*)
    WHEN 1 THEN TRUE
    WHEN 0 THEN NULL
    ELSE FALSE
  END AS BOOLXOR_AGG
FROM T
WHERE expression != 0


To use OVER you can run the following (boolean example provided):

SELECT
  CASE COUNT(expression) OVER (PARTITION BY partition_expr)
    WHEN 0 THEN NULL
    ELSE
      CASE COUNT(
        CASE expression WHEN TRUE THEN 1 END)
        OVER (PARTITION BY partition_expr)
        WHEN 1 THEN TRUE
        ELSE FALSE
      END
  END AS BOOLXOR_AGG
FROM T

CORR(dependent, independent)

[OVER ...]

CORR(dependent, independent)

[OVER ...]

COUNT([DISTINCT] expression [, expression2]) [OVER ...]

COUNT([DISTINCT] expression [, expression2]) [OVER ...]

COVAR_POP(dependent, independent) [OVER ...]

COVAR_POP(dependent, independent) [OVER ...]

COVAR_SAMP(dependent, independent)

[OVER ...]

COVAR_SAMP(dependent, independent)

[OVER ...]

GROUPING(expression1 [, expression2...])

BigQuery does not support a direct alternative to Snowflake's GROUPING. Available through a user-defined function.

GROUPING_ID(expression1 [, expression2...])

BigQuery does not support a direct alternative to Snowflake's GROUPING_ID. Available through a user-defined function.

HASH_AGG([DISTINCT] expression1 [, expression2])

[OVER ...]

SELECT
  BIT_XOR(
    FARM_FINGERPRINT(
      TO_JSON_STRING(t))) [OVER ...]
FROM t

SELECT HLL([DISTINCT] expression1 [, expression2])

[OVER ...]


Note: Snowflake does not allow you to specify precision.

SELECT HLL_COUNT.EXTRACT(sketch) FROM (

SELECT HLL_COUNT.INIT(expression)

AS sketch FROM table)


Note: BigQuery does not support HLL_COUNT... with window functions. A user cannot include multiple expressions in a single HLL_COUNT... function.

HLL_ACCUMULATE([DISTINCT] expression)


Note: Snowflake does not allow you to specify precision.

HLL_COUNT.INIT(expression [, precision])

HLL_COMBINE([DISTINCT] state)

HLL_COUNT.MERGE_PARTIAL(sketch)

HLL_ESTIMATE(state)

HLL_COUNT.EXTRACT(sketch)

HLL_EXPORT(binary)

BigQuery does not support a direct alternative to Snowflake's HLL_EXPORT.

HLL_IMPORT(object)

BigQuery does not support a direct alternative to Snowflake's HLL_IMPORT.

KURTOSIS(expression)

[OVER ...]

BigQuery does not support a direct alternative to Snowflake's KURTOSIS.

LISTAGG(

[DISTINCT] aggregate_expression

[, delimiter]

)

[OVER ...]

STRING_AGG(

[DISTINCT] aggregate_expression

[, delimiter]

)

[OVER ...]

MEDIAN(expression) [OVER ...]


Note: Snowflake does not support the ability to IGNORE|RESPECT NULLS or to LIMIT directly in ARRAY_AGG.

PERCENTILE_CONT(
  value_expression,
  0.5
  [ {RESPECT | IGNORE} NULLS]
) OVER()

MAX(expression) [OVER ...]


MIN(expression) [OVER ...]

MAX(expression) [OVER ...]


MIN(expression) [OVER ...]

MINHASH(k, [DISTINCT] expressions)

You can use a custom UDF to implement MINHASH with k distinct hash functions. Another approach to reduce the variance in MINHASH is to keep k of the minimum values of one hash function:

SELECT DISTINCT
  FARM_FINGERPRINT(
    TO_JSON_STRING(t)) AS MINHASH
FROM t
ORDER BY MINHASH
LIMIT k

MINHASH_COMBINE([DISTINCT] state)

SELECT DISTINCT h
FROM (
  (SELECT DISTINCT FARM_FINGERPRINT(TO_JSON_STRING(t)) AS h
   FROM TA AS t ORDER BY h LIMIT k)
  UNION DISTINCT
  (SELECT DISTINCT FARM_FINGERPRINT(TO_JSON_STRING(t)) AS h
   FROM TB AS t ORDER BY h LIMIT k)
)
ORDER BY h
LIMIT k

MODE(expr1)

OVER ( [ PARTITION BY <expr2> ] )

SELECT expr1
FROM (
  SELECT
    expr1,
    ROW_NUMBER() OVER (
      PARTITION BY expr2
      ORDER BY cnt DESC) rn
  FROM (
    SELECT
      expr1,
      expr2,
      COUNTIF(expr1 IS NOT NULL) OVER
        (PARTITION BY expr2, expr1) cnt
    FROM t))
WHERE rn = 1

OBJECT_AGG(key, value) [OVER ...]

You may consider using TO_JSON_STRING to convert a value into a JSON-formatted string.

PERCENTILE_CONT(percentile) WITHIN GROUP (ORDER BY value_expression)

[OVER ...]

PERCENTILE_CONT(

value_expression,

percentile

[ {RESPECT | IGNORE} NULLS]

) OVER()

PERCENTILE_DISC(percentile) WITHIN GROUP (ORDER BY value_expression)

[OVER ...]

PERCENTILE_DISC(

value_expression,

percentile

[ {RESPECT | IGNORE} NULLS]

) OVER()

REGR_AVGX(dependent, independent)

[OVER ...]

SELECT AVG(independent) [OVER ...]

FROM table

WHERE (

(dependent IS NOT NULL) AND

(independent IS NOT NULL)

)

REGR_AVGY(dependent, independent)

[OVER ...]

SELECT AVG(dependent) [OVER ...]

FROM table

WHERE (

(dependent IS NOT NULL) AND

(independent IS NOT NULL)

)

REGR_COUNT(dependent, independent)

[OVER ...]

SELECT COUNT(*) [OVER ...]

FROM table

WHERE (

(dependent IS NOT NULL) AND

(independent IS NOT NULL)

)

REGR_INTERCEPT(dependent, independent)

[OVER ...]

SELECT

AVG(dependent) -

COVAR_POP(dependent,independent)/

VAR_POP(independent) *

AVG(independent)

[OVER ...]

FROM table

WHERE (

(dependent IS NOT NULL) AND

(independent IS NOT NULL)

)

[GROUP BY ...]

REGR_R2(dependent, independent)

[OVER ...]

SELECT

CASE

WHEN VAR_POP(independent) = 0

THEN NULL

WHEN VAR_POP(dependent) = 0 AND VAR_POP(independent) != 0

THEN 1

ELSE POWER(CORR(dependent, independent), 2)

END AS ...

FROM table

WHERE (

(dependent IS NOT NULL) AND

(independent IS NOT NULL)

)

[GROUP BY ...]

REGR_SLOPE(dependent, independent)

[OVER ...]

SELECT

COVAR_POP(dependent,independent)/

VAR_POP(independent)

[OVER ...]

FROM table

WHERE (

(dependent IS NOT NULL) AND

(independent IS NOT NULL)

)

[GROUP BY ...]

REGR_SXX(dependent, independent)

[OVER ...]

SELECT COUNT(*) * VAR_POP(independent)

[OVER ...]

FROM table

WHERE (

(dependent IS NOT NULL) AND

(independent IS NOT NULL)

)

[GROUP BY ...]

REGR_SYY(dependent, independent)

[OVER ...]

SELECT COUNT(*) * VAR_POP(dependent)

[OVER ...]

FROM table

WHERE (

(dependent IS NOT NULL) AND

(independent IS NOT NULL)

)

[GROUP BY ...]

SKEW(expression)

BigQuery does not support a direct alternative to Snowflake's SKEW.

STDDEV([DISTINCT] expression)

[OVER ...]

STDDEV([DISTINCT] expression)

[OVER ...]

STDDEV_POP([DISTINCT] expression)

[OVER ...]

STDDEV_POP([DISTINCT] expression)

[OVER ...]

STDDEV_SAMP([DISTINCT] expression)

[OVER ...]

STDDEV_SAMP([DISTINCT] expression)

[OVER ...]

SUM([DISTINCT] expression)

[OVER ...]

SUM([DISTINCT] expression)

[OVER ...]

VAR_POP([DISTINCT] expression)

[OVER ...]


Note: Snowflake supports the ability to cast VARCHARs to floating point values.

VAR_POP([DISTINCT] expression)

[OVER ...]

VARIANCE_POP([DISTINCT] expression)

[OVER ...]


Note: Snowflake supports the ability to cast VARCHARs to floating point values.

VAR_POP([DISTINCT] expression)

[OVER ...]

VAR_SAMP([DISTINCT] expression)

[OVER ...]


Note: Snowflake supports the ability to cast VARCHARs to floating point values.

VAR_SAMP([DISTINCT] expression)

[OVER ...]

VARIANCE([DISTINCT] expression)

[OVER ...]


Note: Snowflake supports the ability to cast VARCHARs to floating point values.

VARIANCE([DISTINCT] expression)

[OVER ...]

BigQuery also offers the following aggregate, aggregate analytic, and approximate aggregate functions, which do not have a direct analogue in Snowflake:

Bitwise expression functions

The following table shows mappings between common Snowflake bitwise expression functions and their BigQuery equivalents.

If the data type of an expression is not INTEGER, Snowflake attempts to cast to INTEGER. However, BigQuery does not attempt to cast to INTEGER.

| Snowflake | BigQuery |
|---|---|
| BITAND(expression1, expression2) | BIT_AND(x) FROM UNNEST([expression1, expression2]) AS x, or expression1 & expression2 |
| BITNOT(expression) | ~ expression |
| BITOR(expression1, expression2) | BIT_OR(x) FROM UNNEST([expression1, expression2]) AS x, or expression1 | expression2 |
| BITSHIFTLEFT(expression, n) | expression << n |
| BITSHIFTRIGHT(expression, n) | expression >> n |
| BITXOR(expression, expression) (Note: Snowflake does not support DISTINCT.) | BIT_XOR([DISTINCT] x) FROM UNNEST([expression1, expression2]) AS x, or expression ^ expression |
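For scalar operands, the operator forms are usually the simplest translation. A minimal sketch with literal values for illustration:

-- Snowflake
SELECT BITAND(12, 10), BITOR(12, 10), BITXOR(12, 10);

-- BigQuery
SELECT 12 & 10, 12 | 10, 12 ^ 10;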

Conditional expression functions

The following table shows mappings between common Snowflake conditional expressions and their BigQuery equivalents.

| Snowflake | BigQuery |
|---|---|
| expression [ NOT ] BETWEEN lower AND upper | (expression >= lower AND expression <= upper) |
| BOOLAND(expression1, expression2) (Note: Snowflake allows numeric, decimal, and floating point values to be treated as TRUE if not zero.) | LOGICAL_AND(x) FROM UNNEST([expression1, expression2]) AS x, or expression1 AND expression2 |
| BOOLNOT(expression1) (Note: Snowflake allows numeric, decimal, and floating point values to be treated as TRUE if not zero.) | NOT expression |
| BOOLOR (Note: Snowflake allows numeric, decimal, and floating point values to be treated as TRUE if not zero.) | LOGICAL_OR(x) FROM UNNEST([expression1, expression2]) AS x, or expression1 OR expression2 |
| BOOLXOR (Note: Snowflake allows numeric, decimal, and floating point values to be treated as TRUE if not zero.) | BigQuery does not support a direct alternative to Snowflake's BOOLXOR. |
| CASE [expression] WHEN condition1 THEN result1 [WHEN condition2 THEN result2] [...] [ELSE result3] END | CASE [expression] WHEN condition1 THEN result1 [WHEN condition2 THEN result2] [...] [ELSE result3] END |
| COALESCE(expr1, expr2 [, ...]) (Note: Snowflake requires at least two expressions. BigQuery only requires one.) | COALESCE(expr1 [, ...]) |
| DECODE(expression, search1, result1 [, search2, result2...] [, default]) | CASE [expression] WHEN condition1 THEN result1 [WHEN condition2 THEN result2] [...] [ELSE result3] END (Note: BigQuery supports subqueries in condition statements. This can be used to reproduce Snowflake's DECODE. You must use IS NULL instead of = NULL to match NULL select expressions with NULL search expressions.) |
| EQUAL_NULL(expression1, expression2) | BigQuery does not support a direct alternative to Snowflake's EQUAL_NULL. |
| GREATEST(expression1 [, expression2]...) | GREATEST(expression1 [, expression2]...) |
| IFF(condition, true_result, false_result) | IF(condition, true_result, false_result) |
| IFNULL(expression1, expression2) | IFNULL(expression1, expression2) |
| [ NOT ] IN ... | [ NOT ] IN ... |
| expression1 IS [ NOT ] DISTINCT FROM expression2 | BigQuery does not support a direct alternative to Snowflake's IS [ NOT ] DISTINCT FROM. |
| expression IS [ NOT ] NULL | expression IS [ NOT ] NULL |
| IS_NULL_VALUE(variant_expr) | BigQuery does not support VARIANT data types. |
| LEAST(expression, ...) | LEAST(expression, ...) |
| NULLIF(expression1, expression2) | NULLIF(expression1, expression2) |
| NVL(expression1, expression2) | IFNULL(expression1, expression2) |
| NVL2(expr1, expr2, expr3) | IF(expr1 IS NOT NULL, expr2, expr3) |
| REGR_VALX(expr1, expr2) | IF(expr1 IS NULL, NULL, expr2) (Note: BigQuery does not support a direct alternative to Snowflake's REGR... functions.) |
| REGR_VALY(expr1, expr2) | IF(expr2 IS NULL, NULL, expr1) (Note: BigQuery does not support a direct alternative to Snowflake's REGR... functions.) |
| ZEROIFNULL(expression) | IFNULL(expression, 0) |
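A minimal sketch of translating DECODE into a CASE expression (the tickets table and status codes are hypothetical):

-- Snowflake
SELECT DECODE(status, 1, 'open', 2, 'closed', 'unknown') FROM tickets;

-- BigQuery
SELECT CASE status WHEN 1 THEN 'open' WHEN 2 THEN 'closed' ELSE 'unknown' END
FROM tickets;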

Context functions

The following table shows mappings between common Snowflake context functions and their BigQuery equivalents.

| Snowflake | BigQuery |
|---|---|
| CURRENT_ACCOUNT() | SESSION_USER() (Note: Not a direct comparison. Snowflake returns the account ID; BigQuery returns the user email address.) |
| CURRENT_CLIENT() | Concept not used in BigQuery |
| CURRENT_DATABASE() | SELECT catalog_name FROM INFORMATION_SCHEMA.SCHEMATA (This returns a table of project names. Not a direct comparison.) |
| CURRENT_DATE[()] (Note: Snowflake does not enforce '()' after the CURRENT_DATE command to comply with ANSI standards.) | CURRENT_DATE([timezone]) (Note: BigQuery's CURRENT_DATE supports optional time zone specification.) |
| CURRENT_REGION() | SELECT location FROM INFORMATION_SCHEMA.SCHEMATA (Note: BigQuery's INFORMATION_SCHEMA.SCHEMATA returns more generalized location references than Snowflake's CURRENT_REGION(). Not a direct comparison.) |
| CURRENT_ROLE() | Concept not used in BigQuery |
| CURRENT_SCHEMA() | SELECT schema_name FROM INFORMATION_SCHEMA.SCHEMATA (This returns a table of all datasets (also called schemas) available in the project or region. Not a direct comparison.) |
| CURRENT_SCHEMAS() | Concept not used in BigQuery |
| CURRENT_SESSION() | Concept not used in BigQuery |
| CURRENT_STATEMENT() | SELECT query FROM INFORMATION_SCHEMA.JOBS_BY_* (Note: BigQuery's INFORMATION_SCHEMA.JOBS_BY_* allows for searching for queries by job type, start/end type, and so on.) |
| CURRENT_TIME[([frac_sec_prec])] (Note: Snowflake allows for optional fractional-second precision. Valid values range from 0-9 digits (nanoseconds). The default value is 9. To comply with ANSI, this can be called without '()'.) | CURRENT_TIME() |
| CURRENT_TIMESTAMP[([frac_sec_prec])] (Note: Snowflake allows for optional fractional-second precision. Valid values range from 0-9 digits (nanoseconds). The default value is 9. To comply with ANSI, this can be called without '()'. Set TIMEZONE as a session parameter.) | CURRENT_DATETIME([timezone]) or CURRENT_TIMESTAMP() (Note: CURRENT_DATETIME returns the DATETIME data type (not supported in Snowflake). CURRENT_TIMESTAMP returns the TIMESTAMP data type.) |
| CURRENT_TRANSACTION() | SELECT job_id FROM INFORMATION_SCHEMA.JOBS_BY_* (Note: BigQuery's INFORMATION_SCHEMA.JOBS_BY_* allows for searching for job IDs by job type, start/end type, and so on.) |
| CURRENT_USER[()] (Note: Snowflake does not enforce '()' after the CURRENT_USER command to comply with ANSI standards.) | SESSION_USER() or SELECT user_email FROM INFORMATION_SCHEMA.JOBS_BY_* (Note: Not a direct comparison. Snowflake returns the username; BigQuery returns the user email address.) |
| CURRENT_VERSION() | Concept not used in BigQuery |
| CURRENT_WAREHOUSE() | SELECT catalog_name FROM INFORMATION_SCHEMA.SCHEMATA |
| LAST_QUERY_ID([num]) | SELECT job_id FROM INFORMATION_SCHEMA.JOBS_BY_* (Note: BigQuery's INFORMATION_SCHEMA.JOBS_BY_* allows for searching for job IDs by job type, start/end type, and so on.) |
| LAST_TRANSACTION() | SELECT job_id FROM INFORMATION_SCHEMA.JOBS_BY_* (Note: BigQuery's INFORMATION_SCHEMA.JOBS_BY_* allows for searching for job IDs by job type, start/end type, and so on.) |
| LOCALTIME() (Note: Snowflake does not enforce '()' after the LOCALTIME command to comply with ANSI standards.) | CURRENT_TIME() |
| LOCALTIMESTAMP() | CURRENT_DATETIME([timezone]) or CURRENT_TIMESTAMP() (Note: CURRENT_DATETIME returns the DATETIME data type (not supported in Snowflake). CURRENT_TIMESTAMP returns the TIMESTAMP data type.) |
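For example, an approximation of LAST_QUERY_ID can be built by ordering recent jobs. A minimal sketch, assuming the JOBS_BY_USER view in the us region:

-- BigQuery: job ID of the current user's most recent job
SELECT job_id
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_USER
ORDER BY creation_time DESC
LIMIT 1;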

Conversion functions

The following table shows mappings between common Snowflake conversion functions and their BigQuery equivalents.

Keep in mind that functions that seem identical in Snowflake and BigQuery may return different data types.

| Snowflake | BigQuery |
|---|---|
| CAST(expression AS type), or expression :: type | CAST(expression AS type) |
| TO_ARRAY(expression) | [expression], or ARRAY(subquery) |
| TO_BINARY(expression [, format]) (Note: Snowflake supports HEX, BASE64, and UTF-8 conversion. Snowflake also supports TO_BINARY using the VARIANT data type. BigQuery does not have an alternative to the VARIANT data type.) | TO_HEX(CAST(expression AS BYTES)), TO_BASE64(CAST(expression AS BYTES)), or CAST(expression AS BYTES) (Note: BigQuery's default STRING casting uses UTF-8 encoding. Snowflake does not have an option to support BASE32 encoding.) |
| TO_BOOLEAN(expression) (Note: for INT64 input, TRUE: non-zero, FALSE: 0; for STRING input, TRUE: "true"/"t"/"yes"/"y"/"on"/"1", FALSE: "false"/"f"/"no"/"n"/"off"/"0") | CAST(expression AS BOOL) (Note: for INT64 input, TRUE: non-zero, FALSE: 0; for STRING input, TRUE: "true", FALSE: "false") |
| TO_CHAR(expression [, format]) or TO_VARCHAR(expression [, format]) (Note: Snowflake's format models can be found in the Snowflake documentation. BigQuery does not have an alternative to the VARIANT data type.) | CAST(expression AS STRING) (Note: BigQuery's input expression can be formatted using FORMAT_DATE, FORMAT_DATETIME, FORMAT_TIME, or FORMAT_TIMESTAMP.) |
| TO_DATE(expression [, format]) or DATE(expression [, format]) (Note: Snowflake supports the ability to directly convert INTEGER types to DATE types. Snowflake's format models can be found in the Snowflake documentation. BigQuery does not have an alternative to the VARIANT data type.) | CAST(expression AS DATE) (Note: BigQuery's input expression can be formatted using FORMAT, FORMAT_DATETIME, or FORMAT_TIMESTAMP.) |
| TO_DECIMAL(expression [, format] [, precision [, scale]]), TO_NUMBER(expression [, format] [, precision [, scale]]), or TO_NUMERIC(expression [, format] [, precision [, scale]]) (Note: Snowflake's format models for the DECIMAL, NUMBER, and NUMERIC data types can be found in the Snowflake documentation. BigQuery does not have an alternative to the VARIANT data type.) | ROUND(CAST(expression AS NUMERIC), x) (Note: BigQuery's input expression can be formatted using FORMAT.) |
| TO_DOUBLE(expression [, format]) (Note: Snowflake's format models for the DOUBLE data type can be found in the Snowflake documentation. BigQuery does not have an alternative to the VARIANT data type.) | CAST(expression AS FLOAT64) (Note: BigQuery's input expression can be formatted using FORMAT.) |
| TO_JSON(variant_expression) | BigQuery does not have an alternative to Snowflake's VARIANT data type. |
| TO_OBJECT(variant_expression) | BigQuery does not have an alternative to Snowflake's VARIANT data type. |
| TO_TIME(expression [, format]) or TIME(expression [, format]) (Note: Snowflake's format models for the STRING data type can be found in the Snowflake documentation. BigQuery does not have an alternative to the VARIANT data type.) | CAST(expression AS TIME) (Note: BigQuery does not have an alternative to Snowflake's VARIANT data type. BigQuery's input expression can be formatted using FORMAT, FORMAT_DATETIME, FORMAT_TIMESTAMP, or FORMAT_TIME.) |
| TO_TIMESTAMP(expression [, scale]), TO_TIMESTAMP_LTZ(expression [, scale]), TO_TIMESTAMP_NTZ(expression [, scale]), or TO_TIMESTAMP_TZ(expression [, scale]) (Note: BigQuery does not have an alternative to the VARIANT data type.) | CAST(expression AS TIMESTAMP) (Note: BigQuery's input expression can be formatted using FORMAT, FORMAT_DATE, FORMAT_DATETIME, or FORMAT_TIME. Timezone can be included or excluded through FORMAT_TIMESTAMP parameters.) |
| TO_VARIANT(expression) | BigQuery does not have an alternative to Snowflake's VARIANT data type. |
| TO_XML(variant_expression) | BigQuery does not have an alternative to Snowflake's VARIANT data type. |
| TRY_CAST(expression AS type) | SAFE_CAST(expression AS type) |
| TRY_TO_BINARY(expression [, format]) | TO_HEX(SAFE_CAST(expression AS BYTES)), TO_BASE64(SAFE_CAST(expression AS BYTES)), or SAFE_CAST(expression AS BYTES) |
| TRY_TO_BOOLEAN(expression) | SAFE_CAST(expression AS BOOL) |
| TRY_TO_DATE(expression) | SAFE_CAST(expression AS DATE) |
| TRY_TO_DECIMAL(expression [, format] [, precision [, scale]]), TRY_TO_NUMBER(expression [, format] [, precision [, scale]]), or TRY_TO_NUMERIC(expression [, format] [, precision [, scale]]) | ROUND(SAFE_CAST(expression AS NUMERIC), x) |
| TRY_TO_DOUBLE(expression) | SAFE_CAST(expression AS FLOAT64) |
| TRY_TO_TIME(expression) | SAFE_CAST(expression AS TIME) |
| TRY_TO_TIMESTAMP(expression), TRY_TO_TIMESTAMP_LTZ(expression), TRY_TO_TIMESTAMP_NTZ(expression), or TRY_TO_TIMESTAMP_TZ(expression) | SAFE_CAST(expression AS TIMESTAMP) |
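A minimal sketch of the TRY_* to SAFE_CAST pattern; both return NULL instead of raising an error on bad input:

-- Snowflake
SELECT TRY_TO_NUMBER('12.3'), TRY_TO_NUMBER('abc');

-- BigQuery
SELECT SAFE_CAST('12.3' AS NUMERIC), SAFE_CAST('abc' AS NUMERIC);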

BigQuery also offers the following conversion functions, which do not have a direct analogue in Snowflake:

Data generation functions

The following table shows mappings between common Snowflake data generation functions and their BigQuery equivalents.

| Snowflake | BigQuery |
|---|---|
| NORMAL(mean, stddev, gen) | BigQuery does not support a direct comparison to Snowflake's NORMAL. |
| RANDOM([seed]) | IF(RAND()>0.5, CAST(RAND()*POW(10, 18) AS INT64), (-1)*CAST(RAND()*POW(10, 18) AS INT64)) (Note: BigQuery does not support seeding.) |
| RANDSTR(length, gen) | BigQuery does not support a direct comparison to Snowflake's RANDSTR. |
| SEQ1 / SEQ2 / SEQ4 / SEQ8 | BigQuery does not support a direct comparison to Snowflake's SEQ_. |
| UNIFORM(min, max, gen) | CAST(min + RAND()*(max-min) AS INT64) (Note: Use persistent UDFs to create an equivalent to Snowflake's UNIFORM; see the sketch after this table.) |
| UUID_STRING([uuid, name]) (Note: Snowflake returns 128 random bits. Snowflake supports both version 4 (random) and version 5 (named) UUIDs.) | GENERATE_UUID() (Note: BigQuery returns 122 random bits. BigQuery only supports version 4 UUIDs.) |
| ZIPF(s, N, gen) | BigQuery does not support a direct comparison to Snowflake's ZIPF. |
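The following is a minimal sketch of such a persistent UDF, assuming a hypothetical my_dataset; it mirrors UNIFORM for integer bounds but cannot honor Snowflake's gen (seed) argument because RAND() is unseeded:

CREATE OR REPLACE FUNCTION my_dataset.uniform_int(min_val INT64, max_val INT64) AS (
  -- Inclusive integer range
  CAST(FLOOR(min_val + RAND() * (max_val - min_val + 1)) AS INT64)
);

SELECT my_dataset.uniform_int(1, 10);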

Date and time functions

The following table shows mappings between common Snowflake date and time functions and their BigQuery equivalents. BigQuery date and time functions include date functions, datetime functions, time functions, and timestamp functions.

| Snowflake | BigQuery |
|---|---|
| ADD_MONTHS(date, months) | CAST(DATE_ADD(date, INTERVAL integer MONTH) AS TIMESTAMP) |
| CONVERT_TIMEZONE(source_tz, target_tz, source_timestamp) or CONVERT_TIMEZONE(target_tz, source_timestamp) | PARSE_TIMESTAMP("%c%z", FORMAT_TIMESTAMP("%c%z", timestamp, target_timezone)) (Note: source_timezone is always UTC in BigQuery.) |
| DATE_FROM_PARTS(year, month, day) (Note: Snowflake supports overflow and negative dates. For example, DATE_FROM_PARTS(2000, 1 + 24, 1) returns Jan 1, 2002. This is not supported in BigQuery.) | DATE(year, month, day), DATE(timestamp_expression [, timezone]), or DATE(datetime_expression) |
| DATE_PART(part, dateOrTime) (Note: Snowflake supports the day of week ISO, nanosecond, and epoch second/millisecond/microsecond/nanosecond part types. BigQuery does not. See the full list of Snowflake part types in the Snowflake documentation.) | EXTRACT(part FROM dateOrTime) (Note: BigQuery supports the week(<weekday>), microsecond, and millisecond part types. Snowflake does not. See the full list of BigQuery part types in the BigQuery documentation.) |
| DATE_TRUNC(part, dateOrTime) (Note: Snowflake supports the nanosecond part type. BigQuery does not. See the full list of Snowflake part types in the Snowflake documentation.) | DATE_TRUNC(date, part), DATETIME_TRUNC(datetime, part), TIME_TRUNC(time, part), or TIMESTAMP_TRUNC(timestamp, part [, timezone]) (Note: BigQuery supports the week(<weekday>), ISO week, and ISO year part types. Snowflake does not.) |
| DATEADD(part, value, dateOrTime) | DATE_ADD(date, INTERVAL value part) |
| DATEDIFF(part, start_date_or_time, end_date_or_time) (Note: Snowflake supports calculating the difference between two date, time, and timestamp types in this function.) | DATE_DIFF(end_date, start_date, part), DATETIME_DIFF(end_datetime, start_datetime, part), TIME_DIFF(end_time, start_time, part), or TIMESTAMP_DIFF(end_timestamp, start_timestamp, part) (Note: BigQuery supports the week(<weekday>) and ISO year part types.) |
| DAYNAME(dateOrTimestamp) | FORMAT_DATE('%a', date), FORMAT_DATETIME('%a', datetime), or FORMAT_TIMESTAMP('%a', timestamp) |
| EXTRACT(part FROM dateOrTime) (Note: Snowflake supports the day of week ISO, nanosecond, and epoch second/millisecond/microsecond/nanosecond part types. BigQuery does not. See the full list of Snowflake part types in the Snowflake documentation.) | EXTRACT(part FROM dateOrTime) (Note: BigQuery supports the week(<weekday>), microsecond, and millisecond part types. Snowflake does not. See the full list of BigQuery part types in the BigQuery documentation.) |
| [HOUR, MINUTE, SECOND](timeOrTimestamp) | EXTRACT(part FROM timestamp [AT TIME ZONE timezone]) |
| LAST_DAY(dateOrTime [, part]) | DATE_SUB(DATE_TRUNC(DATE_ADD(date, INTERVAL 1 part), part), INTERVAL 1 DAY) |
| MONTHNAME(dateOrTimestamp) | FORMAT_DATE('%b', date), FORMAT_DATETIME('%b', datetime), or FORMAT_TIMESTAMP('%b', timestamp) |
| NEXT_DAY(dateOrTime, dowString) | DATE_ADD(DATE_TRUNC(date, WEEK(dowString)), INTERVAL 1 WEEK) (Note: dowString might need to be reformatted. For example, Snowflake's 'su' will be BigQuery's 'SUNDAY'.) |
| PREVIOUS_DAY(dateOrTime, dowString) | DATE_TRUNC(date, WEEK(dowString)) (Note: dowString might need to be reformatted. For example, Snowflake's 'su' will be BigQuery's 'SUNDAY'.) |
| TIME_FROM_PARTS(hour, minute, second [, nanosecond]) (Note: Snowflake supports overflow times. For example, TIME_FROM_PARTS(0, 100, 0) returns 01:40:00. This is not supported in BigQuery. BigQuery does not support nanoseconds.) | TIME(hour, minute, second), TIME(timestamp [, timezone]), or TIME(datetime) |
| TIME_SLICE(dateOrTime, sliceLength, part [, START]) or TIME_SLICE(dateOrTime, sliceLength, part [, END]) | DATE_TRUNC(DATE_SUB(CURRENT_DATE(), INTERVAL value MONTH), MONTH) or DATE_TRUNC(DATE_ADD(CURRENT_DATE(), INTERVAL value MONTH), MONTH) (Note: BigQuery does not support a direct, exact comparison to Snowflake's TIME_SLICE. Use DATETIME_TRUNC, TIME_TRUNC, or TIMESTAMP_TRUNC for the appropriate data type.) |
| TIMEADD(part, value, dateOrTime) | TIME_ADD(time, INTERVAL value part) |
| TIMEDIFF(part, expression1, expression2) (Note: Snowflake supports calculating the difference between two date, time, and timestamp types in this function.) | DATE_DIFF(dateExpression1, dateExpression2, part), DATETIME_DIFF(datetimeExpression1, datetimeExpression2, part), TIME_DIFF(timeExpression1, timeExpression2, part), or TIMESTAMP_DIFF(timestampExpression1, timestampExpression2, part) (Note: BigQuery supports the week(<weekday>) and ISO year part types.) |
| TIMESTAMP_[LTZ, NTZ, TZ_]FROM_PARTS(year, month, day, hour, second [, nanosecond] [, timezone]) | TIMESTAMP(string_expression [, timezone]), TIMESTAMP(date_expression [, timezone]), or TIMESTAMP(datetime_expression [, timezone]) (Note: BigQuery requires timestamps to be input as STRING types. Example: "2008-12-25 15:30:00") |
| TIMESTAMPADD(part, value, dateOrTime) | TIMESTAMP_ADD(timestamp, INTERVAL value part) |
| TIMESTAMPDIFF(part, expression1, expression2) (Note: Snowflake supports calculating the difference between two date, time, and timestamp types in this function.) | DATE_DIFF(dateExpression1, dateExpression2, part), DATETIME_DIFF(datetimeExpression1, datetimeExpression2, part), TIME_DIFF(timeExpression1, timeExpression2, part), or TIMESTAMP_DIFF(timestampExpression1, timestampExpression2, part) (Note: BigQuery supports the week(<weekday>) and ISO year part types.) |
| TRUNC(dateOrTime, part) (Note: Snowflake supports the nanosecond part type. BigQuery does not. See the full list of Snowflake part types in the Snowflake documentation.) | DATE_TRUNC(date, part), DATETIME_TRUNC(datetime, part), TIME_TRUNC(time, part), or TIMESTAMP_TRUNC(timestamp, part [, timezone]) (Note: BigQuery supports the week(<weekday>), ISO week, and ISO year part types. Snowflake does not.) |
| [YEAR*, DAY*, WEEK*, MONTH, QUARTER](dateOrTimestamp) | EXTRACT(part FROM timestamp [AT TIME ZONE timezone]) |
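Note the reversed operand order when translating DATEDIFF. A minimal sketch with literal dates:

-- Snowflake: (part, start, end)
SELECT DATEDIFF(day, '2023-01-01'::DATE, '2023-03-01'::DATE);

-- BigQuery: (end, start, part)
SELECT DATE_DIFF(DATE '2023-03-01', DATE '2023-01-01', DAY);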

BigQuery also offers the following date and time functions, which do not have a direct analogue in Snowflake:

Information schema and table functions

BigQuery does not conceptually support many of Snowflake's information schema and table functions. Snowflake offers the following information schema and table functions, which do not have a direct analogue in BigQuery:

The following is a list of associated BigQuery and Snowflake information schema and table functions.

| Snowflake | BigQuery |
|---|---|
| QUERY_HISTORY, QUERY_HISTORY_BY_* | INFORMATION_SCHEMA.JOBS_BY_* (Note: Not a direct alternative.) |
| TASK_HISTORY | INFORMATION_SCHEMA.JOBS_BY_* (Note: Not a direct alternative.) |

BigQuery offers the following information schema and table functions, which do not have a direct analogue in Snowflake:

Numeric functions

The following table shows mappings between common Snowflake numeric functions and their BigQuery equivalents.

| Snowflake | BigQuery |
|---|---|
| ABS(expression) | ABS(expression) |
| ACOS(expression) | ACOS(expression) |
| ACOSH(expression) | ACOSH(expression) |
| ASIN(expression) | ASIN(expression) |
| ASINH(expression) | ASINH(expression) |
| ATAN(expression) | ATAN(expression) |
| ATAN2(y, x) | ATAN2(y, x) |
| ATANH(expression) | ATANH(expression) |
| CBRT(expression) | POW(expression, 1/3) |
| CEIL(expression [, scale]) | CEIL(expression) (Note: BigQuery's CEIL does not support the ability to indicate precision or scale. ROUND does not allow you to specify rounding up.) |
| COS(expression) | COS(expression) |
| COSH(expression) | COSH(expression) |
| COT(expression) | 1/TAN(expression) |
| DEGREES(expression) | (expression)*(180/ACOS(-1)) |
| EXP(expression) | EXP(expression) |
| FACTORIAL(expression) | BigQuery does not have a direct alternative to Snowflake's FACTORIAL. Use a user-defined function, as in the sketch after this table. |
| FLOOR(expression [, scale]) | FLOOR(expression) (Note: BigQuery's FLOOR does not support the ability to indicate precision or scale. ROUND does not allow you to specify rounding up. TRUNC performs synonymously for positive numbers but not negative numbers, as it evaluates absolute value.) |
| HAVERSINE(lat1, lon1, lat2, lon2) | ST_DISTANCE(ST_GEOGPOINT(lon1, lat1), ST_GEOGPOINT(lon2, lat2))/1000 (Note: Not an exact match, but close enough.) |
| LN(expression) | LN(expression) |
| LOG(base, expression) | LOG(expression [, base]) or LOG10(expression) (Note: The default base for LOG is 10.) |
| MOD(expression1, expression2) | MOD(expression1, expression2) |
| PI() | ACOS(-1) |
| POW(x, y) or POWER(x, y) | POW(x, y) or POWER(x, y) |
| RADIANS(expression) | (expression)*(ACOS(-1)/180) |
| ROUND(expression [, scale]) | ROUND(expression [, scale]) |
| SIGN(expression) | SIGN(expression) |
| SIN(expression) | SIN(expression) |
| SINH(expression) | SINH(expression) |
| SQRT(expression) | SQRT(expression) |
| SQUARE(expression) | POW(expression, 2) |
| TAN(expression) | TAN(expression) |
| TANH(expression) | TANH(expression) |
| TRUNC(expression [, scale]) or TRUNCATE(expression [, scale]) | TRUNC(expression [, scale]) (Note: BigQuery's returned value must be smaller than the expression; equality is not supported.) |
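The following is a minimal sketch of a user-defined FACTORIAL replacement, assuming a hypothetical my_dataset; JavaScript UDFs take FLOAT64 rather than INT64 arguments:

CREATE OR REPLACE FUNCTION my_dataset.factorial(n FLOAT64)
RETURNS FLOAT64
LANGUAGE js AS r"""
  let result = 1;
  for (let i = 2; i <= n; i++) { result *= i; }
  return result;
""";

SELECT my_dataset.factorial(5);  -- 120.0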

BigQuery also offers the following mathematical functions, which do not have a direct analogue in Snowflake:

Semi-structured data functions

| Snowflake | BigQuery |
|---|---|
| ARRAY_APPEND | Custom user-defined function |
| ARRAY_CAT | ARRAY_CONCAT |
| ARRAY_COMPACT | Custom user-defined function |
| ARRAY_CONSTRUCT | [ ] |
| ARRAY_CONSTRUCT_COMPACT | Custom user-defined function |
| ARRAY_CONTAINS | Custom user-defined function |
| ARRAY_INSERT | Custom user-defined function |
| ARRAY_INTERSECTION | Custom user-defined function |
| ARRAY_POSITION | Custom user-defined function |
| ARRAY_PREPEND | Custom user-defined function |
| ARRAY_SIZE | ARRAY_LENGTH |
| ARRAY_SLICE | Custom user-defined function |
| ARRAY_TO_STRING | ARRAY_TO_STRING |
| ARRAYS_OVERLAP | Custom user-defined function |
| AS_<object_type> | CAST |
| AS_ARRAY | CAST |
| AS_BINARY | CAST |
| AS_BOOLEAN | CAST |
| AS_CHAR, AS_VARCHAR | CAST |
| AS_DATE | CAST |
| AS_DECIMAL, AS_NUMBER | CAST |
| AS_DOUBLE, AS_REAL | CAST |
| AS_INTEGER | CAST |
| AS_OBJECT | CAST |
| AS_TIME | CAST |
| AS_TIMESTAMP_* | CAST |
| CHECK_JSON | Custom user-defined function |
| CHECK_XML | Custom user-defined function |
| FLATTEN | UNNEST |
| GET | Custom user-defined function |
| GET_IGNORE_CASE | Custom user-defined function |
| GET_PATH, : | Custom user-defined function |
| IS_<object_type> | Custom user-defined function |
| IS_ARRAY | Custom user-defined function |
| IS_BINARY | Custom user-defined function |
| IS_BOOLEAN | Custom user-defined function |
| IS_CHAR, IS_VARCHAR | Custom user-defined function |
| IS_DATE, IS_DATE_VALUE | Custom user-defined function |
| IS_DECIMAL | Custom user-defined function |
| IS_DOUBLE, IS_REAL | Custom user-defined function |
| IS_INTEGER | Custom user-defined function |
| IS_OBJECT | Custom user-defined function |
| IS_TIME | Custom user-defined function |
| IS_TIMESTAMP_* | Custom user-defined function |
| OBJECT_CONSTRUCT | Custom user-defined function |
| OBJECT_DELETE | Custom user-defined function |
| OBJECT_INSERT | Custom user-defined function |
| PARSE_JSON | JSON_EXTRACT |
| PARSE_XML | Custom user-defined function |
| STRIP_NULL_VALUE | Custom user-defined function |
| STRTOK_TO_ARRAY | SPLIT |
| TRY_PARSE_JSON | Custom user-defined function |
| TYPEOF | Custom user-defined function |
| XMLGET | Custom user-defined function |

String and binary functions

| Snowflake | BigQuery |
|---|---|
| string1 || string2 | CONCAT(string1, string2) |
| ASCII | TO_CODE_POINTS(string1)[OFFSET(0)] |
| BASE64_DECODE_BINARY | SAFE_CONVERT_BYTES_TO_STRING(FROM_BASE64(<bytes_input>)) |
| BASE64_DECODE_STRING | SAFE_CONVERT_BYTES_TO_STRING(FROM_BASE64(<string1>)) |
| BASE64_ENCODE | TO_BASE64(SAFE_CAST(<string1> AS BYTES)) |
| BIT_LENGTH | BYTE_LENGTH * 8 |
| CHARACTER_LENGTH | CHAR_LENGTH |
| CHARINDEX(substring, string) | STRPOS(string, substring) |
| CHR, CHAR | CODE_POINTS_TO_STRING([number]) |
| COLLATE | Custom user-defined function |
| COLLATION | Custom user-defined function |
| COMPRESS | Custom user-defined function |
| CONCAT(string1, string2) | CONCAT(string1, string2) (Note: BigQuery's CONCAT(...) supports concatenating any number of strings.) |
| CONTAINS | Custom user-defined function |
| DECOMPRESS_BINARY | Custom user-defined function |
| DECOMPRESS_STRING | Custom user-defined function |
| EDITDISTANCE | EDIT_DISTANCE |
| ENDSWITH | Custom user-defined function |
| HEX_DECODE_BINARY | SAFE_CONVERT_BYTES_TO_STRING(FROM_HEX(<string1>)) |
| HEX_DECODE_STRING | SAFE_CONVERT_BYTES_TO_STRING(FROM_HEX(<string1>)) |
| HEX_ENCODE | TO_HEX(SAFE_CAST(<string1> AS BYTES)) |
| ILIKE | Custom user-defined function |
| ILIKE ANY | Custom user-defined function |
| INITCAP | INITCAP |
| INSERT | Custom user-defined function |
| LEFT | Custom user-defined function |
| LENGTH | LENGTH(expression) |
| LIKE | LIKE |
| LIKE ALL | Custom user-defined function |
| LIKE ANY | Custom user-defined function |
| LOWER | LOWER(string) |
| LPAD | LPAD(string1, length [, string2]) |
| LTRIM | LTRIM(string1, trim_chars) |
| MD5, MD5_HEX | MD5(string) |
| MD5_BINARY | Custom user-defined function |
| OCTET_LENGTH | Custom user-defined function |
| PARSE_IP | Custom user-defined function |
| PARSE_URL | Custom user-defined function |
| POSITION | STRPOS(string, substring) |
| REPEAT | REPEAT(string, integer) |
| REPLACE | REPLACE(string1, old_chars, new_chars) |
| REVERSE | REVERSE(expression) |
| RIGHT | Custom user-defined function |
| RPAD | RPAD |
| RTRIM | RTRIM(string, trim_chars) |
| RTRIMMED_LENGTH | Custom user-defined function |
| SHA1, SHA1_HEX | SHA1(string) |
| SHA1_BINARY | Custom user-defined function |
| SHA2, SHA2_HEX | Custom user-defined function |
| SHA2_BINARY | Custom user-defined function |
| SOUNDEX | Custom user-defined function |
| SPACE | Custom user-defined function |
| SPLIT | SPLIT |
| SPLIT_PART | Custom user-defined function |
| SPLIT_TO_TABLE | Custom user-defined function |
| STARTSWITH | Custom user-defined function |
| STRTOK | SPLIT(instring, delimiter)[ORDINAL(tokennum)] (Note: The entire delimiter string argument is used as a single delimiter. The default delimiter is a comma.) |
| STRTOK_SPLIT_TO_TABLE | Custom user-defined function |
| SUBSTR, SUBSTRING | SUBSTR |
| TRANSLATE | Custom user-defined function |
| TRIM | TRIM |
| TRY_BASE64_DECODE_BINARY | Custom user-defined function |
| TRY_BASE64_DECODE_STRING | SUBSTR(string, 0, integer) |
| TRY_HEX_DECODE_BINARY | SUBSTR(string, -integer) |
| TRY_HEX_DECODE_STRING | LENGTH(expression) |
| UNICODE | Custom user-defined function |
| UPPER | UPPER |
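A minimal sketch of the STRTOK translation; ORDINAL is 1-based, matching Snowflake's tokennum argument:

-- Snowflake: second token
SELECT STRTOK('a.b.c', '.', 2);

-- BigQuery
SELECT SPLIT('a.b.c', '.')[ORDINAL(2)];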

String functions (regular expressions)

Snowflake | BigQuery
REGEXP

IF(REGEXP_CONTAINS,1,0)=1

REGEXP_COUNT

ARRAY_LENGTH(
  REGEXP_EXTRACT_ALL(
    source_string,
    pattern
  )
)

If position is specified:

ARRAY_LENGTH(
  REGEXP_EXTRACT_ALL(
    SUBSTR(source_string, IF(position <= 0, 1, position)),
    pattern
  )
)

Note: BigQuery provides regular expression support using the re2 library; see that documentation for its regular expression syntax.
REGEXP_INSTR

IFNULL(
  STRPOS(
    source_string,
    REGEXP_EXTRACT(
      source_string,
      pattern)
  ), 0)

If position is specified:

IFNULL(
  STRPOS(
    SUBSTR(source_string, IF(position <= 0, 1, position)),
    REGEXP_EXTRACT(
      SUBSTR(source_string, IF(position <= 0, 1, position)),
      pattern)
  ) + IF(position <= 0, 1, position) - 1, 0)

If occurrence is specified:

IFNULL(
  STRPOS(
    SUBSTR(source_string, IF(position <= 0, 1, position)),
    REGEXP_EXTRACT_ALL(
      SUBSTR(source_string, IF(position <= 0, 1, position)),
      pattern
    )[SAFE_ORDINAL(occurrence)]
  ) + IF(position <= 0, 1, position) - 1, 0)

Note: BigQuery provides regular expression support using the re2 library; see that documentation for its regular expression syntax.

REGEXP_LIKE

IF(REGEXP_CONTAINS,1,0)=1

REGEXP_REPLACE

REGEXP_REPLACE(
  source_string,
  pattern,
  ""
)

If replace_string is specified:

REGEXP_REPLACE(
  source_string,
  pattern,
  replace_string
)

If position is specified:

CASE
  WHEN position > LENGTH(source_string) THEN source_string
  WHEN position <= 0 THEN
    REGEXP_REPLACE(
      source_string,
      pattern,
      ""
    )
  ELSE
    CONCAT(
      SUBSTR(
        source_string, 1, position - 1),
      REGEXP_REPLACE(
        SUBSTR(source_string, position),
        pattern,
        replace_string
      )
    )
END

Note: BigQuery provides regular expression support using the re2 library; see that documentation for its regular expression syntax.
REGEXP_SUBSTR

REGEXP_EXTRACT(
  source_string,
  pattern
)

If position is specified:

REGEXP_EXTRACT(
  SUBSTR(source_string, IF(position <= 0, 1, position)),
  pattern
)

If occurrence is specified:

REGEXP_EXTRACT_ALL(
  SUBSTR(source_string, IF(position <= 0, 1, position)),
  pattern
)[SAFE_ORDINAL(occurrence)]

Note: BigQuery provides regular expression support using the re2 library; see that documentation for its regular expression syntax.

RLIKE

IF(REGEXP_CONTAINS,1,0)=1
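A minimal sketch of a REGEXP_SUBSTR translation; the pattern and input are illustrative:

-- Snowflake
SELECT REGEXP_SUBSTR('order-1234', '[0-9]+');

-- BigQuery (re2 syntax)
SELECT REGEXP_EXTRACT('order-1234', r'[0-9]+');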

System functions

| Snowflake | BigQuery |
|---|---|
| SYSTEM$ABORT_SESSION | Custom user-defined function |
| SYSTEM$ABORT_TRANSACTION | Custom user-defined function |
| SYSTEM$CANCEL_ALL_QUERIES | Custom user-defined function |
| SYSTEM$CANCEL_QUERY | Custom user-defined function |
| SYSTEM$CLUSTERING_DEPTH | Custom user-defined function |
| SYSTEM$CLUSTERING_INFORMATION | Custom user-defined function |
| SYSTEM$CLUSTERING_RATIO (deprecated) | Custom user-defined function |
| SYSTEM$CURRENT_USER_TASK_NAME | Custom user-defined function |
| SYSTEM$DATABASE_REFRESH_HISTORY | Custom user-defined function |
| SYSTEM$DATABASE_REFRESH_PROGRESS, SYSTEM$DATABASE_REFRESH_PROGRESS_BY_JOB | Custom user-defined function |
| SYSTEM$GET_AWS_SNS_IAM_POLICY | Custom user-defined function |
| SYSTEM$GET_PREDECESSOR_RETURN_VALUE | Custom user-defined function |
| SYSTEM$LAST_CHANGE_COMMIT_TIME | Custom user-defined function |
| SYSTEM$PIPE_FORCE_RESUME | Custom user-defined function |
| SYSTEM$PIPE_STATUS | Custom user-defined function |
| SYSTEM$SET_RETURN_VALUE | Custom user-defined function |
| SYSTEM$SHOW_OAUTH_CLIENT_SECRETS | Custom user-defined function |
| SYSTEM$STREAM_GET_TABLE_TIMESTAMP | Custom user-defined function |
| SYSTEM$STREAM_HAS_DATA | Custom user-defined function |
| SYSTEM$TASK_DEPENDENTS_ENABLE | Custom user-defined function |
| SYSTEM$TYPEOF | Custom user-defined function |
| SYSTEM$USER_TASK_CANCEL_ONGOING_EXECUTIONS | Custom user-defined function |
| SYSTEM$WAIT | Custom user-defined function |
| SYSTEM$WHITELIST | Custom user-defined function |
| SYSTEM$WHITELIST_PRIVATELINK | Custom user-defined function |

Table functions

| Snowflake | BigQuery |
|---|---|
| GENERATOR | Custom user-defined function |
| GET_OBJECT_REFERENCES | Custom user-defined function |
| RESULT_SCAN | Custom user-defined function |
| VALIDATE | Custom user-defined function |

Utility and hash functions

| Snowflake | BigQuery |
|---|---|
| GET_DDL | Feature request |
| HASH | HASH is a Snowflake-specific proprietary function. It can't be translated without knowing the underlying logic used by Snowflake. |

Window functions

| Snowflake | BigQuery |
|---|---|
| CONDITIONAL_CHANGE_EVENT | Custom user-defined function |
| CONDITIONAL_TRUE_EVENT | Custom user-defined function |
| CUME_DIST | CUME_DIST |
| DENSE_RANK | DENSE_RANK |
| FIRST_VALUE | FIRST_VALUE |
| LAG | LAG |
| LAST_VALUE | LAST_VALUE |
| LEAD | LEAD |
| NTH_VALUE | NTH_VALUE |
| NTILE | NTILE |
| PERCENT_RANK | PERCENT_RANK |
| RANK | RANK |
| RATIO_TO_REPORT | Custom user-defined function |
| ROW_NUMBER | ROW_NUMBER |
| WIDTH_BUCKET | Custom user-defined function |

BigQuery also supports SAFE_CAST(expression AS typename), which returns NULL if BigQuery is unable to perform a cast (for example, SAFE_CAST("apple" AS INT64) returns NULL).

Operators

The following sections list Snowflake operators and their BigQuery equivalents.

Arithmetic operators

The following table shows mappings between Snowflake arithmetic operators and their BigQuery equivalents.

| Snowflake | BigQuery |
|---|---|
| (Unary) (+'5') | CAST("5" AS NUMERIC) |
| a + b | a + b |
| (Unary) (-'5') | (-1) * CAST("5" AS NUMERIC) (Note: BigQuery supports standard unary minus, but does not convert integers in string format to INT64, NUMERIC, or FLOAT64 type.) |
| a - b | a - b |
| date1 - date2, or date1 - 365 | DATE_DIFF(date1, date2, date_part), or DATE_SUB(date1, INTERVAL value date_part) |
| a * b | a * b |
| a / b | a / b |
| a % b | MOD(a, b) |

To view Snowflake scale and precision details when performing arithmetic operations, see the Snowflake documentation.

Comparison operators

Snowflake comparison operators and BigQuery comparison operators are the same.

Logical/boolean operators

Snowflake logical/boolean operators and BigQuery logical/boolean operators are the same.

Set operators

The following table shows mappings between Snowflake set operators and their BigQuery equivalents.

| Snowflake | BigQuery |
|---|---|
| SELECT ... INTERSECT SELECT ... | SELECT ... INTERSECT DISTINCT SELECT ... |
| SELECT ... MINUS SELECT ..., or SELECT ... EXCEPT SELECT ... (Note: MINUS and EXCEPT are synonyms.) | SELECT ... EXCEPT DISTINCT SELECT ... |
| SELECT ... UNION SELECT ... | SELECT ... UNION DISTINCT SELECT ... |
| SELECT ... UNION ALL SELECT ... | SELECT ... UNION ALL SELECT ... |

Subquery operators

The following table shows mappings between Snowflake subquery operators and their BigQuery equivalents.

| Snowflake | BigQuery |
|---|---|
| SELECT ... FROM ... WHERE col <operator> ALL ..., or SELECT ... FROM ... WHERE col <operator> ANY ... | BigQuery does not support a direct alternative to Snowflake's ALL/ANY. |
| SELECT ... FROM ... WHERE [NOT] EXISTS ... | SELECT ... FROM ... WHERE [NOT] EXISTS ... |
| SELECT ... FROM ... WHERE [NOT] IN ... | SELECT ... FROM ... WHERE [NOT] IN ... |
| SELECT * FROM table1 UNION SELECT * FROM table2 EXCEPT SELECT * FROM table3 | SELECT * FROM table1 UNION ALL (SELECT * FROM table2 EXCEPT DISTINCT SELECT * FROM table3) (Note: BigQuery requires parentheses to separate different set operations. If the same set operator is repeated, parentheses are not necessary.) |

DML syntax

This section addresses differences in data manipulation language (DML) syntax between Snowflake and BigQuery.

INSERT statement

Snowflake offers a configurable DEFAULT keyword for columns. In BigQuery, the DEFAULT value for nullable columns is NULL and DEFAULT is not supported for required columns. Most Snowflake INSERT statements are compatible with BigQuery. The following table shows exceptions.

Snowflake | BigQuery

INSERT [OVERWRITE] INTO table
VALUES [... | DEFAULT | NULL] ...

Note: BigQuery does not support inserting JSON objects with an INSERT statement.

INSERT [INTO] table (column1 [, ...])
VALUES (DEFAULT [, ...])

Note: BigQuery does not support a direct alternative to Snowflake's OVERWRITE. Use DELETE instead.

INSERT INTO table (column1 [, ...]) SELECT ... FROM ...

INSERT [INTO] table (column1 [, ...])
SELECT ...
FROM ...

INSERT [OVERWRITE] ALL <intoClause> ...
INSERT [OVERWRITE] {FIRST | ALL} {WHEN condition THEN <intoClause>} [...] [ELSE <intoClause>] ...

Note: <intoClause> represents a standard INSERT statement, listed above.
BigQuery does not support conditional and unconditional multi-table INSERTs.

BigQuery also supports inserting values using a subquery (where one of the values is computed using a subquery), which is not supported in Snowflake. For example:

INSERT INTO table (column1, column2)
VALUES ('value_1', (SELECT column2 FROM table2))

COPY statement

Snowflake supports copying data from staged files to an existing table, and from a table to a named internal stage, a named external stage, or an external location (Amazon S3, Google Cloud Storage, or Microsoft Azure).

BigQuery does not use the SQL COPY command to load data, but you can use any of several non-SQL tools and options to load data into BigQuery tables. You can also use data pipeline sinks provided in Apache Spark or Apache Beam to write data into BigQuery.

UPDATE statement

Most Snowflake UPDATE statements are compatible with BigQuery. The following table shows exceptions.

SnowflakeBigQuery

UPDATE table
SET col = value [, ...]
[FROM ...]
[WHERE ...]

UPDATE table

SET column = expression [,...]

[FROM ...]

WHERE TRUE


Note: All UPDATE statements in BigQuery require a WHERE keyword, followed by a condition.

DELETE and TRUNCATE TABLE statements

The DELETE and TRUNCATE TABLE statements are both ways to remove rows from a table without affecting the table schema or indexes.

In Snowflake, both DELETE and TRUNCATE TABLE maintain deleted data using Snowflake's Time Travel for recovery purposes for the duration of the data retention period. However, DELETE does not delete the external file load history and load metadata.

In BigQuery, the DELETE statement must have a WHERE clause. For more information about DELETE in BigQuery, see the BigQuery DELETE examples in the DML documentation.

SnowflakeBigQuery

DELETE FROM table_name [USING ...]

[WHERE ...]



TRUNCATE [TABLE] [IF EXISTS] table_name

DELETE [FROM] table_name [alias]

WHERE ...


Note: BigQuery DELETE statements require a WHERE clause.
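For instance, to mimic Snowflake's TRUNCATE TABLE in BigQuery, a DELETE with an always-true predicate removes every row while satisfying the WHERE requirement (mydataset.mytable is a placeholder name):

```sql
-- Removes all rows but keeps the table schema, like TRUNCATE TABLE.
DELETE FROM mydataset.mytable WHERE TRUE;
```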

MERGE statement

The MERGE statement can combine INSERT, UPDATE, and DELETE operations into a single "upsert" statement and perform the operations atomically. The MERGE operation must match at most one source row for each target row.

BigQuery tables are limited to 1,000 DML statements per day, so where possible you should consolidate INSERT, UPDATE, and DELETE statements into a single MERGE statement, as shown in the following table:

SnowflakeBigQuery

MERGE INTO target
USING source
ON target.key = source.key
WHEN MATCHED AND source.filter = 'Filter_exp' THEN
UPDATE SET target.col1 = source.col1, target.col2 = source.col2,
...

Note: Snowflake supports an ERROR_ON_NONDETERMINISTIC_MERGE session parameter to handle nondeterministic results.

MERGE target

USING source

ON target.key = source.key

WHEN MATCHED AND source.filter = 'filter_exp' THEN

UPDATE SET

target.col1 = source.col1,

target.col2 = source.col2,

...



Note: All columns must be listed if updating all columns.
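A complete BigQuery MERGE that folds all three operations into one statement might look like the following sketch; the table and column names (dataset.target, dataset.source, is_deleted, and so on) are hypothetical:

```sql
MERGE dataset.target T
USING dataset.source S
ON T.key = S.key
WHEN MATCHED AND S.is_deleted THEN        -- assumed BOOL flag on the source
  DELETE
WHEN MATCHED THEN
  UPDATE SET T.col1 = S.col1, T.col2 = S.col2
WHEN NOT MATCHED THEN
  INSERT (key, col1, col2) VALUES (S.key, S.col1, S.col2);
```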

GET and LIST statements

The GET statement downloads data files from one of the following Snowflake stages to a local directory/folder on a client machine:

  • Named internal stage
  • Internal stage for a specified table
  • Internal stage for the current user

The LIST (LS) statement returns a list of files that have been staged (that is, uploaded from a local file system or unloaded from a table) in one of the following Snowflake stages:

  • Named internal stage
  • Named external stage
  • Stage for a specified table
  • Stage for the current user

BigQuery does not support the concept of staging and does not have GET and LIST equivalents.
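As a rough substitute for GET, table data can be exported to Cloud Storage with bq extract and then downloaded; a sketch with hypothetical bucket and table names:

```
# Export a table to Cloud Storage, then download the files locally.
bq extract --destination_format=CSV mydataset.mytable gs://mybucket/export/part_*.csv
gsutil cp "gs://mybucket/export/part_*.csv" ./local_dir/
```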

PUT and REMOVE statements

The PUT statement uploads (that is, stages) data files from a local directory/folder on a client machine to one of the following Snowflake stages:

  • Named internal stage
  • Internal stage for a specified table
  • Internal stage for the current user

The REMOVE (RM) statement removes files that have been staged in one of the following Snowflake internal stages:

  • Named internal stage
  • Stage for a specified table
  • Stage for the current user

BigQuery does not support the concept of staging and does not have PUT and REMOVE equivalents.
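In place of PUT followed by COPY, local files can be loaded directly into a table with bq load; the file, schema, and table names below are placeholders:

```
# Load a local CSV file straight into a BigQuery table.
bq load --source_format=CSV mydataset.mytable ./local_file.csv ./schema.json
```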

DDL syntax

This section addresses differences in data definition language syntax between Snowflake and BigQuery.

Database, Schema, and Share DDL

Most of Snowflake's terminology matches that of BigQuery, except that a Snowflake database is similar to a BigQuery dataset. See the detailed Snowflake to BigQuery terminology mapping.

CREATE DATABASE statement

Snowflake supports creating and managing a database via database management commands, while BigQuery provides multiple options, such as the console, the CLI, and client libraries, for creating datasets. This section uses BigQuery CLI commands corresponding to the Snowflake commands to address the differences.

Snowflake | BigQuery

CREATE DATABASE <name>


Note: Snowflake provides these requirements for naming databases. It allows only 255 characters in the name.

bq mk <name>


Note: BigQuery has similar dataset naming requirements to Snowflake's, except that it allows 1,024 characters in the name.

CREATE OR REPLACE DATABASE <name>

Replacing the dataset is not supported in BigQuery.

CREATE TRANSIENT DATABASE <name>

Creating a temporary dataset is not supported in BigQuery.

CREATE DATABASE IF NOT EXISTS <name>

Concept not supported in BigQuery

CREATE DATABASE <name>

CLONE <source_db>

[ { AT | BEFORE }

( { TIMESTAMP => <timestamp> |

OFFSET => <time_difference> |

STATEMENT => <id> } ) ]

Cloning datasets is not yet supported in BigQuery.

CREATE DATABASE <name>

DATA_RETENTION_TIME_IN_DAYS = <num>

Time travel at the dataset level is not supported in BigQuery. However, time travel for table and query results is supported.

CREATE DATABASE <name>

DEFAULT_DDL_COLLATION = '<collation_specification>'

Collation in DDL is not supported in BigQuery.

CREATE DATABASE <name>

COMMENT = '<string_literal>'

bq mk \

--description "<string_literal>" \

<name>

CREATE DATABASE <name>

FROM SHARE <provider_account>.<share_name>

Creating shared datasets is not supported in BigQuery. However, users can share the dataset via the console/UI once the dataset is created.

CREATE DATABASE <name>

AS REPLICA OF

<region>.<account>.<primary_db_name>

AUTO_REFRESH_MATERIALIZED_VIEWS_ON_SECONDARY = { TRUE | FALSE }


Note: Snowflake provides the option for automatic background maintenance of materialized views in the secondary database, which is not supported in BigQuery.

bq mk --transfer_config \
--target_dataset=<name> \
--data_source=cross_region_copy \
--params='{"source_dataset_id":"<primary_db_name>", "source_project_id":"<project_id>", "overwrite_destination_table":"true"}'

Note: BigQuery supports copying datasets using the BigQuery Data Transfer Service. See the dataset copying prerequisites.

BigQuery also offers the following bq mk command options, which do not have a direct analogue in Snowflake (see the example after this list):

  • --location <dataset_location>
  • --default_table_expiration <time_in_seconds>
  • --default_partition_expiration <time_in_seconds>
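For example, a dataset with a non-default location and default expirations could be created as follows; the dataset name and values are placeholders, and flag placement may vary by CLI version:

```
bq mk \
--location=EU \
--default_table_expiration=7200 \
--default_partition_expiration=7200 \
mydataset
```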

ALTER DATABASE statement

This section uses BigQuery CLI commands corresponding to the Snowflake commands to address the differences in ALTER statements.

Snowflake | BigQuery

ALTER DATABASE [ IF EXISTS ] <name> RENAME TO <new_db_name>

Renaming datasets is not supported in BigQuery but copying datasets is supported.

ALTER DATABASE <name>

SWAP WITH <target_db_name>

Swapping datasets is not supported in BigQuery.

ALTER DATABASE <name>

SET

[DATA_RETENTION_TIME_IN_DAYS = <num>]

[ DEFAULT_DDL_COLLATION = '<value>']

Managing data retention and collation at the dataset level is not supported in BigQuery.

ALTER DATABASE <name>

SET COMMENT = '<string_literal>'

bq update \

--description "<string_literal>" <name>

ALTER DATABASE <name>

ENABLE REPLICATION TO ACCOUNTS <snowflake_region>.<account_name>

[ , <snowflake_region>.<account_name> ... ]

Concept not supported in BigQuery.

ALTER DATABASE <name>

DISABLE REPLICATION [ TO ACCOUNTS <snowflake_region>.<account_name>

[ , <snowflake_region>.<account_name> ... ]]

Concept not supported in BigQuery.

ALTER DATABASE <name>

SET AUTO_REFRESH_MATERIALIZED_VIEWS_ON_SECONDARY = { TRUE | FALSE }

Concept not supported in BigQuery.

ALTER DATABASE <name> REFRESH

Concept not supported in BigQuery.

ALTER DATABASE <name>

ENABLE FAILOVER TO ACCOUNTS <snowflake_region>.<account_name>

[ , <snowflake_region>.<account_name> ... ]

Concept not supported in BigQuery.

ALTER DATABASE <name>

DISABLE FAILOVER [ TO ACCOUNTS <snowflake_region>.<account_name>

[ , <snowflake_region>.<account_name> ... ]]

Concept not supported in BigQuery.

ALTER DATABASE <name>

PRIMARY

Concept not supported in BigQuery.

DROP DATABASE statement

This section uses the BigQuery CLI command corresponding to the Snowflake command to address the difference in the DROP statement.

Snowflake | BigQuery

DROP DATABASE [ IF EXISTS ] <name>

[ CASCADE | RESTRICT ]


Note: In Snowflake, dropping a database does not permanently remove it from the system. A version of the dropped database is retained for the number of days specified by the DATA_RETENTION_TIME_IN_DAYS parameter for the database.

bq rm -r -f -d <name>


Where:

-r is to remove all objects in the dataset

-f is to skip confirmation for execution

-d indicates a dataset

Note: In BigQuery, deleting a dataset is permanent. Also, cascading is not supported at the dataset level as all the data and objects in the dataset are deleted.

Snowflake also supports the UNDROP DATABASE command, which restores the most recent version of a dropped database. This is currently not supported in BigQuery at the dataset level.

USE DATABASE statement

Snowflake provides the option to set the database for a user session using the USE DATABASE command. This removes the need to specify fully-qualified object names in SQL commands. BigQuery does not provide any alternative to Snowflake's USE DATABASE command; objects must be referenced by qualified names, as shown below.
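A minimal illustration, with hypothetical project, dataset, and table names:

```sql
-- No USE DATABASE in BigQuery; qualify object names instead.
SELECT * FROM `myproject.mydataset.mytable`;
```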

SHOW DATABASE statement

This section uses BigQuery CLI commands corresponding to the Snowflake command to address the differences in the SHOW statement.

Snowflake | BigQuery

SHOW DATABASES


Note: Snowflake provides a single option to list and show details about all the databases including dropped databases that are within the retention period.
bq ls --format=prettyjson

and/or

bq show <dataset_name>


Note: In BigQuery, the ls command provides only dataset names and basic information, and the show command provides details like the last modified timestamp, ACLs, and labels of a dataset. BigQuery also provides more details about the datasets via Information Schema.

SHOW TERSE DATABASES


Note: With the TERSE option, Snowflake allows displaying only specific information/fields about databases.

Concept not supported in BigQuery.

SHOW DATABASES HISTORY

The time travel concept is not supported in BigQuery at the dataset level.

SHOW DATABASES

[LIKE '<pattern>']

[STARTS WITH '<name_string>']

Filtering results by dataset names is not supported in BigQuery. However, filtering by labels is supported.

SHOW DATABASES

LIMIT <rows> [FROM '<name_string>']


Note: By default, Snowflake does not limit the number of results. However, the value for LIMIT cannot exceed 10K.

bq ls \

--max_results <rows>


Note: By default, BigQuery only displays 50 results.

BigQuery also offers the following bq command options, which do not have a direct analogue in Snowflake:

  • bq ls --format=pretty: Returns basic formatted results
  • bq ls -a: Returns only anonymous datasets (the ones starting with an underscore)
  • bq ls --all: Returns all datasets including anonymous ones
  • bq ls --filter labels.key:value: Returns results filtered by dataset label
  • bq ls --d: Excludes anonymous datasets from results
  • bq show --format=pretty: Returns detailed, formatted results for all datasets

SCHEMA management

Snowflake provides multiple schema management commands similar to its database management commands. This concept of creating and managing schemas is not supported in BigQuery.

However, BigQuery allows you to specify a table's schema when you load data into a table, and when you create an empty table. Alternatively, you can use schema auto-detection for supported data formats.
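For example, schema auto-detection can be requested during a load job; the file and table names are placeholders:

```
# Let BigQuery infer the schema from the source file.
bq load --autodetect --source_format=CSV mydataset.mytable gs://mybucket/data.csv
```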

SHARE management

Snowflake provides multiple share management commands similar to its database and schema management commands. This concept of creating and managing shares is not supported in BigQuery.

Table, View, and Sequence DDL

CREATE TABLE statement

Most Snowflake CREATE TABLE statements are compatible with BigQuery, except for the following syntax elements, which are not used in BigQuery:

Snowflake | BigQuery

CREATE TABLE table_name

(

col1 data_type1 NOT NULL,

col2 data_type2 NULL,

col3 data_type3 UNIQUE,

col4 data_type4 PRIMARY KEY,

col5 data_type5

)


Note: UNIQUE and PRIMARY KEY constraints are informational and are not enforced by the Snowflake system.

CREATE TABLE table_name

(

col1 data_type1 NOT NULL,

col2 data_type2,

col3 data_type3,

col4 data_type4,

col5 data_type5

)

CREATE TABLE table_name

(

col1 data_type1[,...]

table_constraints

)


where table_constraints are:

[UNIQUE(column_name [, ... ])]

[PRIMARY KEY(column_name [, ...])]

[FOREIGN KEY(column_name [, ...])

REFERENCES reftable [(refcolumn)]]


Note: UNIQUE and PRIMARY KEY constraints are informational and are not enforced by the Snowflake system.

CREATE TABLE table_name

(

col1 data_type1[,...]

)

PARTITION BY column_name

CLUSTER BY column_name [, ...]


Note: BigQuery does not use UNIQUE, PRIMARY KEY, or FOREIGN KEY table constraints. To achieve similar optimization that these constraints provide during query execution, partition and cluster your BigQuery tables. CLUSTER BY supports up to four columns.

CREATE TABLE table_name

LIKE original_table_name

See this example to learn how to use the INFORMATION_SCHEMA tables to copy column names, data types, and NOT NULL constraints to a new table.

CREATE TABLE table_name

(

col1 data_type1

)

BACKUP NO


Note: In Snowflake, the BACKUP NO setting is specified to "save processing time when creating snapshots and restoring from snapshots and to reduce storage space."
The BACKUP NO table option is neither used nor needed because BigQuery automatically keeps up to 7 days of historical versions of all your tables, with no effect on processing time or billed storage.

CREATE TABLE table_name

(

col1 data_type1

)

table_attributes


where table_attributes are:

[DISTSTYLE {AUTO|EVEN|KEY|ALL}]

[DISTKEY (column_name)]

[[COMPOUND|INTERLEAVED] SORTKEY

(column_name [, ...])]

BigQuery supports clustering, which stores keys in sorted order.

CREATE TABLE table_name

AS SELECT ...

CREATE TABLE table_name

AS SELECT ...

CREATE TABLE IF NOT EXISTS table_name

...

CREATE TABLE IF NOT EXISTS table_name

...

BigQuery also supports the CREATE OR REPLACE TABLE statement, which overwrites a table if it already exists.

BigQuery's CREATE TABLE statement also supports the following clauses, which do not have a Snowflake equivalent:

For more information about CREATE TABLE in BigQuery, see the CREATE TABLE statement examples in the DDL documentation.
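As an illustration of the partitioning and clustering approach mentioned above, the following DDL sketch uses hypothetical table and column names:

```sql
CREATE TABLE mydataset.orders
(
  order_id INT64 NOT NULL,
  customer_id INT64,
  order_ts TIMESTAMP
)
PARTITION BY DATE(order_ts)   -- prunes scanned data by date
CLUSTER BY customer_id;       -- sorts storage blocks by customer
```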

ALTER TABLE statement

This section uses BigQuery CLI commands corresponding to the Snowflake commands to address the differences in ALTER statements for tables.

Snowflake | BigQuery

ALTER TABLE [ IF EXISTS ] <name> RENAME TO <new_name>

ALTER TABLE [IF EXISTS] <name>

SET OPTIONS (friendly_name="<new_name>")

ALTER TABLE <name>

SWAP WITH <target_table_name>

Swapping tables is not supported in BigQuery.

ALTER TABLE <name>

SET

[DEFAULT_DDL_COLLATION = '<value>']

Managing data collation for tables is not supported in BigQuery.

ALTER TABLE <name>

SET

[DATA_RETENTION_TIME_IN_DAYS = <num>]

ALTER TABLE [IF EXISTS] <name>

SET OPTIONS (expiration_timestamp=<timestamp>)

ALTER TABLE <name>

SET

COMMENT = '<string_literal>'

ALTER TABLE [IF EXISTS] <name>

SET OPTIONS (description='<string_literal>')

Additionally, Snowflake provides clustering, column, and constraint options for altering tables that are not supported by BigQuery.

DROP TABLE and UNDROP TABLE statements

This section uses BigQuery CLI commands corresponding to the Snowflake commands to address the differences in the DROP and UNDROP statements.

Snowflake | BigQuery

DROP TABLE [IF EXISTS] <table_name>

[CASCADE | RESTRICT]


Note: In Snowflake, dropping a table does not permanently remove it from the system. A version of the dropped table is retained for the number of days specified by the DATA_RETENTION_TIME_IN_DAYS parameter for the database.

bq rm -f -t <dataset_name>.<table_name>


Where:

-f is to skip confirmation for execution

-t indicates a table

Note: In BigQuery, deleting a table is also not immediately permanent, but a snapshot is currently maintained for only 7 days.

UNDROP TABLE <table_name>

bq cp \
<dataset_name>.<table_name>@<unix_timestamp> <dataset_name>.<new_table_name>


Note: In BigQuery, you need to first determine the UNIX timestamp (in milliseconds) of when the table existed, and then copy the table at that timestamp to a new table. The new table must have a different name than the deleted table.
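For example (the table names and the millisecond timestamp below are hypothetical):

```
# Restore the table as it existed at the given Unix timestamp (ms).
bq cp mydataset.mytable@1624046611000 mydataset.mytable_restored
```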

CREATE EXTERNAL TABLE statement

BigQuery allows creating both permanent and temporary external tables and querying data directly from external data sources such as Cloud Storage, Bigtable, and Google Drive.

Snowflake allows creating a permanent external table which, when queried, reads data from a set of one or more files in a specified external stage.

This section uses BigQuery CLI commands corresponding to the Snowflake command to address the differences in the CREATE EXTERNAL TABLE statement.

Snowflake | BigQuery
CREATE [OR REPLACE] EXTERNAL TABLE

table

((<col_name> <col_type> AS <expr> )

| (<part_col_name> <col_type> AS <part_expr>)[ inlineConstraint ]

[ , ... ] )

LOCATION = externalStage

FILE_FORMAT =

({FORMAT_NAME='<file_format_name>'

|TYPE=source_format [formatTypeOptions]})


Where:

externalStage = @[namespace.]ext_stage_name[/path]


Note: Snowflake allows staging the files containing data to be read, and specifying format type options for external tables. The Snowflake format types CSV, JSON, AVRO, PARQUET, and ORC are all supported by BigQuery; the XML type is not.

[1] bq mk \

--external_table_definition=definition_file \

dataset.table


OR


[2] bq mk \

--external_table_definition=schema_file@source_format={Cloud Storage URI | drive_URI} \

dataset.table


OR


[3] bq mk \

--external_table_definition=schema@source_format={Cloud Storage URI | drive_URI} \

dataset.table


Note: BigQuery allows creating a permanent table linked to your data source using a table definition file [1], a JSON schema file [2], or an inline schema definition [3]. Staging files to be read and specifying format type options are not supported in BigQuery.

CREATE [OR REPLACE] EXTERNAL TABLE [IF EXISTS]

<table_name>

((<col_name> <col_type> AS <expr> )

[ , ... ] )

[PARTITION BY (<identifier>, ...)]

LOCATION = externalStage

[REFRESH_ON_CREATE = {TRUE|FALSE}]

[AUTO_REFRESH = {TRUE|FALSE}]

[PATTERN = '<regex_pattern>']

FILE_FORMAT = ({FORMAT_NAME = '<file_format_name>' | TYPE = { CSV | JSON | AVRO | ORC | PARQUET} [ formatTypeOptions]})

[COPY GRANTS]

[COMMENT = '<string_literal>']

bq mk \

--external_table_definition=definition_file \

dataset.table


Note: BigQuery currently does not support any of the optional parameter options provided by Snowflake for creating external tables. For partitioning, BigQuery supports using the _FILE_NAME pseudocolumn to create partitioned tables/views on top of the external tables. For more information, see Query the _FILE_NAME pseudocolumn.

Additionally, BigQuery supports querying externally partitioned data in AVRO, PARQUET, ORC, JSON, and CSV formats stored on Cloud Storage using a default hive partitioning layout.
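A sketch of creating such a table with the bq tool, assuming a hypothetical bucket laid out as gs://mybucket/table/dt=2024-01-01/...:

```
# Build an external table definition that auto-detects hive partition keys.
bq mkdef \
--source_format=PARQUET \
--hive_partitioning_mode=AUTO \
--hive_partitioning_source_uri_prefix=gs://mybucket/table \
"gs://mybucket/table/*" > table_def.json

# Create the external table from the definition file.
bq mk --external_table_definition=table_def.json mydataset.hive_table
```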

CREATE VIEW statement

The following table shows equivalents between Snowflake and BigQueryfor theCREATE VIEW statement.

Snowflake | BigQuery

CREATE VIEW view_name AS SELECT ...

CREATE VIEW view_name AS SELECT ...

CREATE OR REPLACE VIEW view_name AS SELECT ...

CREATE OR REPLACE VIEW

view_name AS SELECT ...

CREATE VIEW view_name

(column_name, ...)

AS SELECT ...

CREATE VIEW view_name

AS SELECT ...

Not supported

CREATE VIEW IF NOT EXISTS view_name

OPTIONS(view_option_list)

AS SELECT ...

CREATE VIEW view_name

AS SELECT ...

WITH NO SCHEMA BINDING

In BigQuery, to create a view, all referenced objects must already exist.

BigQuery allows querying external data sources.

CREATE SEQUENCE statement

Sequences are not used in BigQuery; the same result can be achieved in the following batch manner. For more information, see the related guides on surrogate keys and slowly changing dimensions (SCD).

INSERT INTO dataset.table
SELECT *, ROW_NUMBER() OVER () AS id
FROM dataset.table
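Where a dense integer sequence is not required, one alternative is GENERATE_UUID(), which yields a globally unique STRING key; the table and column names below are hypothetical:

```sql
INSERT INTO mydataset.target (id, payload)
SELECT GENERATE_UUID(), payload   -- unique surrogate key per row
FROM mydataset.staging;
```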

Data loading and unloading DDL

Snowflake supports data loading and unloading via stage, file format, and pipe management commands. BigQuery also provides multiple options for this, such as bq load, the BigQuery Data Transfer Service, and bq extract. This section highlights the differences in the usage of these methodologies for data loading and unloading.

Account and Session DDL

Snowflake's Account and Session concepts are not supported in BigQuery. BigQuery allows management of accounts via Cloud IAM at all levels. Also, multi-statement transactions are not yet supported in BigQuery.

User-defined functions (UDF)

A UDF enables you to create functions for custom operations. These functions accept columns of input, perform actions, and return the result of those actions as a value.

Both Snowflake and BigQuery support UDFs written using SQL expressions and JavaScript code.

See the GoogleCloudPlatform/bigquery-utils GitHub repository for a library of common BigQuery UDFs.

CREATE FUNCTION syntax

The following table addresses differences in SQL UDF creation syntax betweenSnowflake and BigQuery.

Snowflake | BigQuery

CREATE [ OR REPLACE ] FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

AS sql_function_definition


CREATE [OR REPLACE] FUNCTION function_name

([sql_arg_name sql_arg_data_type[,..]])

AS sql_function_definition


Note: In a BigQuery SQL UDF, the return data type is optional. BigQuery infers the result type of the function from the SQL function body when a query calls the function.

CREATE [OR REPLACE] FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS TABLE (col_name, col_data_type[,..])

AS sql_function_definition


CREATE [OR REPLACE] FUNCTION function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

AS sql_function_definition


Note: In a BigQuery SQL UDF, returning a table type is currently not supported but is on the product roadmap and will be available soon. However, BigQuery supports returning an ARRAY of type STRUCT.

CREATE [SECURE] FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

AS sql_function_definition


Note: Snowflake provides a SECURE option to restrict the UDF definition and details to authorized users only (that is, users who are granted the role that owns the view).

CREATE FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

AS sql_function_definition


Note: Function security is not a configurable parameter in BigQuery. BigQuery supports creating IAM roles and permissions to restrict access to underlying data and function definition.

CREATE [OR REPLACE] FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

[ { CALLED ON NULL INPUT | { RETURNS NULL ON NULL INPUT | STRICT } } ]

AS sql_function_definition

CREATE [OR REPLACE] FUNCTION function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

AS sql_function_definition


Note: Function behavior for null inputs is implicitly handled in BigQuery and need not be specified as a separate option.

CREATE [OR REPLACE] FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

[VOLATILE | IMMUTABLE]

AS sql_function_definition

CREATE [OR REPLACE] FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

AS sql_function_definition


Note: Function volatility is not a configurable parameter in BigQuery. All BigQuery UDF volatility is equivalent to Snowflake's IMMUTABLE volatility (that is, it does not do database lookups or otherwise use information not directly present in its argument list).

CREATE [OR REPLACE] FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

AS [' | $$]

sql_function_definition

[' | $$]

CREATE [OR REPLACE] FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

AS sql_function_definition


Note: Using single quotes or a character sequence like dollar quoting ($$) is not required or supported in BigQuery. BigQuery implicitly interprets the SQL expression.

CREATE [OR REPLACE] FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

[COMMENT = '<string_literal>']

AS sql_function_definition

CREATE [OR REPLACE] FUNCTION

function_name

([sql_arg_name sql_arg_data_type[,..]])

RETURNS data_type

AS sql_function_definition


Note: Adding comments or descriptions in UDFs is currently not supported in BigQuery.

CREATE [OR REPLACE] FUNCTION function_name

(x integer, y integer)

RETURNS integer

AS $$

SELECT x + y

$$


Note: Snowflake does not support ANY TYPE for SQL UDFs. However, it supports using VARIANT data types.

CREATE [OR REPLACE] FUNCTION function_name

(x ANY TYPE, y ANY TYPE)

AS

SELECT x + y



Note: BigQuery supports using ANY TYPE as an argument type. The function accepts an input of any type for this argument. For more information, see templated parameter in BigQuery.

BigQuery also supports the CREATE FUNCTION IF NOT EXISTS statement, which treats the query as successful and takes no action if a function with the same name already exists.

BigQuery's CREATE FUNCTION statement also supports creating TEMPORARY or TEMP functions, which do not have a Snowflake equivalent. See calling UDFs for details on executing a BigQuery persistent UDF.
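A minimal temporary SQL UDF, scoped to a single query or script:

```sql
CREATE TEMP FUNCTION AddFourAndDivide(x INT64, y INT64)
RETURNS FLOAT64
AS ((x + 4) / y);

SELECT AddFourAndDivide(3, 4) AS result;  -- 1.75
```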

DROP FUNCTION syntax

The following table addresses differences in DROP FUNCTION syntax betweenSnowflake and BigQuery.

Snowflake | BigQuery

DROP FUNCTION [IF EXISTS]

function_name

([arg_data_type, ... ])

DROP FUNCTION [IF EXISTS] dataset_name.function_name


Note: BigQuery does not require using the function's signature (argument data type) for deleting the function.

BigQuery requires that you specify the project_name if the function is not located in the current project.

Additional function commands

This section covers additional UDF commands supported by Snowflake that are not directly available in BigQuery.

ALTER FUNCTION syntax

Snowflake supports the following operations using ALTER FUNCTION syntax.

  • Renaming a UDF
  • Converting to (or reverting from) a secure UDF
  • Adding, overwriting, or removing a comment for a UDF

As configuring function security and adding function comments are not available in BigQuery, ALTER FUNCTION syntax is currently not supported. However, the CREATE FUNCTION statement can be used to create a UDF with the same function definition but a different name.

DESCRIBE FUNCTION syntax

Snowflake supports describing a UDF using DESC[RIBE] FUNCTION syntax. This is currently not supported in BigQuery. However, querying UDF metadata via INFORMATION SCHEMA will be available soon as part of the product roadmap.

SHOW USER FUNCTIONS syntax

In Snowflake, SHOW USER FUNCTIONS syntax can be used to list all UDFs for which users have access privileges. This is currently not supported in BigQuery. However, querying UDF metadata via INFORMATION SCHEMA will be available soon as part of the product roadmap.

Stored procedures

Snowflake stored procedures are written in JavaScript, which can execute SQL statements by calling a JavaScript API. In BigQuery, stored procedures are defined using a block of SQL statements.

CREATE PROCEDURE syntax

In Snowflake, a stored procedure is executed with a CALL command; BigQuery likewise uses the CALL statement to execute stored procedures.
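A small end-to-end sketch in BigQuery, with hypothetical dataset, table, and procedure names:

```sql
CREATE OR REPLACE PROCEDURE mydataset.clean_stale_rows(IN cutoff DATE)
BEGIN
  -- The body is a block of SQL statements.
  DELETE FROM mydataset.events WHERE event_date < cutoff;
END;

-- Execute the stored procedure:
CALL mydataset.clean_stale_rows(DATE '2020-01-01');
```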

The following table addresses differences in stored procedure creation syntax between Snowflake and BigQuery.

Snowflake | BigQuery

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

RETURNS data_type

AS procedure_definition;


Note: Snowflake requires that stored procedures return a single value. Hence, the return data type is a required option.

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_mode arg_name arg_data_type[,..]])

BEGIN

procedure_definition

END;


arg_mode: IN | OUT | INOUT


Note: BigQuery doesn't support a return type for stored procedures. Also, it requires specifying argument mode for each argument passed.

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

RETURNS data_type

AS

$$

javascript_code

$$;

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

BEGIN

statement_list

END;

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

RETURNS data_type

[{CALLED ON NULL INPUT | {RETURNS NULL ON NULL INPUT | STRICT}}]

AS procedure_definition;

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

BEGIN

procedure_definition

END;


Note: Procedure behavior for null inputs is implicitly handled in BigQuery and need not be specified as a separate option.

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

RETURNS data_type

[VOLATILE | IMMUTABLE]

AS procedure_definition;

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

BEGIN

procedure_definition

END;


Note: Procedure volatility is not a configurable parameter in BigQuery. It's equivalent to Snowflake's IMMUTABLE volatility.

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

RETURNS data_type

[COMMENT = '<string_literal>']

AS procedure_definition;

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

BEGIN

procedure_definition

END;


Note: Adding comments or descriptions in procedure definitions is currently not supported in BigQuery.

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

RETURNS data_type

[EXECUTE AS { CALLER | OWNER }]

AS procedure_definition;


Note: Snowflake supports specifying the caller or owner of the procedure for execution.

CREATE [OR REPLACE] PROCEDURE

procedure_name

([arg_name arg_data_type[,..]])

BEGIN

procedure_definition

END;


Note: BigQuery stored procedures are always executed as the caller.

BigQuery also supports the CREATE PROCEDURE IF NOT EXISTS statement, which treats the query as successful and takes no action if a procedure with the same name already exists.

DROP PROCEDURE syntax

The following table addresses differences in DROP PROCEDURE syntax between Snowflake and BigQuery.

Snowflake | BigQuery

DROP PROCEDURE [IF EXISTS]

procedure_name

([arg_data_type, ... ])

DROP PROCEDURE [IF EXISTS] dataset_name.procedure_name


Note: BigQuery does not require using the procedure's signature (argument data type) for deleting the procedure.

BigQuery requires that you specify the project_name if the procedure is not located in the current project.

Additional procedure commands

Snowflake provides additional commands like ALTER PROCEDURE, DESC[RIBE] PROCEDURE, and SHOW PROCEDURES to manage stored procedures. These are currently not supported in BigQuery.

Metadata and transaction SQL statements

Snowflake | BigQuery

BEGIN [ { WORK | TRANSACTION } ] [ NAME <name> ];

START_TRANSACTION [ name <name> ];

BigQuery always uses Snapshot Isolation. For details, see Consistency guarantees elsewhere in this document.

COMMIT;

Not used in BigQuery.

ROLLBACK;

Not used in BigQuery

SHOW LOCKS [ IN ACCOUNT ];

SHOW TRANSACTIONS [ IN ACCOUNT ];

Note: If the user has the ACCOUNTADMIN role, the user can see locks/transactions for all users in the account.

Not used in BigQuery.

Multi-statement and multi-line SQL statements

Both Snowflake and BigQuery support transactions (sessions) and therefore support statements separated by semicolons that are consistently executed together. For more information, see Multi-statement transactions.

Metadata columns for staged files

Snowflake automatically generates metadata for files in internal and external stages. This metadata can be queried and loaded into a table alongside regular data columns. The metadata columns METADATA$FILENAME and METADATA$FILE_ROW_NUMBER can be utilized.

Consistency guarantees and transaction isolation

Both Snowflake and BigQuery are atomic, that is, ACID-compliant on a per-mutation level across many rows.

Transactions

Each Snowflake transaction is assigned a unique start time (including milliseconds) that is set as the transaction ID. Snowflake only supports the READ COMMITTED isolation level. However, a statement can see changes made by another statement if they are both in the same transaction, even though those changes are not committed yet. Snowflake transactions acquire locks on resources (tables) when that resource is being modified. Users can adjust the maximum time a blocked statement will wait until the statement times out. DML statements are autocommitted if the AUTOCOMMIT parameter is turned on.

BigQuery also supports transactions. BigQuery helps ensure optimistic concurrency control (first to commit wins) with snapshot isolation, in which a query reads the last committed data before the query starts. This approach guarantees the same level of consistency on a per-row, per-mutation basis and across rows within the same DML statement, yet avoids deadlocks. In the case of multiple DML updates against the same table, BigQuery switches to pessimistic concurrency control. Load jobs can run completely independently and append to tables. However, BigQuery does not yet provide an explicit transaction boundary or session.

Rollback

If a Snowflake transaction's session is unexpectedly terminated before the transaction is committed or rolled back, the transaction is left in a detached state. The user should run SYSTEM$ABORT_TRANSACTION to abort the detached transaction, or Snowflake will roll back the detached transaction after four idle hours. If a deadlock occurs, Snowflake detects the deadlock and selects the more recent statement to roll back. If the DML statement in an explicitly opened transaction fails, the changes are rolled back, but the transaction is kept open until it is committed or rolled back. DDL statements in Snowflake cannot be rolled back as they are autocommitted.

BigQuery supports the ROLLBACK TRANSACTION statement. There is no ABORT statement in BigQuery.
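A minimal sketch of a BigQuery multi-statement transaction with an explicit rollback; the table names are hypothetical:

```sql
BEGIN TRANSACTION;

DELETE FROM mydataset.inventory WHERE quantity = 0;
INSERT INTO mydataset.inventory_audit (action, run_ts)
VALUES ('purge_zero_quantity', CURRENT_TIMESTAMP());

-- Discard both changes; use COMMIT TRANSACTION to keep them.
ROLLBACK TRANSACTION;
```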

Database limits

Always check the BigQuery public documentation for the latest quotas and limits. Many quotas for large-volume users can be raised by contacting the Cloud Support team.

All Snowflake accounts have soft limits set by default. Soft limits are set during account creation and can vary. Many Snowflake soft limits can be raised through the Snowflake account team or a support ticket.

The following table shows a comparison of the Snowflake and BigQuerydatabase limits.

| Limit | Snowflake | BigQuery |
|---|---|---|
| Size of query text | 1 MB | 1 MB |
| Maximum number of concurrent queries | XS Warehouse: 8; S Warehouse: 16; M Warehouse: 32; L Warehouse: 64; XL Warehouse: 128 | 100 |
