Generate metadata for translation and assessment
Preview
This product is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products are available "as is" and might have limited support. For more information, see the launch stage descriptions.
This document describes how to create metadata and query log files by using the dwh-migration-dumper command-line extraction tool. The metadata files describe the SQL objects in your source system.
BigQuery Migration Service uses this information to improve the translation of your SQL scripts from your source system dialect to GoogleSQL.
The BigQuery migration assessment uses metadata files and query log files to analyze your existing data warehouse and help assess the effort of moving your data warehouse to BigQuery.
Overview
You can use the dwh-migration-dumper tool to extract metadata information from the database platform that you are migrating to BigQuery. While using the extraction tool isn't required for translation, it is required for BigQuery migration assessment, and we strongly recommend using it for all migration tasks.
For more information, see Create metadata files.
You can use the dwh-migration-dumper tool to extract metadata from the following database platforms:
- Teradata
- Amazon Redshift
- Apache Hive
- Apache Impala
- Apache Spark
- Azure Synapse
- Greenplum
- SQL Server
- IBM Netezza
- Oracle
- PostgreSQL
- Snowflake
- Trino or PrestoSQL
- Vertica
- BigQuery
For most of these databases you can also extract query logs.
The dwh-migration-dumper tool queries system tables to gather data definition language (DDL) statements related to user and system databases. It does not query the contents of user databases. The tool saves the metadata information from the system tables as CSV files and then zips these files into a single package. You then upload this zip file to Cloud Storage when you upload your source files for translation or assessment.
When you use the query logs option, the dwh-migration-dumper tool queries system tables for DDL statements and query logs related to user and system databases. These are saved in CSV or YAML format to a subdirectory and then packed into a zip package. The contents of user databases are never queried. Currently, the BigQuery migration assessment requires the individual CSV, YAML, and text files for query logs, so you should unzip these files from the query logs zip file and upload them for assessment.
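For example, a minimal sketch of unzipping a query logs package before uploading the individual files for assessment (the zip file name here is a placeholder; use the name that the tool actually produced):

unzip dwh-migration-teradata-logs.zip -d query_logs/
ls query_logs/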
The dwh-migration-dumper tool can run on Windows, macOS, and Linux.
The dwh-migration-dumper tool is available under the Apache 2 license.
If you choose not to use the dwh-migration-dumper tool for translation, you can manually provide metadata files by collecting the data definition language (DDL) statements for the SQL objects in your source system into separate text files.
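For example, a minimal sketch of what manually collected metadata might look like; the directory and file names are hypothetical, and each text file holds the DDL statements for the objects it describes:

mkdir ddl_metadata
cat > ddl_metadata/sales_tables.sql <<'EOF'
CREATE TABLE sales.orders (
  order_id INTEGER,
  order_date DATE,
  amount DECIMAL(18,2)
);
EOF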
Providing metadata and query logs extracted with the tool is required for the BigQuery migration assessment.
Compliance requirements
We provide the compiled dwh-migration-dumper tool binary for ease of use. If you need to audit the tool to ensure that it meets compliance requirements, you can review the source code from the dwh-migration-dumper tool GitHub repository, and compile your own binary.
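For example, a hedged sketch of building from source; the repository URL and the Gradle wrapper invocation are assumptions, so check the repository's README for the authoritative build steps:

git clone https://github.com/google/dwh-migration-tools.git
cd dwh-migration-tools
./gradlew build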
Prerequisites
Install Java
The server on which you plan to run the dwh-migration-dumper tool must have Java 8 or higher installed. If it doesn't, download Java from the Java downloads page and install it.
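For example, you can check the installed Java version before running the tool:

java -version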
Required permissions
The user account that you specify for connecting the dwh-migration-dumper tool to the source system must have permissions to read metadata from that system. Confirm that this account has appropriate role membership to query the metadata resources available for your platform. For example, INFORMATION_SCHEMA is a metadata resource that is common across several platforms.
Install the dwh-migration-dumper tool
To install the dwh-migration-dumper tool, follow these steps:
- On the machine where you want to run the dwh-migration-dumper tool, download the zip file from the dwh-migration-dumper tool GitHub repository.
- To validate the dwh-migration-dumper tool zip file, download the SHA256SUMS.txt file and run the following command:

  Bash

  sha256sum --check SHA256SUMS.txt

  If verification fails, see Troubleshooting.

  Windows PowerShell

  (Get-FileHash RELEASE_ZIP_FILENAME).Hash -eq ((Get-Content SHA256SUMS.txt) -Split " ")[0]

  Replace RELEASE_ZIP_FILENAME with the downloaded zip filename of the dwh-migration-dumper command-line extraction tool release, for example dwh-migration-tools-v1.0.52.zip.

  The True result confirms successful checksum verification. The False result indicates a verification error. Make sure the checksum and zip files are downloaded from the same release version and placed in the same directory.
- Extract the zip file. The extraction tool binary is in the /bin subdirectory of the folder created by extracting the zip file.
- Update the PATH environment variable to include the installation path for the extraction tool (see the sketch after this list).
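For example, a minimal sketch for Linux or macOS, assuming a hypothetical extraction path:

export PATH="$PATH:/opt/dwh-migration-tools/bin"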
Run the dwh-migration-dumper tool
The dwh-migration-dumper tool uses the following format:

dwh-migration-dumper [FLAGS]

Running the dwh-migration-dumper tool creates an output file named dwh-migration-<source platform>-metadata.zip, for example dwh-migration-teradata-metadata.zip, in your working directory.
When you run the tool in Windows PowerShell, wrap flags that start with -D in quotation marks, for example: dwh-migration-dumper "-Dteradata-logs.utility-logs-table=historicdb.ArchivedUtilityLogs". Use the following instructions to learn how to run the dwh-migration-dumper tool for your source platform.
Teradata
To allow the dwh-migration-dumper tool to connect to Teradata, download their JDBC driver from Teradata's download page.
The following table describes the commonly used flags for extracting Teradata metadata and query logs by using the extraction tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--assessment | Turns on assessment mode when generating database logs or extracting metadata. The | Required when running assessment; not required for translation. | |
--connector | The name of the connector to use, in this case teradata for metadata or teradata-logs for query logs. | Yes | |
--database | A list of the databases to extract, separated by commas. The database names might be case-sensitive, depending on the Teradata server configuration. If this flag is used in combination with the This flag cannot be used in combination with the | No | |
--driver | The absolute or relative path to the driver JAR file to use for this connection. You can specify multiple driver JAR files, separating them by commas. | Yes | |
--host | localhost | The hostname or IP address of the database server. | No |
--password | The password to use for the database connection. | If not specified, the extraction tool uses a secure prompt to request it. | |
--port | 1025 | The port of the database server. | No |
--user | The username to use for the database connection. | Yes | |
--query-log-alternates | For the To extract the query logs from an alternative location, we recommend that you use the By default, the query logs are extracted from the tables Example: | No | |
-Dteradata.tmode | The transaction mode for the connection. The following values are supported:
Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata-logs.log-date-column | For the To improve performance of joining tables that are specified by the Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata-logs.query-logs-table | For the By default, the query logs are extracted from the Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata-logs.sql-logs-table | For the By default, the query logs containing SQL text are extracted from the Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata-logs.utility-logs-table | For the By default, the utility logs are extracted from the table Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata-logs.res-usage-scpu-table | For the By default, the SCPU resource usage logs are extracted from the table Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata-logs.res-usage-spma-table | For the By default, the SPMA resource usage logs are extracted from the table Example (Bash): Example (Windows PowerShell): | No | |
--query-log-start | The start time (inclusive) for query logs to extract. The value is truncated to the hour. This flag is only available for the teradata-logs connector. Example: | No | |
--query-log-end | The end time (exclusive) for query logs to extract. The value is truncated to the hour. This flag is only available for the teradata-logs connector. Example: | No | |
-Dteradata.metadata.tablesizev.max-rows | For the Limit the number of rows extracted from the view Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata.metadata.diskspacev.max-rows | For the Limit the number of rows extracted from the view Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata.metadata.databasesv.users.max-rows | For the Limit the number of rows that represent users ( Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata.metadata.databasesv.dbs.max-rows | For the Limit the number of rows that represent databases ( Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata.metadata.max-text-length | For the Maximum length of the text column when extracting the data from the Example (Bash): Example (Windows PowerShell): | No | |
-Dteradata-logs.max-sql-length | For the Maximum length of the Example (Bash): Example (Windows PowerShell): | No |
Examples
The following example shows how to extract metadata for two Teradata databases on the local host:

dwh-migration-dumper \
  --connector teradata \
  --user user \
  --password password \
  --database database1,database2 \
  --driver path/terajdbc4.jar

The following example shows how to extract query logs for Assessment on the local host, using a username and password for authentication:

dwh-migration-dumper \
  --connector teradata-logs \
  --assessment \
  --user user \
  --password password \
  --driver path/terajdbc4.jar

Tables and views extracted by the dwh-migration-dumper tool
The following tables and views are extracted when you use the teradata connector:
- DBC.ColumnsV
- DBC.DatabasesV
- DBC.DBCInfo
- DBC.FunctionsV
- DBC.IndicesV
- DBC.PartitioningConstraintsV
- DBC.TablesV
- DBC.TableTextV
The following additional tables and views are extracted when you use the teradata connector with the --assessment flag:
- DBC.All_RI_ChildrenV
- DBC.All_RI_ParentsV
- DBC.AllTempTablesVX
- DBC.DiskSpaceV
- DBC.RoleMembersV
- DBC.StatsV
- DBC.TableSizeV
The following tables and views are extracted when you use the teradata-logs connector:
- DBC.DBQLogTbl (changes to DBC.QryLogV if the --assessment flag is used)
- DBC.DBQLSqlTbl
The following additional tables and views are extracted when you use the teradata-logs connector with the --assessment flag:
- DBC.DBQLUtilityTbl
- DBC.ResUsageScpu
- DBC.ResUsageSpma
Redshift
You can use any of the following Amazon Redshift authentication and authorization mechanisms with the extraction tool:
- A username and password.
- An AWS Identity and Access Management (IAM) access key ID and secret key.
- An AWS IAM profile name.
To authenticate with the username and password, use the Amazon Redshift default PostgreSQL JDBC driver. To authenticate with AWS IAM, use the Amazon Redshift JDBC driver, which you can download from their download page.
The following table describes the commonly used flags for extracting Amazon Redshift metadata and query logs by using the dwh-migration-dumper tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--assessment | Turns on assessment mode when generating database logs or extracting metadata. It generates required metadata statistics for BigQuery migration assessment when used for metadata extraction. When used for query logs extraction, it generates query metrics statistics for BigQuery migration assessment. | Required when running in assessment mode; not required for translation. | |
--connector | The name of the connector to use, in this case redshift for metadata or redshift-raw-logs for query logs. | Yes | |
--database | If not specified, Amazon Redshift uses the --user value as the default database name. | The name of the database to connect to. | No |
--driver | If not specified, Amazon Redshift uses the default PostgreSQL JDBC driver. | The absolute or relative path to the driver JAR file to use for this connection. You can specify multiple driver JAR files, separating them by commas. | No |
--host | localhost | The hostname or IP address of the database server. | No |
--iam-accesskeyid | The AWS IAM access key ID to use for authentication. The access key is a string of characters, something like Use in conjunction with the | Not explicitly, but you must provide authentication information through one of the following methods:
| |
--iam-profile | The AWS IAM profile to use for authentication. You can retrieve a profile value to use by examining the Do not use this flag with the | Not explicitly, but you must provide authentication information through one of the following methods:
| |
--iam-secretaccesskey | The AWS IAM secret access key to use for authentication. The secret access key is a string of characters, something like Use in conjunction with the | Not explicitly, but you must provide authentication information through one of the following methods:
| |
--password | The password to use for the database connection. Do not use this flag with the | Not explicitly, but you must provide authentication information through one of the following methods:
| |
--port | 5439 | The port of the database server. | No |
--user | The username to use for the database connection. | Yes | |
--query-log-start | The start time (inclusive) for query logs to extract. The value is truncated to the hour. This flag is only available for the redshift-raw-logs connector. Example: | No | |
--query-log-end | The end time (exclusive) for query logs to extract. The value is truncated to the hour. This flag is only available for the redshift-raw-logs connector. Example: | No |
Examples
The following example shows how to extract metadata from an Amazon Redshift database on a specified host, using AWS IAM keys for authentication:

dwh-migration-dumper \
  --connector redshift \
  --database database \
  --driver path/redshift-jdbc42-version.jar \
  --host host.region.redshift.amazonaws.com \
  --iam-accesskeyid access_key_ID \
  --iam-secretaccesskey secret_access-key \
  --user user

The following example shows how to extract metadata from an Amazon Redshift database on the default host, using the username and password for authentication:

dwh-migration-dumper \
  --connector redshift \
  --database database \
  --password password \
  --user user

The following example shows how to extract metadata from an Amazon Redshift database on a specified host, using an AWS IAM profile for authentication:

dwh-migration-dumper \
  --connector redshift \
  --database database \
  --driver path/redshift-jdbc42-version.jar \
  --host host.region.redshift.amazonaws.com \
  --iam-profile profile \
  --user user \
  --assessment

The following example shows how to extract query logs for Assessment from an Amazon Redshift database on a specified host, using an AWS IAM profile for authentication:

dwh-migration-dumper \
  --connector redshift-raw-logs \
  --database database \
  --driver path/redshift-jdbc42-version.jar \
  --host 123.456.789.012 \
  --iam-profile profile \
  --user user \
  --assessment

Tables and views extracted by the dwh-migration-dumper tool
The following tables and views are extracted when you use the redshift connector:
- SVV_COLUMNS
- SVV_EXTERNAL_COLUMNS
- SVV_EXTERNAL_DATABASES
- SVV_EXTERNAL_PARTITIONS
- SVV_EXTERNAL_SCHEMAS
- SVV_EXTERNAL_TABLES
- SVV_TABLES
- SVV_TABLE_INFO
- INFORMATION_SCHEMA.COLUMNS
- PG_CAST
- PG_DATABASE
- PG_LANGUAGE
- PG_LIBRARY
- PG_NAMESPACE
- PG_OPERATOR
- PG_PROC
- PG_TABLE_DEF
- PG_TABLES
- PG_TYPE
- PG_VIEWS
The following additional tables and views are extracted when you use the redshift connector with the --assessment flag:
- SVV_DISKUSAGE
- STV_MV_INFO
- STV_WLM_SERVICE_CLASS_CONFIG
- STV_WLM_SERVICE_CLASS_STATE
The following tables and views are extracted when you use the redshift-raw-logs connector:
- STL_DDLTEXT
- STL_QUERY
- STL_QUERYTEXT
- PG_USER
The following additional tables and views are extracted when you use the redshift-raw-logs connector with the --assessment flag:
- STL_QUERY_METRICS
- SVL_QUERY_QUEUE_INFO
- STL_WLM_QUERY
For information about the system views and tables in Redshift, see Redshift system views and Redshift system catalog tables.
Hive/Impala/Spark or Trino/PrestoSQL
The dwh-migration-dumper tool only supports authentication to the Apache Hive metastore through Kerberos, so the --user and --password flags aren't used. Instead, use the --hive-kerberos-url flag to supply the Kerberos authentication details.
The following table describes the commonly used flags for extracting Apache Hive, Impala, Spark, Presto, or Trino metadata by using the extraction tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--assessment | Turns on assessment mode when extracting metadata. The | Required for assessment. Not required for translation. | |
--connector | The name of the connector to use, in this case hiveql. | Yes | |
--hive-metastore-dump-partition-metadata | true | Causes the Don't use this flag with the | No |
--hive-metastore-version | 2.3.6 | When you run the | No |
--host | localhost | The hostname or IP address of the database server. | No |
--port | 9083 | The port of the database server. | No |
--hive-kerberos-url | The Kerberos principal and host to use for authentication. | Required for clusters with enabled Kerberos authentication. | |
-Dhiveql.rpc.protection | The RPC protection configuration level. This determines the Quality of Protection (QOP) of the Simple Authentication and Security Layer (SASL) connection between cluster and the Must be equal to the value of the
Example (Bash): Example (Windows PowerShell): | Required for clusters with enabled Kerberos authentication. |
Examples
The following example shows how to extract metadata for a Hive 2.3.7 database on a specified host, without authentication and using an alternate port for connection:

dwh-migration-dumper \
  --connector hiveql \
  --hive-metastore-version 2.3.7 \
  --host host \
  --port port

To use Kerberos authentication, sign in as a user that has read permissions to the Hive metastore and generate a Kerberos ticket.
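For example, a minimal sketch of obtaining a ticket with kinit (the principal shown is a placeholder, not something the tool defines):

kinit hive_user@EXAMPLE.COM

Then, generate the metadata zip file with the following command: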
JAVA_OPTS="-Djavax.security.auth.useSubjectCredsOnly=false" \
  dwh-migration-dumper \
  --connector hiveql \
  --host host \
  --port port \
  --hive-kerberos-url principal/kerberos_host

Azure Synapse or Microsoft SQL Server
To allow the dwh-migration-dumper tool to connect to Azure Synapse or Microsoft SQL Server, download their JDBC driver from Microsoft's download page.
The following table describes the commonly used flags for extracting Azure Synapse or Microsoft SQL Server metadata by using the extraction tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--connector | The name of the connector to use, in this case sqlserver. | Yes | |
--database | The name of the database to connect to. | Yes | |
--driver | The absolute or relative path to the driver JAR file to use for this connection. You can specify multiple driver JAR files, separating them by commas. | Yes | |
--host | localhost | The hostname or IP address of the database server. | No |
--password | The password to use for the database connection. | Yes | |
--port | 1433 | The port of the database server. | No |
--user | The username to use for the database connection. | Yes |
Examples
The following example shows how to extract metadata from an Azure Synapse database on a specified host:

dwh-migration-dumper \
  --connector sqlserver \
  --database database \
  --driver path/mssql-jdbc.jar \
  --host server_name.sql.azuresynapse.net \
  --password password \
  --user user

Greenplum
To allow the dwh-migration-dumper tool to connect to Greenplum, download their JDBC driver from VMware Greenplum's download page.
The following table describes the commonly used flags for extracting Greenplum metadata by using the extraction tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--connector | The name of the connector to use, in this case greenplum. | Yes | |
--database | The name of the database to connect to. | Yes | |
--driver | The absolute or relative path to the driver JAR file to use for this connection. You can specify multiple driver JAR files, separating them by commas. | Yes | |
--host | localhost | The hostname or IP address of the database server. | No |
--password | The password to use for the database connection. | If not specified, the extraction tool uses a secure prompt to request it. | |
--port | 5432 | The port of the database server. | No |
--user | The username to use for the database connection. | Yes |
Examples
The following example shows how to extract metadata for a Greenplum database on a specified host:

dwh-migration-dumper \
  --connector greenplum \
  --database database \
  --driver path/greenplum.jar \
  --host host \
  --password password \
  --user user

Netezza
To allow the dwh-migration-dumper tool to connect to IBM Netezza, you must get their JDBC driver. You can usually get the driver from the /nz/kit/sbin directory on your IBM Netezza appliance host. If you can't locate it there, ask your system administrator for help, or read Installing and Configuring JDBC in the IBM Netezza documentation.
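For example, a hedged sketch of copying the driver from the appliance host to the machine where you run the extraction tool (the user and hostname are placeholders):

scp nz_user@netezza-host:/nz/kit/sbin/nzjdbc.jar .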
The following table describes the commonly used flags for extracting IBM Netezza metadata by using the extraction tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--connector | The name of the connector to use, in this case netezza. | Yes | |
--database | A list of the databases to extract, separated by commas. | Yes | |
--driver | The absolute or relative path to the driver JAR file to use for this connection. You can specify multiple driver JAR files, separating them by commas. | Yes | |
--host | localhost | The hostname or IP address of the database server. | No |
--password | The password to use for the database connection. | Yes | |
--port | 5480 | The port of the database server. | No |
--user | The username to use for the database connection. | Yes |
Examples
The following example shows how to extract metadata for two IBM Netezza databases on a specified host:

dwh-migration-dumper \
  --connector netezza \
  --database database1,database2 \
  --driver path/nzjdbc.jar \
  --host host \
  --password password \
  --user user

PostgreSQL
To allow the dwh-migration-dumper tool to connect to PostgreSQL, download their JDBC driver from PostgreSQL's download page.
The following table describes the commonly used flags for extracting PostgreSQL metadata by using the extraction tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--connector | The name of the connector to use, in this case postgresql. | Yes | |
--database | The name of the database to connect to. | Yes | |
--driver | The absolute or relative path to the driver JAR file to use for this connection. You can specify multiple driver JAR files, separating them by commas. | Yes | |
--host | localhost | The hostname or IP address of the database server. | No |
--password | The password to use for the database connection. | If not specified, the extraction tool uses a secure prompt to request it. | |
--port | 5432 | The port of the database server. | No |
--user | The username to use for the database connection. | Yes |
Examples
The following example shows how to extract metadata for a PostgreSQL database on a specified host:

dwh-migration-dumper \
  --connector postgresql \
  --database database \
  --driver path/postgresql-version.jar \
  --host host \
  --password password \
  --user user

Oracle
To allow the dwh-migration-dumper tool to connect to Oracle, download their JDBC driver from Oracle's download page.
The following table describes the commonly used flags for extracting Oracle metadata by using the extraction tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--connector | The name of the connector to use, in this case oracle. | Yes | |
--driver | The absolute or relative path to the driver JAR file to use for this connection. You can specify multiple driver JAR files, separating them by commas. | Yes | |
--host | localhost | The hostname or IP address of the database server. | No |
--oracle-service | The Oracle service name to use for the connection. | Not explicitly, but you must specify either this flag or the --oracle-sid flag. | |
--oracle-sid | The Oracle system identifier (SID) to use for the connection. | Not explicitly, but you must specify either this flag or the --oracle-service flag. | |
--password | The password to use for the database connection. | If not specified, the extraction tool uses a secure prompt to request it. | |
--port | 1521 | The port of the database server. | No |
--user | The username to use for the database connection. The user you specify must have the role | Yes |
Examples
The following example shows how to extract metadata for an Oracle database on a specified host, using the Oracle service for the connection:

dwh-migration-dumper \
  --connector oracle \
  --driver path/ojdbc8.jar \
  --host host \
  --oracle-service service_name \
  --password password \
  --user user

Snowflake
The following table describes the commonly used flags for extracting Snowflake metadata by using the dwh-migration-dumper tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--assessment | Turns on assessment mode when generating database logs or extracting metadata. The | Only for assessment. | |
--connector | The name of the connector to use, in this case snowflake. | Yes | |
--database | The name of the database to extract. You can only extract from one database at a time from Snowflake. This flag is not allowed in assessment mode. | Only for translation. | |
--host | localhost | The hostname or IP address of the database server. | No |
--private-key-file | The path to the RSA private key used for authentication. We recommend using a | No. If not provided, the extraction tool uses password-based authentication. | |
--private-key-password | The password that was used when creating the RSA private key. | No, it is required only if the private key is encrypted. | |
--password | The password to use for the database connection. | If not specified, the extraction tool uses a secure prompt to request it. However, we recommend using key-pair based authentication instead. | |
--query-log-start | The start time (inclusive) for query logs to extract. The value is truncated to the hour. This flag is only available for the Example: | No | |
--query-log-end | The end time (exclusive) for query logs to extract. The value is truncated to the hour. This flag is only available for the Example: | No | |
--role | The Snowflake role to use for authorization. You only need to specify this for large installations where you need to get metadata from the SNOWFLAKE.ACCOUNT_USAGE schema instead of INFORMATION_SCHEMA. For more information, see Working with large Snowflake instances. | No | |
--user | The username to use for the database connection. | Yes | |
--warehouse | The Snowflake warehouse to use for processing metadata queries. | Yes |
Examples
The following example shows how to extract metadata for assessment:
dwh-migration-dumper \
  --connector snowflake \
  --assessment \
  --host "account.snowflakecomputing.com" \
  --role role \
  --user user \
  --private-key-file private-key-file \
  --private-key-password private-key-password \
  --warehouse warehouse

The following example shows how to extract metadata for a typically sized Snowflake database on the local host:

dwh-migration-dumper \
  --connector snowflake \
  --database database \
  --user user \
  --private-key-file private-key-file \
  --private-key-password private-key-password \
  --warehouse warehouse

The following example shows how to extract metadata for a large Snowflake database on a specified host:

dwh-migration-dumper \
  --connector snowflake \
  --database database \
  --host "account.snowflakecomputing.com" \
  --role role \
  --user user \
  --private-key-file private-key-file \
  --private-key-password private-key-password \
  --warehouse warehouse

Alternatively, you can use the following example to extract metadata using password-based authentication:

dwh-migration-dumper \
  --connector snowflake \
  --database database \
  --host "account.snowflakecomputing.com" \
  --password password \
  --user user \
  --warehouse warehouse

Working with large Snowflake instances
The dwh-migration-dumper tool reads metadata from the Snowflake INFORMATION_SCHEMA. However, there is a limit to the amount of data you can retrieve from INFORMATION_SCHEMA. If you run the extraction tool and receive the error SnowflakeSQLException: Information schema query returned too much data, you must take the following steps so that you can read metadata from the SNOWFLAKE.ACCOUNT_USAGE schema instead:
- Open the Shares option in the Snowflake web interface.
- Create a database from the SNOWFLAKE.ACCOUNT_USAGE share:

  CREATE DATABASE database FROM SHARE SNOWFLAKE.ACCOUNT_USAGE;
- Create a role:

  CREATE ROLE role;
- Grant IMPORTED privileges on the new database to the role:

  GRANT IMPORTED PRIVILEGES ON DATABASE database TO ROLE role;
- Grant the role to the user you intend to use to run the dwh-migration-dumper tool:

  GRANT ROLE role TO USER user;
Vertica
To allow the dwh-migration-dumper tool to connect to Vertica, download their JDBC driver from their download page.
The following table describes the commonly used flags for extracting Vertica metadata by using the extraction tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--connector | The name of the connector to use, in this case vertica. | Yes | |
--database | The name of the database to connect to. | Yes | |
--driver | The absolute or relative path to the driver JAR file to use for this connection. You can specify multiple driver JAR files, separating them by commas. | Yes | |
--host | localhost | The hostname or IP address of the database server. | No |
--password | The password to use for the database connection. | Yes | |
--port | 5433 | The port of the database server. | No |
--user | The username to use for the database connection. | Yes |
Examples
The following example shows how to extract metadata from a Vertica database on the local host:

dwh-migration-dumper \
  --driver path/vertica-jdbc.jar \
  --connector vertica \
  --database database \
  --user user \
  --password password

BigQuery
The following table describes the commonly used flags for extracting BigQuery metadata by using the extraction tool. For information about all supported flags, see global flags.
| Name | Default value | Description | Required |
|---|---|---|---|
--connector | The name of the connector to use, in this case bigquery. | Yes | |
--database | The list of projects to extract metadata and query logs from, separated by commas. | Yes | |
--schema | The list of datasets to extract metadata and query logs from, separated by commas. | Yes |
Examples
The following example shows how to extract metadata and query logs from BigQuery for two projects and their datasets:

dwh-migration-dumper \
  --connector bigquery \
  --database PROJECT1,PROJECT2 \
  --schema DATASET1,DATASET2

Global flags
The following table describes the flags that can be used with any of the supported source platforms.
| Name | Description |
|---|---|
--connector | The connector name for the source system. |
--database | Usage varies by source system. |
--driver | The absolute or relative path to the driver JAR file to use when connecting to the source system. You can specify multiple driver JAR files, separating them by commas. |
--dry-run or -n | Shows what actions the extraction tool would perform without executing them. |
--help | Displays command-line help. |
--host | The hostname or IP address of the database server to connect to. |
--jdbcDriverClass | Optionally overrides the vendor-specified JDBC driver class name. Use this if you have a custom JDBC client. |
--output | The path of the output zip file. For example, dir1/dir2/teradata-metadata.zip. If you don't specify a path, the output file is created in your working directory. If you specify the path to a directory, the default zip filename is created in the specified directory. If the directory does not exist, it is created. To use Cloud Storage, use the following format: To authenticate using Google Cloud credentials, see Authenticate for using client libraries. |
--password | The password to use for the database connection. |
--port | The port of the database server. |
--save-response-file | Saves your command line flags in a JSON file for easy re-use. The file is named dumper-response-file.json and is created in the working directory. To use the response file, provide the path to it prefixed by @ when you run the extraction tool, for example dwh-migration-dumper @path/to/dumper-response-file.json. |
--schema | A list of the schemas to extract, separated by commas. Oracle doesn't differentiate between a schema and the database user who created the schema, so you can use either schema names or user names with the |
--thread-pool-size | Sets the thread pool size, which affects the connection pool size. The default size of the thread pool is the number of cores on the server running the If the extraction tool seems slow or otherwise in need of more resources, you can raise the number of threads used. If there are indications that other processes on the server require more bandwidth, you can lower the number of threads used. |
--url | The URL to use for the database connection, instead of the URI generated by the JDBC driver. The generated URI should be sufficient in most cases. Only override the generated URI when you need to use a JDBC connection setting that is specific to the source platform and is not already set by one of the flags listed in this table. |
--user | The username to use for the database connection. |
--version | Displays the product version. |
--telemetry | Collects insights into the performance characteristics of runs, such as duration, run counts, and resource usage. This is enabled by default. To disable telemetry, set this flag to |
Troubleshooting
This section explains some common issues and troubleshooting techniques for the dwh-migration-dumper tool.
Out of memory error
The java.lang.OutOfMemoryError error in the dwh-migration-dumper tool terminal output is often related to insufficient memory for processing retrieved data. To address this issue, increase available memory or reduce the number of processing threads.
You can increase maximum memory by exporting the JAVA_OPTS environment variable:
Linux

export JAVA_OPTS="-Xmx4G"

Windows

set JAVA_OPTS="-Xmx4G"

You can reduce the number of processing threads (default is 32) by including the --thread-pool-size flag. This option is supported for hiveql and redshift* connectors only.

dwh-migration-dumper --thread-pool-size=1
Handling a WARN...Task failed error
You might sometimes see a WARN [main] o.c.a.d.MetadataDumper [MetadataDumper.java:107] Task failed: … error in the dwh-migration-dumper tool terminal output. The extraction tool submits multiple queries to the source system, and the output of each query is written to its own file. Seeing this issue indicates that one of these queries failed. However, failure of one query doesn't prevent the execution of the other queries. If you see more than a couple of WARN errors, review the issue details and see if there is anything that you need to correct in order for the query to run appropriately. For example, if the database user you specified when running the extraction tool lacks permissions to read all metadata, try again with a user with the correct permissions.
Corrupted ZIP file
To validate the dwh-migration-dumper tool zip file, download the SHA256SUMS.txt file and run the following command:
Bash

sha256sum --check SHA256SUMS.txt

The OK result confirms successful checksum verification. Any other message indicates a verification error:
- FAILED: computed checksum did NOT match: the zip file is corrupted and has to be downloaded again.
- FAILED: listed file could not be read: the zip file version can't be located. Make sure the checksum and zip files are downloaded from the same release version and placed in the same directory.
Windows PowerShell
(Get-FileHash RELEASE_ZIP_FILENAME).Hash -eq ((Get-Content SHA256SUMS.txt) -Split " ")[0]

Replace RELEASE_ZIP_FILENAME with the downloaded zip filename of the dwh-migration-dumper command-line extraction tool release, for example dwh-migration-tools-v1.0.52.zip.
The True result confirms successful checksum verification.
The False result indicates a verification error. Make sure the checksum and zip files are downloaded from the same release version and placed in the same directory.
Teradata query logs extraction is slow
To improve performance of joining tables that are specified by the -Dteradata-logs.query-logs-table and -Dteradata-logs.sql-logs-table flags, you can include an additional column of type DATE in the JOIN condition. This column must be defined in both tables and it must be part of the Partitioned Primary Index. To include this column, use the -Dteradata-logs.log-date-column flag.
Example:
Bash
dwh-migration-dumper \
  -Dteradata-logs.query-logs-table=historicdb.ArchivedQryLogV \
  -Dteradata-logs.sql-logs-table=historicdb.ArchivedDBQLSqlTbl \
  -Dteradata-logs.log-date-column=ArchiveLogDate

Windows PowerShell

dwh-migration-dumper `
  "-Dteradata-logs.query-logs-table=historicdb.ArchivedQryLogV" `
  "-Dteradata-logs.sql-logs-table=historicdb.ArchivedDBQLSqlTbl" `
  "-Dteradata-logs.log-date-column=ArchiveLogDate"
Teradata row size limit exceeded
Teradata 15 has a 64 kB row size limit. If the limit is exceeded, the dumper fails with the following message: [Error 9804] [SQLState HY000] Response Row size or Constant Row size overflow
To resolve this error, either extend the row limit to 1 MB or split the rows into multiple rows:
- Install and enable the 1 MB Perm and Response Rows feature and current TTU software. For more information, see Teradata Database Message 9804.
- Split the long query text into multiple rows by using the -Dteradata.metadata.max-text-length and -Dteradata-logs.max-sql-length flags.
The following command shows the usage of the -Dteradata.metadata.max-text-length flag to split the long query text into multiple rows of at most 10000 characters each:
Bash
dwh-migration-dumper \
  --connector teradata \
  -Dteradata.metadata.max-text-length=10000

Windows PowerShell

dwh-migration-dumper `
  --connector teradata `
  "-Dteradata.metadata.max-text-length=10000"

The following command shows the usage of the -Dteradata-logs.max-sql-length flag to split the long query text into multiple rows of at most 10000 characters each:
Bash

dwh-migration-dumper \
  --connector teradata-logs \
  -Dteradata-logs.max-sql-length=10000

Windows PowerShell

dwh-migration-dumper `
  --connector teradata-logs `
  "-Dteradata-logs.max-sql-length=10000"
Oracle connection issue
In common cases like an invalid password or hostname, the dwh-migration-dumper tool prints a meaningful error message describing the root issue. However, in some cases, the error message returned by the Oracle server may be generic and difficult to investigate.
One of these issues is IO Error: Got minus one from a read call. This error indicates that the connection to the Oracle server has been established but the server did not accept the client and closed the connection. This issue typically occurs when the server accepts TCPS connections only. By default, the dwh-migration-dumper tool uses the TCP protocol. To solve this issue, you must override the Oracle JDBC connection URL.
Instead of providing the oracle-service, host, and port flags, you can resolve this issue by providing the url flag in the following format: jdbc:oracle:thin:@tcps://{HOST_NAME}:{PORT}/{ORACLE_SERVICE}. Typically, the TCPS port number used by the Oracle server is 2484.
Example dumper command:
dwh-migration-dumper \
  --connector oracle-stats \
  --url "jdbc:oracle:thin:@tcps://host:port/oracle_service" \
  --assessment \
  --driver "jdbc_driver_path" \
  --user "user" \
  --password

In addition to changing the connection protocol to TCPS, you might need to provide the trustStore SSL configuration that is required to verify the Oracle server certificate. A missing SSL configuration results in an Unable to find valid certification path error message. To resolve this, set the JAVA_OPTS environment variable:

set JAVA_OPTS=-Djavax.net.ssl.trustStore="jks_file_location" -Djavax.net.ssl.trustStoreType=JKS -Djavax.net.ssl.trustStorePassword="password"

Depending on your Oracle server configuration, you might also need to provide the keyStore configuration. See SSL With Oracle JDBC Driver for more information about configuration options.
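If you run the tool on Linux or macOS, a sketch of the equivalent setting using export (the keystore path and password are placeholders):

export JAVA_OPTS="-Djavax.net.ssl.trustStore=jks_file_location -Djavax.net.ssl.trustStoreType=JKS -Djavax.net.ssl.trustStorePassword=password"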
What's next
After you run the dwh-migration-dumper tool, upload the output to Cloud Storage along with the source files for translation.
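For example, a minimal sketch of uploading the generated zip file with the gcloud CLI (the file name, bucket, and folder are placeholders):

gcloud storage cp dwh-migration-teradata-metadata.zip gs://my-migration-bucket/input/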