Load Oracle data into BigQuery
You can load data from Oracle to BigQuery using the BigQuery Data Transfer Service for Oracle connector. With the BigQuery Data Transfer Service, you can schedule recurring transfer jobs that add your latest data from Oracle to BigQuery.
Limitations
Oracle transfers are subject to the following limitations:
- The maximum number of simultaneous connections to an Oracle database is limited, and as a result, the number of simultaneous transfer runs to a single Oracle database is limited to that maximum.
- You must set up a network attachment in cases where a public IP is not available for an Oracle database connection, with the following requirements:
- The data source must be accessible from the subnet where the network attachment resides.
- The network attachment must not be in the subnet within the range 240.0.0.0/24.
- Network attachments cannot be deleted if there are active connections to the attachment. To delete a network attachment, contact Cloud Customer Care.
- For the us multi-region, the network attachment must be in the us-central1 region. For the eu multi-region, the network attachment must be in the europe-west4 region.
- The minimum interval time between recurring Oracle transfers is 15 minutes. The default interval for a recurring transfer is 24 hours.
- A single transfer configuration can only support one data transfer run at a given time. If a second data transfer is scheduled to run before the first transfer completes, only the first data transfer completes, and any other data transfers that overlap with the first transfer are skipped.
- To avoid skipped transfers within a single transfer configuration, we recommend that you increase the duration of time between large data transfers by configuring the Repeat frequency setting.
- During a data transfer, the Oracle connector identifies indexed and partitioned key columns to transfer your data in parallel batches. For this reason, we recommend that you specify primary key columns or use indexed columns in your table to improve the performance and reduce the error rate in your data transfers (see the example after this list).
- If you have indexed or primary key constraints, only the following column types are supported for creating parallel batches: INTEGER, TINYINT, SMALLINT, FLOAT, REAL, DOUBLE, NUMERIC, BIGINT, DECIMAL, and DATE.
- Oracle data transfers that don't use primary key or indexed columns can't support more than 2,000,000 records per table.
- If your configured network attachment and virtual machine (VM) instance are located in different regions, there might be cross-region data movement when you transfer data from Oracle.
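If a large table has no primary key or indexed column, one option is to index a column of a supported type so that the connector can split the transfer into parallel batches. The following is a minimal sketch, assuming a hypothetical orders table with an order_date column:

```sql
-- Hypothetical table: index a column of a supported type (here, a DATE
-- column) so the connector can use it to create parallel batches.
CREATE INDEX idx_orders_order_date
  ON orders (order_date);
```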
Incremental transfer limitations
Incremental Oracle transfers are subject to the following limitations:
- You can only choose TIMESTAMP columns as watermark columns.
- Incremental ingestion is only supported for assets with valid watermark columns.
- Values in a watermark column must be monotonically increasing.
- Incremental transfers cannot sync delete operations in the source table.
- A single transfer configuration can only support either incremental or full ingestion.
- You cannot update objects in the asset list after the first incremental ingestion run.
- You cannot change the write mode in a transfer configuration after the first incremental ingestion run.
- You cannot change the watermark column or the primary key after the first incremental ingestion run.
- When you update an existing transfer configuration to the incremental ingestion mode, the first data transfer run after that update retrieves all available data from your data source and overwrites data in the destination BigQuery table. Any subsequent incremental transfer run retrieves only the new and updated rows from your data source.
Data ingestion options
The following sections provide more information about the data ingestion options when you set up an Oracle data transfer.
TLS configuration
The Oracle connector supports the configuration of Transport Layer Security (TLS) to encrypt your data transfers into BigQuery. The Oracle connector supports the following TLS configurations:
- Encrypt data, and verify CA and hostname: This mode performs a full validation of the server using TLS over the TCPS protocol. It encrypts all data in transit and verifies that the database server's certificate is signed by a trusted Certificate Authority (CA). This mode also checks that the hostname you're connecting to exactly matches the Common Name (CN) or a Subject Alternative Name (SAN) on the server's certificate. This mode prevents attackers from using a valid certificate for a different domain to impersonate your database server.
- If your hostname does not match the certificate CN or SAN, the connection fails. You must configure DNS resolution to match the certificate or use a different security mode.
- Use this mode for the most secure option to prevent person-in-the-middle (PITM) attacks.
- Encrypt data, and verify CA only: This mode encrypts all data using TLS over the TCPS protocol and verifies that the server's certificate is signed by a CA that the client trusts. However, this mode does not verify the server's hostname. This mode successfully connects as long as the certificate is valid and issued by a trusted CA, regardless of whether the hostname in the certificate matches the hostname you are connecting to.
- Use this mode if you want to ensure that you are connecting to a server whose certificate is signed by a trusted CA, but the hostname is not verifiable or you don't have control over the hostname configuration.
- Encryption only: This mode encrypts all data transferred between the client and the server using Oracle's Native Network Encryption over the standard TCP port. It does not perform any certificate or hostname validation.
- This mode provides some level of security by protecting data in transit, but it can be vulnerable to PITM attacks.
- Use this mode if you need to ensure all data is encrypted but can't or don't want to verify the server's identity. We recommend using this mode when working with private VPCs.
- No encryption or verification: This mode does not encrypt any data and does not perform any certificate or hostname verification. All data is sent as plain text.
- We don't recommend using this mode in an environment where sensitive data is handled.
- We only recommend using this mode for testing purposes on an isolated network where security is not a concern.
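For illustration, when you use the TNS connection type together with one of the TLS modes that runs over TCPS, the connect descriptor names the TCPS protocol and a TCPS port. The following sketch uses hypothetical host, port, and service names:

```
(DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCPS)(HOST=db.example.com)(PORT=2484))
  (CONNECT_DATA=(SERVICE_NAME=ORCLPDB1)))
```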
Trusted Server Certificate (PEM)
If you are using either the Encrypt data, and verify CA and hostname mode or the Encrypt data, and verify CA mode, then you can also provide one or more PEM-encoded certificates. These certificates are required in some scenarios where the BigQuery Data Transfer Service needs to verify the identity of your database server during the TLS connection:
- If you are using a certificate signed by a private CA within your organization or a self-signed certificate, you must provide the full certificate chain or the single self-signed certificate. This is required for certificates issued by internal CAs of managed cloud provider services, such as the Amazon Relational Database Service (RDS).
- If your database server certificate is signed by a public CA (for example, Let's Encrypt, DigiCert, or GlobalSign), you don't need to provide a certificate. The root certificates for these public CAs are pre-installed and trusted by the BigQuery Data Transfer Service.
You can provide PEM-encoded certificates in the Trusted PEM Certificate field when you create an Oracle transfer configuration, with the following requirements:
- The certificate must be a valid PEM-encoded certificate chain.
- The certificate content must be complete and correct. Any missing certificate in the chain or incorrect content causes the TLS connection to fail.
- For a single certificate, you can provide the single, self-signed certificate from the database server.
- For a full certificate chain issued by a private CA, you must provide the full chain of trust. This includes the certificate from the database server and any intermediate and root CA certificates.
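For illustration, a PEM-encoded chain is a plain-text concatenation of certificate blocks, ordered from the server certificate up to the root CA. The contents below are placeholders:

```
-----BEGIN CERTIFICATE-----
... server certificate (base64) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... intermediate CA certificate (base64) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... root CA certificate (base64) ...
-----END CERTIFICATE-----
```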
Full or incremental transfers
You can specify how data is loaded into BigQuery by selecting either the Full or Incremental write preference in the transfer configuration when you set up an Oracle transfer. Incremental transfers are supported in preview.

Note: To request feedback or support for incremental transfers, send email to dts-preview-support@google.com.

You can select Full to transfer all data from your Oracle datasets with each data transfer. Alternatively, you can select Incremental (Preview) to only transfer data that was changed since the last data transfer, instead of loading the entire dataset with each data transfer. If you select Incremental for your data transfer, you must specify either the Append or Upsert write modes to define how data is written to BigQuery during an incremental data transfer. The following sections describe the available write modes.
Append write mode
The Append write mode only inserts new rows into your destination table. This option strictly appends transferred data without checking for existing records, so this mode can potentially cause data duplication in the destination table.

When you select the Append mode, you must select a watermark column. A watermark column is required for the Oracle connector to track changes in the source table.
For Oracle transfers, we recommend selecting a column that is only set when the record is created, and won't change with subsequent updates. For example, the CREATED_AT column in the example table shown after the Upsert write mode section.

Upsert write mode
The Upsert write mode either updates a row or inserts a new row in your destination table by checking for a primary key. You can specify a primary key to let the Oracle connector determine what changes are needed to keep your destination table up-to-date with your source table. If the specified primary key is present in the destination BigQuery table during a data transfer, then the Oracle connector updates that row with new data from the source table. If a primary key is not present during a data transfer, then the Oracle connector inserts a new row.

When you select the Upsert mode, you must select a watermark column and a primary key:
- A watermark column is required for the Oracle connector to track changes in the source table.
- Select a watermark column that updates every time a row is modified. We recommend columns similar to an UPDATED_AT or LAST_MODIFIED column.
- The primary key can be one or more columns on your table that are required for the Oracle connector to determine if it needs to insert or update a row.
- Select columns that contain non-null values that are unique across all rows of the table. We recommend columns that include system-generated identifiers, unique reference codes (for example, auto-incrementing IDs), or immutable time-based sequence IDs.
- To prevent potential data loss or data corruption, the primary key columns that you select must have unique values. If you have doubts about the uniqueness of your chosen primary key column, then we recommend that you use the Append write mode instead.
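The following sketch shows a source table shaped to work with both write modes; the table, column, and trigger names are hypothetical. A CREATED_AT column that is set once at insert suits the Append watermark, an UPDATED_AT column that changes on every modification suits the Upsert watermark, and a unique, non-null EMPLOYEE_ID column can serve as the primary key:

```sql
-- Hypothetical source table with columns suited to incremental transfers.
CREATE TABLE employees (
  employee_id  NUMBER(10)    PRIMARY KEY,           -- unique and non-null: usable as the primary key
  full_name    VARCHAR2(100),
  created_at   TIMESTAMP     DEFAULT SYSTIMESTAMP,  -- set once at insert: Append watermark
  updated_at   TIMESTAMP     DEFAULT SYSTIMESTAMP   -- refreshed on every change: Upsert watermark
);

-- Keep UPDATED_AT current whenever a row is modified.
CREATE OR REPLACE TRIGGER trg_employees_updated_at
  BEFORE UPDATE ON employees
  FOR EACH ROW
BEGIN
  :NEW.updated_at := SYSTIMESTAMP;
END;
/
```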
Before you begin
The following sections describe the steps that you need to take before you create an Oracle transfer.
Oracle prerequisites
- Create a user credential in the Oracle database.
- Grant Create Session system privileges to the user to allow session creation.
- Assign a tablespace to the user account.
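A minimal sketch of these prerequisites, assuming hypothetical user, password, tablespace, and schema names (the SELECT grant is an assumed read requirement for the tables you transfer):

```sql
-- Create a user credential for the transfer (hypothetical name and password).
CREATE USER bq_transfer IDENTIFIED BY "a_strong_password";

-- Grant Create Session system privileges to allow session creation.
GRANT CREATE SESSION TO bq_transfer;

-- Assign a tablespace to the user account.
ALTER USER bq_transfer DEFAULT TABLESPACE users QUOTA 50M ON users;

-- Assumed: read access to the source tables that you plan to transfer.
GRANT SELECT ON hr.employees TO bq_transfer;
```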
You must also have the following Oracle database information when creating an Oracle transfer.
| Parameter Name | Description |
|---|---|
| database | Name of the database. |
| host | Hostname or IP address of the database. |
| port | Port number of the database. |
| username | Username to access the database. |
| password | Password to access the database. |
BigQuery prerequisites
- Verify that you have completed all actions required to enable the BigQuery Data Transfer Service.
- Create a BigQuery dataset to store your data.
- If you intend to set up transfer run notifications for Pub/Sub, verify that you have the pubsub.topics.setIamPolicy Identity and Access Management (IAM) permission. Pub/Sub permissions are not required if you only set up email notifications. For more information, see BigQuery Data Transfer Service run notifications.
Required BigQuery roles
To get the permissions that you need to create a BigQuery Data Transfer Service data transfer, ask your administrator to grant you the BigQuery Admin (roles/bigquery.admin) IAM role on your project. For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the permissions required to create a BigQuery Data Transfer Service data transfer. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to create a BigQuery Data Transfer Service data transfer:
- BigQuery Data Transfer Service permissions:
  - bigquery.transfers.update
  - bigquery.transfers.get
- BigQuery permissions:
  - bigquery.datasets.get
  - bigquery.datasets.getIamPolicy
  - bigquery.datasets.update
  - bigquery.datasets.setIamPolicy
  - bigquery.jobs.create
You might also be able to get these permissions with custom roles or other predefined roles.

For more information, see Grant bigquery.admin access.
Load Oracle data into BigQuery
Add Oracle data into BigQuery by setting up a transfer configuration using one of the following options:
Console
Go to the Data transfers page in the Google Cloud console.
Click Create transfer.

In the Source type section, for Source, select Oracle.

In the Data source details section, do the following:
- For Network attachment, select an existing network attachment or click Create Network Attachment.
- For Host, enter the hostname or IP of the database.
- For Port, enter the port number that the Oracle database is using for incoming connections, such as 1521.
- For Database name, enter the name of the Oracle database.
- For Connection type, enter the connection URL type, either SERVICE, SID, or TNS.
- For Username, enter the username of the user initiating the Oracle database connection.
- For Password, enter the password of the user initiating the Oracle database connection.
- For TLS Mode, select an option from the drop-down menu. For more information about TLS modes, see TLS configuration.
- For Trusted PEM Certificate, enter the public certificate of the certificate authority (CA) that issued the TLS certificate of the database server. For more information, see Trusted Server Certificate (PEM).
- For Ingestion type, select Full or Incremental.
- If you select Incremental (Preview), for Write mode, select either Append or Upsert. For more information about the different write modes, see Full or incremental transfers.
- For Oracle objects to transfer, click Browse:
  - Select any objects to be transferred to the BigQuery destination dataset. You can also manually enter any objects to include in the data transfer in this field.
  - If you have selected Append as your incremental write mode, you must select a column as the watermark column.
  - If you have selected Upsert as your incremental write mode, you must select a column as the watermark column, and then select one or more columns as the primary key.
In the Destination settings section, for Dataset, select the dataset that you created to store your data.

In the Transfer config name section, for Display name, enter a name for the data transfer.

In the Schedule options section:
- In the Repeat frequency list, select an option to specify how often this data transfer runs. To specify a custom repeat frequency, select Custom. If you select On-demand, then this transfer runs when you manually trigger the transfer.
- If applicable, select either Start now or Start at set time, and provide a start date and run time.

Optional: In the Notification options section, do the following:
- To enable email notifications, click the Email notification toggle. When you enable this option, the transfer administrator receives an email notification when a transfer run fails.
- To enable Pub/Sub transfer run notifications for this transfer, click the Pub/Sub notifications toggle. You can select your topic name, or you can click Create a topic to create one.

Click Save.
bq
Enter the bq mk command and supply the transfer creation flag --transfer_config:

bq mk --transfer_config --project_id=PROJECT_ID --data_source=DATA_SOURCE --display_name=DISPLAY_NAME --target_dataset=DATASET --params='PARAMETERS'
Where:
- PROJECT_ID (optional): your Google Cloud project ID. If --project_id isn't supplied to specify a particular project, the default project is used.
- DATA_SOURCE: the data source — oracle.
- DISPLAY_NAME: the display name for the transfer configuration. The data transfer name can be any value that lets you identify the transfer if you need to modify it later.
- DATASET: the target dataset for the transfer configuration.
- PARAMETERS: the parameters for the created transfer configuration in JSON format. For example: --params='{"param":"param_value"}'. The following are the parameters for an Oracle data transfer:
  - connector.networkAttachment (optional): name of the network attachment to connect to the Oracle database.
  - connector.authentication.Username: username of the Oracle account.
  - connector.authentication.Password: password of the Oracle account.
  - connector.database: name of the Oracle database.
  - connector.endpoint.host: the hostname or IP of the database.
  - connector.endpoint.port: the port number that the Oracle database is using for incoming connections, such as 1520.
  - connector.connectionType: the connection URL type, either SERVICE, SID, or TNS.
  - connector.tls.mode: specify a TLS configuration to use with this transfer:
    - ENCRYPT_VERIFY_CA_AND_HOST to encrypt data, and verify CA and hostname
    - ENCRYPT_VERIFY_CA to encrypt data, and verify CA only
    - ENCRYPT_VERIFY_NONE for data encryption only
    - DISABLE for no encryption or verification
  - connector.tls.trustedServerCertificate (optional): provide one or more PEM-encoded certificates. Required only if connector.tls.mode is ENCRYPT_VERIFY_CA_AND_HOST or ENCRYPT_VERIFY_CA.
  - ingestionType: specify either FULL or INCREMENTAL. Incremental transfers are supported in preview. For more information, see Full or incremental transfers.
  - writeMode: specify either WRITE_MODE_APPEND or WRITE_MODE_UPSERT.
  - watermarkColumns: specify columns in your table as watermark columns. This field is required for incremental transfers.
  - primaryKeys: specify columns in your table as primary keys. This field is required for incremental transfers.
  - assets: the path to the Oracle objects to be transferred to BigQuery, using the format: DATABASE_NAME/SCHEMA_NAME/TABLE_NAME
When specifying multiple assets during an incremental transfer, the values of the watermarkColumns and primaryKeys fields correspond to the position of values in the assets field. In the following example, dep_id corresponds to the table DB1/USER1/DEPARTMENT, while report_by and report_title correspond to the table DB1/USER1/EMPLOYEES.

"primaryKeys": [["dep_id"], ["report_by", "report_title"]],
"assets": ["DB1/USER1/DEPARTMENT", "DB1/USER1/EMPLOYEES"],
For example, the following command creates an Oracle data transfer in the default project with all the required parameters:

bq mk --transfer_config --target_dataset=mydataset --data_source=oracle --display_name='My Transfer' --params='{"assets":["DB1/USER1/DEPARTMENT","DB1/USER1/EMPLOYEES"], "connector.authentication.username":"User1", "connector.authentication.password":"ABC12345", "connector.database":"DB1", "connector.endpoint.host":"192.168.0.1", "connector.endpoint.port":1520, "connector.connectionType":"SERVICE", "connector.tls.mode":"ENCRYPT_VERIFY_CA_AND_HOST", "connector.tls.trustedServerCertificate":"PEM-encoded certificate", "connector.networkAttachment":"projects/dev-project1/regions/us-central1/networkattachments/na1", "ingestionType":"INCREMENTAL", "writeMode":"WRITE_MODE_APPEND", "watermarkColumns":["createdAt","createdAt"], "primaryKeys":[["dep_id"],["report_by","report_title"]]}'
API
Use the projects.locations.transferConfigs.create method and supply an instance of the TransferConfig resource.
To manually run a data transfer outside of your regular schedule, you can start a backfill run.
Data type mapping
The following table maps Oracle data types to the corresponding BigQuery data types.
| Oracle data type | BigQuery data type |
|---|---|
| BFILE | BYTES |
| BINARY_DOUBLE | FLOAT |
| BINARY_FLOAT | FLOAT |
| BLOB | BYTES |
| CHAR | STRING |
| CLOB | STRING |
| DATE | DATETIME |
| FLOAT | FLOAT |
| INTERVAL DAY TO SECOND | STRING |
| INTERVAL YEAR TO MONTH | STRING |
| LONG | STRING |
| LONG RAW | BYTES |
| NCHAR | STRING |
| NCLOB | STRING |
| NUMBER (without precision and scale) | STRING |
| NUMBER (with precision and scale within the BigQuery NUMERIC range) | NUMERIC |
| NUMBER (with precision and scale within the BigQuery BIGNUMERIC range) | BIGNUMERIC |
| NUMBER (with precision and scale greater than the BigQuery BIGNUMERIC range) | STRING |
| NVARCHAR2 | STRING |
| RAW | BYTES |
| ROWID | STRING |
| TIMESTAMP | DATETIME |
| TIMESTAMP WITH LOCAL TIME ZONE | DATETIME |
| TIMESTAMP WITH TIME ZONE | TIMESTAMP |
| UROWID | STRING |
| VARCHAR | STRING |
| VARCHAR2 | STRING |
Troubleshoot transfer setup
If you are having issues setting up your data transfer, see Oracle transfer issues.
Pricing
For pricing information about Oracle transfers, see Data Transfer Service pricing.
What's next
- For an overview of the BigQuery Data Transfer Service, see Introduction to BigQuery Data Transfer Service.
- For information on using transfers, including getting information about a transfer configuration, listing transfer configurations, and viewing a transfer's run history, see Working with transfers.
- Learn how to load data with cross-cloud operations.