SQLAlchemy dialect for BigQuery
To use this library, you first need to go through the following steps (a quick sanity check is sketched after the list):
- Select or create a Cloud Platform project.
- [Optional] Enable billing for your project.
- Enable the BigQuery Storage API.
- Set up Authentication.
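Once those steps are complete, a minimal sanity check (the project ID below is a placeholder) is to run a trivial query with the BigQuery client directly:

```python
# A minimal sketch: if project setup, API enablement, and authentication
# succeeded, this prints 1.
from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")
rows = client.query("SELECT 1").result()
print(list(rows)[0][0])
```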
Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.
With virtualenv, it's possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.
Supported Python versions: Python >= 3.8
Unsupported Python versions: Python <= 3.7
Mac/Linux:

```
pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install sqlalchemy-bigquery
```
Windows:

```
pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install sqlalchemy-bigquery
```
When handling large datasets, you may see speed increases by also installing the bqstorage dependencies. See the instructions above about creating a virtual environment, and then install sqlalchemy-bigquery using the bqstorage extras:
```
source <your-env>/bin/activate
<your-env>/bin/pip install sqlalchemy-bigquery[bqstorage]
```
A basic usage example:

```python
from sqlalchemy import *
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import *

engine = create_engine('bigquery://project')
table = Table('dataset.table', MetaData(bind=engine), autoload=True)
print(select([func.count('*')], from_obj=table).scalar())
```
project in bigquery://project is used to instantiate the BigQuery client with the specific project ID. To infer the project from the environment, use bigquery:// (without a project).
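For example, assuming application default credentials and a default project are already configured in the environment, the project ID can be omitted entirely:

```python
from sqlalchemy import create_engine

# Project inferred from the environment (e.g. application default credentials);
# no explicit project ID in the connection string.
engine = create_engine('bigquery://')
```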
Follow the Google Cloud library guide for authentication.
Alternatively, you can choose either of the following approaches:
- provide the path to a service account JSON file in create_engine() using the credentials_path parameter:
```python
# provide the path to a service account JSON file
engine = create_engine('bigquery://', credentials_path='/path/to/keyfile.json')
```
- pass the credentials in create_engine() as a Python dictionary using the credentials_info parameter:
```python
# provide credentials as a Python dictionary
credentials_info = {
    "type": "service_account",
    "project_id": "your-service-account-project-id",
}
engine = create_engine('bigquery://', credentials_info=credentials_info)
```
To specify the location of your datasets, pass location to create_engine():
```python
engine = create_engine('bigquery://project', location="asia-northeast1")
```
To query tables from non-default projects or datasets, use the following format for the SQLAlchemy schema name: [project.]dataset, e.g.:
```python
# If neither dataset nor project are the default
sample_table_1 = Table('natality', schema='bigquery-public-data.samples')

# If just dataset is not the default
sample_table_2 = Table('natality', schema='bigquery-public-data')
```
By default, arraysize is set to 5000. arraysize is used to set the batch size for fetching results. To change it, pass arraysize to create_engine():
```python
engine = create_engine('bigquery://project', arraysize=1000)
```
By default, list_tables_page_size is set to 1000. list_tables_page_size is used to set the max_results for the dataset.list_tables operation. To change it, pass list_tables_page_size to create_engine():
```python
engine = create_engine('bigquery://project', list_tables_page_size=100)
```
If you want to have the Client use a default dataset, specify it as the "database" portion of the connection string.
```python
engine = create_engine('bigquery://project/dataset')
```
When using a default dataset, don't include the dataset name in the table name, e.g.:
```python
table = Table('table_name')
```
Note that specifying a default dataset doesn't restrict execution of queries to that particular dataset when using raw queries, e.g.:
```python
# Set default dataset to dataset_a
engine = create_engine('bigquery://project/dataset_a')

# This will still execute and return rows from dataset_b
engine.execute('SELECT * FROM dataset_b.table').fetchall()
```
There are many situations where you can't call create_engine directly, such as when using tools like Flask-SQLAlchemy. For situations like these, or for situations where you want the Client to have a default_query_job_config, you can pass many arguments in the query portion of the connection string.
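As a sketch of that situation (assuming Flask and Flask-SQLAlchemy are installed; the project, dataset, and key path are placeholders), the connection string with its query parameters is handed to the extension, which calls create_engine() internally:

```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# The query parameters configure the BigQuery client and QueryJobConfig,
# since we never call create_engine() ourselves here.
app.config["SQLALCHEMY_DATABASE_URI"] = (
    "bigquery://some-project/some-dataset"
    "?credentials_path=/some/path/to.json"
    "&location=some-location"
)
db = SQLAlchemy(app)
```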
The credentials_path, credentials_info, credentials_base64, location, arraysize and list_tables_page_size parameters are used by this library, and the rest are used to create a QueryJobConfig.
Note that if you want to use query strings, it will be more reliable if you use three slashes, so 'bigquery:///?a=b' will work reliably, but 'bigquery://?a=b' might be interpreted as having a "database" of ?a=b, depending on the system being used to parse the connection string.
Here are examples of all the supported arguments. Any not present are either for legacy SQL (which isn't supported by this library), or are too complex and are not implemented.
```python
engine = create_engine(
    'bigquery://some-project/some-dataset' '?'
    'credentials_path=/some/path/to.json' '&'
    'location=some-location' '&'
    'arraysize=1000' '&'
    'list_tables_page_size=100' '&'
    'clustering_fields=a,b,c' '&'
    'create_disposition=CREATE_IF_NEEDED' '&'
    'destination=different-project.different-dataset.table' '&'
    'destination_encryption_configuration=some-configuration' '&'
    'dry_run=true' '&'
    'labels=a:b,c:d' '&'
    'maximum_bytes_billed=1000' '&'
    'priority=INTERACTIVE' '&'
    'schema_update_options=ALLOW_FIELD_ADDITION,ALLOW_FIELD_RELAXATION' '&'
    'use_query_cache=true' '&'
    'write_disposition=WRITE_APPEND'
)
```
In cases where you wish to include the full credentials in the connection URI, you can base64-encode the credentials JSON file and supply the encoded string to the credentials_base64 parameter.
```python
engine = create_engine(
    'bigquery://some-project/some-dataset' '?'
    'credentials_base64=eyJrZXkiOiJ2YWx1ZSJ9Cg==' '&'
    'location=some-location' '&'
    'arraysize=1000' '&'
    'list_tables_page_size=100' '&'
    'clustering_fields=a,b,c' '&'
    'create_disposition=CREATE_IF_NEEDED' '&'
    'destination=different-project.different-dataset.table' '&'
    'destination_encryption_configuration=some-configuration' '&'
    'dry_run=true' '&'
    'labels=a:b,c:d' '&'
    'maximum_bytes_billed=1000' '&'
    'priority=INTERACTIVE' '&'
    'schema_update_options=ALLOW_FIELD_ADDITION,ALLOW_FIELD_RELAXATION' '&'
    'use_query_cache=true' '&'
    'write_disposition=WRITE_APPEND'
)
```
To create the base64 encoded string, you can use the command line tool base64, or openssl base64, or python -m base64.
Alternatively, you can use an online generator like www.base64encode.org to paste your credentials JSON file to be encoded.
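A minimal sketch of doing the same encoding in Python (the key-file path, project, and dataset are placeholders):

```python
import base64

from sqlalchemy import create_engine

# Read the service-account key file and base64-encode it for the
# credentials_base64 connection-string parameter.
with open("/path/to/keyfile.json", "rb") as f:
    encoded_credentials = base64.b64encode(f.read()).decode("utf-8")

engine = create_engine(
    "bigquery://some-project/some-dataset"
    f"?credentials_base64={encoded_credentials}"
)
```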
The above connection string parameters allow you to influence how the BigQuery client used to execute your queries will be instantiated. If you need additional control, you can supply a BigQuery client of your own:
```python
from google.cloud import bigquery

custom_bq_client = bigquery.Client(...)

engine = create_engine(
    'bigquery://some-project/some-dataset?user_supplied_client=True',
    connect_args={'client': custom_bq_client},
)
```
To add metadata to a table:
```python
table = Table(
    'mytable',
    ...,
    bigquery_description='my table description',
    bigquery_friendly_name='my table friendly name',
    bigquery_default_rounding_mode="ROUND_HALF_EVEN",
    bigquery_expiration_timestamp=datetime.datetime.fromisoformat("2038-01-01T00:00:00+00:00"),
)
```
To add metadata to a column:
```python
Column('mycolumn', doc='my column description')
```
To create a clustered table:
```python
table = Table('mytable', ..., bigquery_clustering_fields=["a", "b", "c"])
```
To create a time-unit column-partitioned table:
```python
from google.cloud import bigquery

table = Table(
    'mytable',
    ...,
    bigquery_time_partitioning=bigquery.TimePartitioning(
        field="mytimestamp",
        type_="MONTH",
        expiration_ms=1000 * 60 * 60 * 24 * 30 * 6,  # 6 months
    ),
    bigquery_require_partition_filter=True,
)
```
To create an ingestion-time partitioned table:
```python
from google.cloud import bigquery

table = Table(
    'mytable',
    ...,
    bigquery_time_partitioning=bigquery.TimePartitioning(),
    bigquery_require_partition_filter=True,
)
```
To create an integer-range partitioned table:
```python
from google.cloud import bigquery

table = Table(
    'mytable',
    ...,
    bigquery_range_partitioning=bigquery.RangePartitioning(
        field="zipcode",
        range_=bigquery.PartitionRange(start=0, end=100000, interval=10),
    ),
    bigquery_require_partition_filter=True,
)
```
Because this client uses the grpc library, it's safe to share instances across threads.
In multiprocessing scenarios, the best practice is to create client instances after the invocation of os.fork by multiprocessing.pool.Pool or multiprocessing.Process.
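A minimal sketch of that guidance (the connection string and table names are placeholders): build the engine, and therefore the underlying BigQuery client, inside each worker process rather than in the parent.

```python
import multiprocessing

from sqlalchemy import create_engine, text

def count_rows(table_name):
    # The engine (and its BigQuery client) is created after the fork,
    # inside the worker process.
    engine = create_engine("bigquery://project")
    with engine.connect() as conn:
        return conn.execute(text(f"SELECT COUNT(*) FROM {table_name}")).scalar()

if __name__ == "__main__":
    with multiprocessing.Pool(processes=2) as pool:
        print(pool.map(count_rows, ["dataset.table_a", "dataset.table_b"]))
```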