SQLAlchemy and Apache Superset extensions for QuestDB
questdb/questdb-connect
This repository contains the official implementation of QuestDB's dialect for SQLAlchemy, as well as an engine specification for Apache Superset, using psycopg2 for database connectivity.
The Python module is available on PyPI as `questdb-connect`.
Psycopg2 is a widely used and trusted Python module for connecting to, and working with, QuestDB and other PostgreSQL databases.
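For illustration, here is a minimal sketch of connecting with psycopg2 directly, assuming QuestDB's default credentials (`admin`/`quest`) and its default PostgreSQL wire port 8812 — questdb-connect performs the equivalent under the hood:

```python
# QuestDB speaks the PostgreSQL wire protocol on port 8812 by default,
# so psycopg2 can connect to it like any other PostgreSQL server.
# The credentials below are QuestDB's defaults; adjust for your deployment.
QUESTDB_DSN = {
    "host": "localhost",
    "port": 8812,
    "user": "admin",
    "password": "quest",
    "dbname": "main",
}

def connect_questdb():
    import psycopg2  # provided by the psycopg2-binary package
    return psycopg2.connect(**QUESTDB_DSN)
```

Calling `connect_questdb()` returns a standard DB-API connection; questdb-connect layers SQLAlchemy on top of this same protocol.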
SQLAlchemy is a SQL toolkit and ORM library for Python. It provides a high-level API for communicating with relational databases, including schema creation and modification. The ORM layer abstracts away the complexities of the database, allowing developers to work with Python objects instead of raw SQL statements.
Apache Superset is an open-source business intelligence web application that enables users to visualize and explore data through customizable dashboards and reports. It provides a rich set of data visualizations, including charts, tables, and maps.
- Python from 3.9 to 3.11 (Superset itself uses version 3.9.x)
- Psycopg2 (`psycopg2-binary~=2.9.6`)
- SQLAlchemy (`SQLAlchemy>=1.4`)
You need to install these packages because questdb-connect depends on them. Note that questdb-connect v1.1 is compatible with both SQLAlchemy v1.4 and v2.0, while questdb-connect v1.0 is compatible with SQLAlchemy v1.4 only.
These are versions released for testing purposes.
You can install this package using pip:
```shell
pip install questdb-connect
```
Use the QuestDB dialect by specifying it in your SQLAlchemy connection string:
```
questdb://admin:quest@localhost:8812/main
questdb://admin:quest@host.docker.internal:8812/main
```
From that point on use standard SQLAlchemy. Example with raw SQL API:
```python
import datetime
import time
import uuid

from sqlalchemy import create_engine, text


def main():
    engine = create_engine('questdb://admin:quest@localhost:8812/main')
    with engine.begin() as connection:
        # Create the table
        connection.execute(text("""
            CREATE TABLE IF NOT EXISTS signal (
                source SYMBOL,
                value DOUBLE,
                ts TIMESTAMP,
                uuid UUID
            ) TIMESTAMP(ts) PARTITION BY HOUR WAL;
        """))

        # Insert 2 rows
        connection.execute(text("""
            INSERT INTO signal (source, value, ts, uuid)
            VALUES (:source1, :value1, :ts1, :uuid1),
                   (:source2, :value2, :ts2, :uuid2)
        """), {
            'source1': 'coconut',
            'value1': 16.88993244,
            'ts1': datetime.datetime.utcnow(),
            'uuid1': uuid.uuid4(),
            'source2': 'banana',
            'value2': 3.14159265,
            'ts2': datetime.datetime.utcnow(),
            'uuid2': uuid.uuid4()
        })

    # WAL is applied asynchronously, so we need to wait for it
    # to be applied before querying
    time.sleep(1)

    # Start a new transaction
    with engine.begin() as connection:
        # Query the table for rows where value > 10
        result = connection.execute(
            text("SELECT source, value, ts, uuid FROM signal WHERE value > :value"),
            {'value': 10}
        )
        for row in result:
            print(row.source, row.value, row.ts, row.uuid)


if __name__ == '__main__':
    main()
```
Alternatively, you can use the ORM API:
```python
import datetime
import time
import uuid

from questdb_connect import Double, PartitionBy, QDBTableEngine, Symbol, Timestamp, UUID
from sqlalchemy import Column, MetaData, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base(metadata=MetaData())


class Signal(Base):
    # Stored in a QuestDB table 'signal'. The table has WAL enabled,
    # is partitioned by hour, and the designated timestamp is 'ts'.
    __tablename__ = 'signal'
    __table_args__ = (QDBTableEngine(None, 'ts', PartitionBy.HOUR, is_wal=True),)
    source = Column(Symbol)
    value = Column(Double)
    ts = Column(Timestamp)
    uuid = Column(UUID, primary_key=True)

    def __repr__(self):
        return f"Signal(source={self.source}, value={self.value}, ts={self.ts}, uuid={self.uuid})"


def main():
    engine = create_engine('questdb://admin:quest@localhost:8812/main')

    # Create the table
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    session = Session()

    # Insert 2 rows
    session.add(Signal(source='coconut', value=16.88993244,
                       ts=datetime.datetime.utcnow(), uuid=uuid.uuid4()))
    session.add(Signal(source='banana', value=3.14159265,
                       ts=datetime.datetime.utcnow(), uuid=uuid.uuid4()))
    session.commit()

    # WAL is applied asynchronously, so we need to wait for it
    # to be applied before querying
    time.sleep(1)

    # Query the table for rows where value > 10
    signals = session.query(Signal).filter(Signal.value > 10).all()
    for signal in signals:
        print(signal.source, signal.value, signal.ts, signal.uuid)


if __name__ == '__main__':
    main()
```
The ORM (Object-Relational Mapping) API is not recommended for QuestDB due to its fundamental design differences from traditional transactional databases. While ORMs excel at managing relationships and complex object mappings in systems like PostgreSQL or MySQL, QuestDB is specifically optimized for time-series data operations and high-performance ingestion. It intentionally omits certain SQL features that ORMs typically rely on, such as generated columns, foreign keys, and complex joins, in favor of time-series-specific optimizations.

For optimal performance and to fully leverage QuestDB's capabilities, we strongly recommend using the raw SQL API, which allows direct interaction with QuestDB's time-series-focused query engine and provides better control over time-based operations.
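One concrete illustration: QuestDB's `SAMPLE BY` extension for time-bucketed aggregation has no natural ORM expression, so such queries are issued as raw SQL. A sketch against the `signal` table from the examples above:

```python
# Aggregate 'signal' rows into hourly buckets per source. SAMPLE BY is a
# QuestDB-specific SQL extension with no ORM equivalent; execute it as raw
# SQL, e.g. via connection.execute(text(SAMPLE_QUERY)).
SAMPLE_QUERY = """
SELECT ts, source, avg(value) AS avg_value
FROM signal
SAMPLE BY 1h;
"""
```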
QuestDB differs from traditional relational databases in its handling of data uniqueness. While most databases enforce primary keys to guarantee unique record identification, QuestDB operates differently due to its time-series optimized architecture.
When using SQLAlchemy with QuestDB:
- You can define primary keys in your SQLAlchemy models, but QuestDB won't enforce uniqueness for individual columns
- Duplicate rows with identical primary key values can exist in the database
- Data integrity must be managed at the application level
- QuestDB supports deduplication during ingestion to avoid duplicate data; this can be enabled at table creation
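As an illustration of the last point, a sketch of a `CREATE TABLE` statement that enables deduplication via the `DEDUP UPSERT KEYS` clause — QuestDB requires a WAL table and the designated timestamp among the keys; table and column names follow the `signal` example above:

```python
# Rows arriving with the same (ts, source) key replace the stored row
# instead of creating a duplicate. Dedup requires a WAL table, and the
# designated timestamp column must appear in the UPSERT KEYS list.
# Execute once, e.g. via connection.execute(text(DEDUP_DDL)).
DEDUP_DDL = """
CREATE TABLE IF NOT EXISTS signal (
    source SYMBOL,
    value DOUBLE,
    ts TIMESTAMP,
    uuid UUID
) TIMESTAMP(ts) PARTITION BY HOUR WAL
DEDUP UPSERT KEYS(ts, source);
"""
```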
- Composite Keys + QuestDB Deduplication
Composite keys can be used to define uniqueness based on multiple columns. This approach:
- Can combine timestamp with any number of additional columns
- Works with QuestDB's deduplication capabilities
- Useful for scenarios where uniqueness is defined by multiple attributes
- Common combinations might include:
- timestamp + device_id + metric_type
- timestamp + location + sensor_id
- timestamp + instrument_id + exchange + side
Deduplication is often enabled in QuestDB regardless of the primary key definition, since it's required to avoid data duplication during ingestion.
Example:
```python
from questdb_connect import Double, PartitionBy, QDBTableEngine, Symbol, Timestamp


class Measurement(Base):
    __tablename__ = 'signal'
    __table_args__ = (QDBTableEngine(None, 'timestamp', PartitionBy.HOUR, is_wal=True),)
    timestamp = Column(Timestamp, primary_key=True)
    sensor_id = Column(Symbol, primary_key=True)
    location = Column(Symbol, primary_key=True)
    value = Column(Double)
```
Choose your approach based on your data model and whether you need to leverage QuestDB's deduplication capabilities.
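Deduplication can also be switched on for a table that already exists; a sketch using `ALTER TABLE ... DEDUP ENABLE` with the composite key from the `Measurement` example above:

```python
# Enable dedup on an existing WAL table so rows sharing the same
# (timestamp, sensor_id, location) key are upserted rather than duplicated.
# Execute once, e.g. via connection.execute(text(ALTER_DEDUP)).
ALTER_DEDUP = (
    "ALTER TABLE signal DEDUP ENABLE "
    "UPSERT KEYS(timestamp, sensor_id, location)"
)
```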
- UUID-based Identification
UUIDs are ideal for QuestDB applications because they:
- Are globally unique across distributed systems
- Can be generated client-side without database coordination
- Work well with high-throughput data ingestion
Example:
```python
from questdb_connect import Double, PartitionBy, QDBTableEngine, Symbol, Timestamp, UUID


class Signal(Base):
    __tablename__ = 'signal'
    __table_args__ = (QDBTableEngine(None, 'ts', PartitionBy.HOUR, is_wal=True),)
    source = Column(Symbol)
    value = Column(Double)
    ts = Column(Timestamp)
    uuid = Column(UUID, primary_key=True)
    # other columns...
```
This repository also contains an engine specification for Apache Superset, which allows you to connect to QuestDB from within the Superset interface.

Follow the official QuestDB Superset guide available on the QuestDB website to install and configure the QuestDB engine in Superset.
This package is open-source and contributions are welcome. If you find a bug or would like to request a feature, please open an issue on the GitHub repository. Have a look at the instructions for developers if you would like to submit a PR.