A portable embedded database using Arrow.
Website | Rust Doc | Blog | Community
Tonbo is an embedded, persistent database offering fast KV-like methods for conveniently writing and scanning type-safe structured data. Tonbo can be used to build data-intensive applications, including other types of databases.
Tonbo is implemented with a Log-Structured Merge Tree, constructed using Apache Arrow & Apache Parquet data blocks. Leveraging Arrow and Parquet, Tonbo supports:
- Pushdown of limit, predicate, and projection operators
- Zero-copy deserialization
- Various storage backends: OPFS, S3, etc. (to be supported in v0.2.0)
These features enhance the efficiency of queries on structured data.
Tonbo is designed to integrate seamlessly with other Arrow analytical tools, such as DataFusion. For an example, refer to this preview (official support for DataFusion will be included in v0.2.0).
Note: Tonbo is currently unstable; API and file formats may change in upcoming minor versions. Please avoid using it in production and stay tuned for updates.
```rust
use std::ops::Bound;

use futures_util::stream::StreamExt;
use tonbo::{executor::tokio::TokioExecutor, Record, Projection, DB};

/// Use macro to define schema of column family just like ORM
/// It provides type-safe read & write API
#[derive(Record, Debug)]
pub struct User {
    #[record(primary_key)]
    name: String,
    email: Option<String>,
    age: u8,
}

#[tokio::main]
async fn main() {
    // pluggable async runtime and I/O
    let db = DB::new("./db_path/users".into(), TokioExecutor::current())
        .await
        .unwrap();

    // insert with owned value
    db.insert(User {
        name: "Alice".into(),
        email: Some("alice@gmail.com".into()),
        age: 22,
    })
    .await
    .unwrap();

    {
        // tonbo supports transaction
        let txn = db.transaction().await;

        // get from primary key
        let name = "Alice".into();

        // get the zero-copy reference of record without any allocations.
        let user = txn
            .get(
                &name,
                // tonbo supports pushing down projection
                Projection::All,
            )
            .await
            .unwrap();
        assert!(user.is_some());
        assert_eq!(user.unwrap().get().age, Some(22));

        {
            let upper = "Blob".into();
            // range scan of users
            let mut scan = txn
                .scan((Bound::Included(&name), Bound::Excluded(&upper)))
                .await
                // tonbo supports pushing down projection
                .projection(&["email"])
                .take()
                .await
                .unwrap();
            while let Some(entry) = scan.next().await.transpose().unwrap() {
                assert_eq!(
                    entry.value(),
                    Some(UserRef {
                        name: "Alice",
                        email: Some("alice@gmail.com"),
                        age: Some(22),
                    })
                );
            }
        }

        // commit transaction
        txn.commit().await.unwrap();
    }
}
```
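Beyond projection, a scan can also push a row limit down to the storage layer. The sketch below extends the transaction from the example above; the `.limit(10)` call is an assumption based on the pushdown feature list rather than code taken from this README, so check the Rust docs for the exact builder method.

```rust
// A minimal sketch, assuming the `db` handle and `User` record from the example
// above are in scope, and assuming the scan builder exposes a `limit` method for
// limit pushdown (not shown above -- verify against the Rust docs).
let txn = db.transaction().await;
{
    let lower = "A".into();
    let upper = "Z".into();
    let mut scan = txn
        .scan((Bound::Included(&lower), Bound::Excluded(&upper)))
        .await
        .projection(&["email"]) // only materialize the `email` column
        .limit(10)              // assumed: stop after ten matching rows
        .take()
        .await
        .unwrap();
    while let Some(entry) = scan.next().await.transpose().unwrap() {
        // each entry yields a zero-copy record reference
        println!("{:?}", entry.value());
    }
}
txn.commit().await.unwrap();
```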
## Features

- Fully asynchronous API.
- Zero-copy rusty API ensuring safety with compile-time type and lifetime checks.
- Vendor-agnostic:
  - Various usage methods, async runtimes, and file systems:
    - Rust library:
    - Python library (via PyO3 & pydantic):
      - asyncio (via pyo3-asyncio).
    - JavaScript library:
      - WASM and OPFS.
    - Dynamic library with a C interface.
- The most lightweight implementation of Arrow / Parquet LSM trees:
  - Define schema using just an Arrow schema and store data in Parquet files.
  - (Optimistic) Transactions.
  - Leveled compaction strategy.
  - Push down filter, limit, and projection.
## Roadmap

- Various usage methods, async runtimes, and file systems:
  - Runtime schema definition.
  - SQL (via Apache DataFusion).
- Fusion storage across RAM, flash, SSD, and remote Object Storage Service (OSS) for each column family, balancing performance and cost efficiency per data block:
  - Remote storage (via Arrow object_store or Apache OpenDAL).
  - Distributed query and compaction.
- Blob storage (like BlobDB in RocksDB).
Follow the Contributing Guide to contribute. Please feel free to ask any questions or contact us on GitHub Discussions or issues.