
Originally published at umlboard.com
Moving from Electron to Tauri 2
Part 2: Local Data Storage — Implementing a database backend in Rust for a Tauri application.
TL;DR: Several file-based embedded database wrappers are available for Rust. This article examines some of them and demonstrates how to incorporate them into an application written with Tauri and Rust.
This is the second post on porting UMLBoard to Tauri. While the previous article focused on interprocess communication, this time we will examine how we could implement the local data storage subsystem with Rust.
- Porting inter-process communication to Tauri (see last post)
- Accessing a document-based local data store with Rust (this post!)
- Validate the SVG compatibility of different Webview
- Check if Rust has a library for automatic graph layouting
To achieve this, we will first look at some available embedded datastore options for Rust and then see how to integrate them into the prototype we created in our last post.
But before we start coding, let's take a quick look at the current status quo:
Basic Architecture
The current, Electron-based UMLBoard main process uses a layered architecture split into several application services. Each service accesses the database through a repository for reading and writing user diagrams. The data layer uses nedb (or better, its fork @seald-io/nedb), an embedded, single-file JSON database.
This design allows for a better separation between application code and database-specific implementations.
Porting from Typescript to Rust
To port this architecture to Rust, we must reimplement each layer individually. We will do this by defining four subtasks:
- Find a file-based database implementation in Rust.
- Implement a repository layer between the database and our services.
- Connect the repository with our business logic.
- Integrate everything into our Tauri application.
Following the strategy from our last post, we will go through each step one by one.
1. Finding a suitable file-based database implementation in Rust.
There are numerous Rust wrappers for both SQL and NoSQL databases. Let's look at some of them.
(Please note this list is by no means complete, so if you think I missed an important one, let me know, and I can add it here.)
1. unqlite: A Rust wrapper for the UnQLite database engine. It looks quite powerful but is not actively developed anymore -- the last commit was a few years ago.
2. PoloDB: A lightweight embedded JSON database. While it's in active development, at the time of writing this (Spring 2023), it doesn't yet support asynchronous data access -- not a huge drawback, given that we're only accessing local files, but let's see what else we have.
3. Diesel: The de-facto SQL ORM and query builder for Rust. It supports several SQL databases, including SQLite -- unfortunately, the SQLite driver does not yet support asynchronous operations[^1].
4. SeaORM: Another SQL ORM for Rust with support for SQLite and asynchronous data access, making it a good fit for our needs. Yet, while trying to implement a prototype repository, I realized that defining a generic repository for SeaORM can become quite complex due to the number of required type arguments and constraints.
5. BonsaiDB: A document-based database, currently in alpha but actively developed. It supports local data storage and asynchronous access and also provides the possibility to implement more complex queries through Views.
6. SurrealDB: A wrapper for the SurrealDB database engine. It supports local data storage via RocksDB and asynchronous operations.
Among these options, BonsaiDB and SurrealDB look most promising: they support asynchronous data access, don't require a separate process, and have a relatively easy-to-use API. So, let's try to integrate both of them into our application.
2. Implementing a Repository Layer in Rust
Since we want to test two different database engines, the repository pattern looks like a good option, as it allows us to decouple the database from the application logic. That way, we can easily switch the underlying database system.
Defining our repository's behavior in Rust is best achieved with a trait. For our proof of concept, some default CRUD operations will be sufficient:
```rust
// Trait describing the common behavior of
// a repository. TEntity is the type of
// domain entity handled by this repository.
#[async_trait]
pub trait Repository<TEntity> {
    async fn query_all(&self) -> Vec<TEntity>;
    async fn query_by_id(&self, id: &str) -> Option<TEntity>;
    async fn insert(&self, data: TEntity, id: &str) -> TEntity;
    async fn edit(&self, id: &str, data: TEntity) -> TEntity;
}
```
Our repository is generic over its entity type, allowing us to reuse our implementation for different entities. We need one implementation of this trait for every database engine we want to support.
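To illustrate how such a trait decouples the storage backend from the rest of the application, here is a minimal, synchronous in-memory sketch. The InMemoryRepository type and the simplified trait below are hypothetical stand-ins for illustration only, not part of the actual code base:

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// Simplified, synchronous variant of the repository trait
// (the real one is async and has more methods).
pub trait Repository<TEntity> {
    fn query_by_id(&self, id: &str) -> Option<TEntity>;
    fn insert(&self, data: TEntity, id: &str) -> TEntity;
}

// Hypothetical in-memory backend: a HashMap guarded by a RefCell.
// Any other storage backend could implement the same trait.
pub struct InMemoryRepository<TEntity> {
    store: RefCell<HashMap<String, TEntity>>,
}

impl<TEntity: Clone> InMemoryRepository<TEntity> {
    pub fn new() -> Self {
        Self { store: RefCell::new(HashMap::new()) }
    }
}

impl<TEntity: Clone> Repository<TEntity> for InMemoryRepository<TEntity> {
    fn query_by_id(&self, id: &str) -> Option<TEntity> {
        self.store.borrow().get(id).cloned()
    }

    fn insert(&self, data: TEntity, id: &str) -> TEntity {
        self.store.borrow_mut().insert(id.to_string(), data.clone());
        data
    }
}

fn main() {
    // the same generic repository works for any entity type
    let repo: InMemoryRepository<String> = InMemoryRepository::new();
    repo.insert("ClassA".to_string(), "1");
    assert_eq!(repo.query_by_id("1"), Some("ClassA".to_string()));
    assert_eq!(repo.query_by_id("2"), None);
}
```

Consumers of the trait never see which backend is behind it, which is exactly what lets us swap BonsaiDB for SurrealDB later.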
Let's see what the implementation will look like for BonsaiDB and SurrealDB.
BonsaiDB
First, we declare a struct BonsaiRepository that holds a reference to the BonsaiDB AsyncDatabase object we need to interact with our DB.
```rust
pub struct BonsaiRepository<'a, TData> {
    // gives access to a BonsaiDB database
    db: &'a AsyncDatabase,
    // required as generic type is not (yet) used in the struct
    phantom: PhantomData<TData>
}
```
Our struct has a generic argument, so we can already specify the entity type upon instance creation. However, since the compiler complains that the type parameter is not used in the struct, we must define a phantom field to suppress this error.
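The zero-sized marker is easiest to see in a stripped-down example; TypedHandle and Classifier here are made-up illustration types, not the repository struct itself:

```rust
use std::marker::PhantomData;

// Made-up illustration types: without the PhantomData field, the
// compiler rejects the unused type parameter with error E0392.
struct Classifier;

struct TypedHandle<T> {
    raw_id: u32,
    // zero-sized marker that "uses" T without storing a value of it
    phantom: PhantomData<T>,
}

fn main() {
    let handle: TypedHandle<Classifier> = TypedHandle { raw_id: 7, phantom: PhantomData };
    assert_eq!(handle.raw_id, 7);
    // PhantomData has no runtime cost: the struct is exactly as big as u32
    assert_eq!(
        std::mem::size_of::<TypedHandle<Classifier>>(),
        std::mem::size_of::<u32>()
    );
}
```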
But most importantly, we need to implement the Repository trait for our struct:
```rust
// Repository implementation for BonsaiDB database
#[async_trait]
impl<'a, TData> Repository<TData> for BonsaiRepository<'a, TData>
// bounds are necessary to comply with BonsaiDB API
where TData: SerializedCollection<Contents = TData> +
    Collection<PrimaryKey = String> + 'static + Unpin {

    async fn query_all(&self) -> Vec<TData> {
        let docs = TData::all_async(self.db).await.unwrap();
        let entities: Vec<_> = docs.into_iter().map(|f| f.contents).collect();
        entities
    }

    // note that id is not required here, as already part of data
    async fn insert(&self, data: TData, id: &str) -> TData {
        let new_doc = data.push_into_async(self.db).await.unwrap();
        new_doc.contents
    }

    async fn edit(&self, id: &str, data: TData) -> TData {
        let doc = TData::overwrite_async(id, data, self.db).await.unwrap();
        doc.contents
    }

    async fn query_by_id(&self, id: &str) -> Option<TData> {
        let doc = TData::get_async(id, self.db).await.unwrap().unwrap();
        Some(doc.contents)
    }
}
```
Rust doesn't yet support asynchronous trait functions, so we must use the async_trait crate here. Our generic type also needs constraints to signal Rust that we are working with BonsaiDB entities. These entities consist of a header (which contains metadata like the id) and the contents (holding the domain data). We're going to handle the ids ourselves, so we only need the contents object.
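The header/contents split can be sketched with plain structs; Header and Document below are simplified stand-ins for BonsaiDB's real document types:

```rust
// Simplified stand-ins for BonsaiDB's document types: a document
// couples the domain contents with a metadata header.
struct Header {
    id: String,
}

struct Document<T> {
    header: Header,
    contents: T,
}

fn main() {
    let docs = vec![
        Document { header: Header { id: "1".to_string() }, contents: "ClassA".to_string() },
        Document { header: Header { id: "2".to_string() }, contents: "ClassB".to_string() },
    ];
    // same shape as query_all: drop the headers, keep the domain data
    let entities: Vec<String> = docs.into_iter().map(|d| d.contents).collect();
    assert_eq!(entities, vec!["ClassA".to_string(), "ClassB".to_string()]);
}
```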
Please also note that I skipped error handling for brevity throughout the prototype.
SurrealDB
Our SurrealDB implementation works similarly, but this time we must also provide the database table name, as SurrealDB requires it as part of the primary key.
```rust
pub struct SurrealRepository<'a, TData> {
    // reference to SurrealDB's Database object
    db: &'a Surreal<Db>,
    // required as generic type not used
    phantom: PhantomData<TData>,
    // this is needed by SurrealDB API to identify objects
    table_name: &'static str
}
```
Again, our trait implementation mainly wraps the underlying database API:
```rust
// Repository implementation for SurrealDB database
#[async_trait]
impl<'a, TData> Repository<TData> for SurrealRepository<'a, TData>
where TData: Sync + Send + DeserializeOwned + Serialize {

    async fn query_all(&self) -> Vec<TData> {
        let entities: Vec<TData> = self.db.select(self.table_name).await.unwrap();
        entities
    }

    // here we need the id, although it's already stored in the data
    async fn insert(&self, data: TData, id: &str) -> TData {
        let created = self.db.create((self.table_name, id))
            .content(data).await.unwrap();
        created
    }

    async fn edit(&self, id: &str, data: TData) -> TData {
        let updated = self.db.update((self.table_name, id))
            .content(data).await.unwrap();
        updated
    }

    async fn query_by_id(&self, id: &str) -> Option<TData> {
        let entity = self.db.select((self.table_name, id)).await.unwrap();
        entity
    }
}
```
To complete our repository implementation, we need one last thing: an entity we can store in the database. For this, we use a simplified version of a widespread UML domain type, a Classifier.
This is a general type in UML used to describe concepts like a Class, Interface, or Datatype. Our Classifier struct contains some typical domain properties, but also an _id field which serves as the primary key.
```rust
#[derive(Debug, Serialize, Deserialize, Default, Collection)]
#[collection(
    // custom key definition for BonsaiDB
    name = "classifiers",
    primary_key = String,
    natural_id = |classifier: &Classifier| Some(classifier._id.clone())
)]
pub struct Classifier {
    pub _id: String,
    pub name: String,
    pub position: Point,
    pub is_interface: bool,
    pub custom_dimension: Option<Dimension>
}
```
To tell BonsaiDB that we manage entity ids through the _id field, we need to decorate our type with an additional macro.
While having our own ids requires a bit more work on our side, it helps us to abstract database-specific implementations away and makes adding new database engines easier.
3. Connect the repository with our business logic
To connect the database backend with our application logic, we inject the Repository trait into the ClassifierService and narrow the type argument to Classifier. The actual implementation type of the repository (and thus its size) is not known at compile time, so we have to use the dyn keyword in the declaration.
```rust
// classifier service holding a typed repository
pub struct ClassifierService {
    // constraints required by Tauri to support multi threading
    repository: Box<dyn Repository<Classifier> + Send + Sync>
}

impl ClassifierService {
    pub fn new(repository: Box<dyn Repository<Classifier> + Send + Sync>) -> Self {
        Self { repository }
    }
}
```
Since this is only for demonstration purposes, our service will delegate most of the work to the repository without any sophisticated business logic. For managing our entities' primary keys, we rely on the uuid crate to generate unique ids.
The following snippet contains only an excerpt; for the complete implementation, please see the GitHub repository.
```rust
impl ClassifierService {
    pub async fn create_new_classifier(&self, new_name: &str) -> Classifier {
        // we have to manage the ids on our own, so create a new one here
        let id = Uuid::new_v4().to_string();
        let new_classifier = self.repository.insert(Classifier {
            _id: id.to_string(),
            name: new_name.to_string(),
            is_interface: false,
            ..Default::default()
        }, &id).await;
        new_classifier
    }

    pub async fn update_classifier_name(&self, id: &str, new_name: &str) -> Classifier {
        let mut classifier = self.repository.query_by_id(id).await.unwrap();
        classifier.name = new_name.to_string();
        // we need to copy the id because "edit" owns the containing struct
        let id = classifier._id.clone();
        let updated = self.repository.edit(&id, classifier).await;
        updated
    }
}
```
4. Integrate everything into our Tauri application
Let’s move on to the last part, where we will assemble everything together to get our application up and running.
For this prototype, we will focus only on two simple use cases:
(1) On application startup, the main process sends all available classifiers to the webview (a new classifier will automatically be created if none exists), and
(2) editing the classifier's name via the edit field will update the entity in the database.
See the following diagram for the complete workflow:
The second use case may sound familiar: We already implemented it partially in our last post, but without persisting the changes to a database backend.
Last time, our service implemented an ActionHandler trait to handle incoming actions from the webview. While this approach worked, it was limited to only a single type of action, the ClassifierActions.
This time, we have more than one action type: ApplicationActions that control the overall program flow and ClassifierActions for behavior specific to classifier entities.
To handle both types uniformly, we split our trait into a non-generic ActionDispatcher responsible for routing actions to their corresponding handler, and an ActionHandler with the actual domain logic.
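Stripped of async and serde, the idea behind this split looks roughly like the following sketch; all types here are simplified stand-ins, with plain strings in place of the serde_json::Value payloads:

```rust
// Hypothetical, synchronous sketch of the dispatcher/handler split.
trait ActionDispatcher {
    fn dispatch(&self, domain: &str, action: &str) -> String;
}

struct ClassifierService;

// the "handlers": one method per action domain
impl ClassifierService {
    fn handle_classifier(&self, action: &str) -> String {
        format!("classifier handled: {action}")
    }

    fn handle_application(&self, action: &str) -> String {
        format!("application handled: {action}")
    }
}

// the dispatcher only routes by domain and contains no domain logic
impl ActionDispatcher for ClassifierService {
    fn dispatch(&self, domain: &str, action: &str) -> String {
        match domain {
            "classifier" => self.handle_classifier(action),
            "application" => self.handle_application(action),
            _ => "unknown domain".to_string(),
        }
    }
}

fn main() {
    let service = ClassifierService;
    assert_eq!(service.dispatch("classifier", "rename"), "classifier handled: rename");
    assert_eq!(service.dispatch("application", "ready"), "application handled: ready");
    assert_eq!(service.dispatch("other", "x"), "unknown domain");
}
```

One service can thus serve several domains while callers only ever talk to the dispatcher interface.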
Our service must implement both traits, first the dispatcher:
```rust
// Dispatcher logic to choose the correct handler
// depending on the action's domain
#[async_trait]
impl ActionDispatcher for ClassifierService {
    async fn dispatch_action(&self, domain: String, action: Value) -> Value {
        if domain == CLASSIFIER_DOMAIN {
            ActionHandler::<ClassifierAction>::convert_and_handle(self, action).await
        } else if domain == APPLICATION_DOMAIN {
            ActionHandler::<ApplicationAction>::convert_and_handle(self, action).await
        } else {
            // this should normally not happen, either
            // throw an error or change return type to Option
            todo!();
        }
    }
}
```
and then the ActionHandler for every type of action we want to support:
```rust
// handling of classifier related actions
#[async_trait]
impl ActionHandler<ClassifierAction> for ClassifierService {
    async fn handle_action(&self, action: ClassifierAction) -> ClassifierAction {
        let response = match action {
            // rename the entity and return the new entity state
            ClassifierAction::RenameClassifier(data) => {
                let classifier = self.update_classifier_name(&data.id, &data.new_name).await;
                ClassifierAction::ClassifierRenamed(EditNameDto {
                    id: classifier._id,
                    new_name: classifier.name
                })
            },
            // cancel the rename operation by returning the original name
            ClassifierAction::CancelClassifierRename { id } => {
                let classifier = self.get_by_id(&id).await;
                ClassifierAction::ClassifierRenameCanceled(EditNameDto {
                    id,
                    new_name: classifier.name
                })
            },
            // return error if we don't know how to handle the action
            _ => ClassifierAction::ClassifierRenameError
        };
        return response;
    }
}

// handling of actions related to application workflow
#[async_trait]
impl ActionHandler<ApplicationAction> for ClassifierService {
    async fn handle_action(&self, action: ApplicationAction) -> ApplicationAction {
        let response = match action {
            ApplicationAction::ApplicationReady => {
                // implementation omitted
            },
            _ => ApplicationAction::ApplicationLoadError
        };
        return response;
    }
}
```
The dispatcher's ActionHandler calling syntax seems odd, but it is needed to specify the correct trait implementation.
With the ActionDispatcher trait, we can now define our Tauri app state. It contains only a dictionary where we store one dispatcher per action domain. Consequently, our ClassifierService must be registered twice, since it can handle actions of two domains.
Since the dictionary owns its values, we use an Atomic Reference Counter (Arc) to store our service references.
```rust
// the application context our Tauri Commands will have access to
struct ApplicationContext {
    action_dispatchers: HashMap<String, Arc<dyn ActionDispatcher + Sync + Send>>
}

// initialize the application context with our action dispatchers
impl ApplicationContext {
    async fn new() -> Self {
        // create our database and repository
        // note: to use BonsaiDB instead, replace the database and repository
        // here with the corresponding implementation
        let surreal_db = Surreal::new::<File>("umlboard.db").await.unwrap();
        surreal_db.use_ns("uml_ns").use_db("uml_db").await.unwrap();
        let repository = Box::new(SurrealRepository::new(Box::new(surreal_db), "classifiers"));
        // create the classifier application service
        let service = Arc::new(ClassifierService::new(repository));
        // setup our action dispatcher map and add the service for each
        // domain it can handle
        let mut dispatchers: HashMap<String, Arc<dyn ActionDispatcher + Sync + Send>> = HashMap::new();
        dispatchers.insert(CLASSIFIER_DOMAIN.to_string(), service.clone());
        dispatchers.insert(APPLICATION_DOMAIN.to_string(), service.clone());
        Self { action_dispatchers: dispatchers }
    }
}
```
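Why Arc is needed here can be shown in isolation: a HashMap owns its values, so registering one service under two keys only works with shared ownership. Service below is a placeholder type, not the real ClassifierService:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// placeholder service type for illustration only
struct Service {
    name: String,
}

fn main() {
    let service = Arc::new(Service { name: "classifier".to_string() });

    // the map owns its values, so both entries hold an Arc clone
    let mut dispatchers: HashMap<String, Arc<Service>> = HashMap::new();
    dispatchers.insert("classifier".to_string(), service.clone());
    dispatchers.insert("application".to_string(), service.clone());

    // both keys share the same instance: 2 map entries + the local binding
    assert_eq!(Arc::strong_count(&service), 3);
    assert_eq!(dispatchers["classifier"].name, dispatchers["application"].name);
}
```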
The last part is easy again: We update our Tauri command and choose the correct dispatcher depending on the incoming action:
```rust
#[tauri::command]
async fn ipc_message(message: IpcMessage, context: State<'_, ApplicationContext>) -> Result<IpcMessage, ()> {
    let dispatcher = context.action_dispatchers.get(&message.domain).unwrap();
    let response = dispatcher.dispatch_action(message.domain.to_string(), message.action).await;
    Ok(IpcMessage { domain: message.domain, action: response })
}
```
Et voilà:
It was quite some work, but everything fits together now:
We can load our entities from the database, let the user change an entity's name, and persist the changes to our database so that they are still available the next time the application starts.
Jobs done!
Conclusion
In this post, we created a proof of concept that adds a database backend for maintaining application state to a Tauri app. Using a repository trait, we were able to decouple our application logic from the database, letting us switch between different database backends.
Our repository interface is rather minimalist, though; more complex scenarios would definitely require a more advanced query API. But that's something for another post...
Also, migrating from an old domain model to a new one would be another good topic for a follow-up article.
What about you? Have you already worked with Tauri and Rust?
Please share your experience in the comments or via Twitter @umlboard.
Image of a local database generated with Bing Image Creator.
Source code for this project is available on GitHub.