```rust
fn main() {
    use sqlite_gen_hello_world as queries;

    let mut db = rusqlite::Connection::open_in_memory().unwrap();
    queries::migrate(&db).unwrap();
    queries::create_user(&db, "rust human", 0).unwrap();
    for user_id in queries::list_users(&db).unwrap() {
        let user = queries::get_user(&db, user_id).unwrap();
        println!("User {}: {}", user_id, user.name);
    }
}
```
User 1: rust human
Usage details
Features
pg - enables generating code for PostgreSQL
sqlite - enables generating code for SQLite
chrono - enables datetime field/expression types
Schema IDs and IDs
"Schema IDs" are internal ids used for matching fields across versions, to identify renames, deletes, etc. Schema IDs must not change once used in a version. I recommend using randomly generated IDs, via a key macro. Changing Schema IDs will result in a delete followed by a create.
"IDs" are used both in SQL (for fields) and Rust (in parameters and returned data structures), so they must be valid in both (however, some munging is automatically applied to IDs in Rust if they clash with keywords). Depending on the database, you can change IDs arbitrarily between schema versions, but swapping IDs in consecutive versions isn't currently supported - if you need to do swaps, do it over three versions (ex: v0: `A` and `B`, v1: `A_` and `B`, v2: `B` and `A`).
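To illustrate why stable schema IDs matter for migrations, here is a simplified, self-contained sketch (not the library's actual implementation): fields are matched across versions by schema ID, so a changed name under the same schema ID is a rename, while a missing or new schema ID becomes a delete or create.

```rust
use std::collections::BTreeMap;

// Map: schema ID -> (column name, SQL type).
type Fields<'a> = BTreeMap<&'a str, (&'a str, &'a str)>;

// Derive migration statements by diffing two schema versions keyed by
// schema ID. This is an illustrative sketch, not the library's code.
fn diff_fields(old: &Fields<'_>, new: &Fields<'_>) -> Vec<String> {
    let mut stmts = Vec::new();
    for (id, (name, ty)) in new {
        match old.get(id) {
            // Unknown schema ID: the field was created.
            None => stmts.push(format!("ALTER TABLE t ADD COLUMN {} {}", name, ty)),
            // Same schema ID, different name: a rename, so data is preserved.
            Some((old_name, _)) if old_name != name => {
                stmts.push(format!("ALTER TABLE t RENAME COLUMN {} TO {}", old_name, name));
            },
            _ => {},
        }
    }
    for (id, (name, _)) in old {
        // Schema ID no longer present: the field was deleted.
        if !new.contains_key(id) {
            stmts.push(format!("ALTER TABLE t DROP COLUMN {}", name));
        }
    }
    stmts
}
```

If you change a field's schema ID, the diff sees one ID disappear and another appear, which is exactly why that produces a delete followed by a create.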
Query, expression and fields types
Use the `type_*` and `field_*` functions to get type builders for use in expressions/fields.
Use `new_insert`/`new_select`/`new_update`/`new_delete` to create query builders.
There are also some helper functions for building queries for the database you're using:
- `field_param`, a shortcut for a parameter matching the type and name of a field
- `set_field`, a shortcut for setting field values in INSERT and UPDATE
- `eq_field`, `gt_field`, `gte_field`, `lt_field`, `lte_field`, shortcuts for expressions comparing a field against a parameter of the same type
- `expr_and`, a shortcut for AND expressions
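As an illustrative analog (not the library's actual API), helpers like `eq_field` and `expr_and` compose small expression values rather than hand-written SQL fragments, which is what makes the resulting queries checkable:

```rust
// Hypothetical condition tree standing in for the library's expression type.
#[derive(Debug)]
enum Cond {
    // field = $param, where the parameter shares the field's name (and type)
    EqField { field: String },
    And(Vec<Cond>),
}

fn eq_field(field: &str) -> Cond {
    Cond::EqField { field: field.to_string() }
}

fn expr_and(conds: Vec<Cond>) -> Cond {
    Cond::And(conds)
}

// Render the condition tree to a WHERE-clause fragment.
fn render(c: &Cond) -> String {
    match c {
        Cond::EqField { field } => format!("{} = ${}", field, field),
        Cond::And(children) => children
            .iter()
            .map(render)
            .collect::<Vec<_>>()
            .join(" AND "),
    }
}
```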
Custom types
When defining a field in the schema, call `.custom("mycrate::MyString", type_str().build())` on the field type builder (or pass it in as `Some("mycrate::MyType".to_string())` if creating the type structure directly).
The type must have methods to convert to/from the native SQL types; there are traits in the library to guide the implementation.
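A minimal sketch of such a custom type, wrapping a native SQL string; the exact method names and signatures here are illustrative stand-ins, since the real requirements come from the library's guidance traits:

```rust
// Hypothetical custom type backed by a native SQL TEXT value.
pub struct MyString(pub String);

impl MyString {
    // Convert to the native SQL representation when binding a parameter.
    pub fn to_sql(&self) -> &str {
        &self.0
    }

    // Convert from the native SQL representation when reading a row,
    // returning an error message if validation fails.
    pub fn from_sql(value: String) -> Result<Self, String> {
        if value.is_empty() {
            return Err("empty strings are not valid".to_string());
        }
        Ok(MyString(value))
    }
}
```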
The `Expr::Call` variant allows you to create method call expressions. You must provide, in `compute_type`, a helper method to type-check the arguments and determine the type the call evaluates to.
The first parameter is the evaluation context, which contains `errs` for reporting errors. The second is the path from the evaluation tree root down to the call, used to identify where in a query expression errors occur. The third is a vec of arguments passed to the call. Each argument can be a single type or a record consisting of multiple types (like the tuples in `where (x, y, z) < (b.x, b.y, b.z)`). If there are no errors, this must return `Some(...)`.
Error handling is lazy during expression checking: even if an error occurs, processing can continue and identify more errors before aborting. All errors are fatal; they just don't cause an immediate abort.
If there are errors, record them with `ctx.errs.err(path.add(format!("Argument 0")), format!("Error"))`. If evaluation within the call cannot continue, return `None`; otherwise continue.
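The lazy error-handling flow described above can be sketched as follows; the context, path, and type names here are hypothetical simplifications, not the library's actual API:

```rust
// Stand-in for the library's SQL type representation.
#[derive(Clone, Debug, PartialEq)]
enum SqlType {
    Int,
    Text,
}

// Stand-in for the error sink carried in the evaluation context.
#[derive(Default)]
struct Errs {
    msgs: Vec<String>,
}

impl Errs {
    fn err(&mut self, path: String, msg: String) {
        self.msgs.push(format!("{}: {}", path, msg));
    }
}

// Type-check a hypothetical coalesce(a, b, ...) call: every argument must
// share one type, which is also the type the call evaluates to.
fn compute_type(errs: &mut Errs, path: &str, args: &[SqlType]) -> Option<SqlType> {
    let first = match args.first() {
        Some(t) => t.clone(),
        None => {
            errs.err(path.to_string(), "expected at least one argument".to_string());
            // No result type can be determined, so evaluation can't continue.
            return None;
        },
    };
    // Lazy error handling: record every mismatch rather than stopping at the first.
    for (i, t) in args.iter().enumerate().skip(1) {
        if *t != first {
            errs.err(
                format!("{}, argument {}", path, i),
                format!("expected {:?}, got {:?}", first, t),
            );
        }
    }
    // A result type was still determined, even if errors were recorded.
    Some(first)
}
```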
Parameters and return types
Parameters with the same name are deduplicated; if you define a query with multiple parameters that share a name but have different types, you'll get an error.
Different queries with the same multiple-field returns will use the same return type.
Comparisons
Vs Diesel
Good-ormning is functionally most similar to Diesel.
Diesel
You can define your queries and result structures near where you use them
You can dynamically define queries (i.e. swap operators depending on the input, etc.)
Result structures must be manually defined, and care must be taken to get the field order to match the query
You can define new types to use in the schema, which are checked against queries, although this requires significant boilerplate
Requires many macros, trait implementations
To keep your migrations and in-code schema synchronized, you can use the CLI against a live database with migrations applied. However, this replaces any custom SQL types in the schema with the built-in SQL types. Alternatively, you can maintain the schema by hand (and risk query issues due to typos and mismatches).
Column count limitations, slow build times
Supports more syntax; has withstood the test of time
Good-ormning
All queries have to be defined up front, separately from where they're used, in the `build.rs` file
You don't have to write any structures, everything is generated from schema and query info
Custom types can be incorporated into the schema with no boilerplate
Migrations are automatically derived via a diff between schema versions plus additional migration metadata
Clear error messages, thanks to the absence of macros and generics
Code generation is fast, compiling the simple generated code is also fast
Alpha
Vs SQLx
SQLx
SQLx has no concept of a schema, so it can only type-check against native SQL types (no consideration for new types, blob encodings, etc.)
Requires a running database during development
Good-ormning
The same schema used for generating migrations is used for type checking, and natively supports custom types
No live database is needed during development, but all query syntax must be manually implemented in Good-ormning, so you may encounter missing features
Vs SeaORM
SeaORM focuses on runtime checks rather than compile time checks.
A few words on the future
Obviously writing an SQL VM isn't great. The ideal solution would be for popular databases to expose their type checking routines as libraries so they could be imported into external programs, like how Go publishes reusable ast-parsing and type-checking libraries.