andrewbaxter/good-ormning

Make every ormning the best ormning
Good-ormning is an ORM, probably? In a nutshell:

  1. Define schemas and queries in build.rs
  2. Good-ormning generates a function to set up/migrate the database
  3. Good-ormning generates functions for each query

Why you want it

  • You want end to end type safety, from table definition through queries, across versions
  • You want to do everything in Rust, you don't want to need to spin up a database and run SQL manually

Features

  • No macros
  • No generics
  • No traits (okay, simple traits for custom types, only to help guide implementations)
  • No boilerplate
  • Automatic migrations, no migration-schema mismatches
  • Query parameter type checking - no runtime errors due to parameter types, counts, or ordering
  • Query logic type checking via a query simulation
  • Query result type checking - no runtime errors due to result types, counts, or ordering
  • Fast to generate, minimum runtime overhead

Like other Rust ORMs, Good-ormning doesn't abstract away from actual database workflows, but instead aims to enhance type checking with normal SQL.

See Comparisons, below, for information on how Good-ormning differs from other Rust ORMs.

Current status

  • Basic features work; it covers my own basic uses
  • Moderate test coverage
  • Missing advanced features - let me know if there's something you want
  • Some ergonomics issues, interfaces may change in upcoming releases

Supported databases

  • PostgreSQL (feature pg) via tokio-postgres
  • Sqlite (feature sqlite) via rusqlite

Getting started

First time

  1. You'll need the following runtime dependencies:

    • good-ormning-runtime
    • tokio-postgres for PostgreSQL
    • rusqlite for Sqlite

    And build.rs dependencies:

    • good-ormning

    And you must enable one (or more) of the database features:

    • pg
    • sqlite

    plus maybe chrono for DateTime support.

  2. Create a build.rs and define your initial schema version and queries

  3. Call generate() (e.g. good_ormning::sqlite::generate()) to output the generated code

  4. In your code, after creating a database connection, call migrate
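Concretely, the dependency setup in step 1 might look like this in Cargo.toml for the Sqlite flavor (the version requirements and the exact feature placement here are assumptions for illustration, not copied from the project docs):

```toml
# Runtime dependencies
[dependencies]
good-ormning-runtime = { version = "*", features = ["sqlite"] }
rusqlite = "*"

# build.rs dependencies
[build-dependencies]
good-ormning = { version = "*", features = ["sqlite"] }
# add the "chrono" feature here if you need DateTime support
```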

Schema changes

  1. Copy your previous schema version, leaving the old version untouched. Modify the new schema and queries as you wish.
  2. Pass both the old and new schema versions to generate(), which will produce the new migration statements.
  3. At runtime, the migrate call will make sure the database is updated to the new schema version.
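Sketching the steps above against the schema from the Example below: build.rs keeps the untouched v0 schema alongside a v1 that adds a field, and passes both to generate(). (The new field and its schema ID "zSNB2I9FJ" are invented for illustration; everything that already existed keeps its old schema ID.)

```rust
use good_ormning::sqlite::{Version, schema::field::*};

fn main() {
    // v0: the original schema, copied and left untouched.
    let mut v0 = Version::default();
    let users_v0 = v0.table("zQLEK3CT0", "users");
    users_v0.field(&mut v0, "zLQI9HQUQ", "name", field_str().build());

    // v1: the same table plus a new field (fresh schema ID for the
    // new field only).
    let mut v1 = Version::default();
    let users_v1 = v1.table("zQLEK3CT0", "users");
    users_v1.field(&mut v1, "zLQI9HQUQ", "name", field_str().build());
    users_v1.field(&mut v1, "zSNB2I9FJ", "points", field_i64().build());

    // Both versions go to generate(); the generated migrate() will
    // bring a v0 database up to v1 at runtime.
    good_ormning::sqlite::generate(
        &std::path::PathBuf::from("src/db.rs"),
        vec![(0usize, v0), (1usize, v1)],
        vec![/* latest-version queries */],
    ).unwrap();
}
```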

Example

This build.rs file

```rust
use std::{
    path::PathBuf,
    env,
};
use good_ormning::sqlite::{
    Version,
    schema::{
        field::*,
        constraint::*,
    },
    query::{
        expr::*,
        select::*,
    },
    *,
};

fn main() {
    println!("cargo:rerun-if-changed=build.rs");
    let root = PathBuf::from(&env::var("CARGO_MANIFEST_DIR").unwrap());
    let mut latest_version = Version::default();
    let users = latest_version.table("zQLEK3CT0", "users");
    let id = users.rowid_field(&mut latest_version, None);
    let name = users.field(&mut latest_version, "zLQI9HQUQ", "name", field_str().build());
    let points = users.field(&mut latest_version, "zLAPH3H29", "points", field_i64().build());
    good_ormning::sqlite::generate(&root.join("tests/sqlite_gen_hello_world.rs"), vec![
        // Versions
        (0usize, latest_version),
    ], vec![
        // Latest version queries
        new_insert(&users, vec![(name.clone(), Expr::Param {
            name: "name".into(),
            type_: name.type_.type_.clone(),
        }), (points.clone(), Expr::Param {
            name: "points".into(),
            type_: points.type_.type_.clone(),
        })]).build_query("create_user", QueryResCount::None),
        new_select(&users).where_(Expr::BinOp {
            left: Box::new(Expr::Field(id.clone())),
            op: BinOp::Equals,
            right: Box::new(Expr::Param {
                name: "id".into(),
                type_: id.type_.type_.clone(),
            }),
        }).return_fields(&[&name, &points]).build_query("get_user", QueryResCount::One),
        new_select(&users).return_field(&id).build_query("list_users", QueryResCount::Many),
    ]).unwrap();
}
```

Generates something like:

```rust
pub fn migrate(db: &mut rusqlite::Connection) -> Result<(), GoodError> {
    // ...
}

pub fn create_user(db: &mut rusqlite::Connection, name: &str, points: i64) -> Result<(), GoodError> {
    // ...
}

pub struct DbRes1 {
    pub name: String,
    pub points: i64,
}

pub fn get_user(db: &mut rusqlite::Connection, id: i64) -> Result<DbRes1, GoodError> {
    // ...
}

pub fn list_users(db: &mut rusqlite::Connection) -> Result<Vec<i64>, GoodError> {
    // ...
}
```

And can be used like:

```rust
fn main() {
    use sqlite_gen_hello_world as queries;

    let mut db = rusqlite::Connection::open_in_memory().unwrap();
    queries::migrate(&mut db).unwrap();
    queries::create_user(&mut db, "rust human", 0).unwrap();
    for user_id in queries::list_users(&mut db).unwrap() {
        let user = queries::get_user(&mut db, user_id).unwrap();
        println!("User {}: {}", user_id, user.name);
    }
}
```
User 1: rust human

Usage details

Features

  • pg - enables generating code for PostgreSQL
  • sqlite - enables generating code for Sqlite
  • chrono - enables datetime field/expression types

Schema IDs and IDs

"Schema IDs" are internal IDs used for matching fields across versions, to identify renames, deletes, etc. Schema IDs must not change once used in a version; changing a Schema ID will result in a delete followed by a create. I recommend using randomly generated IDs, via a key macro.
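If you'd rather generate random IDs with a throwaway program, this standalone sketch mimics the "zQLEK3CT0" style seen in the Example below (that format is just a convention in the examples, not something good-ormning requires):

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

/// Produce a short pseudo-random ID like "zQLEK3CT0": a fixed "z"
/// prefix followed by 8 characters from [0-9A-Z].
fn random_schema_id() -> String {
    // RandomState is randomly seeded, which is plenty for minting
    // one-off IDs to paste into build.rs.
    let mut hasher = RandomState::new().build_hasher();
    hasher.write_u64(0x5eed);
    let mut n = hasher.finish();
    let alphabet: &[u8] = b"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    let mut id = String::from("z");
    for _ in 0..8 {
        id.push(alphabet[(n % 36) as usize] as char);
        n /= 36;
    }
    id
}

fn main() {
    println!("{}", random_schema_id());
}
```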

"IDs" are used both in SQL (for fields) and Rust (in parameters and returned data structures), so they must be valid in both (however, some munging is automatically applied to IDs in Rust if they clash with keywords). Depending on the database, you can change IDs arbitrarily between schema versions, but swapping IDs in consecutive versions isn't currently supported - if you need to do swaps, do it over three versions (e.g. v0: A and B, v1: A_ and B, v2: B and A).

Query, expression, and field types

Use the type_* and field_* functions to get type builders for use in expressions/fields.

Use new_insert/select/update/delete to create query builders.

There are also some helper functions for building queries (for the database you're using):

  • field_param, a shortcut for a parameter matching the type and name of a field
  • set_field, a shortcut for setting field values in INSERT and UPDATE
  • eq_field, gt_field, gte_field, lt_field, lte_field, shortcuts for expressions comparing a field and a parameter with the same type
  • expr_and, a shortcut for AND expressions
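As a sketch (the helper's signature here is a guess from the description above, not checked against the crate), the get_user query from the Example might shrink to:

```rust
// Hypothetical: eq_field is assumed to take the parameter name and the
// field being compared; see the crate docs for the real signature.
new_select(&users)
    .where_(eq_field("id", &id))
    .return_fields(&[&name, &points])
    .build_query("get_user", QueryResCount::One)
```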

Custom types

When defining a field in the schema, call .custom("mycrate::MyString", type_str().build()) on the field type builder (or pass it in as Some("mycrate::MyString".to_string()) if creating the type structure directly).

The type must have methods to convert to/from the native SQL types. There are traits to guide the implementation:

```rust
pub struct MyString(pub String);

impl good_ormning_runtime::pg::GoodOrmningCustomString<MyString> for MyString {
    fn to_sql(value: &MyString) -> &str {
        &value.0
    }

    fn from_sql(s: String) -> Result<MyString, String> {
        Ok(Self(s))
    }
}
```

Methods

The Expr::Call variant allows you to create method call expressions. You must provide, in compute_type, a helper function to type-check the arguments and determine the type the call evaluates to.

The first parameter is the evaluation context, which contains errs for reporting errors. The second is a path from the evaluation tree root to the call, for identifying where in a query expression errors occur. The third is a vec of the arguments passed to the call. Each argument can be a single type or a record consisting of multiple types (like in () in where (x, y, z) < (b.x, b.y, b.z)). If there are no errors, this must return Some(...).

Error handling is lazy during expression checking: even if an error occurs, processing can continue and identify more errors before aborting. All errors are fatal; they just don't cause an immediate abort.

If there are errors, record them with ctx.errs.err(path.add(format!("Argument 0")), format!("Error")). If evaluation within the call cannot continue, return None; otherwise continue.

Parameters and return types

Parameters with the same name are deduplicated - if you define a query with multiple parameters of the same name but different types you'll get an error.

Different queries with the same multiple-field returns will use the same return type.

Comparisons

Vs Diesel

Good-ormning is functionally most similar to Diesel.

Diesel

  • You can define your queries and result structures near where you use them
  • You can dynamically define queries (i.e. swap operators depending on the input, etc.)
  • Result structures must be manually defined, and care must be taken to get the field order to match the query
  • You can define new types to use in the schema, which are checked against queries, although this requires significant boilerplate
  • Requires many macros, trait implementations
  • To synchronize your migrations and in-code schema, you can use the CLI against a live database with migrations applied. However, this replaces any custom SQL types in the schema with the built-in SQL types. Alternatively, you can maintain the schema by hand (and risk query issues due to typos and mismatches).
  • Column count limitations, slow build times
  • Supports more syntax, and has withstood the test of time

Good-ormning

  • All queries have to be defined up front, separately from where they're used, in build.rs
  • You don't have to write any structures, everything is generated from schema and query info
  • Custom types can be incorporated into the schema with no boilerplate
  • Migrations are automatically derived via a diff between schema versions plus additional migration metadata
  • Clear error messages, thanks to no macros, generics
  • Code generation is fast, compiling the simple generated code is also fast
  • Alpha

Vs SQLx

SQLx

  • SQLx has no concept of a schema so it can only perform type-checking on native SQL types (no consideration for new types, blob encodings, etc)
  • Requires a running database during development

Good-ormning

  • The same schema used for generating migrations is used for type checking, and natively supports custom types
  • No live database is needed during development, but all query syntax must be implemented by hand in Good-ormning, so you may encounter missing features

Vs SeaORM

SeaORM focuses on runtime checks rather than compile time checks.

A few words on the future

Obviously writing an SQL VM isn't great. The ideal solution would be for popular databases to expose their type checking routines as libraries so they could be imported into external programs, like how Go publishes reusable ast-parsing and type-checking libraries.
