SQLBoiler

Generate a Go ORM tailored to your database schema.
This package is currently in maintenance mode, which means:
- It generally does not accept new features.
- It does accept bug fixes and version compatibility changes provided by the community.
- Maintainers usually do not resolve reported issues.
- Community members are encouraged to help each other with reported issues.
If you are looking for an actively maintained alternative, consider the following:

1. Bob - https://github.com/stephenafamo/bob

   Bob is very similar to SQLBoiler. It was directly inspired by SQLBoiler and was created by a maintainer of SQLBoiler. A comparison can be found here: https://bob.stephenafamo.com/vs/sqlboiler/.

2. sqlc - https://github.com/sqlc-dev/sqlc

   sqlc is a command line tool that generates type-safe code from SQL. It is not an ORM, but for many use cases it can be a good alternative to SQLBoiler.
SQLBoiler is a tool to generate a Go ORM tailored to your database schema.
It is a "database-first" ORM as opposed to "code-first" (like gorm/gorp). That means you must first create your database schema. Please use something like sql-migrate or some other migration tool to manage this part of the database's life-cycle.
v1, v2, and v3 are no longer maintained.
v3 is the last GOPATH-compatible version.
v4 has no real breaking changes between v3 and itself other than Go modules and is the only maintained version. Note this does not work with GOPATH projects.
While attempting to migrate a legacy Rails database, we realized how much ActiveRecord benefited us in terms of development velocity. Coming over to the Go database/sql package after using ActiveRecord feels extremely repetitive, super long-winded and downright boring. Being Go veterans we knew the state of ORMs was shaky, and after a quick review we found our fears confirmed. Most packages out there are code-first, reflect-based and have a very weak story around relationships between models. So with that we set out with these goals:
- Work with existing databases: Don't be the tool to define the schema, that's better left to other tools.
- ActiveRecord-like productivity: Eliminate all sql boilerplate, have relationships as a first-class concept.
- Go-like feel: Work with normal structs, call functions, no hyper-magical struct tags, small interfaces.
- Go-like performance: Benchmark and optimize the hot-paths, perform like hand-rolled sql.DB code.
We believe with SQLBoiler and our database-first code-generation approach we've been able to successfully meet all of these goals. On topof that SQLBoiler also confers the following benefits:
- The models package is type safe. This means no chance of random panics due to passing in the wrong type. No need for interface{}.
- Our types closely correlate to your database column types. This is expanded by our extended null package which supports nearly all Go data types.
- A system that is easy to debug. Your ORM is tailored to your schema, the code paths should be easy to trace since it's not all buried in reflect.
- Auto-completion provides work-flow efficiency gains.
- SQLBoiler
- Why another ORM
- About SQL Boiler
- Requirements & Pro Tips
- Getting started
- Diagnosing Problems
- Features & Examples
- FAQ
- Won't compiling models for a huge database be very slow?
- Missing imports for generated package
- How should I handle multiple schemas
- How do I use the types.BytesArray for Postgres bytea arrays?
- Why aren't my time.Time or null.Time fields working in MySQL?
- Where is the homepage?
- Why are the auto-generated tests failing?
- Benchmarks
- Third-Party Extensions
- Full model generation
- Extremely fast code generation
- High performance through generation & intelligent caching
- Uses boil.Executor (simple interface, sql.DB, sqlx.DB etc. compatible)
- Uses context.Context
- Easy workflow (models can always be regenerated, full auto-complete)
- Strongly typed querying (usually no converting or binding to pointers)
- Hooks (Before/After Create/Select/Update/Delete/Upsert)
- Automatic CreatedAt/UpdatedAt
- Automatic DeletedAt
- Table and column whitelist/blacklist
- Relationships/Associations
- Eager loading (recursive)
- Custom struct tags
- Transactions
- Raw SQL fallback
- Compatibility tests (Run against your own DB schema)
- Debug logging
- Basic multiple schema support (no cross-schema support)
- 1d arrays, json, hstore & more
- Enum types
- Out of band driver support
- Support for database views
- Supports generated/computed columns
- Multi-column foreign key support
- Materialized view support (only PostgreSQL is supported)
Note: SQLBoiler has out-of-band driver support, so you can make your own driver.
We are seeking contributors for other database engines.
For a comprehensive list of available operations and examples please see Features & Examples.
```go
import (
	// Import this so we don't have to use qm.Limit etc.
	. "github.com/volatiletech/sqlboiler/v4/queries/qm"
)

// Open handle to database like normal
db, err := sql.Open("postgres", "dbname=fun user=abc")
if err != nil {
	return err
}

// If you don't want to pass in db to all generated methods
// you can use boil.SetDB to set it globally, and then use
// the G variant methods like so (--add-global-variants to enable)
boil.SetDB(db)
users, err := models.Users().AllG(ctx)

// Query all users
users, err := models.Users().All(ctx, db)

// Panic-able if you like to code that way (--add-panic-variants to enable)
users := models.Users().AllP(db)

// More complex query
users, err := models.Users(Where("age > ?", 30), Limit(5), Offset(6)).All(ctx, db)

// Ultra complex query
users, err := models.Users(
	Select("id", "name"),
	InnerJoin("credit_cards c on c.user_id = users.id"),
	Where("age > ?", 30),
	AndIn("c.kind in ?", "visa", "mastercard"),
	Or("email like ?", `%aol.com%`),
	GroupBy("id", "name"),
	Having("count(c.id) > ?", 2),
	Limit(5),
	Offset(6),
).All(ctx, db)

// Use any "boil.Executor" implementation (*sql.DB, *sql.Tx, data-dog mock db)
// for any query.
tx, err := db.BeginTx(ctx, nil)
if err != nil {
	return err
}
users, err = models.Users().All(ctx, tx)

// Relationships
user, err := models.Users().One(ctx, db)
if err != nil {
	return err
}
movies, err := user.FavoriteMovies().All(ctx, db)

// Eager loading
users, err = models.Users(Load("FavoriteMovies")).All(ctx, db)
if err != nil {
	return err
}
fmt.Println(len(users[0].R.FavoriteMovies))
```
- Go 1.13; older Go versions are not supported.
- Join tables should use a composite primary key.
  - For join tables to be used transparently for relationships, your join table must have a composite primary key that encompasses both foreign table foreign keys and no other columns in the table. For example, on a join table named `user_videos` you should have `primary key(user_id, video_id)`, with both `user_id` and `video_id` being foreign key columns to the users and videos tables respectively, and no other columns on the table.
- MySQL 5.6.30 minimum; the ssl-mode option is not supported for earlier versions.
- For MySQL, if using the `github.com/go-sql-driver/mysql` driver, please activate time.Time parsing when making your MySQL database connection. SQLBoiler uses `time.Time` and `null.Time` to represent time in its models, and without this enabled any models with `DATE`/`DATETIME` columns will not work.
- SQLBoiler generates type-safe identifiers for table names, table column names, a table's relationship names and type-safe where clauses. You should use these instead of strings due to the ability to catch more errors at compile time when your database schema changes. See Constants for details.
- It's highly recommended to use transactions where sqlboiler will be doing multiple database calls (relationship set ops with insertions for example) for both performance and data integrity.
- Foreign key column names should end with `_id`.
  - Foreign key column names in the format `x_id` will generate clearer method names. It is advisable to use this naming convention whenever it makes sense for your database schema.
- If you never plan on using the hooks functionality you can disable generation of this feature using the `--no-hooks` flag. This will save you some binary size.
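The join-table shape described above can be sketched in SQL; the names (`user_videos`, `users`, `videos`) follow the example in the list and are illustrative only:

```sql
-- Exactly two columns, both foreign keys, together forming the
-- composite primary key; no other columns on the table.
CREATE TABLE user_videos (
  user_id  integer NOT NULL REFERENCES users (id),
  video_id integer NOT NULL REFERENCES videos (id),
  PRIMARY KEY (user_id, video_id)
);
```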
If you like learning via a video medium, sqlboiler has a number of screencasts available.
NOTE: These videos predate modules (v4), the installation/import paths will bedifferent though everything else should remain similar.
SQLBoiler: Advanced Queries and Relationships
Old (v2): SQLBoiler Screencast #1: How to get started
First you have to install the code generator binaries. There's the main binary and then a separate driver binary (select the right one for your database).

Be very careful when installing; there's confusion in the Go ecosystem, and knowing the right commands to run for which Go version can be tricky. Ensure you don't forget any /v suffixes or you'll end up on an old version.
```shell
# Go 1.16 and above:
go install github.com/volatiletech/sqlboiler/v4@latest
go install github.com/volatiletech/sqlboiler/v4/drivers/sqlboiler-psql@latest

# Go 1.15 and below:
# Install sqlboiler v4 and the postgresql driver (mysql, mssql, sqlite3 also available)
# NOTE: DO NOT run this inside another Go module (like your project) as it will
# pollute your go.mod with a bunch of stuff you don't want and your binary
# will not get installed.
GO111MODULE=on go get -u -t github.com/volatiletech/sqlboiler/v4
GO111MODULE=on go get github.com/volatiletech/sqlboiler/v4/drivers/sqlboiler-psql
```
To install `sqlboiler` as a dependency in your project, use the commands below inside of your go module's directory tree. This will install the dependencies into your `go.mod` file at the correct version.
```shell
# Do not forget the trailing /v4 and /v8 in the following commands
go get github.com/volatiletech/sqlboiler/v4
# Assuming you're going to use the null package for its additional null types
go get github.com/volatiletech/null/v8
```
Create a configuration file. Because the project uses viper, TOML, JSON and YAML are all usable but only TOML is supported. Environment variables are also able to be used.
The configuration file should be named `sqlboiler.toml` and is searched for in the following directories in this order:

- `./`
- `$XDG_CONFIG_HOME/sqlboiler/`
- `$HOME/.config/sqlboiler/`
We will assume TOML for the rest of the documentation.
The configuration for a specific driver (in these examples we'll use `psql`) must all be prefixed by the driver name. You must use a configuration file or environment variables for configuring the database driver; there are no command-line options for providing driver-specific configuration.
In the configuration file for postgresql for example you would do:
```toml
[psql]
dbname = "your_database_name"
```
When you use an environment variable it must also be prefixed by the drivername:
```shell
PSQL_DBNAME="your_database_name"
```
The values that exist for the drivers:
Name | Required | Postgres Default | MySQL Default | MSSQL Default |
---|---|---|---|---|
schema | no | "public" | none | "dbo" |
dbname | yes | none | none | none |
host | yes | none | none | none |
port | no | 5432 | 3306 | 1433 |
user | yes | none | none | none |
pass | no | none | none | none |
sslmode | no | "require" | "true" | "true" |
unix-socket | no | N/A | "" | N/A |
whitelist | no | [] | [] | [] |
blacklist | no | [] | [] | [] |
Example of whitelist/blacklist:
```toml
[psql]
# Removes migrations table, the name column from the addresses table, and
# secret_col of any table from being generated. Foreign keys that reference tables
# or columns that are no longer generated because of whitelists or blacklists may
# cause problems.
blacklist = ["migrations", "addresses.name", "*.secret_col"]
```
You can also pass in these top level configuration values if you would prefernot to pass them through the command line or environment variables:
Name | Defaults |
---|---|
pkgname | "models" |
output | "models" |
tag | [] |
debug | false |
add-global-variants | false |
add-panic-variants | false |
add-enum-types | false |
enum-null-prefix | "Null" |
no-context | false |
no-hooks | false |
no-tests | false |
no-auto-timestamps | false |
no-rows-affected | false |
no-driver-templates | false |
no-relation-getters | false |
tag-ignore | [] |
strict-verify-mod-version | false |
```toml
output = "my_models"
wipe = true
no-tests = true
add-enum-types = true

[psql]
dbname = "dbname"
host = "localhost"
port = 5432
user = "dbusername"
pass = "dbpassword"
schema = "myschema"
blacklist = ["migrations", "other"]

[mysql]
dbname = "dbname"
host = "localhost"
port = 3306
user = "dbusername"
pass = "dbpassword"
sslmode = "false"
tinyint_as_int = true

[mssql]
dbname = "dbname"
host = "localhost"
port = 1433
user = "dbusername"
pass = "dbpassword"
sslmode = "disable"
schema = "notdbo"
```
After creating a configuration file that points at the database we want togenerate models for, we can invoke the sqlboiler command line utility.
```text
SQL Boiler generates a Go ORM from template files, tailored to your database schema.
Complete documentation is available at http://github.com/volatiletech/sqlboiler

Usage:
  sqlboiler [flags] <driver>

Examples:
  sqlboiler psql

Flags:
      --add-global-variants         Enable generation for global variants
      --add-panic-variants          Enable generation for panic variants
      --add-soft-deletes            Enable soft deletion by updating deleted_at timestamp
      --add-enum-types              Enable generation of types for enums
      --enum-null-prefix            Name prefix of nullable enum types (default "Null")
  -c, --config string               Filename of config file to override default lookup
  -d, --debug                       Debug mode prints stack traces on error
  -h, --help                        help for sqlboiler
      --no-auto-timestamps          Disable automatic timestamps for created_at/updated_at
      --no-back-referencing         Disable back referencing in the loaded relationship structs
      --no-context                  Disable context.Context usage in the generated code
      --no-driver-templates         Disable parsing of templates defined by the database driver
      --no-hooks                    Disable hooks feature for your models
      --no-rows-affected            Disable rows affected in the generated API
      --no-tests                    Disable generated go test files
      --no-relation-getters         Disable generating getters for relationship tables
  -o, --output string               The name of the folder to output to (default "models")
  -p, --pkgname string              The name you wish to assign to your generated package (default "models")
      --struct-tag-casing string    Decides the casing for go structure tag names. camel, title, alias or snake (default "snake")
  -t, --tag strings                 Struct tags to be included on your models in addition to json, yaml, toml
      --tag-ignore strings          List of column names that should have tags values set to '-' (ignored during parsing)
      --templates strings           A templates directory, overrides the embedded template folders in sqlboiler
      --replace strings             An array of templates file and the actual template file to be replaced
      --version                     Print the version
      --strict-verify-mod-version   Prevent code generation if the project's version of sqlboiler does not match the executable
      --wipe                        Delete the output folder (rm -rf) before generation to ensure sanity
```
Follow the steps below to do some basic model generation. Once you've generatedyour models, you can run the compatibility tests which will exercise the entiretyof the generated code. This way you can ensure that your database is compatiblewith SQLBoiler. If you find there are some failing tests, please check theDiagnosing Problems section.
```shell
# Generate our models and exclude the migrations table
# When passing 'psql' here, it looks for a binary called
# 'sqlboiler-psql' in your CWD and PATH. You can also pass
# an absolute path to a driver if you desire.
sqlboiler psql

# Run the generated tests
go test ./models
```
Note: There is no `mysqldump` or `pg_dump` equivalent for Microsoft SQL Server, so generated tests must be supplemented by a `tables_schema.sql` file with `CREATE TABLE ...` queries.
You can use `go generate` with SQLBoiler if you want to make it easy to run the command for your application:
```go
//go:generate sqlboiler --flags-go-here psql
```
It's important to not modify anything in the output folder, which brings us tothe next topic: regeneration.
When regenerating the models it's recommended that you completely delete the generated directory in a build script or use the `--wipe` flag in SQLBoiler. The reason for this is that sqlboiler doesn't try to diff your files in any smart way; it simply writes the files it's going to write whether they're there or not, and doesn't delete any files that were added by you or by previous runs of SQLBoiler. In the best case this can cause compilation errors, in the worst case this may leave extraneous and unusable code that was generated against tables that are no longer in the database.
The bottom line is that this tool should always produce the same result from the same source, and the intention is to always regenerate from a pure state. The only reason the `--wipe` flag isn't defaulted to on is because we don't like programs that `rm -rf` things on the filesystem without being asked to.
When sqlboiler is used on a regular basis, problems sometimes arise on the developers' side because the version they are using does not match the version specified in the project.
Sqlboiler will warn if the version in the project and the executable mismatch. Sqlboiler can also be made to fail and prevent code generation when the `--strict-verify-mod-version` flag (or the aliased option in the toml) is enabled.
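As a minimal sketch, the strict check can be enabled at the top level of sqlboiler.toml (the option name comes from the configuration table above):

```toml
# Fail code generation if the project's sqlboiler module version
# does not match the installed executable.
strict-verify-mod-version = true
```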
The templates get executed in a specific way each time. There's a variety ofconfiguration options on the command line/config file that can control whatfeatures are turned on or off.
In addition to the command line flags there are a few features that are onlyavailable via the config file and can use some explanation.
In sqlboiler, names are automatically generated for you. If you name yourdatabase entities properly you will likely have descriptive names generated inthe end. However in the case where the names in your database are bad ANDunchangeable, or sqlboiler's inference doesn't understand the names you do have(even though they are good and correct) you can use aliases to change the nameof your tables, columns and relationships in the generated Go code.
Note: It is not required to provide all parts of all names. Anything left outwill be inferred as it was in the past.
```toml
# Although team_names works fine without configuration, we use it here for illustrative purposes
[aliases.tables.team_names]
up_plural = "TeamNames"
up_singular = "TeamName"
down_plural = "teamNames"
down_singular = "teamName"

# Columns can also be aliased.
[aliases.tables.team_names.columns]
team_name = "OurTeamName"
```
When creating aliases for relationships, it's important to know how sqlboiler names relationships. For a given table the foreign key name is used as a unique identifier to refer to a given relationship. If you are going to be aliasing relationships it's highly recommended that you name your foreign keys explicitly in your database, or the auto-generated names could one day change and break your aliases.
Each relationship has a local and a foreign function name. The function name will be inserted into your generated code as a function to retrieve relationship data as well as refer to the relationship in a few other places. local means "the function name that refers to the table with the foreign key on it" and conversely foreign means "the function name that refers to the table the foreign key points to".
For example, let's have a videos -> users many-to-one relationship that looks like this:

The tables and their columns:

| videos  | users |
|---------|-------|
| user_id | id    |

Our foreign key:

```
videos_user_id_fkey: videos.user_id -> users.id
```
In this example local (how we refer to the table with the foreign key) is going to be inferred as Videos. We're going to override that below to be AuthoredVideos.
Conversely foreign (how we refer to the table the foreign key points to) is going to be inferred as User, which we'd like to rename to Author to suit our domain language a bit better.
With the configuration snippet below we can use the following relationship helper functions off of the respective models: video.Author and user.AuthoredVideos, which make a bit more sense than the inferred names when we see them in the code for our domain. Note the use of the foreign key name to refer to the relationship in the configuration key.
```toml
[aliases.tables.videos.relationships.videos_author_id_fkey]
# The local side would originally be inferred as AuthorVideos, which
# is probably good enough to not want to mess around with this feature, avoid it where possible.
local = "AuthoredVideos"
# Even if left unspecified, the foreign side would have been inferred correctly
# due to the proper naming of the foreign key column.
foreign = "Author"
```
In a many-to-many relationship it's a bit more complicated. Let's look at an example relationship between videos <-> tags with a join table in the middle. Imagine if the join table didn't exist, and instead both of the id columns in the join table were slapped on to the tables themselves. You'd have videos.tag_id and tags.video_id. Using a similar method to the above (local is the name with which we refer to the side that has the foreign key) we can rename the relationships. To change Videos.Tags to Videos.Rags we can use the example below.
Keep in mind that naming ONE side of the many-to-many relationship is sufficientas the other side will be automatically mirrored, though you can specify both ifyou so choose.
```toml
[aliases.tables.video_tags.relationships.fk_video_id]
local = "Rags"
foreign = "Videos"
```
The above definition will specify Rags as the name of the property with which a given Video entity will be able to access all of its tags. Looking the other way around, a single Tag entity will refer to all videos that have that specific tag with the Videos property.
There is an alternative syntax available for those who are challenged by the key syntax of toml or by viper lowercasing all of your keys. Instead of using a regular table in toml, use an array of tables and add a name field to each object. The only thing that changes past that is columns, which now has to have a new field called `alias`.
```toml
[[aliases.tables]]
name = "team_names"
up_plural = "TeamNames"
up_singular = "TeamName"
down_plural = "teamNames"
down_singular = "teamName"

  [[aliases.tables.columns]]
  name = "team_name"
  alias = "OurTeamName"

  [[aliases.tables.relationships]]
  name = "fk_video_id"
  local = "Rags"
  foreign = "Videos"
```
Sometimes you might want to customize the case style for different purposes, for example using camel case for the json format and snake case for yaml. You may create a section named `[struct-tag-cases]` to define a custom case for each format:
```toml
[struct-tag-cases]
toml = "snake"
yaml = "camel"
json = "camel"
boil = "alias"
```
By default snake case is used, so you only need to set the formats you want to change:
```toml
[struct-tag-cases]
json = "camel"
```
You can add foreign keys not defined in the database to your models using the following configuration:
```toml
[foreign_keys.jet_pilots_fkey]
table = "jets"
column = "pilot_id"
foreign_table = "pilots"
foreign_column = "id"

[foreign_keys.pilot_language_pilots_fkey]
table = "pilot_languages"
column = "pilot_id"
foreign_table = "pilots"
foreign_column = "id"

[foreign_keys.pilot_language_languages_fkey]
table = "pilot_languages"
column = "language_id"
foreign_table = "languages"
foreign_column = "id"
```
With inflections, you can control the rules sqlboiler uses to generate singular/plural variants. This is useful if a certain word or suffix is used multiple times and you do not want to create aliases for every instance.
```toml
[inflections.plural]
# Rules to convert a suffix to its plural form
ium = "ia"

[inflections.plural_exact]
# Rules to convert an exact word to its plural form
stadium = "stadia"

[inflections.singular]
# Rules to convert a suffix to its singular form
ia = "ium"

[inflections.singular_exact]
# Rules to convert an exact word to its singular form
stadia = "stadium"

[inflections.irregular]
# The singular -> plural mapping of an exact word that doesn't follow conventional rules
radius = "radii"
```
There exists the ability to override types that the driver has inferred. The way to accomplish this is through the config file.
```toml
[[types]]
  # The match is a drivers.Column struct, and matches on almost all fields.
  # Notable exception for the unique bool. Matches are done
  # with "logical and" meaning it must match all specified matchers.
  # Boolean values are only checked if all the string specifiers match first,
  # and they must always match.
  #
  # Not shown here: db_type is the database type and a very useful matcher
  # We can also whitelist tables for this replace by adding to the types.match:
  # tables = ['users', 'videos']
  #
  # Note there is precedence for types.match, more specific things should appear
  # further down in the config as once a matching rule is found it is executed
  # immediately.
  [types.match]
  type = "null.String"
  nullable = true

  # The replace is what we replace the strings with. You cannot modify any
  # boolean values in here. But we could change the Go type (the most useful thing)
  # or the DBType or FullDBType etc. if for some reason we needed to.
  [types.replace]
  type = "mynull.String"

  # These imports specified here overwrite the definition of the type's "based_on_type"
  # list. The type entry that is replaced is the replaced type's "type" field.
  # In the above example it would add an entry for mynull.String, if we did not
  # change the type in our replacement, it would overwrite the null.String entry.
  [types.imports]
  third_party = ['"github.com/me/mynull"']
```
Imports are overridable by the user. This can be used in conjunction with replacing the templates for extreme cases. Typically this should be avoided.

Note that specifying any section of the imports completely overwrites that section. The driver can still specify imports, and those will be merged into what is provided here.
```toml
[imports.all]
standard = ['"context"']
third_party = ['"github.com/my/package"']

# Changes imports for the boil_queries file
[imports.singleton."boil_queries"]
standard = ['"context"']
third_party = ['"github.com/my/package"']

# Same syntax as all
[imports.test]

# Same syntax as singleton
[imports.test_singleton]

# Changes imports when a model contains null.Int32
[imports.based_on_type.string]
standard = ['"context"']
third_party = ['"github.com/my/package"']
```
When defining maps it's possible to use an alternative syntax, since viper automatically lowercases all configuration keys (same as aliases).
```toml
[[imports.singleton]]
name = "boil_queries"
third_party = ['"github.com/my/package"']

[[imports.based_on_type]]
name = "null.Int64"
third_party = ['"github.com/my/int64"']
```
In advanced scenarios it may be desirable to generate additional files that are not Go code. You can accomplish this by using the `--templates` flag to specify all the directories you wish to generate code for. With this flag you specify root directories, that is, top-level container directories.
If a root directory has a `_test` suffix in its name, it is considered a folder of templates for testing only; it will be omitted when `--no-tests` is specified, and its templates will be generated into files with a `_test` suffix.
Each root directory is recursively walked. Each template found will be merged into `table_name.ext`, where ext is defined by the shared extension of the templates. The directory structure is preserved with the exception of singletons.
For files that should not be generated for each model, you can use a `singleton` directory inside the directory where the singleton file should be generated. This will make sure that the file is only generated once.
Here's an example:
```text
templates/
├── 00_struct.go.tpl        # Merged into output_dir/table_name.go
├── 00_struct.js.tpl        # Merged into output_dir/table_name.js
├── singleton
│   └── boil_queries.go.tpl # Rendered as output_dir/boil_queries.go
└── js
    ├── jsmodel.js.tpl      # Merged into output_dir/js/table_name.js
    └── singleton
        └── jssingle.js.tpl # Merged into output_dir/js/jssingle.js
```
The output files of which would be:
```text
output_dir/
├── boil_queries.go
├── table_name.go
├── table_name.js
└── js
    ├── table_name.js
    └── jssingle.js
```
Note: Because the `--templates` flag overrides the embedded templates of sqlboiler, if you still wish to generate the default templates it's recommended that you include the path to sqlboiler's templates as well.
```toml
templates = [
  "/path/to/sqlboiler/templates",
  "/path/to/sqlboiler/templates_test",
  "/path/to/your_project/more_templates"
]
```
There will probably come a time when you want to extend the generated models with some kind of helper functions. A general guideline is to put your extension functions into a separate package so that your functions aren't accidentally deleted when regenerating. Past that, there are 3 main ways to extend the models; the first way is the most desirable:
Method 1: Simple Functions
```go
// Package modext is for SQLBoiler helper methods
package modext

// UserFirstTimeSetup is an extension of the user model.
func UserFirstTimeSetup(ctx context.Context, db *sql.DB, u *models.User) error {
	...
}
```
Code organization is accomplished by using multiple files, and everythingis passed as a parameter so these kinds of methods are very easy to test.
Calling code is also very straightforward:
```go
user, err := Users().One(ctx, db)
// elided error check

err = modext.UserFirstTimeSetup(ctx, db, user)
// elided error check
```
Method 2: Empty struct methods
The above is the best way to code extensions for SQLBoiler, however there maybe times when the number of methods grows too large and code completion isnot as helpful anymore. In these cases you may consider structuring the codelike this:
```go
// Package modext is for SQLBoiler helper methods
package modext

type users struct{}

var Users = users{}

// FirstTimeSetup is an extension of the user model.
func (users) FirstTimeSetup(ctx context.Context, db *sql.DB, u *models.User) error {
	...
}
```
Calling code then looks a little bit different:
```go
user, err := Users().One(ctx, db)
// elided error check

err = modext.Users.FirstTimeSetup(ctx, db, user)
// elided error check
```
This is almost identical to the method above, but gives slightly more organization. It is however not as desirable as the first method, since it carries a small runtime cost and doesn't offer that much benefit over it.
Method 3: Embedding
This pattern is not for the faint of heart; what it provides in benefits, it more than makes up for in downsides. It's possible to embed the SQLBoiler structs inside your own to enhance them. However it's subject to easy breakages and a dependency on these additional objects. It can also introduce inconsistencies: some objects may have no extended functionality and therefore no reason to be embedded, so you either have to have a struct for each generated struct even if it's empty, or live with inconsistencies, some places where you use the enhanced model and some where you do not.
```go
user, err := Users().One(ctx, db)
// elided error check

enhUser := modext.User{user}
err = enhUser.FirstTimeSetup(ctx, db)
// elided error check
```
I don't recommend this pattern, but included it so that people know it's anoption and also know the problems with it.
The most common causes of problems and panics are:
- Forgetting to exclude tables you do not want included in your generation, like migration tables.
- Tables without a primary key. All tables require one.
- Forgetting to put foreign key constraints on your columns that reference other tables.
- The compatibility tests require privileges to create a database for testing purposes; ensure the user supplied in your `sqlboiler.toml` config has adequate privileges.
- A nil or closed database handle. Ensure your passed-in `boil.Executor` is not nil.
  - If you decide to use the `G` variant of functions instead, make sure you've initialized your global database handle using `boil.SetDB()`.
- Naming collisions: if the code fails to compile because there are naming collisions, look at the aliasing feature.
- Race conditions in tests or when using global variable models and using relationship set helpers in multiple goroutines. Note that Set/Add/Remove relationship helpers modify their input parameters to maintain parity between the `.R` struct relationships and the database foreign keys, but this can produce subtle race conditions. Test for this using the `-race` flag on the go tool.
- A field not being inserted (usually a default true boolean). `boil.Infer` looks at the zero value of your Go type (it doesn't care what the default value in the database is) to determine if it should insert your field or not. In the case of a default true boolean value, when you want to set it to false, you set that in the struct, but that's the zero value for the bool field in Go, so sqlboiler assumes you do not want to insert that field and you want the default value from the database. Use a whitelist/greylist to add that field to the list of fields to insert.
- The decimal library showing errors like `pq: encode: unknown type types.NullDecimal` is the result of a too-new and broken version of the github.com/ericlagergren/decimal package; use the following version in your go.mod: `github.com/ericlagergren/decimal v0.0.0-20181231230500-73749d4874d5`
For errors with other causes, it may be simple to debug yourself by looking at the generated code. Setting `boil.DebugMode` to `true` can help with this. You can change the output using `boil.DebugWriter` (defaults to `os.Stdout`).
If you're still stuck and/or you think you've found a bug, feel free to leave an issue and we'll do our best to help you.
Most examples in this section will be demonstrated using the following Postgres schema, structs and variables:
```sql
CREATE TABLE pilots (
  id integer NOT NULL,
  name text NOT NULL
);

ALTER TABLE pilots ADD CONSTRAINT pilot_pkey PRIMARY KEY (id);

CREATE TABLE jets (
  id integer NOT NULL,
  pilot_id integer NOT NULL,
  age integer NOT NULL,
  name text NOT NULL,
  color text NOT NULL
);

ALTER TABLE jets ADD CONSTRAINT jet_pkey PRIMARY KEY (id);
ALTER TABLE jets ADD CONSTRAINT jet_pilots_fkey FOREIGN KEY (pilot_id) REFERENCES pilots(id);

CREATE TABLE languages (
  id integer NOT NULL,
  language text NOT NULL
);

ALTER TABLE languages ADD CONSTRAINT language_pkey PRIMARY KEY (id);

-- Join table
CREATE TABLE pilot_languages (
  pilot_id integer NOT NULL,
  language_id integer NOT NULL
);

-- Composite primary key
ALTER TABLE pilot_languages ADD CONSTRAINT pilot_language_pkey PRIMARY KEY (pilot_id, language_id);
ALTER TABLE pilot_languages ADD CONSTRAINT pilot_language_pilots_fkey FOREIGN KEY (pilot_id) REFERENCES pilots(id);
ALTER TABLE pilot_languages ADD CONSTRAINT pilot_language_languages_fkey FOREIGN KEY (language_id) REFERENCES languages(id);
```
The generated model structs for this schema look like the following. Note that we've included the relationship structs as well so you can see how it all pieces together:
```go
type Pilot struct {
	ID   int    `boil:"id" json:"id" toml:"id" yaml:"id"`
	Name string `boil:"name" json:"name" toml:"name" yaml:"name"`

	R *pilotR `boil:"-" json:"-" toml:"-" yaml:"-"`
	L pilotR  `boil:"-" json:"-" toml:"-" yaml:"-"`
}

type pilotR struct {
	Languages LanguageSlice
	Jets      JetSlice
}

type Jet struct {
	ID      int    `boil:"id" json:"id" toml:"id" yaml:"id"`
	PilotID int    `boil:"pilot_id" json:"pilot_id" toml:"pilot_id" yaml:"pilot_id"`
	Age     int    `boil:"age" json:"age" toml:"age" yaml:"age"`
	Name    string `boil:"name" json:"name" toml:"name" yaml:"name"`
	Color   string `boil:"color" json:"color" toml:"color" yaml:"color"`

	R *jetR `boil:"-" json:"-" toml:"-" yaml:"-"`
	L jetR  `boil:"-" json:"-" toml:"-" yaml:"-"`
}

type jetR struct {
	Pilot *Pilot
}

type Language struct {
	ID       int    `boil:"id" json:"id" toml:"id" yaml:"id"`
	Language string `boil:"language" json:"language" toml:"language" yaml:"language"`

	R *languageR `boil:"-" json:"-" toml:"-" yaml:"-"`
	L languageR  `boil:"-" json:"-" toml:"-" yaml:"-"`
}

type languageR struct {
	Pilots PilotSlice
}
```
```go
// Open handle to database like normal
db, err := sql.Open("postgres", "dbname=fun user=abc")
if err != nil {
	return err
}
```
If your generated SQLBoiler models package can find columns with the names `created_at` or `updated_at` it will automatically set them to `time.Now()` in your database, and update your object appropriately. To disable this feature use `--no-auto-timestamps`.
Note: You can set the timezone for this feature by calling `boil.SetLocation()`
Set the `auto-columns` map in your configuration file:

```toml
[auto-columns]
created = "createdAt"
updated = "updatedAt"
```
If for a given query you do not want timestamp columns to be re-computed prior to an insert or update then you can use `boil.SkipTimestamps` on the context you pass in to the query to prevent them from being updated.
Keep in mind this has no effect on whether or not the column is included in the insert/update, it simply stops them from being set to `time.Now()` in the struct before being sent to the database (if they were going to be sent).
- Insert
  - Timestamps for both `updated_at` and `created_at` that are zero values will be set automatically.
  - To set the timestamp to null, set `Valid` to false and `Time` to a non-zero value. This is somewhat of a workaround until we can devise a better solution in a later version.
- Update
  - The `updated_at` column will always be set to `time.Now()`. If you need to override this value you will need to fall back to another method in the meantime: `queries.Raw()`, overriding `updated_at` in all of your objects using a hook, or creating your own wrapper.
- Upsert
  - `created_at` will be set automatically if it is a zero value, otherwise your supplied value will be used. To set `created_at` to `null`, set `Valid` to false and `Time` to a non-zero value.
  - The `updated_at` column will always be set to `time.Now()`.
Soft deletes are a way of deleting records in a database for the average query without actually removing the data. This type of thing is important in certain scenarios where data retention is important. It is typically done by adding a `deleted` bool or a `deleted_at` timestamp to each table in the database that can be soft deleted, and subsequent queries on that table should always make sure that `deleted != true` or `deleted_at is null` to prevent showing "deleted" data.
SQLBoiler uses the `deleted_at` variant to provide this functionality. If your table has a nullable timestamp field named `deleted_at` it will be a candidate for soft-deletion.
NOTE: As of writing, soft-delete is opt-in via `--add-soft-deletes` and is liable to change in future versions.
NOTE: There is a query mod to bypass soft delete for a specific query by using `qm.WithDeleted`; note that there is no way to do this for Exists/Find helpers yet.
NOTE: The `Delete` helpers will **not** set `updated_at` currently. The current philosophy is that deleting the object is simply metadata, and since it returns in no queries (other than raw ones) the updated_at will no longer be relevant. This could change in future versions if people disagree with this, but it is the current behavior.
We generate "Starter" methods for you. These methods are named as the plural versions of your model, for example: `models.Jets()`. Starter methods are used to build queries using our Query Mod System. They take a slice of Query Mods as parameters, and end with a call to a Finisher method.
Here are a few examples:
```go
// SELECT COUNT(*) FROM pilots;
count, err := models.Pilots().Count(ctx, db)

// SELECT * FROM "pilots" LIMIT 5;
pilots, err := models.Pilots(qm.Limit(5)).All(ctx, db)

// DELETE FROM "pilots" WHERE "id"=$1;
err := models.Pilots(qm.Where("id=?", 1)).DeleteAll(ctx, db)
// type safe version of above
err := models.Pilots(models.PilotWhere.ID.EQ(1)).DeleteAll(ctx, db)
```
In the event that you would like to build a query and specify the table yourself, you can do so using `models.NewQuery()`:
```go
// Select all rows from the pilots table by using the From query mod.
err := models.NewQuery(qm.From("pilots")).All(ctx, db)
```
As you can see, Query Mods allow you to modify your queries, and Finishers allow you to execute the final action.

We also generate query building helper methods for your relationships as well. Take a look at our Relationships Query Building section for some additional query building information.

The query mod system allows you to modify queries created with Starter methods when performing query building. See examples below.
NOTE: SQLBoiler generates type-safe identifiers based on your database tables, columns and relationships. Using these is a bit more verbose, but is especially safe, since when the names change in the database the generated code will be different, causing compilation failures instead of runtime errors. It is highly recommended you use these instead of regular strings. See Constants for more details.
NOTE: You will notice that there is printf used below mixed with SQL statements. This is normally NOT OK if the user is able to supply any of the sql string, but here we always use a `?` placeholder and pass arguments so that the only thing that's being printf'd are constants, which makes it safe. But be careful!
```go
// Dot import so we can access query mods directly instead of prefixing with "qm."
import . "github.com/volatiletech/sqlboiler/v4/queries/qm"

// Use a raw query against a generated struct (Pilot in this example)
// If this query mod exists in your call, it will override the others.
// "?" placeholders are not supported here, use "$1, $2" etc.
SQL("select * from pilots where id=$1", 10)
models.Pilots(SQL("select * from pilots where id=$1", 10)).All()

Select("id", "name") // Select specific columns.
Select(models.PilotColumns.ID, models.PilotColumns.Name)

From("pilots as p") // Specify the FROM table manually, can be useful for doing complex queries.
From(models.TableNames.Pilots + " as p")

// WHERE clause building
Where("name=?", "John")
models.PilotWhere.Name.EQ("John")
And("age=?", 24)    // No equivalent type safe query yet
Or("height=?", 183) // No equivalent type safe query yet

Where("(name=? and age=?) or (age=?)", "John", 5, 6)
// Expr allows manual grouping of statements
Where(
	Expr(
		models.PilotWhere.Name.EQ("John"),
		Or2(models.PilotWhere.Age.EQ(5)),
	),
	Or2(models.PilotWhere.Age.EQ(6)),
)

// WHERE IN clause building
WhereIn("(name, age) in ?", "John", 24, "Tim", 33) // Generates: WHERE ("name","age") IN (($1,$2),($3,$4))
WhereIn(fmt.Sprintf("(%s, %s) in ?", models.PilotColumns.Name, models.PilotColumns.Age), "John", 24, "Tim", 33)
AndIn("weight in ?", 84)
AndIn(models.PilotColumns.Weight+" in ?", 84)
OrIn("height in ?", 183, 177, 204)
OrIn(models.PilotColumns.Height+" in ?", 183, 177, 204)

InnerJoin("pilots p on jets.pilot_id=?", 10)
InnerJoin(models.TableNames.Pilots+" p on "+models.TableNames.Jets+"."+models.JetColumns.PilotID+"=?", 10)

GroupBy("name")
GroupBy("name like ? DESC, name", "John")
GroupBy(models.PilotColumns.Name)

OrderBy("age, height")
OrderBy(models.PilotColumns.Age, models.PilotColumns.Height)

Having("count(jets) > 2")
Having(fmt.Sprintf("count(%s) > 2", models.TableNames.Jets))

Limit(15)
Offset(5)

// Explicit locking
For("update nowait")

// Common Table Expressions
With("cte_0 AS (SELECT * FROM table_0 WHERE thing=$1 AND stuff=$2)")

// Eager Loading -- Load takes the relationship name, ie the struct field name of the
// Relationship struct field you want to load. Optionally also takes query mods to filter on that query.
Load("Languages", Where(...)) // If it's a ToOne relationship it's in singular form, ToMany is plural.
Load(models.PilotRels.Languages, Where(...))
```
Note: We don't force you to break queries apart like this if you don't want to; the following is also valid and supported by query mods that take a clause:
```go
Where("(name=? OR age=?) AND height=?", "John", 24, 183)
```
Functions can have variations generated for them by using the flags `--add-global-variants` and `--add-panic-variants`. Once you've used these flags or set the appropriate values in your configuration file, extra method overloads will be generated. We've used the `Delete` method to demonstrate:
```go
// Set the global db handle for G method variants.
boil.SetDB(db)

pilot, _ := models.FindPilot(ctx, db, 1)

err := pilot.Delete(ctx, db) // Regular variant, takes a db handle (boil.Executor interface).
pilot.DeleteP(ctx, db)       // Panic variant, takes a db handle and panics on error.
err := pilot.DeleteG(ctx)    // Global variant, uses the globally set db handle (boil.SetDB()).
pilot.DeleteGP(ctx)          // Global&Panic variant, combines the global db handle and panic on error.

db.Begin()             // Normal sql package way of creating a transaction
boil.BeginTx(ctx, nil) // Uses the global database handle set by boil.SetDB() (doesn't require flag)
```
Note that it's slightly different for query building.
Here is a list of all of the finishers that can be used in combination with Query Building.
Finishers all have `P` (panic) method variations. To specify your db handle use the `G` or regular variation of the Starter method.
```go
// These are called like the following:
models.Pilots().All(ctx, db)

One()   // Retrieve one row as object (same as LIMIT(1))
All()   // Retrieve all rows as objects (same as SELECT * FROM)
Count() // Number of rows (same as COUNT(*))
UpdateAll(models.M{"name": "John", "age": 23}) // Update all rows matching the built query.
DeleteAll()  // Delete all rows matching the built query.
Exists()     // Returns a bool indicating whether the row(s) for the built query exists.
Bind(&myObj) // Bind the results of a query to your own struct object.
Exec()       // Execute an SQL query that does not require any rows returned.
QueryRow()   // Execute an SQL query expected to return only a single row.
Query()      // Execute an SQL query expected to return multiple rows.
```
We provide `queries.Raw()` for executing raw queries. Generally you will want to use `Bind()` with this, like the following:
```go
err := queries.Raw("select * from pilots where id=$1", 5).Bind(ctx, db, &obj)
```
You can use your own structs or a generated struct as a parameter to Bind. Bind supports both a single object for single row queries and a slice of objects for multiple row queries.
`queries.Raw()` also has a method that can execute a query without binding to an object, if required.
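For example, a statement that returns no rows can be executed directly (a sketch, assuming the context-aware execute method on the query value returned by `queries.Raw`):

```go
// Run a raw statement without binding the result to a struct.
_, err := queries.Raw("update pilots set name=$1 where id=$2", "Tim", 5).ExecContext(ctx, db)
```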
You also have `models.NewQuery()` at your disposal if you would still like to use Query Building in combination with your own custom, non-generated model.
For a comprehensive ruleset for `Bind()` you can refer to our pkg.go.dev.
The `Bind()` Finisher allows the results of a query built with the Raw SQL method or the Query Builder methods to be bound to your generated struct objects, or your own custom struct objects.
This can be useful for complex queries, queries that only require a small subset of data and have no need for the rest of the object variables, or custom join struct objects like the following:
```go
// Custom struct using two generated structs
type PilotAndJet struct {
	models.Pilot `boil:",bind"`
	models.Jet   `boil:",bind"`
}

var paj PilotAndJet

// Use a raw query
err := queries.Raw(`
	select pilots.id as "pilots.id", pilots.name as "pilots.name",
	jets.id as "jets.id", jets.pilot_id as "jets.pilot_id",
	jets.age as "jets.age", jets.name as "jets.name", jets.color as "jets.color"
	from pilots inner join jets on jets.pilot_id=?`, 23,
).Bind(ctx, db, &paj)

// Use query building
err := models.NewQuery(
	Select("pilots.id", "pilots.name", "jets.id", "jets.pilot_id", "jets.age", "jets.name", "jets.color"),
	From("pilots"),
	InnerJoin("jets on jets.pilot_id = pilots.id"),
).Bind(ctx, db, &paj)
```
```go
// Custom struct for selecting a subset of data
type JetInfo struct {
	AgeSum int `boil:"age_sum"`
	Count  int `boil:"juicy_count"`
}

var info JetInfo

// Use query building
err := models.NewQuery(
	Select("sum(age) as age_sum", "count(*) as juicy_count"),
	From("jets"),
).Bind(ctx, db, &info)

// Use a raw query
err := queries.Raw(`select sum(age) as "age_sum", count(*) as "juicy_count" from jets`).Bind(ctx, db, &info)
```
We support the following struct tag modes for `Bind()` control:
```go
type CoolObject struct {
	// Don't specify a name, Bind will TitleCase the column
	// name, and try to match against this.
	Frog int

	// Specify an alternative name for the column, it will
	// be titlecased for matching, can be whatever you like.
	Cat int `boil:"kitten"`

	// Ignore this struct field, do not attempt to bind it.
	Pig int `boil:"-"`

	// Instead of binding to this as a regular struct field
	// (like other sql-able structs eg. time.Time)
	// Recursively search inside the Dog struct for field names from the query.
	Dog `boil:",bind"`

	// Same as the above, except specify a different table name
	Mouse `boil:"rodent,bind"`

	// Ignore this struct field, do not attempt to bind it.
	Bird `boil:"-"`
}
```
Helper methods will be generated for every to one and to many relationship structure you have defined in your database by using foreign keys.
We attach these helpers directly to your model struct, for example:
```go
jet, _ := models.FindJet(ctx, db, 1)

// "to one" relationship helper method.
// This will retrieve the pilot for the jet.
pilot, err := jet.Pilot().One(ctx, db)

// "to many" relationship helper method.
// This will retrieve all languages for the pilot.
languages, err := pilot.Languages().All(ctx, db)
```
If your relationship involves a join table SQLBoiler will figure it out for you transparently.
It is important to note that you should use Eager Loading if you plan on loading large collections of rows, to avoid N+1 performance problems.
For example, take the following:
```go
// Avoid this loop query pattern, it is slow.
jets, _ := models.Jets().All(ctx, db)
pilots := make([]models.Pilot, len(jets))
for i := 0; i < len(jets); i++ {
	pilots[i] = *jets[i].Pilot().OneP(ctx, db)
}

// Instead, use Eager Loading!
jets, _ := models.Jets(Load("Pilot")).All(ctx, db)
// Type safe relationship names exist too:
jets, _ := models.Jets(Load(models.JetRels.Pilot)).All(ctx, db)

// Then access the loaded structs using the special Relation field
for _, j := range jets {
	_ = j.R.Pilot
}
```
Eager loading can be combined with other query mods, and it can also eager load recursively.
```go
// Example of a nested load.
// Each jet will have its pilot loaded, and each pilot will have its languages loaded.
jets, _ := models.Jets(Load("Pilot.Languages")).All(ctx, db)
// Note that each level of a nested Load call will be loaded. No need to call Load() multiple times.

// Type safe queries exist for this too!
jets, _ := models.Jets(Load(Rels(models.JetRels.Pilot, models.PilotRels.Languages))).All(ctx, db)

// A larger example. In the below scenario, Pets will only be queried one time, despite
// showing up twice because they're the same query (the user's pets)
users, _ := models.Users(
	Load("Pets.Vets"),
	// the query mods passed in below only affect the query for Toys
	// to use query mods against Pets itself, you must declare it separately
	Load("Pets.Toys", Where("toys.deleted = ?", isDeleted)),
	Load("Property"),
	Where("age > ?", 23),
).All(ctx, db)
```
We provide the following methods for managing relationships on objects:

To One

- `SetX()`: Set the foreign key to point to something else: `jet.SetPilot(...)`
- `RemoveX()`: Null out the foreign key, effectively removing the relationship between these two objects: `jet.RemovePilot(...)`

To Many

- `AddX()`: Add more relationships to the existing set of related Xs: `pilot.AddLanguages(...)`
- `SetX()`: Remove all existing relationships, and replace them with the provided set: `pilot.SetLanguages(...)`
- `RemoveX()`: Remove all provided relationships: `pilot.RemoveLanguages(...)`
Important: Remember to use transactions around these set helpers for performance and data integrity. SQLBoiler does not do this automatically due to its transparent API, which allows you to batch any number of calls in a transaction without spawning subtransactions you don't know about or that are not supported by your database.
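The advice above can be sketched like this (assuming `pilot` and `languages` have already been loaded):

```go
tx, err := db.BeginTx(ctx, nil)
if err != nil {
	return err
}

// Both the foreign keys and the .R struct state are updated inside one transaction.
if err := pilot.SetLanguages(ctx, tx, false, languages...); err != nil {
	tx.Rollback()
	return err
}

return tx.Commit()
```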
To One code examples:
```go
jet, _ := models.FindJet(ctx, db, 1)
pilot, _ := models.FindPilot(ctx, db, 1)

// Set the pilot to an existing jet
err := jet.SetPilot(ctx, db, false, pilot)

pilot = &models.Pilot{
	Name: "Erlich",
}

// Insert the pilot into the database and assign it to a jet
err := jet.SetPilot(ctx, db, true, pilot)

// Remove a relationship. This method only exists for foreign keys that can be NULL.
err := jet.RemovePilot(ctx, db, pilot)
```
To Many code examples:
```go
pilot, _ := models.FindPilot(ctx, db, 1)
languages, _ := models.Languages().All(ctx, db)

// Set a group of language relationships
err := pilot.SetLanguages(ctx, db, false, languages...)

languages = []*models.Language{
	{Language: "Strayan"},
	{Language: "Yupik"},
	{Language: "Pawnee"},
}

// Insert a new group of languages and assign them to a pilot
err := pilot.SetLanguages(ctx, db, true, languages...)

// Add another language relationship to the existing set of relationships
err := pilot.AddLanguages(ctx, db, false, &someOtherLanguage)

anotherLanguage := models.Language{Language: "Archi"}

// Insert and then add another language relationship
err := pilot.AddLanguages(ctx, db, true, &anotherLanguage)

// Remove a group of relationships
err := pilot.RemoveLanguages(ctx, db, languages...)
```
Before and After hooks are available for most operations. If you don't need them you can shrink the size of the generated code by disabling them with the `--no-hooks` flag.
Every generated package that includes hooks has the following `HookPoints` defined:
```go
const (
	BeforeInsertHook HookPoint = iota + 1
	BeforeUpdateHook
	BeforeDeleteHook
	BeforeUpsertHook
	AfterInsertHook
	AfterSelectHook
	AfterUpdateHook
	AfterDeleteHook
	AfterUpsertHook
)
```
To register a hook for your model you will need to create the hook function, and attach it with the `AddModelHook` method. Here is an example of a before insert hook:
```go
// Define my hook function
func myHook(ctx context.Context, exec boil.ContextExecutor, p *Pilot) error {
	// Do stuff
	return nil
}

// Register my before insert hook for pilots
models.AddPilotHook(boil.BeforeInsertHook, myHook)
```
Your `ModelHook` will always be defined as `func(context.Context, boil.ContextExecutor, *Model) error` if context is not turned off.
You can skip hooks by using `boil.SkipHooks` on the context you pass in to a given query.
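For example (a minimal sketch):

```go
// Derive a context that suppresses hooks for queries run with it.
ctx = boil.SkipHooks(ctx)

// AfterSelect hooks registered for Pilot will not fire for this query.
pilots, err := models.Pilots().All(ctx, db)
```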
The `boil.Executor` and `boil.ContextExecutor` interfaces power all of SQLBoiler. This means anything that conforms to the three `Exec/Query/QueryRow` methods (and their context-aware variants) can be used to execute queries. `sql.DB` and `sql.Tx`, as well as other libraries (`sqlx`), conform to this interface, and therefore any of these things may be used as an executor for any query in the system. This makes using transactions very simple:
```go
tx, err := db.BeginTx(ctx, nil)
if err != nil {
	return err
}

users, _ := models.Pilots().All(ctx, tx)
users.DeleteAll(ctx, tx)

// Rollback or commit
tx.Commit()
tx.Rollback()
```
It's also worth noting that there's a way to take advantage of `boil.SetDB()` by using the `boil.BeginTx()` function. This opens a transaction using the globally stored database.
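A short sketch of that pattern:

```go
// Store the handle globally once, e.g. at startup.
boil.SetDB(db)

// Later: open a transaction on the globally stored database.
tx, err := boil.BeginTx(ctx, nil)
if err != nil {
	return err
}

if _, err := models.Pilots().DeleteAll(ctx, tx); err != nil {
	tx.Rollback()
	return err
}

return tx.Commit()
```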
Debug logging will print your generated SQL statement and the arguments it is using. Debug logging can be toggled on globally by setting the following global variable to `true`:
```go
boil.DebugMode = true

// Optionally set the writer as well. Defaults to os.Stdout
fh, _ := os.Create("debug.txt")
boil.DebugWriter = fh
```
Note: Debug output is messy at the moment. This is something we would like addressed.
Select is done through Query Building and Find. Here's a short example:
```go
// Select one pilot
pilot, err := models.Pilots(qm.Where("name=?", "Tim")).One(ctx, db)
// Type safe variant
pilot, err := models.Pilots(models.PilotWhere.Name.EQ("Tim")).One(ctx, db)

// Select specific columns of many jets
jets, err := models.Jets(qm.Select("age", "name")).All(ctx, db)
// Type safe variant
jets, err := models.Jets(qm.Select(models.JetColumns.Age, models.JetColumns.Name)).All(ctx, db)
```
Find is used to find a single row by primary key:
```go
// Retrieve pilot with all columns filled
pilot, err := models.FindPilot(ctx, db, 1)

// Retrieve a subset of column values
jet, err := models.FindJet(ctx, db, 1, "name", "color")
```
The main thing to be aware of with `Insert` is how the `columns` argument operates. You can supply one of the following column lists: `boil.Infer`, `boil.Whitelist`, `boil.Blacklist`, or `boil.Greylist`.
These lists control what fields are inserted into the database, and what values are returned to your struct from the database (default, auto incrementing, and trigger-based columns are candidates for this). Your struct will have those values after the insert is complete.
When you use inference, `sqlboiler` looks at your Go struct field values, and if the field value is the Go zero value and that field has a default value in the database it will not insert that field; instead it will get the value from the database. Keep in mind `sqlboiler` cannot read or understand your default values set in the database, so the Go zero value is what's important here (this can be especially troubling for default true bool fields). Use a whitelist or greylist in cases where you want to insert a Go zero value.
| Column List | Behavior |
| --- | --- |
| Infer | Infer the column list using "smart" rules |
| Whitelist | Insert only the columns specified in this list |
| Blacklist | Infer the column list, but ensure these columns are not inserted |
| Greylist | Infer the column list, but ensure these columns are inserted |
NOTE: CreatedAt/UpdatedAt are not included in `Whitelist` automatically.
See the documentation for `boil.Columns.InsertColumnSet` for more details.
```go
var p1 models.Pilot
p1.Name = "Larry"

// Insert the first pilot with name "Larry"
// p1 now has an ID field set to 1
err := p1.Insert(ctx, db, boil.Infer())

var p2 models.Pilot
p2.Name = "Boris"

// Insert the second pilot with name "Boris"
// p2 now has an ID field set to 2
err := p2.Insert(ctx, db, boil.Infer())

var p3 models.Pilot
p3.ID = 25
p3.Name = "Rupert"

// Insert the third pilot with a specific ID
// The id for this row was inserted as 25 in the database.
err := p3.Insert(ctx, db, boil.Infer())

var p4 models.Pilot
p4.ID = 0
p4.Name = "Nigel"

// Insert the fourth pilot with a zero value ID
// The id for this row was inserted as 0 in the database.
// Note: We had to use the whitelist for this, otherwise
// SQLBoiler would presume you wanted to auto-increment
err := p4.Insert(ctx, db, boil.Whitelist("id", "name"))
```
`Update` can be performed on a single object, a slice of objects, or as a Finisher for a collection of rows.

`Update` on a single object optionally takes a `whitelist`. The purpose of the whitelist is to specify which columns in your object should be updated in the database.

Like `Insert`, this method also takes a `Columns` type, but the behavior is slightly different. Although the descriptions below look similar, the full documentation reveals the differences. Note that all inference is based on the Go type's zero value and not the database default value; read the `Insert` documentation above for more details.
| Column List | Behavior |
| --- | --- |
| Infer | Infer the column list using "smart" rules |
| Whitelist | Update only the columns specified in this list |
| Blacklist | Infer the column list for updating, but ensure these columns are not updated |
| Greylist | Infer the column list, but ensure these columns are updated |
NOTE: CreatedAt/UpdatedAt are not included in `Whitelist` automatically.
See the documentation for `boil.Columns.UpdateColumnSet` for more details.
```go
// Find a pilot and update his name
pilot, _ := models.FindPilot(ctx, db, 1)
pilot.Name = "Neo"
rowsAff, err := pilot.Update(ctx, db, boil.Infer())

// Update a slice of pilots to have the name "Smith"
pilots, _ := models.Pilots().All(ctx, db)
rowsAff, err := pilots.UpdateAll(ctx, db, models.M{"name": "Smith"})

// Update all pilots in the database to have the name "Smith"
rowsAff, err := models.Pilots().UpdateAll(ctx, db, models.M{"name": "Smith"})
```
Delete a single object, a slice of objects, or specific objects through Query Building.
```go
pilot, _ := models.FindPilot(ctx, db, 1)

// Delete the pilot from the database
rowsAff, err := pilot.Delete(ctx, db)

// Delete all pilots from the database
rowsAff, err := models.Pilots().DeleteAll(ctx, db)

// Delete a slice of pilots from the database
pilots, _ := models.Pilots().All(ctx, db)
rowsAff, err := pilots.DeleteAll(ctx, db)
```
Upsert allows you to perform an insert that optionally performs an update when a conflict is found against existing row values.
The `updateColumns` and `insertColumns` arguments operate in the same fashion as they do for Update and Insert.
If an insert is performed, your object will be updated with any missing default values from the database, such as auto-incrementing column values.
```go
var p1 models.Pilot
p1.ID = 5
p1.Name = "Gaben"

// INSERT INTO pilots ("id", "name") VALUES($1, $2)
// ON CONFLICT DO NOTHING
err := p1.Upsert(ctx, db, false, nil, boil.Infer())

// INSERT INTO pilots ("id", "name") VALUES ($1, $2)
// ON CONFLICT ("id") DO UPDATE SET "name" = EXCLUDED."name"
err := p1.Upsert(ctx, db, true, []string{"id"}, boil.Whitelist("name"), boil.Infer())

// Set p1.ID to a zero value. We will have to use the whitelist now.
p1.ID = 0
p1.Name = "Hogan"

// INSERT INTO pilots ("id", "name") VALUES ($1, $2)
// ON CONFLICT ("id") DO UPDATE SET "name" = EXCLUDED."name"
err := p1.Upsert(ctx, db, true, []string{"id"}, boil.Whitelist("name"), boil.Whitelist("id", "name"))

// Custom conflict_target expression:
// INSERT INTO pilots ("id", "name") VALUES (9, 'Antwerp Design')
// ON CONFLICT ON CONSTRAINT pilots_pkey DO NOTHING;
conflictTarget := models.UpsertConflictTarget
err := p1.Upsert(ctx, db, false, nil, boil.Whitelist("id", "name"), boil.None(), conflictTarget("ON CONSTRAINT pilots_pkey"))

// Custom UPDATE SET expression:
// INSERT INTO pilots ("id", "name") VALUES (9, 'Antwerp Design')
// ON CONFLICT ("id") DO UPDATE SET (id, name) = (sub-SELECT)
updateSet := models.UpsertUpdateSet
err := p1.Upsert(ctx, db, true, []string{"id"}, boil.Whitelist("id", "name"), boil.None(), updateSet("(id, name) = (sub-SELECT)"))
```
- Postgres
  - The `updateOnConflict` argument allows you to specify whether you would like Postgres to perform a `DO NOTHING` on conflict, as opposed to a `DO UPDATE`. For MySQL and MSSQL, this param will not be generated.
  - The `conflictColumns` argument allows you to specify the `ON CONFLICT` columns for Postgres. For MySQL and MSSQL, this param will not be generated.
- MySQL and MSSQL
  - Passing `boil.None()` for `updateColumns` allows you to perform a `DO NOTHING` on conflict, similar to Postgres.
Note: Passing a different set of column values to the update component is not currently supported.
Note: Upsert is now not guaranteed to be provided by SQLBoiler, and it's now up to each driver individually to support it, since it's a bit outside of the reach of the sql standard.
In the event that your objects get out of sync with the database for whatever reason, you can use `Reload` and `ReloadAll` to reload the objects using the primary key values attached to the objects.
```go
pilot, _ := models.FindPilot(ctx, db, 1)

// > Object becomes out of sync for some reason, perhaps async processing

// Refresh the object with the latest data from the db
err := pilot.Reload(ctx, db)

// Reload all objects in a slice
pilots, _ := models.Pilots().All(ctx, db)
err := pilots.ReloadAll(ctx, db)
```
Note: `Reload` and `ReloadAll` are not recursive; if you need your relationships reloaded you will need to call the `Reload` methods on those yourself.
```go
jet, err := models.FindJet(ctx, db, 1)

// Check if the pilot assigned to this jet exists.
exists, err := jet.Pilot().Exists(ctx, db)

// Check if the pilot with ID 5 exists
exists, err := models.Pilots(Where("id=?", 5)).Exists(ctx, db)
```
If your MySQL or Postgres tables use enums, we will generate constants that hold their values that you can use in your queries. For example:
```sql
CREATE TYPE workday AS ENUM('monday', 'tuesday', 'wednesday', 'thursday', 'friday');

CREATE TABLE event_one (
  id   serial PRIMARY KEY NOT NULL,
  name VARCHAR(255),
  day  workday NOT NULL
);
```
An enum type defined like the above, being used by a table, will generate the following enums:
```go
const (
	WorkdayMonday    = "monday"
	WorkdayTuesday   = "tuesday"
	WorkdayWednesday = "wednesday"
	WorkdayThursday  = "thursday"
	WorkdayFriday    = "friday"
)
```
For Postgres we use `enum type name + title cased value` to generate the const variable name. For MySQL we use `table name + column name + title cased value` to generate the const variable name.
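Using the constants in a query might look like this (a sketch; `EventOnes`/`EventOneWhere` are the names we would expect SQLBoiler to generate for the `event_one` table above, but verify against your generated package):

```go
// Find all events that take place on a Friday, using the generated
// constant instead of the raw string "friday".
events, err := models.EventOnes(
	models.EventOneWhere.Day.EQ(models.WorkdayFriday),
).All(ctx, db)
```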
Note: If your enum holds a value we cannot parse correctly (due to non-alphabet characters, for example), it may not be generated. In this event, you will receive errors in your generated tests because the value randomizer in the test suite does not know how to generate valid enum values. You will still be able to use your generated library, and it will still work as expected, but the only way to get the tests to pass in this event is to either use a parsable enum value or use a regular column instead of an enum.
The models package will also contain some structs that contain all table, column, and relationship names harvested from the database at generation time. Type safe where query mods are also generated.
There are type safe identifiers at:
- models.TableNames.TableName
- models.ModelColumns.ColumnName
- models.ModelWhere.ColumnName.Operator
- models.ModelRels.ForeignTableName
For table names they're generated under `models.TableNames`:
```go
// Generated code from models package
var TableNames = struct {
	Messages  string
	Purchases string
}{
	Messages:  "messages",
	Purchases: "purchases",
}

// Usage example:
fmt.Println(models.TableNames.Messages)
```
For column names they're generated under `models.{Model}Columns`:
```go
// Generated code from models package
var MessageColumns = struct {
	ID         string
	PurchaseID string
}{
	ID:         "id",
	PurchaseID: "purchase_id",
}

// Usage example:
fmt.Println(models.MessageColumns.ID)
```
For where clauses they're generated under `models.{Model}Where.{Column}.{Operator}`:
```go
var MessageWhere = struct {
	ID         whereHelperint
	PurchaseID whereHelperstring
}{
	ID:         whereHelperint{field: `id`},
	PurchaseID: whereHelperstring{field: `purchase_id`},
}

// Usage example:
models.Messages(models.MessageWhere.PurchaseID.EQ("hello"))
```
For eager loading relationships they're generated under `models.{Model}Rels`:
```go
// Generated code from models package
var MessageRels = struct {
	Purchase string
}{
	Purchase: "Purchase",
}

// Usage example:
fmt.Println(models.MessageRels.Purchase)
```
NOTE: You can also assign the ModelWhere or ColumnNames to a variable, and although you may pay a small performance penalty for it, sometimes the readability increase is worth it:
```go
cols := &models.UserColumns
where := &models.UserWhere
u, err := models.Users(where.Name.EQ("hello"), qm.Or(cols.Age+" = ?", 5))
```
No, because Go's toolchain - unlike traditional toolchains - makes the compiler do most of the work instead of the linker. This means that when the first `go install` is done it can take a little bit of time because there is a lot of code that is generated. However, because of this work balance between the compiler and linker in Go, linking to that code in subsequent compiles is extremely fast.
The generated models might import a couple of packages that are not on your system already, so `cd` into your generated models directory and type `go get -u -t` to fetch them. You will only need to run this command once, not once per generation.
If your database uses multiple schemas you should generate a new package for each of your schemas. Note that this only applies to databases that use real, SQL standard schemas (like PostgreSQL), not fake schemas (like MySQL).
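One way to organize this (a sketch only; the file names and the `audit` schema are assumptions, while the config keys follow sqlboiler's psql driver configuration) is to keep one config file per schema and run the generator once for each, giving every schema its own output directory and package name:

```toml
# sqlboiler_audit.toml (hypothetical file name, one per schema)
output  = "models/audit"
pkgname = "audit"

[psql]
  dbname  = "mydb"
  host    = "localhost"
  port    = 5432
  user    = "postgres"
  pass    = "secret"
  schema  = "audit"
  sslmode = "disable"
```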
Only "escaped format" is supported for types.BytesArray. This means that your byte slice needs to have a format of `\\x00` (4 bytes per byte) as opposed to `\x00` (1 byte per byte). This is to maintain compatibility with all Postgres drivers. Example:
```go
x := types.BytesArray{0: []byte("\\x68\\x69")}
```
Please note that multi-dimensional Postgres ARRAY types are not supported at this time.
You must use a DSN flag in MySQL connections, see: Requirements
The homepage for the SQLBoiler Golang ORM generator is located at: https://github.com/volatiletech/sqlboiler
The tests generated for your models package with sqlboiler are fairly error-prone. They are usually broken by constraints in the database that sqlboiler can't hope to understand.
During regular run-time this isn't an issue because your code will throw errors and you will fix them. However, the auto-generated tests can only report those errors, and it seems like something is wrong when in reality the only issue is that the auto-generated tests can't understand that your `text` column is validated by a regex that says it must be composed solely of the 'b' character repeated 342 times.
These tests are broken especially by foreign key constraints because of the parallelism we use. There's also no understanding in the tests of dependencies based on these foreign keys. As such there is a process that removes the foreign keys from your schema when the tests are run; if this process messes up you will get errors relating to foreign key constraints.
If you'd like to run the benchmarks yourself check out our boilbench repo:
```shell
go test -bench . -benchmem
```
Test machine:
```text
OS:  Ubuntu 16.04
CPU: Intel(R) Core(TM) i7-4771 CPU @ 3.50GHz
Mem: 16GB
Go:  go version go1.8.1 linux/amd64
```
The graphs below have many runs like this as input to calculate errors. Here is a sample run:
```text
BenchmarkGORMSelectAll/gorm-8            20000     66500 ns/op   28998 B/op    455 allocs/op
BenchmarkGORPSelectAll/gorp-8            50000     31305 ns/op    9141 B/op    318 allocs/op
BenchmarkXORMSelectAll/xorm-8            20000     66074 ns/op   16317 B/op    417 allocs/op
BenchmarkKallaxSelectAll/kallax-8       100000     18278 ns/op    7428 B/op    145 allocs/op
BenchmarkBoilSelectAll/boil-8           100000     12759 ns/op    3145 B/op     67 allocs/op
BenchmarkGORMSelectSubset/gorm-8         20000     69469 ns/op   30008 B/op    462 allocs/op
BenchmarkGORPSelectSubset/gorp-8         50000     31102 ns/op    9141 B/op    318 allocs/op
BenchmarkXORMSelectSubset/xorm-8         20000     64151 ns/op   15933 B/op    414 allocs/op
BenchmarkKallaxSelectSubset/kallax-8    100000     16996 ns/op    6499 B/op    132 allocs/op
BenchmarkBoilSelectSubset/boil-8        100000     13579 ns/op    3281 B/op     71 allocs/op
BenchmarkGORMSelectComplex/gorm-8        20000     76284 ns/op   34566 B/op    521 allocs/op
BenchmarkGORPSelectComplex/gorp-8        50000     31886 ns/op    9501 B/op    328 allocs/op
BenchmarkXORMSelectComplex/xorm-8        20000     68430 ns/op   17694 B/op    464 allocs/op
BenchmarkKallaxSelectComplex/kallax-8    50000     26095 ns/op   10293 B/op    212 allocs/op
BenchmarkBoilSelectComplex/boil-8       100000     16403 ns/op    4205 B/op    102 allocs/op
BenchmarkGORMDelete/gorm-8              200000     10356 ns/op    5059 B/op     98 allocs/op
BenchmarkGORPDelete/gorp-8             1000000      1335 ns/op     352 B/op     13 allocs/op
BenchmarkXORMDelete/xorm-8              200000     10796 ns/op    4146 B/op    122 allocs/op
BenchmarkKallaxDelete/kallax-8          300000      5141 ns/op    2241 B/op     48 allocs/op
BenchmarkBoilDelete/boil-8             2000000       796 ns/op     168 B/op      8 allocs/op
BenchmarkGORMInsert/gorm-8              100000     15238 ns/op    8278 B/op    150 allocs/op
BenchmarkGORPInsert/gorp-8              300000      4648 ns/op    1616 B/op     38 allocs/op
BenchmarkXORMInsert/xorm-8              100000     12600 ns/op    6092 B/op    138 allocs/op
BenchmarkKallaxInsert/kallax-8          100000     15115 ns/op    6003 B/op    126 allocs/op
BenchmarkBoilInsert/boil-8             1000000      2249 ns/op     984 B/op     23 allocs/op
BenchmarkGORMUpdate/gorm-8              100000     18609 ns/op    9389 B/op    174 allocs/op
BenchmarkGORPUpdate/gorp-8              500000      3180 ns/op    1536 B/op     35 allocs/op
BenchmarkXORMUpdate/xorm-8              100000     13149 ns/op    5098 B/op    149 allocs/op
BenchmarkKallaxUpdate/kallax-8          100000     22880 ns/op   11366 B/op    219 allocs/op
BenchmarkBoilUpdate/boil-8             1000000      1810 ns/op     936 B/op     18 allocs/op
BenchmarkGORMRawBind/gorm-8              20000     65821 ns/op   30502 B/op    444 allocs/op
BenchmarkGORPRawBind/gorp-8              50000     31300 ns/op    9141 B/op    318 allocs/op
BenchmarkXORMRawBind/xorm-8              20000     62024 ns/op   15588 B/op    403 allocs/op
BenchmarkKallaxRawBind/kallax-8         200000      7843 ns/op    4380 B/op     46 allocs/op
BenchmarkSQLXRawBind/sqlx-8             100000     13056 ns/op    4572 B/op     55 allocs/op
BenchmarkBoilRawBind/boil-8             200000     11519 ns/op    4638 B/op     55 allocs/op
```
Below are extensions for SQLBoiler developed by the community. Use them at your own risk.
- sqlboiler-extensions: Generates additional methods for models, particularly for bulk operations.
- boilingseed: Generates helpers to seed the database with data.
- boilingfactory: Generates helpers to create and insert test models on the fly.