# Storm

Storm is a simple and powerful toolkit for BoltDB. Basically, Storm provides indexes, a wide range of methods to store and fetch data, an advanced query system, and much more.
In addition to the examples below, see also the examples in the GoDoc.

For extended queries and support for Badger, see also Genji.
- Getting Started
- Import Storm
- Open a database
- Simple CRUD system
- Nodes and nested buckets
- Simple Key/Value store
- BoltDB
- License
- Credits
## Getting Started

```bash
GO111MODULE=on go get -u github.com/asdine/storm/v3
```
import"github.com/asdine/storm/v3"
## Open a database

Quick way of opening a database:
```go
db, err := storm.Open("my.db")

defer db.Close()
```
`Open` can receive multiple options to customize the way it behaves. See Options below.
## Simple CRUD system

```go
type User struct {
  ID    int    // primary key
  Group string `storm:"index"`  // this field will be indexed
  Email string `storm:"unique"` // this field will be indexed with a unique constraint
  Name  string // this field will not be indexed
  Age   int    `storm:"index"`
}
```
The primary key can be of any type as long as it is not a zero value. Storm will search for the tag `id`; if not present, Storm will search for a field named `ID`.
```go
type User struct {
  ThePrimaryKey string `storm:"id"`     // primary key
  Group         string `storm:"index"`  // this field will be indexed
  Email         string `storm:"unique"` // this field will be indexed with a unique constraint
  Name          string // this field will not be indexed
}
```
Storm handles tags in nested structures with the `inline` tag:
```go
type Base struct {
  Ident bson.ObjectId `storm:"id"`
}

type User struct {
  Base      `storm:"inline"`
  Group     string `storm:"index"`
  Email     string `storm:"unique"`
  Name      string
  CreatedAt time.Time `storm:"index"`
}
```
```go
user := User{
  ID:        10,
  Group:     "staff",
  Email:     "john@provider.com",
  Name:      "John",
  Age:       21,
  CreatedAt: time.Now(),
}

err := db.Save(&user)
// err == nil

user.ID++
err = db.Save(&user)
// err == storm.ErrAlreadyExists, the unique Email is already taken by the first record
```
That's it. `Save` creates or updates all the required indexes and buckets, checks the unique constraints and saves the object to the store.
Storm can auto-increment integer values so you don't have to worry about that when saving your objects. The new value is automatically set on your field.
```go
type Product struct {
  Pk                  int    `storm:"id,increment"` // primary key with auto increment
  Name                string
  IntegerField        uint64 `storm:"increment"`
  IndexedIntegerField uint32 `storm:"index,increment"`
  UniqueIntegerField  int16  `storm:"unique,increment=100"` // the starting value can be set
}

p := Product{Name: "Vacuum Cleaner"}

fmt.Println(p.Pk)
fmt.Println(p.IntegerField)
fmt.Println(p.IndexedIntegerField)
fmt.Println(p.UniqueIntegerField)
// 0
// 0
// 0
// 0

_ = db.Save(&p)

fmt.Println(p.Pk)
fmt.Println(p.IntegerField)
fmt.Println(p.IndexedIntegerField)
fmt.Println(p.UniqueIntegerField)
// 1
// 1
// 1
// 100
```
Any object can be fetched, whether it is indexed or not. Storm uses indexes when available, otherwise it falls back on the query system.
```go
// Fetch one object
var user User
err := db.One("Email", "john@provider.com", &user)
// err == nil

err = db.One("Name", "John", &user)
// err == nil

err = db.One("Name", "Jack", &user)
// err == storm.ErrNotFound
```
```go
// Fetch multiple objects
var users []User
err := db.Find("Group", "staff", &users)
```
```go
// Fetch all objects
var users []User
err := db.All(&users)
```
```go
// Fetch all objects, sorted by index
var users []User
err := db.AllByIndex("CreatedAt", &users)
```
```go
// Fetch a range of objects
var users []User
err := db.Range("Age", 10, 21, &users)
```
```go
// Fetch objects by prefix
var users []User
err := db.Prefix("Name", "Jo", &users)
```
varusers []Usererr:=db.Find("Group","staff",&users,storm.Skip(10))err=db.Find("Group","staff",&users,storm.Limit(10))err=db.Find("Group","staff",&users,storm.Reverse())err=db.Find("Group","staff",&users,storm.Limit(10),storm.Skip(10),storm.Reverse())err=db.All(&users,storm.Limit(10),storm.Skip(10),storm.Reverse())err=db.AllByIndex("CreatedAt",&users,storm.Limit(10),storm.Skip(10),storm.Reverse())err=db.Range("Age",10,21,&users,storm.Limit(10),storm.Skip(10),storm.Reverse())
```go
// Delete an object
err := db.DeleteStruct(&user)
```
```go
// Update multiple fields
// Only works for non zero-value fields (e.g. Name can not be "", Age can not be 0)
err := db.Update(&User{ID: 10, Name: "Jack", Age: 45})

// Update a single field
// Also works for zero-value fields (0, false, "", ...)
err = db.UpdateField(&User{ID: 10}, "Age", 0)
```
```go
// Initialize buckets and indexes before saving an object
err := db.Init(&User{})
```
Useful when starting your application
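For example, here is a minimal sketch (the `setup` helper and the list of models are illustrative, not part of Storm) that prepares every bucket when the application boots:

```go
// setup initializes buckets and indexes for all known models at startup,
// so queries can run before anything has been saved.
func setup(db *storm.DB) error {
  models := []interface{}{&User{}, &Product{}}
  for _, m := range models {
    if err := db.Init(m); err != nil {
      return err
    }
  }
  return nil
}
```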
You can drop a whole bucket, either using the struct:
err:=db.Drop(&User)
or using the bucket name:
err:=db.Drop("User")
```go
// Re-index a bucket
err := db.ReIndex(&User{})
```
Useful when the structure has changed
For more complex queries, you can use the `Select` method. `Select` takes any number of `Matcher` from the `q` package.
Here are some common Matchers:
```go
// Equality
q.Eq("Name", "John")

// Strictly greater than
q.Gt("Age", 7)

// Lesser than or equal to
q.Lte("Age", 77)

// Regex with name that starts with the letter D
q.Re("Name", "^D")

// In the given slice of values
q.In("Group", []string{"Staff", "Admin"})

// Comparing fields
q.EqF("FieldName", "SecondFieldName")
q.LtF("FieldName", "SecondFieldName")
q.GtF("FieldName", "SecondFieldName")
q.LteF("FieldName", "SecondFieldName")
q.GteF("FieldName", "SecondFieldName")
```
Matchers can also be combined with `And`, `Or` and `Not`:
```go
// Match if all match
q.And(
  q.Gt("Age", 7),
  q.Re("Name", "^D"),
)

// Match if one matches
q.Or(
  q.Re("Name", "^A"),
  q.Not(
    q.Re("Name", "^B"),
  ),
  q.Re("Name", "^C"),
  q.In("Group", []string{"Staff", "Admin"}),
  q.And(
    q.StrictEq("Password", []byte(password)),
    q.Eq("Registered", true),
  ),
)
```
You can find the complete list in the documentation.
`Select` takes any number of matchers and wraps them into a `q.And()`, so it's not necessary to specify it. It returns a `Query` type.
```go
query := db.Select(q.Gte("Age", 7), q.Lte("Age", 77))
```
The `Query` type contains methods to filter and order the records.
```go
// Limit
query = query.Limit(10)

// Skip
query = query.Skip(20)

// Calls can also be chained
query = query.Limit(10).Skip(20).OrderBy("Age").Reverse()
```
It also contains methods to specify how to fetch them.
```go
var users []User
err = query.Find(&users)

var user User
err = query.First(&user)
```
Examples with `Select`:
```go
// Find all users with an ID between 10 and 100
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Find(&users)

// Nested matchers
err = db.Select(q.Or(
  q.Gt("ID", 50),
  q.Lt("Age", 21),
  q.And(
    q.Eq("Group", "admin"),
    q.Gte("Age", 21),
  ),
)).Find(&users)

query := db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name")

// Find multiple records
err = query.Find(&users)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").Find(&users)

// Find first record
err = query.First(&user)
// or
err = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).Limit(10).Skip(5).Reverse().OrderBy("Age", "Name").First(&user)

// Delete all matching records
err = query.Delete(new(User))

// Fetching records one by one (useful when the bucket contains a lot of records)
query = db.Select(q.Gte("ID", 10), q.Lte("ID", 100)).OrderBy("Age", "Name")

err = query.Each(new(User), func(record interface{}) error {
  u := record.(*User)
  // ...
  return nil
})
```
See the documentation for a complete list of methods.
## Transactions

```go
tx, err := db.Begin(true)
if err != nil {
  return err
}
defer tx.Rollback()

accountA.Amount -= 100
accountB.Amount += 100

err = tx.Save(accountA)
if err != nil {
  return err
}

err = tx.Save(accountB)
if err != nil {
  return err
}

return tx.Commit()
```
## Options

Storm options are functions that can be passed when constructing your Storm instance. You can pass any number of options.
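As an illustration (not from the original documentation), here is a hedged sketch combining several of the options described below; the `openStore` helper is hypothetical, and it assumes the bbolt package is imported under the name `bolt`, as in the examples that follow:

```go
// openStore opens Storm with a custom file mode, a one-second Bolt timeout,
// the GOB codec and batch mode enabled.
func openStore(path string) (*storm.DB, error) {
  return storm.Open(path,
    storm.BoltOptions(0600, &bolt.Options{Timeout: 1 * time.Second}),
    storm.Codec(gob.Codec),
    storm.Batch(),
  )
}
```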
By default, Storm opens a database with the mode `0600` and a timeout of one second. You can change this behavior by using `BoltOptions`:
db,err:=storm.Open("my.db",storm.BoltOptions(0600,&bolt.Options{Timeout:1*time.Second}))
To store the data in BoltDB, Storm marshals it in JSON by default. If you wish to change this behavior you can pass a codec that implements `codec.MarshalUnmarshaler` via the `storm.Codec` option:
db:=storm.Open("my.db",storm.Codec(myCodec))
You can easily implement your own `MarshalUnmarshaler` (see the sketch after the list of built-in codecs below), but Storm comes with built-in support for JSON (default), GOB, Sereal, Protocol Buffers and MessagePack.
These can be used by importing the relevant package and using that codec to configure Storm. The example below shows all variants (without proper error handling):
import ("github.com/asdine/storm/v3""github.com/asdine/storm/v3/codec/gob""github.com/asdine/storm/v3/codec/json""github.com/asdine/storm/v3/codec/sereal""github.com/asdine/storm/v3/codec/protobuf""github.com/asdine/storm/v3/codec/msgpack")vargobDb,_=storm.Open("gob.db",storm.Codec(gob.Codec))varjsonDb,_=storm.Open("json.db",storm.Codec(json.Codec))varserealDb,_=storm.Open("sereal.db",storm.Codec(sereal.Codec))varprotobufDb,_=storm.Open("protobuf.db",storm.Codec(protobuf.Codec))varmsgpackDb,_=storm.Open("msgpack.db",storm.Codec(msgpack.Codec))
Tip: Adding Storm tags to generated Protobuf files can be tricky. A good solution is to use this tool to inject the tags during the compilation.
You can use an existing connection and pass it to Storm
```go
bDB, _ := bolt.Open(filepath.Join(dir, "bolt.db"), 0600, &bolt.Options{Timeout: 10 * time.Second})

db, err := storm.Open("my.db", storm.UseDB(bDB))
```
Batch mode can be enabled to speed up concurrent writes (see Batch read-write transactions).
db:=storm.Open("my.db",storm.Batch())
## Nodes and nested buckets

Storm takes advantage of BoltDB's nested buckets feature by using `storm.Node`. A `storm.Node` is the underlying object used by `storm.DB` to manipulate a bucket. To create a nested bucket and use the same API as `storm.DB`, you can use the `DB.From` method.
```go
repo := db.From("repo")

err := repo.Save(&Issue{
  Title:  "I want more features",
  Author: user.ID,
})

err = repo.Save(newRelease("0.10"))

var issues []Issue
err = repo.Find("Author", user.ID, &issues)

var release Release
err = repo.One("Tag", "0.10", &release)
```
You can also chain the nodes to create a hierarchy
```go
chars := db.From("characters")
heroes := chars.From("heroes")
enemies := chars.From("enemies")

items := db.From("items")
potions := items.From("consumables").From("medicine").From("potions")
```
You can even pass the entire hierarchy as arguments to `From`:
```go
privateNotes := db.From("notes", "private")
workNotes := db.From("notes", "work")
```
A Node can also be configured. Activating an option on a Node creates a copy, so a Node is always thread-safe.
```go
n := db.From("my-node")
```
Give a bolt.Tx transaction to the Node
```go
n = n.WithTransaction(tx)
```
Enable batch mode
```go
n = n.WithBatch(true)
```
Use a Codec
```go
n = n.WithCodec(gob.Codec)
```
## Simple Key/Value store

Storm can be used as a simple, robust, key/value store that can store anything. The key and the value can be of any type as long as the key is not a zero value.
Saving data:
db.Set("logs",time.Now(),"I'm eating my breakfast man")db.Set("sessions",bson.NewObjectId(),&someUser)db.Set("weird storage","754-3010",map[string]interface{}{"hair":"blonde","likes": []string{"cheese","star wars"},})
Fetching data:
```go
user := User{}
db.Get("sessions", someObjectId, &user)

var details map[string]interface{}
db.Get("weird storage", "754-3010", &details)

db.Get("sessions", someObjectId, &details)
```
Deleting data:
db.Delete("sessions",someObjectId)db.Delete("weird storage","754-3010")
You can find other useful methods in the documentation.
## BoltDB

BoltDB is still easily accessible and can be used as usual:
```go
db.Bolt.View(func(tx *bolt.Tx) error {
  bucket := tx.Bucket([]byte("my bucket"))
  val := bucket.Get([]byte("any id"))
  fmt.Println(string(val))
  return nil
})
```
A transaction can also be passed to Storm:
```go
db.Bolt.Update(func(tx *bolt.Tx) error {
  // ...
  dbx := db.WithTransaction(tx)
  err = dbx.Save(&user)
  // ...
  return nil
})
```
## License

MIT