BadgerDB is an embeddable, persistent, simple and fast key-value (KV) database written in pure Go. It's meant to be a performant alternative to non-Go-based key-value stores like RocksDB.
Badger v1.0 was released in Nov 2017. Check the Changelog for the full details.
We introduced transactions in v0.9.0, which involved a major API change. If you have a Badger datastore prior to that, please use v0.8.1, but we strongly urge you to upgrade. Upgrading from both v0.8 and v0.9 will require you to take backups and restore using the new version.
- Getting Started
- Resources
- Contact
- Design
- Other Projects Using Badger
- Frequently Asked Questions
To start using Badger, install Go 1.8 or above and run `go get`:

```sh
$ go get github.com/dgraph-io/badger/...
```

This will retrieve the library and install the `badger_info` command line utility into your `$GOBIN` path.
The top-level object in Badger is a `DB`. It represents multiple files on disk in specific directories, which contain the data for a single database.
To open your database, use the `badger.Open()` function, with the appropriate options. The `Dir` and `ValueDir` options are mandatory and must be specified by the client. They can be set to the same value to simplify things.
```go
package main

import (
  "log"

  "github.com/dgraph-io/badger"
)

func main() {
  // Open the Badger database located in the /tmp/badger directory.
  // It will be created if it doesn't exist.
  opts := badger.DefaultOptions
  opts.Dir = "/tmp/badger"
  opts.ValueDir = "/tmp/badger"
  db, err := badger.Open(opts)
  if err != nil {
    log.Fatal(err)
  }
  defer db.Close()
  // Your code here…
}
```
Please note that Badger obtains a lock on the directories so multiple processes cannot open the same database at the same time.
To start a read-only transaction, you can use the `DB.View()` method:
```go
err := db.View(func(txn *badger.Txn) error {
  // Your code here…
  return nil
})
```
You cannot perform any writes or deletes within this transaction. Badger ensures that you get a consistent view of the database within this closure. Any writes that happen elsewhere after the transaction has started will not be seen by calls made within the closure.
To start a read-write transaction, you can use the `DB.Update()` method:
```go
err := db.Update(func(txn *badger.Txn) error {
  // Your code here…
  return nil
})
```
All database operations are allowed inside a read-write transaction.
Always check the returned error value. If you return an error within your closure, it will be passed through.
An `ErrConflict` error will be reported in case of a conflict. Depending on the state of your application, you have the option to retry the operation if you receive this error.
An `ErrTxnTooBig` error will be reported when the number of pending writes/deletes in the transaction exceeds a certain limit. In that case, it is best to commit the transaction and start a new transaction immediately. Here is an example (we are not checking for errors in some places for simplicity):
```go
updates := make(map[string]string)
txn := db.NewTransaction(true)
for k, v := range updates {
  if err := txn.Set([]byte(k), []byte(v)); err == badger.ErrTxnTooBig {
    _ = txn.Commit(nil)
    txn = db.NewTransaction(true)
    _ = txn.Set([]byte(k), []byte(v))
  }
}
_ = txn.Commit(nil)
```
The `DB.View()` and `DB.Update()` methods are wrappers around the `DB.NewTransaction()` and `Txn.Commit()` methods (or `Txn.Discard()` in case of read-only transactions). These helper methods will start the transaction, execute a function, and then safely discard your transaction if an error is returned. This is the recommended way to use Badger transactions.
However, sometimes you may want to manually create and commit your transactions. You can use the `DB.NewTransaction()` function directly, which takes in a boolean argument to specify whether a read-write transaction is required. For read-write transactions, it is necessary to call `Txn.Commit()` to ensure the transaction is committed. For read-only transactions, calling `Txn.Discard()` is sufficient. `Txn.Commit()` also calls `Txn.Discard()` internally to clean up the transaction, so just calling `Txn.Commit()` is sufficient for read-write transactions. However, if your code doesn't call `Txn.Commit()` for some reason (e.g. it returns prematurely with an error), then please make sure you call `Txn.Discard()` in a `defer` block. Refer to the code below.
```go
// Start a writable transaction.
txn := db.NewTransaction(true)
defer txn.Discard()

// Use the transaction...
err := txn.Set([]byte("answer"), []byte("42"))
if err != nil {
  return err
}

// Commit the transaction and check for error.
if err := txn.Commit(nil); err != nil {
  return err
}
```
The first argument to `DB.NewTransaction()` is a boolean stating if the transaction should be writable.
Badger allows an optional callback to the `Txn.Commit()` method. Normally, the callback can be set to `nil`, and the method will return after all the writes have succeeded. However, if this callback is provided, the `Txn.Commit()` method returns as soon as it has checked for any conflicts. The actual writing to the disk happens asynchronously, and the callback is invoked once the writing has finished, or an error has occurred. This can improve the throughput of the application in some cases. But it also means that a transaction is not durable until the callback has been invoked with a `nil` error value.
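If you need to block until an asynchronous commit becomes durable, the usual Go pattern is to bridge the callback to a channel. The `asyncCommit` parameter below is a hypothetical stand-in for any commit that reports completion through a callback, not a Badger API:

```go
package main

import "fmt"

// waitForCommit turns an asynchronous commit callback back into a
// synchronous wait: the callback sends its error on a channel, and
// the caller blocks until it arrives. The transaction is durable only
// once this returns a nil error.
func waitForCommit(asyncCommit func(cb func(error))) error {
  done := make(chan error, 1)
  asyncCommit(func(err error) { done <- err })
  return <-done
}

func main() {
  // A fake asynchronous commit that always succeeds.
  fake := func(cb func(error)) { go cb(nil) }
  fmt.Println(waitForCommit(fake))
}
```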
To save a key/value pair, use the `Txn.Set()` method:
```go
err := db.Update(func(txn *badger.Txn) error {
  err := txn.Set([]byte("answer"), []byte("42"))
  return err
})
```
This will set the value of the `"answer"` key to `"42"`. To retrieve this value, we can use the `Txn.Get()` method:
```go
err := db.View(func(txn *badger.Txn) error {
  item, err := txn.Get([]byte("answer"))
  if err != nil {
    return err
  }
  val, err := item.Value()
  if err != nil {
    return err
  }
  fmt.Printf("The answer is: %s\n", val)
  return nil
})
```
`Txn.Get()` returns `ErrKeyNotFound` if the value is not found.
Please note that values returned from `Get()` are only valid while the transaction is open. If you need to use a value outside of the transaction then you must use `copy()` to copy it to another byte slice.
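As a minimal sketch of that advice, copying the bytes detaches them from the buffer the transaction owns. The `copyValue` helper here is a hypothetical name for illustration; the copying itself is plain Go, nothing Badger-specific:

```go
package main

import "fmt"

// copyValue makes an independent copy of a value slice so it can be
// used after the transaction that produced it has been discarded.
func copyValue(val []byte) []byte {
  out := make([]byte, len(val))
  copy(out, val)
  return out
}

func main() {
  val := []byte("42")        // imagine this came from item.Value()
  saved := copyValue(val)    // safe to keep past the transaction
  val[0] = 'X'               // mutating the original...
  fmt.Println(string(saved)) // ...does not affect the copy: prints "42"
}
```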
Use the `Txn.Delete()` method to delete a key.
To get unique monotonically increasing integers with strong durability, you can use the `DB.GetSequence` method. This method returns a `Sequence` object, which is thread-safe and can be used concurrently via various goroutines.
Badger would lease a range of integers to hand out from memory, with the bandwidth provided to `DB.GetSequence`. The frequency at which disk writes are done is determined by this lease bandwidth and the frequency of `Next` invocations. Setting a bandwidth too low would do more disk writes; setting it too high would result in wasted integers if Badger is closed or crashes. To avoid wasted integers, call `Release` before closing Badger.
```go
seq, err := db.GetSequence(key, 1000)
defer seq.Release()
for {
  num, err := seq.Next()
}
```
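To see why the bandwidth matters, here is a toy model of the leasing scheme (an illustration only, not Badger's implementation): a "disk write" is needed only when the current lease runs out, so a larger bandwidth means fewer writes, at the cost of more integers lost on a crash:

```go
package main

import "fmt"

// leaseSeq hands out integers from memory and counts how many lease
// persists ("disk writes") were needed for a given bandwidth.
type leaseSeq struct {
  next, leased, bandwidth uint64
  diskWrites              int
}

func (s *leaseSeq) Next() uint64 {
  if s.next >= s.leased {
    s.leased += s.bandwidth // persist the new lease boundary
    s.diskWrites++
  }
  n := s.next
  s.next++
  return n
}

func main() {
  s := &leaseSeq{bandwidth: 1000}
  for i := 0; i < 2500; i++ {
    s.Next()
  }
  // 2500 integers with a bandwidth of 1000 needs only 3 lease writes.
  fmt.Println(s.diskWrites)
}
```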
Badger provides support for unordered merge operations. You can define a func of type `MergeFunc` which takes in an existing value, and a value to be merged with it. It returns a new value which is the result of the merge operation. All values are specified in byte arrays. For example, here is a merge function (`add`) which adds a `uint64` value to an existing `uint64` value.
```go
func uint64ToBytes(i uint64) []byte {
  var buf [8]byte
  binary.BigEndian.PutUint64(buf[:], i)
  return buf[:]
}

func bytesToUint64(b []byte) uint64 {
  return binary.BigEndian.Uint64(b)
}

// Merge function to add two uint64 numbers
func add(existing, new []byte) []byte {
  return uint64ToBytes(bytesToUint64(existing) + bytesToUint64(new))
}
```
This function can then be passed to the `DB.GetMergeOperator()` method, along with a key, and a duration value. The duration specifies how often the merge function is run on values that have been added using the `MergeOperator.Add()` method.
The `MergeOperator.Get()` method can be used to retrieve the cumulative value of the key associated with the merge operation.
```go
key := []byte("merge")
m := db.GetMergeOperator(key, add, 200*time.Millisecond)
defer m.Stop()

m.Add(uint64ToBytes(1))
m.Add(uint64ToBytes(2))
m.Add(uint64ToBytes(3))

res, err := m.Get() // res should have value 6 encoded
fmt.Println(bytesToUint64(res))
```
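Conceptually, the merge operator just folds every added value through the merge function. That folding can be checked without a running database; the helpers below mirror the merge example from the text:

```go
package main

import (
  "encoding/binary"
  "fmt"
)

func uint64ToBytes(i uint64) []byte {
  var buf [8]byte
  binary.BigEndian.PutUint64(buf[:], i)
  return buf[:]
}

func bytesToUint64(b []byte) uint64 {
  return binary.BigEndian.Uint64(b)
}

// add merges two uint64 values encoded as big-endian bytes.
func add(existing, new []byte) []byte {
  return uint64ToBytes(bytesToUint64(existing) + bytesToUint64(new))
}

func main() {
  // Fold the values 1, 2 and 3 the way the merge operator would.
  res := uint64ToBytes(1)
  for _, v := range []uint64{2, 3} {
    res = add(res, uint64ToBytes(v))
  }
  fmt.Println(bytesToUint64(res)) // prints 6
}
```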
Badger allows setting an optional Time to Live (TTL) value on keys. Once the TTL has elapsed, the key will no longer be retrievable and will be eligible for garbage collection. A TTL can be set as a `time.Duration` value using the `Txn.SetWithTTL()` API method.
An optional user metadata value can be set on each key. A user metadata value is represented by a single byte. It can be used to set certain bits along with the key to aid in interpreting or decoding the key-value pair. User metadata can be set using the `Txn.SetWithMeta()` API method.
`Txn.SetEntry()` can be used to set the key, value, user metadata and TTL, all at once.
To iterate over keys, we can use an `Iterator`, which can be obtained using the `Txn.NewIterator()` method. Iteration happens in byte-wise lexicographical sorting order.
```go
err := db.View(func(txn *badger.Txn) error {
  opts := badger.DefaultIteratorOptions
  opts.PrefetchSize = 10
  it := txn.NewIterator(opts)
  for it.Rewind(); it.Valid(); it.Next() {
    item := it.Item()
    k := item.Key()
    v, err := item.Value()
    if err != nil {
      return err
    }
    fmt.Printf("key=%s, value=%s\n", k, v)
  }
  return nil
})
```
The iterator allows you to move to a specific point in the list of keys and moveforward or backward through the keys one at a time.
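The byte-wise lexicographic order mentioned above is worth internalising, since it differs from numeric order. A quick standard-library demonstration (no database involved):

```go
package main

import (
  "bytes"
  "fmt"
  "sort"
)

// sortKeys orders keys the way a Badger iterator yields them:
// byte-wise lexicographically.
func sortKeys(keys [][]byte) [][]byte {
  sort.Slice(keys, func(i, j int) bool {
    return bytes.Compare(keys[i], keys[j]) < 0
  })
  return keys
}

func main() {
  // Note this is not numeric order: "10" sorts before "9",
  // and uppercase bytes sort before lowercase ones.
  keys := sortKeys([][]byte{
    []byte("9"), []byte("10"), []byte("a"), []byte("Z"),
  })
  for _, k := range keys {
    fmt.Printf("%s\n", k) // prints 10, 9, Z, a (one per line)
  }
}
```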
By default, Badger prefetches the values of the next 100 items. You can adjust that with the `IteratorOptions.PrefetchSize` field. However, setting it to a value higher than `GOMAXPROCS` (which we recommend to be 128 or higher) shouldn't give any additional benefits. You can also turn off the fetching of values altogether. See section below on key-only iteration.
To iterate over a key prefix, you can combine `Seek()` and `ValidForPrefix()`:
```go
db.View(func(txn *badger.Txn) error {
  it := txn.NewIterator(badger.DefaultIteratorOptions)
  prefix := []byte("1234")
  for it.Seek(prefix); it.ValidForPrefix(prefix); it.Next() {
    item := it.Item()
    k := item.Key()
    v, err := item.Value()
    if err != nil {
      return err
    }
    fmt.Printf("key=%s, value=%s\n", k, v)
  }
  return nil
})
```
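The semantics of that loop can be sketched against a plain sorted slice of keys. `prefixScan` below is a hypothetical helper mimicking the Seek-then-ValidForPrefix pattern, not part of Badger (the real iterator works against the LSM tree):

```go
package main

import (
  "bytes"
  "fmt"
  "sort"
)

// prefixScan seeks to the first key >= prefix in a sorted key list,
// then collects keys until one no longer carries the prefix.
func prefixScan(keys [][]byte, prefix []byte) [][]byte {
  i := sort.Search(len(keys), func(i int) bool {
    return bytes.Compare(keys[i], prefix) >= 0
  })
  var out [][]byte
  for ; i < len(keys) && bytes.HasPrefix(keys[i], prefix); i++ {
    out = append(out, keys[i])
  }
  return out
}

func main() {
  keys := [][]byte{
    []byte("1230"), []byte("12345"), []byte("12399"), []byte("1300"),
  }
  // Only "12345" carries the "1234" prefix.
  for _, k := range prefixScan(keys, []byte("1234")) {
    fmt.Printf("%s\n", k)
  }
}
```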
Badger supports a unique mode of iteration called key-only iteration. It is several orders of magnitude faster than regular iteration, because it involves access to the LSM-tree only, which is usually resident entirely in RAM. To enable key-only iteration, you need to set the `IteratorOptions.PrefetchValues` field to `false`. This can also be used to do sparse reads for selected keys during an iteration, by calling `item.Value()` only when required.
```go
err := db.View(func(txn *badger.Txn) error {
  opts := badger.DefaultIteratorOptions
  opts.PrefetchValues = false
  it := txn.NewIterator(opts)
  for it.Rewind(); it.Valid(); it.Next() {
    item := it.Item()
    k := item.Key()
    fmt.Printf("key=%s\n", k)
  }
  return nil
})
```
Badger values need to be garbage collected for two reasons:
- Badger keeps values separately from the LSM tree. This means that the compaction operations that clean up the LSM tree do not touch the values at all. Values need to be cleaned up separately.
- Concurrent read/write transactions could leave behind multiple values for a single key, because they are stored with different versions. These could accumulate, and take up unneeded space beyond the time these older versions are needed.
Badger relies on the client to perform garbage collection at a time of their choosing. It provides the following methods, which can be invoked at an appropriate time:
- `DB.PurgeOlderVersions()`: This method iterates over the database, and cleans up all but the latest versions of the key-value pairs. It marks the older versions as deleted, which makes them eligible for garbage collection.
- `DB.PurgeVersionsBelow(key, ts)`: This method is useful to do a more targeted clean up of older versions of key-value pairs. You can specify a key, and a timestamp. All versions of the key older than the timestamp are marked as deleted, making them eligible for garbage collection.
- `DB.RunValueLogGC()`: This method is designed to do garbage collection while Badger is online. Please ensure that you call the `DB.Purge…()` methods first before invoking this method. It uses any statistics generated by the `DB.Purge…()` methods to pick files that are likely to lead to maximum space reclamation. It loops until it encounters a file which does not lead to any garbage collection.

It could lead to increased I/O if `DB.RunValueLogGC()` hasn't been called for a long time, and many deletes have happened in the meanwhile. So it is recommended that this method be called regularly.
There are two public API methods `DB.Backup()` and `DB.Load()` which can be used to do online backups and restores. Badger v0.9 provides a CLI tool `badger`, which can do offline backup/restore. Make sure you have `$GOPATH/bin` in your PATH to use this tool.
The command below will create a version-agnostic backup of the database, to a file `badger.bak` in the current working directory:
```sh
badger backup --dir <path/to/badgerdb>
```
To restore `badger.bak` in the current working directory to a new database:
```sh
badger restore --dir <path/to/badgerdb>
```
See `badger --help` for more details.
If you have a Badger database that was created using v0.8 (or below), you can use the `badger_backup` tool provided in v0.8.1, and then restore it using the command above to upgrade your database to work with the latest version.
```sh
badger_backup --dir <path/to/badgerdb> --backup-file badger.bak
```
Badger's memory usage can be managed by tweaking several options available in the `Options` struct that is passed in when opening the database using `DB.Open`.
- `Options.ValueLogLoadingMode` can be set to `options.FileIO` (instead of the default `options.MemoryMap`) to avoid memory-mapping log files. This can be useful in environments with low RAM.
- Number of memtables (`Options.NumMemtables`)
  - If you modify `Options.NumMemtables`, also adjust `Options.NumLevelZeroTables` and `Options.NumLevelZeroTablesStall` accordingly.
- Number of concurrent compactions (`Options.NumCompactors`)
- Mode in which LSM tree is loaded (`Options.TableLoadingMode`)
- Size of table (`Options.MaxTableSize`)
- Size of value log file (`Options.ValueLogFileSize`)
If you want to decrease the memory usage of a Badger instance, tweak these options (ideally one at a time) until you achieve the desired memory usage.
Badger records metrics using the `expvar` package, which is included in the Go standard library. All the metrics are documented in the `y/metrics.go` file.
The `expvar` package adds a handler to the default HTTP server (which has to be started explicitly), and serves up the metrics at the `/debug/vars` endpoint. These metrics can then be collected by a system like Prometheus, to get better visibility into what Badger is doing.
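As a self-contained illustration of how expvar-published values (Badger publishes its metrics the same way) appear on that endpoint — the metric name `demo_puts` below is made up for illustration, not one of Badger's:

```go
package main

import (
  "expvar"
  "fmt"
  "io"
  "net/http/httptest"
  "strings"
)

// Published variables automatically show up in the /debug/vars JSON.
var puts = expvar.NewInt("demo_puts")

// hasMetric serves expvar's /debug/vars handler on a test server and
// reports whether the JSON body mentions the given variable name.
func hasMetric(name string) bool {
  srv := httptest.NewServer(expvar.Handler())
  defer srv.Close()
  resp, err := srv.Client().Get(srv.URL)
  if err != nil {
    return false
  }
  defer resp.Body.Close()
  body, _ := io.ReadAll(resp.Body)
  return strings.Contains(string(body), name)
}

func main() {
  puts.Add(3)
  fmt.Println(hasMetric("demo_puts")) // prints true
}
```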
- Introducing Badger: A fast key-value store written natively in Go
- Make Badger crash resilient with ALICE
- Badger vs LMDB vs BoltDB: Benchmarking key-value databases in Go
- Concurrent ACID Transactions in Badger
Badger was written with these design goals in mind:
- Write a key-value database in pure Go.
- Use latest research to build the fastest KV database for data sets spanning terabytes.
- Optimize for SSDs.
Badger’s design is based on a paper titled WiscKey: Separating Keys from Values in SSD-conscious Storage.
| Feature | Badger | RocksDB | BoltDB |
| --- | --- | --- | --- |
| Design | LSM tree with value log | LSM tree only | B+ tree |
| High Read throughput | Yes | No | Yes |
| High Write throughput | Yes | Yes | No |
| Designed for SSDs | Yes (with latest research¹) | Not specifically² | No |
| Embeddable | Yes | Yes | Yes |
| Sorted KV access | Yes | Yes | Yes |
| Pure Go (no Cgo) | Yes | No | Yes |
| Transactions | Yes, ACID, concurrent with SSI³ | Yes (but non-ACID) | Yes, ACID |
| Snapshots | Yes | Yes | Yes |
| TTL support | Yes | Yes | No |
¹ The WiscKey paper (on which Badger is based) saw big wins with separating values from keys, significantly reducing the write amplification compared to a typical LSM tree.

² RocksDB is an SSD optimized version of LevelDB, which was designed specifically for rotating disks. As such RocksDB's design isn't aimed at SSDs.

³ SSI: Serializable Snapshot Isolation. For more details, see the blog post Concurrent ACID Transactions in Badger.
We have run comprehensive benchmarks against RocksDB, Bolt and LMDB. The benchmarking code, and the detailed logs for the benchmarks, can be found in the badger-bench repo. More explanation, including graphs, can be found in the blog posts (linked above).
Below is a list of public, open source projects that use Badger:
- Dgraph - Distributed graph database.
- go-ipfs - Go client for the InterPlanetary File System (IPFS), a new hypermedia distribution protocol.
- 0-stor - Single device object store.
- Sandglass - Distributed, horizontally scalable, persistent, time-sorted message queue.
If you are using Badger in a project please send a pull request to add it to the list.
- My writes are getting stuck. Why?
This can happen if a long-running iteration with `Prefetch` set to false makes an `Item::Value` call internally in the loop. That causes Badger to acquire read locks over the value log files to avoid value log GC removing the file from underneath. As a side effect, this also blocks a new value log GC file from being created when the value log file boundary is hit.
Please see Github issues #293 and #315.
There are multiple workarounds during iteration:
- Use `Item::ValueCopy` instead of `Item::Value` when retrieving value.
- Set `Prefetch` to true. Badger would then copy over the value and release the file lock immediately.
- When `Prefetch` is false, don't call `Item::Value` and do a pure key-only iteration. This might be useful if you just want to delete a lot of keys.
- Do the writes in a separate transaction after the reads.
- My writes are really slow. Why?
Are you creating a new transaction for every single key update? This will lead to very low throughput. To get best write performance, batch up multiple writes inside a transaction using a single `DB.Update()` call. You could also have multiple such `DB.Update()` calls being made concurrently from multiple goroutines.
- I don't see any disk write. Why?
If you're using Badger with `SyncWrites=false`, then your writes might not be written to the value log and won't get synced to disk immediately. Writes to the LSM tree are done in memory first, before they get compacted to disk. The compaction would only happen once `MaxTableSize` has been reached. So, if you're doing a few writes and then checking, you might not see anything on disk. Once you `Close` the database, you'll see these writes on disk.
- Which instances should I use for Badger?
We recommend using instances which provide local SSD storage, without any limiton the maximum IOPS. In AWS, these are storage optimized instances like i3. Theyprovide local SSDs which clock 100K IOPS over 4KB blocks easily.
- Are there any Go specific settings that I should use?
We highly recommend setting a high number for GOMAXPROCS, which allows Go to observe the full IOPS throughput provided by modern SSDs. In Dgraph, we have set it to 128. For more details, see this thread.
- Please use discuss.dgraph.io for questions, feature requests and discussions.
- Please use the Github issue tracker for filing bugs or feature requests.
- Join the project's Slack channel.
- Follow us on Twitter @dgraphlabs.