Fast key-value store in Go.


An embeddable, persistent, simple and fast key-value (KV) store, written purely in Go. It's meant to be a performant alternative to non-Go-based key-value stores like RocksDB.

Badger sketch

About

Badger is written out of frustration with existing KV stores, which are either written in pure Go and slow, or fast but require the usage of Cgo. Badger aims to provide equal or better speed compared to industry-leading KV stores (like RocksDB), while maintaining the entire code base in pure Go.

Related Blog Posts

  1. Introducing Badger: A fast key-value store written natively in Go
  2. Make Badger crash resilient with ALICE

Video Tutorials

Installation and Usage

go get -v github.com/dgraph-io/badger

If you want to run tests, also get the testing dependencies by passing the -t flag.

go get -t -v github.com/dgraph-io/badger

From here, follow the docs for usage.

Note

Badger is undergoing a major API change by introducing transactions. To use the existing version of the APIs, please use tag v0.8. The tag can be specified via the Go dependency tool you're using.

Documentation

Badger documentation is located at godoc.org.

Design Goals

Badger has these design goals in mind:

  • Write it purely in Go.
  • Use latest research to build the fastest KV store for data sets spanning terabytes.
  • Keep it simple, stupid. No support for transactions, versioning or snapshots -- anything that can be done outside of the store should be done outside. (By user demand, and after realizing their utility, we are now introducing multi-version concurrency control, snapshots and transactions to Badger.)
  • Optimize for SSDs (more below).

Users

Badger is currently being used by Dgraph.

If you're using Badger in a project, let us know.

Design

Badger is based on the WiscKey paper by the University of Wisconsin, Madison.

In the simplest terms, keys are stored in the LSM tree along with pointers to values, which are stored in write-ahead log files, aka value logs. Keys are typically kept in RAM, while values are served directly off the SSD.

Optimizations for SSD

SSDs are best at doing serial writes (like HDDs) and random reads (unlike HDDs). Each write reduces the lifecycle of an SSD, so Badger aims to reduce the write amplification of a typical LSM tree.

It achieves this by separating the keys from the values. Keys tend to be smaller in size and are stored in the LSM tree. Values (which tend to be larger in size) are stored in value logs, which also double as write-ahead logs for fault tolerance.

Only a pointer to the value is stored along with the key, which significantly reduces the size of each KV pair in the LSM tree. This allows storing many more KV pairs per table. For example, a table of size 64 MB can store 2 million KV pairs, assuming an average key size of 16 bytes and a value pointer of 16 bytes (with prefix diffing in Badger, the average key size stored in a table would be lower). Thus, fewer compactions are required to achieve stability for the LSM tree, which results in fewer writes (all writes being serial).
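The arithmetic above can be checked with a quick sketch (the 64 MB table size and 16-byte key and pointer sizes are the example's own assumptions):

```go
package main

import "fmt"

// pairsPerTable estimates how many KV entries fit in one LSM table when
// only the key and a fixed-size value pointer are stored per entry.
func pairsPerTable(tableBytes, keyBytes, ptrBytes int) int {
	return tableBytes / (keyBytes + ptrBytes)
}

func main() {
	// 64 MB table, 16-byte average key, 16-byte value pointer.
	fmt.Println(pairsPerTable(64<<20, 16, 16)) // 2097152, roughly 2 million
}
```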

It might be a good idea on ext4 to periodically invoke fstrim in case the file system does not quickly reuse space from deleted files.

Nature of LSM trees

Because only keys (and value pointers) are stored in the LSM tree, Badger generates much smaller LSM trees. Even for huge datasets, these smaller trees can fit nicely in RAM, allowing for much quicker access to and iteration through keys. For random gets, keys can be quickly looked up from within RAM, giving access to the value pointer. Then only a single pointed read from the SSD (a random read) is done to retrieve the value. This improves random get performance significantly compared to the traditional LSM tree design used by other KV stores.
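As a rough illustration of this key/value separation, here is a toy sketch: keys map to small pointers held in RAM, while values live in an append-only log, and a get does one RAM lookup plus one pointed read. All type and field names here are invented for the sketch, not Badger's actual internals.

```go
package main

import "fmt"

// valuePointer locates a value inside a value log.
type valuePointer struct {
	offset int // byte offset into the value log
	length int // length of the value in bytes
}

type store struct {
	index map[string]valuePointer // in-RAM stand-in for the LSM tree
	vlog  []byte                  // stand-in for the on-SSD value log
}

func (s *store) set(key, value string) {
	ptr := valuePointer{offset: len(s.vlog), length: len(value)}
	s.vlog = append(s.vlog, value...) // serial append, like a WAL
	s.index[key] = ptr                // only the small pointer enters the tree
}

func (s *store) get(key string) (string, bool) {
	ptr, ok := s.index[key] // fast lookup from RAM
	if !ok {
		return "", false
	}
	// One pointed read from the value log (a single random SSD read in Badger).
	return string(s.vlog[ptr.offset : ptr.offset+ptr.length]), true
}

func main() {
	s := &store{index: make(map[string]valuePointer)}
	s.set("name", "badger")
	v, _ := s.get("name")
	fmt.Println(v) // badger
}
```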

Comparisons

| Feature             | Badger                                      | RocksDB            | BoltDB   |
| ------------------- | ------------------------------------------- | ------------------ | -------- |
| Design              | LSM tree with value log                     | LSM tree only      | B+ tree  |
| High RW Performance | Yes                                         | Yes                | No       |
| Designed for SSDs   | Yes (with latest research [1])              | Not specifically [2] | No     |
| Embeddable          | Yes                                         | Yes                | Yes      |
| Sorted KV access    | Yes                                         | Yes                | Yes      |
| Pure Go (no Cgo)    | Yes                                         | No                 | Yes      |
| Transactions        | No (but provides compare-and-set operations) | Yes (but non-ACID) | Yes, ACID |
| Snapshots           | No                                          | Yes                | Yes      |

[1] Badger is based on a paper called WiscKey by the University of Wisconsin, Madison, which saw big wins from separating values from keys, significantly reducing write amplification compared to a typical LSM tree.

[2] RocksDB is an SSD-optimized version of LevelDB, which was designed specifically for rotating disks. As such, RocksDB's design isn't aimed at SSDs.

Benchmarks

RocksDB Benchmarks

Crash Consistency

Badger is crash resilient. Any update which was applied successfully before a crash will be available after the crash. Badger achieves this via its value log.

Badger's value log is a write-ahead log (WAL). Every update to Badger is written to this log first, before being applied to the LSM tree. Badger maintains a monotonically increasing pointer (head) in the LSM tree, pointing to the last update offset in the value log. As and when an LSM table is persisted, the head gets persisted along with it. Thus, the head always points to the latest persisted offset in the value log. Every time Badger opens the directory, it first replays the updates after the head in order, bringing them back into the LSM tree, before it allows any reads or writes. This technique ensures data persistence in the face of crashes.
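The replay step can be sketched with a toy log format. The key=value line encoding and the replay helper below are assumptions for illustration only, not Badger's actual value-log encoding:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// replay re-applies every log entry written at or after head,
// rebuilding the in-memory tree state after a crash.
func replay(log string, head int, tree map[string]string) {
	sc := bufio.NewScanner(strings.NewReader(log[head:]))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			tree[k] = v
		}
	}
}

func main() {
	log := "a=1\nb=2\nc=3\n"
	head := 4 // everything before offset 4 ("a=1\n") is already in a persisted LSM table
	tree := map[string]string{"a": "1"}
	replay(log, head, tree) // bring back the updates after head
	fmt.Println(tree["b"], tree["c"]) // 2 3
}
```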

Furthermore, Badger can be run with the SyncWrites option, which opens the WAL with the O_DSYNC flag, syncing the writes to disk on every write.

Frequently Asked Questions

  • My writes are really slow. Why?

You're probably doing writes serially, using Set. To get the best write performance, use BatchSet, and call it concurrently from multiple goroutines.

  • I don't see any disk writes. Why?

If you're using Badger with SyncWrites=false, then your writes might not be written to the value log and won't get synced to disk immediately. Writes to the LSM tree are done in memory first, before they get compacted to disk. The compaction only happens once MaxTableSize has been reached. So, if you're doing a few writes and then checking, you might not see anything on disk. Once you Close the store, you'll see these writes on disk.

  • Which instances should I use for Badger?

We recommend using instances which provide local SSD storage, without any limit on the maximum IOPS. In AWS, these are storage-optimized instances like i3. They provide local SSDs which easily clock 100K IOPS over 4KB blocks.

  • Are there any Go specific settings that I should use?

We highly recommend setting a high number for GOMAXPROCS, which allows Go to observe the full IOPS throughput provided by modern SSDs. In Dgraph, we have set it to 128. For more details, see this thread.
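The concurrent, batched write pattern recommended above for Set/BatchSet can be illustrated with a self-contained sketch. Names like batchLoad are invented here for the sketch, not Badger's API: several goroutines produce entries concurrently, and a single consumer applies them in batches.

```go
package main

import (
	"fmt"
	"sync"
)

// batchLoad drains entries and applies them to an in-memory store in
// batches of batchSize, the way concurrent BatchSet callers amortize
// per-write cost.
func batchLoad(entries <-chan [2]string, batchSize int) map[string]string {
	store := make(map[string]string)
	batch := make([][2]string, 0, batchSize)
	flush := func() {
		for _, e := range batch { // one "BatchSet" worth of entries
			store[e[0]] = e[1]
		}
		batch = batch[:0]
	}
	for e := range entries {
		if batch = append(batch, e); len(batch) == batchSize {
			flush()
		}
	}
	flush() // apply any final partial batch
	return store
}

func main() {
	entries := make(chan [2]string, 64)
	var producers sync.WaitGroup
	for g := 0; g < 4; g++ { // four concurrent writer goroutines
		producers.Add(1)
		go func(g int) {
			defer producers.Done()
			for i := 0; i < 100; i++ {
				entries <- [2]string{fmt.Sprintf("key-%d-%d", g, i), "value"}
			}
		}(g)
	}
	go func() { producers.Wait(); close(entries) }()
	fmt.Println(len(batchLoad(entries, 32))) // 400
}
```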

Contact
