Level/abstract-level

Abstract class for a lexicographically sorted key-value database. Provides encodings, sublevels, events and hooks. If you are upgrading, please see UPGRADING.md.

📌 Wondering what happened to levelup? Visit Frequently Asked Questions.


Usage

This module exports an abstract class. End users should instead use modules like level that export a concrete implementation. The purpose of the abstract class is to provide a common interface that looks like this:

```js
// Create a database
const db = new Level('./db', { valueEncoding: 'json' })

// Add an entry with key 'a' and value 1
await db.put('a', 1)

// Add multiple entries
await db.batch([{ type: 'put', key: 'b', value: 2 }])

// Get value of key 'a': 1
const value = await db.get('a')

// Iterate entries with keys that are greater than 'a'
for await (const [key, value] of db.iterator({ gt: 'a' })) {
  console.log(value) // 2
}
```

Usage from TypeScript requires generic type parameters.

TypeScript example
```ts
// Specify types of keys and values (any, in the case of json).
// The generic type parameters default to Level<string, string>.
const db = new Level<string, any>('./db', { valueEncoding: 'json' })

// All relevant methods then use those types
await db.put('a', { x: 123 })

// Specify different types when overriding encoding per operation
await db.get<string, string>('a', { valueEncoding: 'utf8' })

// Though in some cases TypeScript can infer them
await db.get('a', { valueEncoding: db.valueEncoding('utf8') })

// It works the same for sublevels
const abc = db.sublevel('abc')
const xyz = db.sublevel<string, any>('xyz', { valueEncoding: 'json' })
```

TypeScript users can benefit from the using keyword because abstract-level implements Symbol.asyncDispose on its resources. For example:

Using example
```ts
await db.put('example', 'before')
await using snapshot = db.snapshot()
await db.put('example', 'after')
await db.get('example', { snapshot }) // Returns 'before'
```

The equivalent in JavaScript would be:

```js
await db.put('example', 'before')
const snapshot = db.snapshot()

try {
  await db.put('example', 'after')
  await db.get('example', { snapshot }) // Returns 'before'
} finally {
  await snapshot.close()
}
```

Install

With npm do:

npm install abstract-level

Supported Platforms

We aim to support Active LTS and Current Node.js releases, as well as evergreen browsers that are based on Chromium, Firefox or WebKit. Features that the runtime must support include queueMicrotask, Promise.allSettled(), globalThis and async generators. Supported runtimes may differ per implementation.

Public API For Consumers

This module has a public API for consumers of a database and a private API for concrete implementations. The public API, as documented in this section, offers a simple yet rich interface that is common between all implementations. Implementations may have additional options or methods. TypeScript type declarations are included (and exported for reuse) only for the public API.

An abstract-level database is at its core a key-value database. A key-value pair is referred to as an entry here and typically returned as an array, comparable to Object.entries().

db = new Level(...[, options])

Creating a database is done by calling a class constructor. Implementations export a class that extends the AbstractLevel class and has its own constructor with an implementation-specific signature. All constructors should have an options argument as the last. Typically, constructors take a location as their first argument, pointing to where the data will be stored. That may be a file path, URL, something else or none at all, since not all implementations are disk-based or persistent. Others take another database rather than a location as their first argument.

The optional options object may contain:

  • keyEncoding (string or object, default: 'utf8'): encoding to use for keys.
  • valueEncoding (string or object, default: 'utf8'): encoding to use for values.

See Encodings for a full description of these options. Other options (except passive) are forwarded to db.open() which is automatically called in a next tick after the constructor returns. Any read & write operations are queued internally until the database has finished opening. If opening fails, those queued operations will yield errors.

db.status

Getter that returns a string reflecting the current state of the database:

  • 'opening' - waiting for the database to be opened
  • 'open' - successfully opened the database
  • 'closing' - waiting for the database to be closed
  • 'closed' - database is closed.

db.open([options])

Open the database. Returns a promise. Options passed to open() take precedence over options passed to the database constructor. Not all implementations support the createIfMissing and errorIfExists options (notably memory-level and browser-level) and will indicate so via db.supports.createIfMissing and db.supports.errorIfExists.

The optional options object may contain:

  • createIfMissing (boolean, default: true): If true, create an empty database if one doesn't already exist. If false and the database doesn't exist, opening will fail.
  • errorIfExists (boolean, default: false): If true and the database already exists, opening will fail.
  • passive (boolean, default: false): Wait for, but do not initiate, opening of the database.

It's generally not necessary to call open() because it's automatically called by the database constructor. It may however be useful to capture an error from failure to open, that would otherwise not surface until another method like db.get() is called. It's also possible to reopen the database after it has been closed with close(). Once open() has then been called, any read & write operations will again be queued internally until opening has finished.

The open() and close() methods are idempotent. If the database is already open, the promise returned by open() will resolve without delay. If opening is already in progress, the promise will resolve when that has finished. If closing is in progress, the database will be reopened once closing has finished. Likewise, if close() is called after open(), the database will be closed once opening has finished.

db.close()

Close the database. Returns a promise.

A database may have associated resources like file handles and locks. When the database is no longer needed (for the remainder of a program) it's recommended to call db.close() to free up resources.

After db.close() has been called, no further read & write operations are allowed unless and until db.open() is called again. For example, db.get(key) will yield an error with code LEVEL_DATABASE_NOT_OPEN. Any unclosed iterators, snapshots and chained batches will be closed by db.close() and can then no longer be used even when db.open() is called again.
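The error behavior can be sketched with a toy object. Note that toyDb and its synchronous methods are hypothetical stand-ins, not the real (promise-based) abstract-level API:

```javascript
// Toy sketch of the close semantics, assuming a made-up object whose
// methods are synchronous. Real abstract-level methods return promises.
const toyDb = {
  status: 'open',
  entries: new Map(),
  close () {
    this.status = 'closed'
  },
  get (key) {
    if (this.status !== 'open') {
      const err = new Error('Database is not open')
      err.code = 'LEVEL_DATABASE_NOT_OPEN'
      throw err
    }
    return this.entries.get(key)
  }
}

toyDb.close()

let code = null
try {
  toyDb.get('a')
} catch (err) {
  code = err.code // 'LEVEL_DATABASE_NOT_OPEN'
}
```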

db.get(key[, options])

Get a value from the database by key. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to decode the value.
  • snapshot: explicit snapshot to read from.

Returns a promise for the value. If the key was not found then the value will be undefined.
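The absence of a key is thus not an error. A sketch of this behavior, with a plain Map as a hypothetical stand-in for an open database (real calls would be awaited):

```javascript
// Sketch of db.get() semantics with a Map as stand-in: a found key
// yields its value, a missing key yields undefined rather than an error.
const store = new Map([['a', '1']])

const found = store.get('a') // '1'
const missing = store.get('does-not-exist') // undefined
```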

db.getMany(keys[, options])

Get multiple values from the database by an array of keys. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the keys.
  • valueEncoding: custom value encoding for this operation, used to decode values.
  • snapshot: explicit snapshot to read from.

Returns a promise for an array of values with the same order as keys. If a key was not found, the relevant value will be undefined.
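A sketch of that ordering guarantee, again with a plain Map standing in for the database (the real getMany() is async):

```javascript
// Toy getMany(): values are returned in the same order as the requested
// keys, with undefined in place of keys that were not found.
const entries = new Map([['a', '1'], ['c', '3']])
const getMany = (keys) => keys.map((key) => entries.get(key))

const values = getMany(['a', 'b', 'c']) // ['1', undefined, '3']
```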

db.has(key[, options])

Check if the database has an entry with the given key. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • snapshot: explicit snapshot to read from.

Returns a promise for a boolean. For example:

```js
if (await db.has('fruit')) {
  console.log('We have fruit')
}
```

If the value of the entry is needed, instead do:

```js
const value = await db.get('fruit')

if (value !== undefined) {
  console.log('We have fruit: %o', value)
}
```

db.hasMany(keys[, options])

Check if the database has entries with the given keys. The keys argument must be an array. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the keys.
  • snapshot: explicit snapshot to read from.

Returns a promise for an array of booleans with the same order as keys. For example:

```js
await db.put('a', '123')
await db.hasMany(['a', 'b']) // [true, false]
```

db.put(key, value[, options])

Add a new entry or overwrite an existing entry. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to encode the value.

Returns a promise.
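To illustrate the overwrite behavior, a sketch with a Map as a hypothetical stand-in (a real put() is awaited and may encode the key and value first):

```javascript
// Toy put(): a second write with the same key replaces the first entry.
const store = new Map()

store.set('a', 1) // like await db.put('a', 1)
store.set('a', 2) // overwrites the existing entry

const value = store.get('a') // 2
const size = store.size // 1, still a single entry
```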

db.del(key[, options])

Delete an entry by key. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.

Returns a promise.
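The delete semantics can be sketched the same way, with a Map as a hypothetical stand-in for the async API; note that deleting a non-existent key is treated as a no-op rather than an error:

```javascript
// Toy del(): removes the entry if present; deleting an absent key is
// a no-op here, mirroring the described behavior.
const store = new Map([['a', 1]])

store.delete('a') // like await db.del('a')
store.delete('missing') // no error for a non-existent key

const value = store.get('a') // undefined after deletion
```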

db.batch(operations[, options])

Perform multiple put and/or del operations in bulk. Returns a promise. The operations argument must be an array containing a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation.

Each operation must be an object with at least a type property set to either 'put' or 'del'. If the type is 'put', the operation must have key and value properties. It may optionally have keyEncoding and / or valueEncoding properties to encode keys or values with a custom encoding for just that operation. If the type is 'del', the operation must have a key property and may optionally have a keyEncoding property.

An operation of either type may also have a sublevel property, to prefix the key of the operation with the prefix of that sublevel. This allows atomically committing data to multiple sublevels. The given sublevel must have the same root (i.e. top-most) database as db. Keys and values will be encoded by the sublevel, to the same effect as a sublevel.batch(..) call. In the following example, the first value will be encoded with 'json' rather than the default encoding of db:

```js
const people = db.sublevel('people', { valueEncoding: 'json' })
const nameIndex = db.sublevel('names')

await db.batch([{
  type: 'put',
  sublevel: people,
  key: '123',
  value: { name: 'Alice' }
}, {
  type: 'put',
  sublevel: nameIndex,
  key: 'Alice',
  value: '123'
}])
```

The optional options object may contain:

  • keyEncoding: custom key encoding for this batch, used to encode keys.
  • valueEncoding: custom value encoding for this batch, used to encode values.

Encoding properties on individual operations take precedence. In the following example, the first value will be encoded with the 'utf8' encoding and the second with 'json'.

```js
await db.batch([{
  type: 'put',
  key: 'a',
  value: 'foo'
}, {
  type: 'put',
  key: 'b',
  value: 123,
  valueEncoding: 'json'
}], { valueEncoding: 'utf8' })
```

chainedBatch = db.batch()

Create a chained batch, when batch() is called with zero arguments. A chained batch can be used to build and eventually commit an atomic batch of operations:

```js
const chainedBatch = db.batch()
  .del('bob')
  .put('alice', 361)
  .put('kim', 220)

// Commit
await chainedBatch.write()
```

Depending on how it's used, it is possible to obtain greater overall performance with this form of batch(), mainly because its methods like put() can immediately copy the data of that singular operation to the underlying storage, rather than having to block the event loop while copying the data of multiple operations. However, on several abstract-level implementations, chained batch is just sugar and has no performance benefits.

Due to its synchronous nature, it is not possible to create a chained batch before the database has finished opening. Be sure to call await db.open() before chainedBatch = db.batch(). This does not apply to other database methods.
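A sketch of the chained form, using a made-up toyBatch() that queues operations in memory and applies them on write(). Real chained batches are asynchronous and atomic, and are created via db.batch():

```javascript
// Toy chained batch: put() and del() queue operations and return this
// to allow chaining; write() applies the queued operations in one go.
const store = new Map([['bob', 1]])

const toyBatch = () => {
  const ops = []
  return {
    put (key, value) {
      ops.push({ type: 'put', key, value })
      return this
    },
    del (key) {
      ops.push({ type: 'del', key })
      return this
    },
    write () {
      for (const op of ops) {
        if (op.type === 'put') store.set(op.key, op.value)
        else store.delete(op.key)
      }
    }
  }
}

toyBatch().del('bob').put('alice', 361).put('kim', 220).write()
// store now contains alice and kim, and bob is gone
```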

iterator = db.iterator([options])

Create an iterator. The optional options object may contain the following range options to control the range of entries to be iterated:

  • gt (greater than) or gte (greater than or equal): define the lower bound of the range to be iterated. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries iterated will be the same.
  • lt (less than) or lte (less than or equal): define the higher bound of the range to be iterated. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries iterated will be the same.
  • reverse (boolean, default: false): iterate entries in reverse order. Beware that a reverse seek can be slower than a forward seek.
  • limit (number, default: Infinity): limit the number of entries yielded. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of Infinity or -1 means there is no limit. When reverse is true the entries with the highest keys will be returned instead of the lowest keys.

The gte and lte range options take precedence over gt and lt respectively. If no range options are provided, the iterator will visit all entries of the database, starting at the lowest key and ending at the highest key (unless reverse is true). In addition to range options, the options object may contain:

  • keys (boolean, default: true): whether to return the key of each entry. If set to false, the iterator will yield keys that are undefined. Prefer to use db.keys() instead.
  • values (boolean, default: true): whether to return the value of each entry. If set to false, the iterator will yield values that are undefined. Prefer to use db.values() instead.
  • keyEncoding: custom key encoding for this iterator, used to encode range options, to encode seek() targets and to decode keys.
  • valueEncoding: custom value encoding for this iterator, used to decode values.
  • signal: an AbortSignal to abort read operations on the iterator.
  • snapshot: explicit snapshot to read from.

Lastly, an implementation is free to add its own options.
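The interplay of the range options can be sketched with plain arrays. This toy range() function is hypothetical and synchronous, whereas real iterators are lazy and async:

```javascript
// Toy range(): filter sorted keys by gt/lte, then reverse, then limit.
// Note that limit applies after reverse, so with reverse the entries
// with the highest keys are returned.
const keys = ['a', 'b', 'c', 'd']

const range = (opts) => {
  let result = keys.filter((key) =>
    (opts.gt === undefined || key > opts.gt) &&
    (opts.lte === undefined || key <= opts.lte)
  )
  if (opts.reverse) result = result.reverse()
  if (opts.limit !== undefined && opts.limit !== -1) {
    result = result.slice(0, opts.limit)
  }
  return result
}

const forward = range({ gt: 'a', lte: 'c' }) // ['b', 'c']
const lastTwo = range({ reverse: true, limit: 2 }) // ['d', 'c']
```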

📌 To instead consume data using streams, see level-read-stream and level-web-stream.

keyIterator = db.keys([options])

Create a key iterator, having the same interface as db.iterator() except that it yields keys instead of entries. If only keys are needed, using db.keys() may increase performance because values won't have to be fetched, copied or decoded. Options are the same as for db.iterator() except that db.keys() does not take keys, values and valueEncoding options.

```js
// Iterate lazily
for await (const key of db.keys({ gt: 'a' })) {
  console.log(key)
}

// Get all at once. Setting a limit is recommended.
const keys = await db.keys({ gt: 'a', limit: 10 }).all()
```

valueIterator = db.values([options])

Create a value iterator, having the same interface as db.iterator() except that it yields values instead of entries. If only values are needed, using db.values() may increase performance because keys won't have to be fetched, copied or decoded. Options are the same as for db.iterator() except that db.values() does not take keys and values options. Note that it does take a keyEncoding option, relevant for the encoding of range options.

```js
// Iterate lazily
for await (const value of db.values({ gt: 'a' })) {
  console.log(value)
}

// Get all at once. Setting a limit is recommended.
const values = await db.values({ gt: 'a', limit: 10 }).all()
```

db.clear([options])

Delete all entries or a range. Not guaranteed to be atomic. Returns a promise. Accepts the following options (with the same rules as on iterators):

  • gt (greater than) or gte (greater than or equal): define the lower bound of the range to be deleted. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries deleted will be the same.
  • lt (less than) or lte (less than or equal): define the higher bound of the range to be deleted. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries deleted will be the same.
  • reverse (boolean, default: false): delete entries in reverse order. Only effective in combination with limit, to delete the last N entries.
  • limit (number, default: Infinity): limit the number of entries to be deleted. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of Infinity or -1 means there is no limit. When reverse is true the entries with the highest keys will be deleted instead of the lowest keys.
  • keyEncoding: custom key encoding for this operation, used to encode range options.
  • snapshot: explicit snapshot to read from, such that entries not present in the snapshot will not be deleted. If no snapshot is provided, the database may create its own internal snapshot but (unlike on other methods) this is currently not a hard requirement for implementations.

The gte and lte range options take precedence over gt and lt respectively. If no options are provided, all entries will be deleted.
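Combining reverse and limit to delete the last N entries, as mentioned above, can be sketched like so (a synchronous toy; the real clear() is async and not guaranteed to be atomic):

```javascript
// Toy clear(): select keys in (reversed) sorted order, cap the
// selection with limit, then delete the selected entries.
const store = new Map([['a', 1], ['b', 2], ['c', 3], ['d', 4]])

const clear = (opts) => {
  let keys = [...store.keys()].sort()
  if (opts.reverse) keys = keys.reverse()
  if (opts.limit !== undefined) keys = keys.slice(0, opts.limit)
  for (const key of keys) store.delete(key)
}

clear({ reverse: true, limit: 2 }) // deletes 'd' and 'c'
// store now holds only 'a' and 'b'
```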

sublevel = db.sublevel(name[, options])

Create a sublevel that has the same interface as db (except for additional, implementation-specific methods) and prefixes the keys of operations before passing them on to db. The name argument is required and must be a string, or an array of strings (explained further below).

```js
const example = db.sublevel('example')

await example.put('hello', 'world')
await db.put('a', '1')

// Prints ['hello', 'world']
for await (const [key, value] of example.iterator()) {
  console.log([key, value])
}
```

Sublevels effectively separate a database into sections. Think SQL tables, but evented, ranged and realtime! Each sublevel is an AbstractLevel instance with its own keyspace, encodings, hooks and events. For example, it's possible to have one sublevel with 'buffer' keys and another with 'utf8' keys. The same goes for values. Like so:

```js
db.sublevel('one', { valueEncoding: 'json' })
db.sublevel('two', { keyEncoding: 'buffer' })
```

An own keyspace means that sublevel.iterator() only includes entries of that sublevel, sublevel.clear() will only delete entries of that sublevel, and so forth. Range options get prefixed too.

Fully qualified keys (as seen from the parent database) take the form of prefix + key where prefix is separator + name + separator. If name is empty, the effective prefix is two separators. Sublevels can be nested: if db is itself a sublevel then the effective prefix is a combined prefix, e.g. '!one!!two!'. Note that a parent database will see its own keys as well as keys of any nested sublevels:

```js
// Prints ['!example!hello', 'world'] and ['a', '1']
for await (const [key, value] of db.iterator()) {
  console.log([key, value])
}
```

📌 The key structure is equal to that of subleveldown which offered sublevels before they were built into abstract-level. This means that an abstract-level sublevel can read sublevels previously created with (and populated by) subleveldown.

Internally, sublevels operate on keys that are either a string, Buffer or Uint8Array, depending on parent database and choice of encoding. Which is to say: binary keys are fully supported. The name must however always be a string and can only contain ASCII characters.
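The prefix rules described earlier (prefix = separator + name + separator, concatenated when nesting) boil down to string concatenation. A sketch with the default '!' separator:

```javascript
// Sketch of fully qualified keys: each sublevel contributes
// separator + name + separator, and nested sublevels concatenate.
const separator = '!'
const prefix = (name) => separator + name + separator

const key = prefix('example') + 'hello' // '!example!hello'
const nested = prefix('one') + prefix('two') + 'a' // '!one!!two!a'
const empty = prefix('') + 'a' // '!!a', two separators for an empty name
```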

The optional options object may contain:

  • separator (string, default: '!'): Character for separating sublevel names from user keys and each other. Must sort before characters used in name. An error will be thrown if that's not the case.
  • keyEncoding (string or object, default: 'utf8'): encoding to use for keys.
  • valueEncoding (string or object, default: 'utf8'): encoding to use for values.

The keyEncoding and valueEncoding options are forwarded to the AbstractLevel constructor and work the same, as if a new, separate database was created. They default to 'utf8' regardless of the encodings configured on db. Other options are forwarded too but abstract-level has no relevant options at the time of writing. For example, setting the createIfMissing option will have no effect. Why is that?

Like regular databases, sublevels open themselves, but they do not affect the state of the parent database. This means a sublevel can be individually closed and (re)opened. If the sublevel is created while the parent database is opening, it will wait for that to finish. Closing the parent database will automatically close the sublevel, along with other resources like iterators.

Lastly, the name argument can be an array as a shortcut to create nested sublevels. Those are normally created like so:

```js
const indexes = db.sublevel('idx')
const colorIndex = indexes.sublevel('colors')
```

Here, the parent database of colorIndex is indexes. Operations made on colorIndex are thus forwarded from that sublevel to indexes and from there to db. At each step, hooks and events are available to transform and react to data from a different perspective. Which comes at a (typically small) performance cost that increases with further nested sublevels. If the indexes sublevel is only used to organize keys and not directly interfaced with, operations on colorIndex can be made faster by skipping indexes:

```js
const colorIndex = db.sublevel(['idx', 'colors'])
```

In this case, the parent database of colorIndex is db. Note that it's still possible to separately create the indexes sublevel, but it will be disconnected from colorIndex, meaning that indexes will not see (live) operations made on colorIndex.

encoding = db.keyEncoding([encoding])

Returns the given encoding argument as a normalized encoding object that follows the level-transcoder encoding interface. See Encodings for an introduction. The encoding argument may be:

  • A string to select a known encoding by its name
  • An object that follows one of the following interfaces: level-transcoder, level-codec, abstract-encoding, multiformats
  • A previously normalized encoding, such that keyEncoding(x) equals keyEncoding(keyEncoding(x))
  • Omitted, null or undefined, in which case the default keyEncoding of the database is returned.

Other methods that take keyEncoding or valueEncoding options accept the same as above. Results are cached. If the encoding argument is an object and it has a name then subsequent calls can refer to that encoding by name.

Depending on the encodings supported by a database, this method may return a transcoder encoding that translates the desired encoding from / to an encoding supported by the database. Its encode() and decode() methods will have respectively the same input and output types as a non-transcoded encoding, but its name property will differ.

Assume that e.g. db.keyEncoding().encode(key) is safe to call at any time including if the database isn't open, because encodings must be stateless. If the given encoding is not found or supported, a LEVEL_ENCODING_NOT_FOUND or LEVEL_ENCODING_NOT_SUPPORTED error is thrown.

encoding = db.valueEncoding([encoding])

Same as db.keyEncoding([encoding]) except that it returns the default valueEncoding of the database (if the encoding argument is omitted, null or undefined).

key = db.prefixKey(key, keyFormat[, local])

Add sublevel prefix to the given key, which must be already-encoded. If this database is not a sublevel, the given key is returned as-is. The keyFormat must be one of 'utf8', 'buffer', 'view'. If 'utf8' then key must be a string and the return value will be a string. If 'buffer' then Buffer, if 'view' then Uint8Array.

```js
const sublevel = db.sublevel('example')

console.log(db.prefixKey('a', 'utf8')) // 'a'
console.log(sublevel.prefixKey('a', 'utf8')) // '!example!a'
```

By default, the given key will be prefixed to form a fully-qualified key in the context of the root (i.e. top-most) database, as the following example will demonstrate. If local is true, the given key will instead be prefixed to form a fully-qualified key in the context of the parent database.

```js
const sublevel = db.sublevel('example')
const nested = sublevel.sublevel('nested')

console.log(nested.prefixKey('a', 'utf8')) // '!example!!nested!a'
console.log(nested.prefixKey('a', 'utf8', true)) // '!nested!a'
```

snapshot = db.snapshot(options)

Create an explicit snapshot. Throws a LEVEL_NOT_SUPPORTED error if db.supports.explicitSnapshots is false (Level/community#118). For details, see Reading From Snapshots.

There are currently no options but specific implementations may add their own.

db.supports

A manifest describing the features supported by this database. Might be used like so:

```js
if (!db.supports.permanence) {
  throw new Error('Persistent storage is required')
}
```

db.defer(fn[, options])

Call the function fn at a later time when db.status changes to 'open' or 'closed'. Known as a deferred operation. Used by abstract-level itself to implement "deferred open" which is a feature that makes it possible to call methods like db.put() before the database has finished opening. The defer() method is exposed for implementations and plugins to achieve the same on their custom methods:

```js
db.foo = function (key) {
  if (this.status === 'opening') {
    this.defer(() => this.foo(key))
  } else {
    // ..
  }
}
```

The optional options object may contain:

  • signal: an AbortSignal to abort the deferred operation. When aborted (now or later) the fn function will not be called.

When deferring a custom operation, do it early: after normalizing optional arguments but before encoding (to avoid double encoding and to emit original input if the operation has events) and before any fast paths (to avoid calling back before the database has finished opening). For example, db.batch([]) has an internal fast path where it skips work if the array of operations is empty. Resources that can be closed on their own (like iterators) should however first check such state before deferring, in order to reject operations after close (including when the database was reopened).

db.deferAsync(fn[, options])

Similar to db.defer(fn) but for asynchronous work. Returns a promise, which waits for db.status to change to 'open' or 'closed' and then calls fn which itself must return a promise. This allows for recursion:

```js
db.foo = async function (key) {
  if (this.status === 'opening') {
    return this.deferAsync(() => this.foo(key))
  } else {
    // ..
  }
}
```

The optional options object may contain:

  • signal: an AbortSignal to abort the deferred operation. When aborted (now or later) the fn function will not be called, and the promise returned by deferAsync() will be rejected with a LEVEL_ABORTED error.

db.attachResource(resource)

Keep track of the given resource in order to call its close() method when the database is closed. Once successfully closed, the resource will no longer be tracked, to the same effect as manually calling db.detachResource(). When given multiple resources, the database will close them in parallel. Resources are kept in a set so that the same object will not be attached (and closed) twice.

Intended for objects that rely on an open database. Used internally for built-in resources like iterators and sublevels, and is publicly exposed for custom resources.

db.detachResource(resource)

Stop tracking the given resource.

iterator

An iterator allows one to lazily read a range of entries stored in the database. The entries will be sorted by keys in lexicographic order (in other words: byte order) which in short means key 'a' comes before 'b' and key '10' comes before '2'.
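That ordering follows directly from comparing keys byte by byte, which for ASCII keys is also what JavaScript's default string sort does:

```javascript
// Lexicographic order: '1' sorts before '2', so key '10' comes
// before key '2', and digits sort before lowercase letters.
const keys = ['2', '10', 'b', 'a']
keys.sort()
// keys is now ['10', '2', 'a', 'b']
```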

Iterators can be consumed with for await...of and iterator.all(), or by manually calling iterator.next() or nextv() in succession. In the latter case, iterator.close() must always be called. In contrast, finishing, throwing, breaking or returning from a for await...of loop automatically calls iterator.close(), as does iterator.all().

An iterator reaches its natural end in the following situations:

  • The end of the database has been reached
  • The end of the range has been reached
  • The last iterator.seek() was out of range.

An iterator keeps track of calls that are in progress. It doesn't allow concurrent next(), nextv() or all() calls (including a combination thereof) and will throw an error with code LEVEL_ITERATOR_BUSY if that happens:

```js
// Not awaited
iterator.next()

try {
  // Which means next() is still in progress here
  iterator.all()
} catch (err) {
  console.log(err.code) // 'LEVEL_ITERATOR_BUSY'
}
```

for await...of iterator

Yields entries, which are arrays containing a key and value. The type of key and value depends on the options passed to db.iterator().

```js
try {
  for await (const [key, value] of db.iterator()) {
    console.log(key)
  }
} catch (err) {
  console.error(err)
}
```

Note for implementors: this uses iterator.next() and iterator.close() under the hood so no further method implementations are needed to support for await...of.

iterator.next()

Advance to the next entry and yield that entry. Returns a promise for either an entry array (containing a key and value) or for undefined if the iterator reached its natural end. The type of key and value depends on the options passed to db.iterator().

Note: iterator.close() must always be called once there's no intention to call next() or nextv() again. Even if such calls yielded an error and even if the iterator reached its natural end. Not closing the iterator will result in memory leaks and may also affect performance of other operations if many iterators are unclosed and each is holding a snapshot of the database.

iterator.nextv(size[, options])

Advance repeatedly and get at most size entries in a single call. Can be faster than repeated next() calls. The size argument must be an integer and has a soft minimum of 1. There are no options by default but implementations may add theirs.

Returns a promise for an array of entries, where each entry is an array containing a key and value. The natural end of the iterator will be signaled by yielding an empty array.

```js
const iterator = db.iterator()

while (true) {
  const entries = await iterator.nextv(100)

  if (entries.length === 0) {
    break
  }

  for (const [key, value] of entries) {
    // ..
  }
}

await iterator.close()
```

iterator.all([options])

Advance repeatedly and get all (remaining) entries as an array, automatically closing the iterator. Assumes that those entries fit in memory. If that's not the case, instead use next(), nextv() or for await...of. There are no options by default but implementations may add theirs. Returns a promise for an array of entries, where each entry is an array containing a key and value.

```js
const entries = await db.iterator({ limit: 100 }).all()

for (const [key, value] of entries) {
  // ..
}
```

iterator.seek(target[, options])

Seek to the key closest to target. This method is synchronous, but the actual work may happen lazily. Subsequent calls to iterator.next(), nextv() or all() (including implicit calls in a for await...of loop) will yield entries with keys equal to or larger than target, or equal to or smaller than target if the reverse option passed to db.iterator() was true.

The optional options object may contain:

  • keyEncoding: custom key encoding, used to encode the target. By default the keyEncoding option of the iterator is used or (if that wasn't set) the keyEncoding of the database.

If range options like gt were passed to db.iterator() and target does not fall within that range, the iterator will reach its natural end.

iterator.close()

Free up underlying resources. Returns a promise. Closing the iterator is an idempotent operation, such that calling close() more than once is allowed and makes no difference.

If a next(), nextv() or all() call is in progress, closing will wait for that to finish. After close() has been called, further calls to next(), nextv() or all() will yield an error with code LEVEL_ITERATOR_NOT_OPEN.

iterator.db

A reference to the database that created this iterator.

iterator.count

Read-only getter that indicates how many entries have been yielded so far (by any method) excluding calls that errored or yielded undefined.

iterator.limit

Read-only getter that reflects the limit that was set in options. Greater than or equal to zero. Equals Infinity if no limit, which allows for easy math:

```js
const hasMore = iterator.count < iterator.limit
const remaining = iterator.limit - iterator.count
```

Aborting Iterators

Iterators take an experimental signal option that, once signaled, aborts an in-progress read operation (if any) and rejects subsequent reads. The relevant promise will be rejected with a LEVEL_ABORTED error. Aborting does not close the iterator, because closing is asynchronous and may result in an error that needs a place to go. This means signals should be used together with a pattern that automatically closes the iterator:

```js
const abortController = new AbortController()
const signal = abortController.signal

// Will result in 'aborted' log
abortController.abort()

try {
  for await (const entry of db.iterator({ signal })) {
    console.log(entry)
  }
} catch (err) {
  if (err.code === 'LEVEL_ABORTED') {
    console.log('aborted')
  }
}
```

Otherwise, close the iterator explicitly:

```js
const iterator = db.iterator({ signal })

try {
  const entries = await iterator.nextv(10)
} catch (err) {
  if (err.code === 'LEVEL_ABORTED') {
    console.log('aborted')
  }
} finally {
  await iterator.close()
}
```

Support of signals is indicated via db.supports.signals.iterators.

keyIterator

A key iterator has the same interface as iterator except that its methods yield keys instead of entries. Usage is otherwise the same.

valueIterator

A value iterator has the same interface as iterator except that its methods yield values instead of entries. Usage is otherwise the same.
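As a sketch, both kinds of iterators are consumed the same way as entry iterators. This assumes they are created via db.keys() and db.values(), the factory methods for key and value iterators:

```js
// Iterate keys only
for await (const key of db.keys({ gt: 'a' })) {
  console.log(key)
}

// Get up to 10 values as an array, automatically closing the iterator
const values = await db.values({ limit: 10 }).all()
```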

chainedBatch

chainedBatch.put(key, value[, options])

Add a put operation to this chained batch, not committed until write() is called. This will throw a LEVEL_INVALID_KEY or LEVEL_INVALID_VALUE error if key or value is invalid. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to encode the value.
  • sublevel (sublevel instance): act as though the put operation is performed on the given sublevel, to similar effect as sublevel.batch().put(key, value). This allows atomically committing data to multiple sublevels. The given sublevel must have the same root (i.e. top-most) database as chainedBatch.db. The key will be prefixed with the prefix of the sublevel, and the key and value will be encoded by the sublevel (using the default encodings of the sublevel unless keyEncoding and / or valueEncoding are provided).

chainedBatch.del(key[, options])

Add a del operation to this chained batch, not committed until write() is called. This will throw a LEVEL_INVALID_KEY error if key is invalid. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • sublevel (sublevel instance): act as though the del operation is performed on the given sublevel, to similar effect as sublevel.batch().del(key). This allows atomically committing data to multiple sublevels. The given sublevel must have the same root (i.e. top-most) database as chainedBatch.db. The key will be prefixed with the prefix of the sublevel, and the key will be encoded by the sublevel (using the default key encoding of the sublevel unless keyEncoding is provided).

chainedBatch.clear()

Remove all operations from this chained batch, so that they will not be committed.

chainedBatch.write([options])

Commit the operations. Returns a promise. All operations will be written atomically, that is, they will either all succeed or fail with no partial commits.

There are no options by default but implementations may add theirs. Note that write() does not take encoding options. Those can only be set on put() and del(), because implementations may synchronously forward such calls to an underlying store and thus need keys and values to be encoded at that point.

After write() or close() has been called, no further operations are allowed.
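Putting the above together, a typical chained batch lifecycle can be sketched as follows (assuming an open db; keys and values are hypothetical):

```js
const batch = db.batch()
  .put('father', 'Vader')
  .put('son', 'Luke')
  .del('obsolete')

// Nothing has been committed yet
console.log(batch.length) // 3 (unless hooks added operations)

// Commit atomically; also closes the batch
await batch.write()
```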

chainedBatch.close()

Free up underlying resources. This should be done even if the chained batch has zero operations. Automatically called by write(), so it is normally not necessary to call, unless the intent is to discard a chained batch without committing it. Closing the batch is an idempotent operation, such that calling close() more than once is allowed and makes no difference. Returns a promise.

chainedBatch.length

The number of operations in this chained batch, including operations that were added by prewrite hook functions if any.

chainedBatch.db

A reference to the database that created this chained batch.

sublevel

A sublevel is an instance of the AbstractSublevel class, which extends AbstractLevel and thus has the same API. Sublevels have a few additional properties and methods.

sublevel.prefix

Prefix of the sublevel. A read-only string property.

```js
const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.prefix) // '!example!'
console.log(nested.prefix) // '!example!!nested!'
```

sublevel.parent

Parent database. A read-only property.

```js
const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.parent === db) // true
console.log(nested.parent === example) // true
```

sublevel.db

Root database. A read-only property.

```js
const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.db === db) // true
console.log(nested.db === db) // true
```

sublevel.path([local])

Get the path of this sublevel, which is its prefix without separators. If local is true, exclude the path of the parent database. If false (the default) then recurse to form a fully-qualified path that travels from the root database to this sublevel.

```js
const example = db.sublevel('example')
const nested = example.sublevel('nested')
const foo = db.sublevel(['example', 'nested', 'foo'])

// Get global or local path
console.log(nested.path()) // ['example', 'nested']
console.log(nested.path(true)) // ['nested']

// Has no intermediary sublevels, so the local option has no effect
console.log(foo.path()) // ['example', 'nested', 'foo']
console.log(foo.path(true)) // ['example', 'nested', 'foo']
```

snapshot

snapshot.ref()

Increment the reference count, to register work that should delay closing until snapshot.unref() is called an equal number of times. The promise that will be returned by snapshot.close() will not resolve until the reference count returns to 0. This prevents prematurely closing underlying resources while the snapshot is in use.

It is normally not necessary to call snapshot.ref() and snapshot.unref(), because builtin database methods automatically do so.

snapshot.unref()

Decrement reference count, to indicate that the work has finished.
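For code that does manage a snapshot's lifetime manually, the ref() / unref() pattern can be sketched as follows (builtin database methods already do the equivalent internally):

```js
const snapshot = db.snapshot()

// Register work, delaying the close() promise until unref() is called
snapshot.ref()

try {
  // ... custom reads that pass { snapshot } go here ...
} finally {
  // Work has finished; closing may now proceed
  snapshot.unref()
}

await snapshot.close()
```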

snapshot.close()

Free up underlying resources. Be sure to call this when the snapshot is no longer needed, because snapshots may cause the database to temporarily pause internal storage optimizations. Returns a promise. Closing the snapshot is an idempotent operation, such that calling snapshot.close() more than once is allowed and makes no difference.

After snapshot.close() has been called, no further operations are allowed. For example, db.get(key, { snapshot }) will throw an error with code LEVEL_SNAPSHOT_NOT_OPEN.

Encodings

Any database method that takes a key argument, value argument or range options like gte, hereby jointly referred to as data, runs that data through an encoding. This means to encode input data and decode output data.

Several encodings are builtin courtesy of level-transcoder and can be selected by a short name like 'utf8' or 'json'. The default encoding is 'utf8', which ensures you'll always get back a string. Encodings can be specified for keys and values independently with keyEncoding and valueEncoding options, either in the database constructor or per method to apply an encoding selectively. For example:

```js
const db = level('./db', {
  keyEncoding: 'view',
  valueEncoding: 'json'
})

// Use binary keys
const key = Uint8Array.from([1, 2])

// Encode the value with JSON
await db.put(key, { x: 2 })

// Decode the value with JSON. Yields { x: 2 }
const obj = await db.get(key)

// Decode the value with utf8. Yields '{"x":2}'
const str = await db.get(key, { valueEncoding: 'utf8' })
```

The keyEncoding and valueEncoding options accept a string to select a known encoding by its name, or an object to use a custom encoding like charwise. See keyEncoding() for details. If a custom encoding is passed to the database constructor, subsequent method calls can refer to that encoding by name. Supported encodings are exposed in the db.supports manifest:

```js
const db = level('./db', {
  keyEncoding: require('charwise'),
  valueEncoding: 'json'
})

// Includes builtin and custom encodings
console.log(db.supports.encodings.utf8) // true
console.log(db.supports.encodings.charwise) // true
```

An encoding can both widen and limit the range of data types. The default 'utf8' encoding can only store strings. Other types, though accepted, are irreversibly stringified before storage. That includes JavaScript primitives, which are converted with String(x), Buffer which is converted with x.toString('utf8'), and Uint8Array, converted with TextDecoder#decode(x). Use other encodings for a richer set of data types, as well as binary data without a conversion cost - or loss of non-unicode bytes.

For binary data two builtin encodings are available: 'buffer' and 'view'. They use a Buffer or Uint8Array respectively. To some extent these encodings are interchangeable, as the 'buffer' encoding also accepts Uint8Array as input data (and will convert that to a Buffer without copying the underlying ArrayBuffer), the 'view' encoding also accepts Buffer as input data and so forth. Output data will be either a Buffer or Uint8Array respectively and can also be converted:

```js
const db = level('./db', { valueEncoding: 'view' })
const buffer = await db.get('example', { valueEncoding: 'buffer' })
```

In browser environments it may be preferable to only use 'view'. When bundling JavaScript with Webpack, Browserify or other, you can choose not to use the 'buffer' encoding and (through configuration of the bundler) exclude the buffer shim in order to reduce bundle size.

Regardless of the choice of encoding, a key or value may not be null or undefined due to preexisting significance in iterators and streams. No such restriction exists on range options, because null and undefined are significant types in encodings like charwise as well as some underlying stores like IndexedDB. Consumers of an abstract-level implementation must assume that range options like { gt: undefined } are not the same as {}. The abstract test suite does not test these types. Whether they are supported or how they sort may differ per implementation. An implementation can choose to:

  • Encode these types to make them meaningful
  • Have no defined behavior (moving the concern to a higher level)
  • Delegate to an underlying database (moving the concern to a lower level).

Lastly, one way or another, every implementation must support data of type String and should support data of type Buffer or Uint8Array.

Events

An abstract-level database is an EventEmitter and emits the events listed below.

opening

Emitted when database is opening. Receives 0 arguments:

```js
db.once('opening', function () {
  console.log('Opening...')
})
```

open

Emitted when database has successfully opened. Receives 0 arguments:

```js
db.once('open', function () {
  console.log('Opened!')
})
```

closing

Emitted when database is closing. Receives 0 arguments.

closed

Emitted when database has successfully closed. Receives 0 arguments.

write

Emitted when data was successfully written to the database as the result of db.batch(), db.put() or db.del(). Receives a single operations argument, which is an array containing normalized operation objects. The array will contain at least one operation object and reflects modifications made (and operations added) by the prewrite hook. Normalized means that every operation object has keyEncoding and (if type is 'put') valueEncoding properties, and these are always encoding objects rather than their string names like 'utf8' or whatever was given in the input.

Operation objects also include userland options that were provided in the options argument of the originating call, for example the options in a db.put(key, value, options) call:

```js
db.on('write', function (operations) {
  for (const op of operations) {
    if (op.type === 'put') {
      console.log(op.key, op.value, op.foo)
    }
  }
})

// Put with a userland 'foo' option
await db.put('abc', 'xyz', { foo: true })
```

The key and value of the operation object match the original input, before having been encoded. To provide access to encoded data, the operation object additionally has encodedKey and (if type is 'put') encodedValue properties. Event listeners can inspect keyEncoding.format and valueEncoding.format to determine the data type of encodedKey and encodedValue.

As an example, given a sublevel created with users = db.sublevel('users', { valueEncoding: 'json' }), a call like users.put('isa', { score: 10 }) will emit a write event from the sublevel with an operations argument that looks like the following. Note that specifics (in data types and encodings) may differ per database, as it depends on which encodings an implementation supports and uses internally. This example assumes that the database uses 'utf8'.

```js
[{
  type: 'put',
  key: 'isa',
  value: { score: 10 },
  keyEncoding: users.keyEncoding('utf8'),
  valueEncoding: users.valueEncoding('json'),
  encodedKey: 'isa', // No change (was already utf8)
  encodedValue: '{"score":10}' // JSON-encoded
}]
```

Because sublevels encode and then forward operations to their parent database, a separate write event will be emitted from db with:

```js
[{
  type: 'put',
  key: '!users!isa', // Prefixed
  value: '{"score":10}', // No change
  keyEncoding: db.keyEncoding('utf8'),
  valueEncoding: db.valueEncoding('utf8'),
  encodedKey: '!users!isa',
  encodedValue: '{"score":10}'
}]
```

Similarly, if a sublevel option was provided:

```js
await db.batch()
  .del('isa', { sublevel: users })
  .write()
```

We'll get:

```js
[{
  type: 'del',
  key: '!users!isa', // Prefixed
  keyEncoding: db.keyEncoding('utf8'),
  encodedKey: '!users!isa'
}]
```

Lastly, newly added write event listeners are only called for subsequently created batches (including chained batches):

```js
const promise = db.batch([{ type: 'del', key: 'abc' }])
db.on('write', listener) // Too late
await promise
```

For the event listener to be called it must be added earlier:

```js
db.on('write', listener)
await db.batch([{ type: 'del', key: 'abc' }])
```

The same is true for db.put() and db.del().

clear

Emitted when a db.clear() call has completed and entries were thus successfully deleted from the database. Receives a single options argument, which is the verbatim options argument that was passed to db.clear(options) (or an empty object if none) before having encoded range options.
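For example, a listener could react to deletions as follows (a sketch assuming an open db; the range is hypothetical):

```js
db.on('clear', function (options) {
  // Receives the verbatim (unencoded) range options, e.g. { gt: 'a' }
  console.log('cleared range', options)
})

await db.clear({ gt: 'a' })
```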

Order Of Operations

There is no defined order between parallel write operations. Consider:

```js
await Promise.all([db.put('example', 1), db.put('example', 2)])
const result = await db.get('example')
```

The value of result could be either 1 or 2, because the db.put() calls are asynchronous and awaited in parallel. Some implementations of abstract-level may unintentionally exhibit a "defined" order due to internal details. Implementations are free to change such details at any time, because per the asynchronous abstract-level interface that they follow, the order is theoretically random.

Removing this concern (if necessary) must be done at the application level. For example, the application could have a queue of operations, or per-key locks, or implement transactions on top of snapshots, or a versioning mechanism in its keyspace, or specialized data types like CRDTs, or just say that conflicts are acceptable for that particular application, and so forth. The abundance of examples should explain why abstract-level itself doesn't enter this opinionated and application-specific problem space. Each solution has tradeoffs and abstract-level, being the core of a modular database, cannot decide which tradeoff to make.

Reading From Snapshots

A snapshot is a lightweight "token" that represents a version of a database at a particular point in time. This allows for reading data without seeing subsequent writes made on the database. It comes in two forms:

  1. Implicit snapshots: created internally by the database and not visible to the outside world.
  2. Explicit snapshots: created with snapshot = db.snapshot(). Because it acts as a token, snapshot has no read methods of its own. Instead the snapshot is to be passed to database methods like db.get() and db.iterator(). This also works on sublevels.

Use explicit snapshots wisely, because their lifetime must be managed manually. Implicit snapshots are typically more convenient and possibly more performant, because they can be handled natively and have their lifetime limited by the surrounding operation. That said, explicit snapshots are useful for making multiple read operations that require a shared, consistent view of the data.

Most but not allabstract-level implementations support snapshots. They can be divided into three groups.

1. Implementation does not support snapshots

As indicated by db.supports.implicitSnapshots and db.supports.explicitSnapshots being false. In this case, operations read from the latest version of the database. This most notably affects iterators:

```js
await db.put('example', 'a')

const it = db.iterator()
await db.del('example')
const entries = await it.all() // Likely an empty array
```

The db.supports.implicitSnapshots property is aliased as db.supports.snapshots for backwards compatibility.

2. Implementation supports implicit snapshots

As indicated by db.supports.implicitSnapshots being true. An iterator, upon creation, will synchronously create a snapshot and subsequently read from that snapshot rather than the latest version of the database. There are no actual numerical versions, but let's say there are in order to clarify the behavior:

```js
await db.put('example', 'a') // Results in v1

const it = db.iterator() // Creates snapshot of v1
await db.del('example') // Results in v2

const entries = await it.all() // Reads from snapshot and thus v1
```

The entries array thus includes the deleted entry, because the snapshot of the iterator represents the database version from before the entry was deleted.

Other read operations like db.get() also use a snapshot. Such calls synchronously create a snapshot and then asynchronously read from it. This means a write operation (to the same key) may not be visible unless awaited:

```js
await db.put('example', 1) // Awaited
db.put('example', 2) // Not awaited
await db.get('example') // Yields 1 (typically)
```

In other words, once a write operation has finished (including having communicated that to the main thread of JavaScript, i.e. by resolving the promise in the above example) subsequent reads are guaranteed to include that data. That's because those reads use a snapshot created in the main thread, which is aware of the finished write at this point. Before that point, no guarantee can be given.

3. Implementation supports explicit snapshots

As indicated by db.supports.explicitSnapshots being true. This is the most precise and flexible way to control the version of the data to read. The previous example can be modified to get a consistent result:

```js
await db.put('example', 1)
const snapshot = db.snapshot()
db.put('example', 2)
await db.get('example', { snapshot }) // Yields 1 (always)
await snapshot.close()
```

The main use case for explicit snapshots is retrieving data from an index.

```js
// We'll use charwise to encode "compound" keys
const charwise = require('charwise-compact')
const players = db.sublevel('players', { valueEncoding: 'json' })
const index = db.sublevel('scores', { keyEncoding: charwise })

// Write sample data (using an atomic batch so that the index remains in-sync)
await db.batch()
  .put('alice', { score: 620 }, { sublevel: players })
  .put([620, 'alice'], '', { sublevel: index })
  .write()

// Iterate players that have a score higher than 100
const snapshot = db.snapshot()
const iterator = index.keys({ gt: [100, charwise.HI], snapshot })

for await (const key of iterator) {
  // Index key is [620, 'alice'] so key[1] gives us 'alice'
  const player = await players.get(key[1], { snapshot })
}

// Don't forget to close (and try/catch/finally)
await snapshot.close()
```

On implementations that support implicit but not explicit snapshots, some of the above can be simulated. In particular, to get multiple entries from a snapshot, one could create an iterator and then repeatedly seek() to the desired entries.
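Such a simulation can be sketched as follows, reusing a single iterator (and thus a single implicit snapshot) for multiple point reads. The keys are hypothetical:

```js
// The iterator's implicit snapshot is created here
const iterator = db.iterator()

try {
  // Read 'alice' from the snapshot
  iterator.seek('alice')
  const [aliceEntry] = await iterator.nextv(1)

  // Read 'bob' from the same snapshot
  iterator.seek('bob')
  const [bobEntry] = await iterator.nextv(1)
} finally {
  await iterator.close()
}
```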

Hooks

Hooks are experimental and subject to change without notice.

Hooks allow userland hook functions to customize behavior of the database. Each hook is a different extension point, accessible via db.hooks. Some are shared between database methods to encapsulate common behavior. A hook is either synchronous or asynchronous, and functions added to a hook must respect that trait.

hook = db.hooks.prewrite

A synchronous hook for modifying or adding operations to db.batch([]), db.batch().put(), db.batch().del(), db.put() and db.del() calls. It does not include db.clear(), because the entries deleted by such a call are not communicated back to db.

Functions added to this hook will receive two arguments: op and batch.

Example
```js
const charwise = require('charwise-compact')
const books = db.sublevel('books', { valueEncoding: 'json' })
const index = db.sublevel('authors', { keyEncoding: charwise })

books.hooks.prewrite.add(function (op, batch) {
  if (op.type === 'put') {
    batch.add({
      type: 'put',
      key: [op.value.author, op.key],
      value: '',
      sublevel: index
    })
  }
})

// Will atomically commit it to the author index as well
await books.put('12', { title: 'Siddhartha', author: 'Hesse' })
```
Arguments
op (object)

The op argument reflects the input operation and has the following properties: type, key, keyEncoding, an optional sublevel, and if type is 'put' then also value and valueEncoding. It can also include userland options that were provided either in the input operation object (if it originated from db.batch([])) or in the options argument of the originating call, for example the options in db.del(key, options).

The key and value have not yet been encoded at this point. The keyEncoding and valueEncoding properties are always encoding objects (rather than encoding names like 'json'), which means hook functions can call (for example) op.keyEncoding.encode(123).

Hook functions can modify the key, value, keyEncoding and valueEncoding properties, but not type or sublevel. If a hook function modifies keyEncoding or valueEncoding it can use either encoding names or encoding objects, which will subsequently be normalized to encoding objects. Hook functions can also add custom properties to op, which will be visible to other hook functions, the private API of the database and in the write event.

batch (object)

The batch argument of the hook function is an interface to add operations, to be committed in the same batch as the input operation(s). This also works if the originating call was a singular operation like db.put(), because the presence of one or more hook functions will change db.put() and db.del() to internally use a batch. For originating calls like db.batch([]) that provide multiple input operations, operations will be added after the last input operation, rather than interleaved. The hook function will not be called for operations that were added by either itself or other hook functions.

batch = batch.add(op)

Add a batch operation, using the same format as the operations that db.batch([]) takes. However, it is assumed that op can be freely mutated by abstract-level. Unlike input operations it will not be cloned before doing so. The add method returns batch, which allows for chaining, similar to the chained batch API.

For hook functions to be generic, it is recommended to explicitly define keyEncoding and valueEncoding properties on op (instead of relying on database defaults) or to use an isolated sublevel with known defaults.

hook = db.hooks.postopen

An asynchronous hook that runs after the database has successfully opened, but before deferred operations are executed and before events are emitted. It thus allows for additional initialization, including reading and writing data that deferred operations might need. The postopen hook always runs before the prewrite hook.

Functions added to this hook must return a promise and will receive one argument: options. If one of the hook functions yields an error then the database will be closed. In the rare event that closing also fails, which means there's no safe state to return to, the database will enter an internal locked state where db.status is 'closed' and subsequent calls to db.open() or db.close() will be met with a LEVEL_STATUS_LOCKED error. This locked state is also used during the postopen hook itself, meaning hook functions are not allowed to call db.open() or db.close().

Example
```js
db.hooks.postopen.add(async function (options) {
  // Can read and write like usual
  return db.put('example', 123, {
    valueEncoding: 'json'
  })
})
```
Arguments
options (object)

The options that were provided in the originating db.open(options) call, merged with constructor options and defaults. Equivalent to what the private API received in db._open(options).

hook = db.hooks.newsub

A synchronous hook that runs when an AbstractSublevel instance has been created by db.sublevel(options). Functions added to this hook will receive two arguments: sublevel and options.

Example

This hook can be useful to hook into a database and any sublevels created on that database. Userland modules that act like plugins might like the following pattern:

```js
module.exports = function logger (db, options) {
  // Recurse so that db.sublevel('foo', opts) will call logger(sublevel, opts)
  db.hooks.newsub.add(logger)

  db.hooks.prewrite.add(function (op, batch) {
    console.log('writing', { db, op })
  })
}
```
Arguments
sublevel (object)

The AbstractSublevel instance that was created.

options (object)

The options that were provided in the originating db.sublevel(options) call, merged with defaults. Equivalent to what the private API received in db._sublevel(options).

hook

hook.add(fn)

Add the given fn function to this hook, if it wasn't already added.

hook.delete(fn)

Remove the given fn function from this hook.

Hook Error Handling

If a hook function throws an error, it will be wrapped in an error with code LEVEL_HOOK_ERROR and abort the originating call:

```js
try {
  await db.put('abc', 123)
} catch (err) {
  if (err.code === 'LEVEL_HOOK_ERROR') {
    console.log(err.cause)
  }
}
```

As a result, other hook functions will not be called.

Hooks On Sublevels

On sublevels and their parent database(s), hooks are triggered in bottom-up order. For example, db.sublevel('a').sublevel('b').batch(..) will trigger the prewrite hook of sublevel b, then the prewrite hook of sublevel a and then of db. Only direct operations on a database will trigger hooks, not when a sublevel is provided as an option. This means db.batch([{ sublevel, ... }]) will trigger the prewrite hook of db but not of sublevel. These behaviors are symmetrical to events: db.batch([{ sublevel, ... }]) will only emit a write event from db, while db.sublevel(..).batch([{ ... }]) will emit a write event from the sublevel and then another from db (this time with fully-qualified keys).

Shared Access

Unless documented otherwise, implementations of abstract-level do not support accessing a database from multiple processes running in parallel. That includes Node.js clusters and Electron renderer processes.

See Level/awesome for modules like many-level and rave-level that allow a database to be shared across processes and/or machines.

Errors

Errors thrown by an abstract-level database have a code property that is an uppercase string. Error codes will not change between major versions, but error messages will. Messages may also differ between implementations; they are free and encouraged to tune messages.

A database may also throw TypeError errors (or other core error constructors in JavaScript) without a code and without any guarantee on the stability of error properties, because these errors indicate invalid arguments and other programming mistakes that should not be caught, much less have associated logic.

Error codes will be one of the following.

LEVEL_DATABASE_NOT_OPEN

When an operation was made on a database while it was closing or closed. The error may have a cause property that explains a failure to open:

```js
try {
  await db.open()
} catch (err) {
  console.error(err.code) // 'LEVEL_DATABASE_NOT_OPEN'

  if (err.cause && err.cause.code === 'LEVEL_LOCKED') {
    // Another process or instance has opened the database
  }
}
```

LEVEL_DATABASE_NOT_CLOSED

When a database failed to close(). The error may have a cause property that explains a failure to close.

LEVEL_ITERATOR_NOT_OPEN

When an operation was made on an iterator while it was closing or closed, which may also be the result of the database being closed.

LEVEL_ITERATOR_BUSY

When iterator.next() or seek() was called while a previous next() call was still in progress.

LEVEL_BATCH_NOT_OPEN

When an operation was made on a chained batch while it was closing or closed, which may also be the result of the database being closed or that write() was called on the chained batch.

LEVEL_SNAPSHOT_NOT_OPEN

When an operation was made on a snapshot while it was closing or closed, which may also be the result of the database being closed.

LEVEL_ABORTED

When an operation was aborted by the user. For web compatibility this error can also be identified by its name, which is 'AbortError':

```js
if (err.name === 'AbortError') {
  // Operation was aborted
}
```

LEVEL_ENCODING_NOT_FOUND

When a keyEncoding or valueEncoding option specified a named encoding that does not exist.

LEVEL_ENCODING_NOT_SUPPORTED

When a keyEncoding or valueEncoding option specified an encoding that isn't supported by the database.

LEVEL_DECODE_ERROR

When decoding of keys or values failed. The error may have a cause property containing an original error. For example, it might be a SyntaxError from an internal JSON.parse() call:

```js
await db.put('key', 'invalid json', { valueEncoding: 'utf8' })

try {
  const value = await db.get('key', { valueEncoding: 'json' })
} catch (err) {
  console.log(err.code) // 'LEVEL_DECODE_ERROR'
  console.log(err.cause) // 'SyntaxError: Unexpected token i in JSON at position 0'
}
```

LEVEL_INVALID_KEY

When a key is null, undefined or (if an implementation deems it so) otherwise invalid.

LEVEL_INVALID_VALUE

When a value is null, undefined or (if an implementation deems it so) otherwise invalid.

LEVEL_CORRUPTION

Data could not be read (from an underlying store) due to a corruption.

LEVEL_IO_ERROR

Data could not be read (from an underlying store) due to an input/output error, for example from the filesystem.

LEVEL_INVALID_PREFIX

When a sublevel prefix contains characters outside of the supported byte range.

LEVEL_NOT_SUPPORTED

When a module needs a certain feature, typically as indicated by db.supports, but that feature is not available on a database argument or other. For example, some kind of plugin may depend on snapshots:

```js
const ModuleError = require('module-error')

module.exports = function plugin (db) {
  if (!db.supports.explicitSnapshots) {
    throw new ModuleError('Database must support snapshots', {
      code: 'LEVEL_NOT_SUPPORTED'
    })
  }

  // ..
}
```

LEVEL_LEGACY

When a method, option or other property was used that has been removed from the API.

LEVEL_LOCKED

When an attempt was made to open a database that is already open in another process or instance. Used by classic-level and other implementations of abstract-level that use exclusive locks.

LEVEL_HOOK_ERROR

An error occurred while running a hook function. The error will have a cause property set to the original error thrown from the hook function.

LEVEL_STATUS_LOCKED

When db.open() or db.close() was called while the database was locked, as described in the postopen hook documentation.

LEVEL_READONLY

When an attempt was made to write data to a read-only database. Used by many-level.

LEVEL_CONNECTION_LOST

When a database relies on a connection to a remote party and that connection has been lost. Used by many-level.

LEVEL_REMOTE_ERROR

When a remote party encountered an unexpected condition that it can't reflect with a more specific code. Used by many-level.

Private API For Implementors

To implement an abstract-level database, extend the AbstractLevel class and override the private underscored versions of its methods. For example, to implement the public put() method, override the private _put() method. The same goes for other classes (some of which are optional to override). All classes can be found on the main export of the npm package:

const{  AbstractLevel,  AbstractSublevel,  AbstractIterator,  AbstractKeyIterator,  AbstractValueIterator,  AbstractChainedBatch,  AbstractSnapshot}=require('abstract-level')

Naming-wise, implementations should use a class name in the form of `*Level` (suffixed, for example `MemoryLevel`) and an npm package name in the form of `*-level` (for example `memory-level`), while utilities and plugins should use a package name in the form of `level-*` (prefixed).

Each of the private methods listed below will receive exactly the number and types of arguments described, regardless of what is passed in through the public API. Public methods provide type checking: if a consumer calls `db.batch(123)` they'll get an error that the first argument must be an array. Optional arguments get sensible defaults: a `db.get(key)` call translates to a `db._get(key, options)` call.

Where possible, the default private methods are sensible noops that do nothing. For example, `db._open()` will simply resolve its promise on a next tick. Other methods have functional defaults. Each method documents whether implementing it is mandatory.

When throwing or yielding an error, prefer using a known error code. If new codes are required for your implementation and you wish to use the `LEVEL_` prefix for consistency, feel free to open an issue to discuss. We'll likely want to document those codes here.

Example

Let's implement a basic in-memory database:

```js
const { AbstractLevel } = require('abstract-level')

class ExampleLevel extends AbstractLevel {
  // This in-memory example doesn't have a location argument
  constructor (options) {
    // Declare supported encodings
    const encodings = { utf8: true }

    // Call AbstractLevel constructor
    super({ encodings }, options)

    // Create a map to store entries
    this._entries = new Map()
  }

  async _open (options) {
    // Here you would open any necessary resources.
  }

  async _put (key, value, options) {
    this._entries.set(key, value)
  }

  async _get (key, options) {
    // Is undefined if not found
    return this._entries.get(key)
  }

  async _del (key, options) {
    this._entries.delete(key)
  }
}
```

Now we can use our implementation:

```js
const db = new ExampleLevel()

await db.put('foo', 'bar')
const value = await db.get('foo')

console.log(value) // 'bar'
```

Although our basic implementation only supports `'utf8'` strings internally, we do get to use encodings that encode _to_ that. For example, the `'json'` encoding which encodes to `'utf8'`:

```js
const db = new ExampleLevel({ valueEncoding: 'json' })

await db.put('foo', { a: 123 })
const value = await db.get('foo')

console.log(value) // { a: 123 }
```

See `memory-level` if you are looking for a complete in-memory implementation. The example above notably lacks iterator support and would not pass the abstract test suite.

db = new AbstractLevel(manifest[, options])

The database constructor. Sets the `status` to `'opening'`. Takes a `manifest` object that the constructor will enrich with defaults. At minimum, the manifest must declare which `encodings` are supported in the private API. For example:

```js
class ExampleLevel extends AbstractLevel {
  constructor (location, options) {
    const manifest = {
      encodings: { buffer: true }
    }

    // Call AbstractLevel constructor.
    // Location is not handled by AbstractLevel.
    super(manifest, options)
  }
}
```

Both the public and private API of `abstract-level` are encoding-aware. This means that private methods receive `keyEncoding` and `valueEncoding` options too. Implementations don't need to perform encoding or decoding themselves. Rather, the `keyEncoding` and `valueEncoding` options are lower-level encodings that indicate the type of already-encoded input data or the expected type of yet-to-be-decoded output data. They're one of `'buffer'`, `'view'` or `'utf8'` and always strings in the private API.

If the manifest declared support of `'buffer'`, then `keyEncoding` and `valueEncoding` will always be `'buffer'`. If the manifest declared support of `'utf8'` then `keyEncoding` and `valueEncoding` will be `'utf8'`.

For example: a call like `await db.put(key, { x: 2 }, { valueEncoding: 'json' })` will encode the `{ x: 2 }` value and might forward it to the private API as `db._put(key, '{"x":2}', { valueEncoding: 'utf8' })`. The same applies to the key (omitted here for brevity).

The public API will coerce user input as necessary. If the manifest declared support of `'utf8'` then `await db.get(24)` will forward that number key as a string: `db._get('24', { keyEncoding: 'utf8', ... })`. However, this is _not_ true for output: a private API call like `db._get(key, { keyEncoding: 'utf8', valueEncoding: 'utf8' })` _must_ yield a string value.

All private methods below that take a `key` argument, `value` argument or range option will receive that data in encoded form. That includes `iterator._seek()` with its `target` argument. So if the manifest declared support of `'buffer'` then `db.iterator({ gt: 2 })` translates into `db._iterator({ gt: Buffer.from('2'), ...options })` and `iterator.seek(128)` translates into `iterator._seek(Buffer.from('128'), options)`.

The `AbstractLevel` constructor will add other supported encodings to the public manifest. If the private API only supports `'buffer'`, the resulting `db.supports.encodings` will nevertheless be as follows, because all other encodings can be transcoded to `'buffer'`:

```js
{ buffer: true, view: true, utf8: true, json: true, ... }
```

Implementations can also declare support of multiple encodings. Keys and values will then be encoded and decoded via the most optimal path. For example, `classic-level` uses:

```js
super({ encodings: { buffer: true, utf8: true } }, options)
```

This has the benefit that user input needs fewer conversion steps: if the input is a string then `classic-level` can pass it to its LevelDB binding as-is. Vice versa for output.

db._open(options)

Open the database. The `options` object will always have the following properties: `createIfMissing`, `errorIfExists`. When this is called, `db.status` will be `'opening'`. Must return a promise. If opening failed, reject the promise, which will set `db.status` to `'closed'`. Otherwise resolve the promise, which will set `db.status` to `'open'`. The default `_open()` is an async noop.

db._close()

Close the database. When this is called, `db.status` will be `'closing'`. Must return a promise. If closing failed, reject the promise, which will reset `db.status` to `'open'`. Otherwise resolve the promise, which will set `db.status` to `'closed'`. If the database was never opened or failed to open then `_close()` will not be called.

The default `_close()` is an async noop. In native implementations (native addons written in C++ or other languages) it's recommended to delay closing if any operations are in flight. See `classic-level` (previously `leveldown`) for an example of this behavior. The JavaScript side in `abstract-level` will prevent _new_ operations before the database is reopened (as explained in the constructor documentation above) while the C++ side should prevent closing the database before _existing_ operations have completed.

db._get(key, options)

Get a value by `key`. The `options` object will always have the following properties: `keyEncoding` and `valueEncoding`. Must return a promise. If an error occurs, reject the promise. Otherwise resolve the promise with the value. If the `key` was not found then use `undefined` as the value.

If the database indicates support of snapshots via `db.supports.implicitSnapshots` then `db._get()` must read from a snapshot of the database. That snapshot (or similar mechanism) must be created synchronously when `db._get()` is called, before asynchronously reading the value. This means it should not see the data of write operations that are scheduled immediately after `db._get()`.

The default `_get()` returns a promise for an `undefined` value. It must be overridden.

db._getMany(keys, options)

Get multiple values by an array of `keys`. The `options` object will always have the following properties: `keyEncoding` and `valueEncoding`. Must return a promise. If an error occurs, reject the promise. Otherwise resolve the promise with an array of values. If a key does not exist, set the relevant value to `undefined`.

Snapshot behavior of `db._getMany()` must be the same as described for `db._get()` above.

The default `_getMany()` returns a promise for an array of values that is equal in length to `keys` and is filled with `undefined`. It must be overridden.
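For the in-memory example above, the `_getMany()` logic can be sketched as follows (a hypothetical standalone version that uses a plain `Map` instead of extending `AbstractLevel`):

```js
// Hypothetical standalone sketch of _getMany() over a plain Map,
// standing in for the database's internal storage.
const entries = new Map([['a', '1'], ['b', '2']])

async function _getMany (keys, options) {
  // Missing keys resolve to undefined, per the contract above
  return keys.map(key => entries.get(key))
}

_getMany(['a', 'nope', 'b'], {}).then(values => {
  console.log(values) // [ '1', undefined, '2' ]
})
```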

db._has(key, options)

Check if the database has an entry with the given `key`. The `options` object will always have the following properties: `keyEncoding`. Must return a promise. If an error occurs, reject the promise. Otherwise resolve the promise with a boolean.

The default `_has()` throws a `LEVEL_NOT_SUPPORTED` error. It is an optional feature at the moment. If implemented then `_hasMany()` must also be implemented. Set `manifest.has` to `true` in order to enable tests:

```js
class ExampleLevel extends AbstractLevel {
  constructor (/* ... */) {
    const manifest = {
      has: true,
      // ...
    }

    super(manifest, options)
  }
}
```

db._hasMany(keys, options)

Check if the database has entries with the given keys. The `keys` argument is guaranteed to be an array. The `options` object will always have the following properties: `keyEncoding`. Must return a promise. If an error occurs, reject the promise. Otherwise resolve the promise with an array of booleans.
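Continuing the in-memory sketch, `_hasMany()` can simply reuse the `_has()` logic per key (hypothetical standalone code over a plain `Map`):

```js
// Hypothetical standalone sketch, not a real AbstractLevel subclass
const entries = new Map([['a', '1']])

async function _has (key, options) {
  return entries.has(key)
}

async function _hasMany (keys, options) {
  // One boolean per key, in the same order as the input array
  return Promise.all(keys.map(key => _has(key, options)))
}

_hasMany(['a', 'b'], {}).then(result => {
  console.log(result) // [ true, false ]
})
```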

db._put(key, value, options)

Add a new entry or overwrite an existing entry. The `options` object will always have the following properties: `keyEncoding` and `valueEncoding`. Must return a promise. If an error occurs, reject the promise. Otherwise resolve the promise, without an argument.

The default `_put()` returns a resolved promise. It must be overridden.

db._del(key, options)

Delete an entry. The `options` object will always have the following properties: `keyEncoding`. Must return a promise. If an error occurs, reject the promise. Otherwise resolve the promise, without an argument.

The default `_del()` returns a resolved promise. It must be overridden.

db._batch(operations, options)

Perform multiple `put` and/or `del` operations in bulk. The `operations` argument is always an array containing a list of operations to be executed sequentially, although as a whole they should be performed as an atomic operation. The `_batch()` method will not be called if the `operations` array is empty. Each operation is guaranteed to have at least `type`, `key` and `keyEncoding` properties. If the type is `'put'`, the operation will also have `value` and `valueEncoding` properties. There are no default options but `options` will always be an object.

Must return a promise. If the batch failed, reject the promise. Otherwise resolve the promise, without an argument.

The public `batch()` method supports encoding options both in the `options` argument and per operation. The private `_batch()` method should only support encoding options per operation, which are guaranteed to be set and normalized (the `options` argument in the private API might also contain encoding options, but only because it's cheaper not to remove them).

The default `_batch()` returns a resolved promise. It must be overridden.
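For the in-memory example, `_batch()` might apply the operations in order (a hypothetical standalone sketch; in a single-threaded process a synchronous loop over a `Map` makes the sequence effectively atomic):

```js
// Hypothetical standalone sketch over a plain Map, standing in for
// the database's internal storage.
const entries = new Map()

async function _batch (operations, options) {
  // Operations arrive pre-encoded; apply them sequentially
  for (const op of operations) {
    if (op.type === 'put') {
      entries.set(op.key, op.value)
    } else {
      entries.delete(op.key)
    }
  }
}

_batch([
  { type: 'put', key: 'a', value: '1', keyEncoding: 'utf8', valueEncoding: 'utf8' },
  { type: 'del', key: 'a', keyEncoding: 'utf8' }
], {}).then(() => {
  console.log(entries.size) // 0
})
```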

db._chainedBatch()

The default `_chainedBatch()` returns a functional `AbstractChainedBatch` instance that uses `db._batch(array, options)` under the hood. To implement a chained batch in an optimized manner, extend `AbstractChainedBatch` and return an instance of this class in the `_chainedBatch()` method:

```js
const { AbstractChainedBatch } = require('abstract-level')

class ExampleChainedBatch extends AbstractChainedBatch {
  constructor (db) {
    super(db)
  }
}

class ExampleLevel extends AbstractLevel {
  _chainedBatch () {
    return new ExampleChainedBatch(this)
  }
}
```

db._iterator(options)

The default `_iterator()` returns a noop `AbstractIterator` instance. It must be overridden, by extending `AbstractIterator` and returning an instance of this class in the `_iterator(options)` method:

```js
const { AbstractIterator } = require('abstract-level')

class ExampleIterator extends AbstractIterator {
  constructor (db, options) {
    super(db, options)
  }

  // ..
}

class ExampleLevel extends AbstractLevel {
  _iterator (options) {
    return new ExampleIterator(this, options)
  }
}
```

The `options` object will always have the following properties: `reverse`, `keys`, `values`, `limit`, `keyEncoding` and `valueEncoding`. The `limit` will always be an integer, greater than or equal to `-1` and less than `Infinity`. If the user passed range options to `db.iterator()`, those will be encoded and set in `options`.

db._keys(options)

The default `_keys()` returns a functional iterator that wraps `db._iterator()` in order to map entries to keys. For optimal performance it can be overridden by extending `AbstractKeyIterator`:

```js
const { AbstractKeyIterator } = require('abstract-level')

class ExampleKeyIterator extends AbstractKeyIterator {
  constructor (db, options) {
    super(db, options)
  }

  // ..
}

class ExampleLevel extends AbstractLevel {
  _keys (options) {
    return new ExampleKeyIterator(this, options)
  }
}
```

The `options` object will always have the following properties: `reverse`, `limit` and `keyEncoding`. The `limit` will always be an integer, greater than or equal to `-1` and less than `Infinity`. If the user passed range options to `db.keys()`, those will be encoded and set in `options`.

db._values(options)

The default `_values()` returns a functional iterator that wraps `db._iterator()` in order to map entries to values. For optimal performance it can be overridden by extending `AbstractValueIterator`:

```js
const { AbstractValueIterator } = require('abstract-level')

class ExampleValueIterator extends AbstractValueIterator {
  constructor (db, options) {
    super(db, options)
  }

  // ..
}

class ExampleLevel extends AbstractLevel {
  _values (options) {
    return new ExampleValueIterator(this, options)
  }
}
```

The `options` object will always have the following properties: `reverse`, `limit`, `keyEncoding` and `valueEncoding`. The `limit` will always be an integer, greater than or equal to `-1` and less than `Infinity`. If the user passed range options to `db.values()`, those will be encoded and set in `options`.

db._clear(options)

Delete all entries or a range. Does not have to be atomic. Must return a promise. If an error occurs, reject the promise. Otherwise resolve the promise, without an argument. It is recommended (and possibly mandatory in the future) to operate on a snapshot so that writes scheduled after a call to `clear()` will not be affected.

Implementations that wrap another database can typically forward the `_clear()` call to that database, having transformed range options if necessary.

The `options` object will always have the following properties: `reverse`, `limit` and `keyEncoding`. If the user passed range options to `db.clear()`, those will be encoded and set in `options`.
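To illustrate the range handling involved, a hypothetical `_clear()` over the in-memory `Map` could look like this (a sketch that only handles the `gte`/`lt` range options and `limit`, ignoring `reverse` and the other range options):

```js
// Hypothetical standalone sketch over a plain Map
const entries = new Map([['a', '1'], ['b', '2'], ['c', '3']])

async function _clear (options) {
  let count = 0

  // Iterate over a sorted copy of the keys so deleting is safe mid-loop
  for (const key of [...entries.keys()].sort()) {
    if (options.gte !== undefined && key < options.gte) continue
    if (options.lt !== undefined && key >= options.lt) continue
    if (options.limit !== -1 && count >= options.limit) break

    entries.delete(key)
    count++
  }
}

_clear({ gte: 'b', limit: -1 }).then(() => {
  console.log([...entries.keys()]) // [ 'a' ]
})
```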

sublevel = db._sublevel(name, options)

Create a `sublevel`. The `options` object will always have the following properties: `separator`. The default `_sublevel()` returns a new instance of the `AbstractSublevel` class. Overriding is optional. The `AbstractSublevel` class can be extended in order to add additional methods to sublevels:

```js
const { AbstractLevel, AbstractSublevel } = require('abstract-level')

class ExampleLevel extends AbstractLevel {
  _sublevel (name, options) {
    return new ExampleSublevel(this, name, options)
  }
}

// For brevity this does not handle deferred open
class ExampleSublevel extends AbstractSublevel {
  example (key, options) {
    // Encode and prefix the key
    const keyEncoding = this.keyEncoding(options.keyEncoding)
    const keyFormat = keyEncoding.format
    key = this.prefixKey(keyEncoding.encode(key), keyFormat, true)

    // The parent database can be accessed like so. Make sure
    // to forward encoding options and use the full key.
    this.parent.del(key, { keyEncoding: keyFormat }, ...)
  }
}
```

snapshot = db._snapshot(options)

Create a snapshot. The `options` argument is guaranteed to be an object. There are currently no options but implementations may add their own.

The default `_snapshot()` throws a `LEVEL_NOT_SUPPORTED` error. To implement this method, extend `AbstractSnapshot`, return an instance of this class in an overridden `_snapshot()` method and set `manifest.explicitSnapshots` to `true`:

```js
const { AbstractSnapshot } = require('abstract-level')

class ExampleSnapshot extends AbstractSnapshot {
  constructor (options) {
    super(options)
  }
}

class ExampleLevel extends AbstractLevel {
  constructor (/* ..., */ options) {
    const manifest = {
      explicitSnapshots: true,
      // ...
    }

    super(manifest, options)
  }

  _snapshot (options) {
    return new ExampleSnapshot(options)
  }
}
```

The snapshot of the underlying database (or another mechanism achieving the same effect) must be created synchronously, such that a call like `db.put()` made immediately after `db._snapshot()` will not affect the snapshot. As for previous write operations that are still in progress at the time that `db._snapshot()` is called: `db._snapshot()` does not have to (and should not) wait for such operations. Resolving inconsistencies that may arise from this behavior is an application-level concern. To be clear, if the application awaits the write operations before calling `db.snapshot()` then the snapshot does need to reflect (include) those operations.

iterator = new AbstractIterator(db, options)

The first argument to this constructor must be an instance of the relevant `AbstractLevel` implementation. The constructor will set `iterator.db`, which is used (among other things) to access encodings and ensures that `db` will not be garbage collected in case there are no other references to it. The `options` argument must be the original `options` object that was passed to `db._iterator()` and it is therefore not (publicly) possible to create an iterator via constructors alone.

The `signal` option, if any and once signaled, should abort an in-progress `_next()`, `_nextv()` or `_all()` call and reject the promise returned by that call with a `LEVEL_ABORTED` error. Doing so is optional until a future semver-major release. Responsibilities are divided as follows:

  1. Before a database has finished opening, `abstract-level` handles the signal
  2. While a call is in progress, the implementation handles the signal
  3. Once the signal is aborted, `abstract-level` rejects further calls.

A method like `_next()` therefore doesn't have to check the signal _before_ it starts its asynchronous work, only _during_ that work. If supported, set `db.supports.signals.iterators` to `true` (via the manifest passed to the database constructor), which also enables the relevant tests in the test suite.

iterator._next()

Advance to the next entry and yield that entry. Must return a promise. If an error occurs, reject the promise. If the natural end of the iterator has been reached, resolve the promise with `undefined`. Otherwise resolve the promise with an array containing a `key` and `value`. If a `limit` was set and the iterator already yielded that many entries (via any of the methods) then `_next()` will not be called.

The default `_next()` returns a promise for `undefined`. It must be overridden.
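To sketch the contract: a hypothetical in-memory iterator could snapshot the entries up front and yield them one at a time, resolving `undefined` at the natural end (standalone code, not a real `AbstractIterator` subclass):

```js
// Hypothetical standalone sketch of the _next() contract
class ExampleIterator {
  constructor (entries) {
    // Snapshot up front so writes made after construction aren't observed
    this._snapshot = [...entries]
    this._index = 0
  }

  async _next () {
    if (this._index >= this._snapshot.length) return undefined
    return this._snapshot[this._index++] // a [key, value] entry
  }
}

const it = new ExampleIterator(new Map([['a', '1']]))
it._next().then(entry => console.log(entry)) // [ 'a', '1' ]
```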

iterator._nextv(size, options)

Advance repeatedly and get at most `size` entries in a single call. The `size` argument will always be an integer greater than 0. If a `limit` was set then `size` will be at most `limit - iterator.count`. If a `limit` was set and the iterator already yielded that many entries (via any of the methods) then `_nextv()` will not be called. There are no default options but `options` will always be an object.

Must return a promise. If an error occurs, reject the promise. Otherwise resolve the promise with an array of entries. An empty array signifies the natural end of the iterator, so yield an array with at least one entry if the end has not been reached yet.

The default `_nextv()` is a functional default that makes repeated calls to `_next()` and should be overridden for better performance.
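The functional default is roughly equivalent to the following sketch, which batches repeated `_next()` calls (a hypothetical standalone object, not a real `AbstractIterator` subclass):

```js
// Hypothetical sketch of building _nextv() on top of _next()
const iterator = {
  _items: [['a', '1'], ['b', '2'], ['c', '3']],

  async _next () {
    return this._items.shift() // undefined once exhausted
  },

  async _nextv (size, options) {
    const entries = []

    while (entries.length < size) {
      const entry = await this._next()
      if (entry === undefined) break // natural end of the iterator
      entries.push(entry)
    }

    return entries
  }
}
```

Calling `iterator._nextv(2, {})` would yield the first two entries; a subsequent call yields the remaining one, and a call after that yields an empty array.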

iterator._all(options)

Advance repeatedly and get all (remaining) entries as an array. If a `limit` was set and the iterator already yielded that many entries (via any of the methods) then `_all()` will not be called. Do not call `close()` here because `all()` will do so (regardless of any error) and this may become an opt-out behavior in the future. There are no default options but `options` will always be an object.

Must return a promise. If an error occurs, reject the promise. Otherwise resolve the promise with an array of entries.

The default `_all()` is a functional default that makes repeated calls to `_nextv()` and should be overridden for better performance.

iterator._seek(target, options)

Seek to the key closest to `target`. The `options` object will always have the following properties: `keyEncoding`. The default `_seek()` will throw an error with code `LEVEL_NOT_SUPPORTED` and must be overridden.

iterator._close()

Free up underlying resources. This method is guaranteed to only be called once. Must return a promise.

The default `_close()` returns a resolved promise. Overriding is optional.

keyIterator = AbstractKeyIterator(db, options)

A key iterator has the same interface and constructor arguments as `AbstractIterator`, except that it must yield keys instead of entries. The same goes for value iterators:

```js
class ExampleKeyIterator extends AbstractKeyIterator {
  async _next () {
    return 'example-key'
  }
}

class ExampleValueIterator extends AbstractValueIterator {
  async _next () {
    return 'example-value'
  }
}
```

The `options` argument must be the original `options` object that was passed to `db._keys()` and it is therefore not (publicly) possible to create a key iterator via constructors alone. The same goes for value iterators via `db._values()`.

Note: the `AbstractKeyIterator` and `AbstractValueIterator` classes do _not_ extend the `AbstractIterator` class. Similarly, if your implementation overrides `db._keys()` returning a custom subclass of `AbstractKeyIterator`, then that subclass must implement methods like `_next()` separately from your subclass of `AbstractIterator`.

valueIterator = AbstractValueIterator(db, options)

A value iterator has the same interface and constructor arguments as `AbstractIterator`, except that it must yield values instead of entries. For further details, see `keyIterator` above.

chainedBatch = new AbstractChainedBatch(db, options)

The first argument to this constructor must be an instance of the relevant `AbstractLevel` implementation. The constructor will set `chainedBatch.db`, which is used (among other things) to access encodings and ensures that `db` will not be garbage collected in case there are no other references to it.

There are two ways to implement a chained batch. If `options.add` is true, only `_add()` will be called. If `options.add` is false or not provided, only `_put()` and `_del()` will be called.

chainedBatch._add(op)

Add a `put` or `del` operation. The `op` object will always have the following properties: `type`, `key`, `keyEncoding` and (if `type` is `'put'`) `value` and `valueEncoding`.

chainedBatch._put(key, value, options)

Add a `put` operation. The `options` object will always have the following properties: `keyEncoding` and `valueEncoding`.

chainedBatch._del(key, options)

Add a `del` operation. The `options` object will always have the following properties: `keyEncoding`.

chainedBatch._clear()

Remove all operations from this batch.

chainedBatch._write(options)

The default `_write()` method uses `db._batch()`. If `_write()` is overridden it must atomically commit the operations. There are no default options but `options` will always be an object. Must return a promise. If an error occurs, reject the promise. Otherwise resolve the promise, without an argument. The `_write()` method will not be called if the chained batch contains zero operations.
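Putting the pieces together, a chained batch could queue operations in memory and commit them in `_write()`. The following is a hypothetical standalone sketch; a real implementation would extend `AbstractChainedBatch` and inherit the public API:

```js
// Hypothetical standalone sketch of a chained batch
class ExampleChainedBatch {
  constructor (db) {
    this.db = db
    this._operations = [] // queued until _write()
  }

  _put (key, value, options) {
    this._operations.push({ type: 'put', key, value, ...options })
  }

  _del (key, options) {
    this._operations.push({ type: 'del', key, ...options })
  }

  _clear () {
    this._operations = []
  }

  async _write (options) {
    // Commit all queued operations in one atomic db._batch() call
    return this.db._batch(this._operations, options)
  }
}

// Hypothetical usage with a database exposing _batch():
// const batch = new ExampleChainedBatch(db)
// batch._put('a', '1', { keyEncoding: 'utf8', valueEncoding: 'utf8' })
// await batch._write({})
```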

chainedBatch._close()

Free up underlying resources. This method is guaranteed to only be called once. Must return a promise.

The default `_close()` returns a resolved promise. Overriding is optional.

snapshot = new AbstractSnapshot(db)

The first argument to this constructor must be an instance of the relevant `AbstractLevel` implementation.

snapshot._close()

Free up underlying resources. This method is guaranteed to only be called once and will not be called while read operations like `db._get()` are in flight. Must return a promise.

The default `_close()` returns a resolved promise. Overriding is optional.

Test Suite

To prove that your implementation is `abstract-level` compliant, include the abstract test suite in your `test.js` (or similar):

```js
const test = require('tape')
const suite = require('abstract-level/test')
const ExampleLevel = require('.')

suite({
  test,
  factory (options) {
    return new ExampleLevel(options)
  }
})
```

The `test` option _must_ be a function that is API-compatible with `tape`. The `factory` option _must_ be a function that returns a unique and isolated instance of your implementation. The factory will be called many times by the test suite.

If your implementation is disk-based we recommend using `tempy` (or similar) to create unique temporary directories. Your setup could look something like:

```js
const test = require('tape')
const tempy = require('tempy')
const suite = require('abstract-level/test')
const ExampleLevel = require('.')

suite({
  test,
  factory (options) {
    return new ExampleLevel(tempy.directory(), options)
  }
})
```

Excluding tests

As not every implementation can be fully compliant due to limitations of its underlying storage, some tests may be skipped. This must be done via `db.supports`, which is set via the constructor. For example, to skip tests of implicit snapshots:

```js
const { AbstractLevel } = require('abstract-level')

class ExampleLevel extends AbstractLevel {
  constructor (location, options) {
    super({ implicitSnapshots: false }, options)
  }
}
```

This also serves as a signal to users of your implementation.

ReusingtestCommon

The input to the test suite is a `testCommon` object. Should you need to reuse `testCommon` for your own (additional) tests, use the included utility to create a `testCommon` object with defaults:

```js
const test = require('tape')
const suite = require('abstract-level/test')
const ExampleLevel = require('.')

const testCommon = suite.common({
  test,
  factory (options) {
    return new ExampleLevel(options)
  }
})

suite(testCommon)
```

The `testCommon` object will have the `test` and `factory` properties described above, as well as a convenience `supports` property that is lazily copied from `factory().supports`. You might use it like so:

```js
test('custom test', function (t) {
  const db = testCommon.factory()
  // ..
})

testCommon.supports.explicitSnapshots && test('another test', function (t) {
  const db = testCommon.factory()
  // ..
})
```

Spread The Word

If you'd like to share your awesome implementation with the world, here's what you might want to do:

  • Add an awesome badge to your `README`: `![level badge](https://leveljs.org/img/badge.svg)`
  • Publish your awesome module tonpm
  • Send a Pull Request toLevel/awesome to advertise your work!

Contributing

`Level/abstract-level` is an OPEN Open Source Project. This means that:

Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.

See the Contribution Guide for more details.

Donate

Support us with a monthly donation on Open Collective and help us continue our work.

License

MIT

