A fast, immutable, distributed & compositional Datalog engine for everyone.
Datahike is a durable Datalog database powered by an efficient Datalog query engine. This project started as a port of DataScript to the hitchhiker-tree. All DataScript tests are passing, but we are still working on the internals. Having said this, we consider Datahike usable for medium-sized projects, since DataScript is very mature and deployed in many applications, and the hitchhiker-tree implementation is heavily tested through generative testing. We are building on these two projects and on the storage backends for the hitchhiker-tree through konserve. We would like to hear experience reports and are happy if you join us.
You can find API documentation on cljdoc and articles on Datahike on our company's blog page.
We also presented Datahike at meetups, for example at:
- 2021 Bay Area Clojure meetup
- 2019 scicloj online meetup
- 2019 Vancouver Meetup
- 2018 Dutch Clojure meetup
Add to your dependencies:
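For example, with a `deps.edn`-based project this looks roughly as follows (a sketch: check Clojars for the current release; the version string below is a placeholder, not a real version):

```clojure
;; deps.edn -- replace the placeholder with the latest release from Clojars
{:deps {io.replikativ/datahike {:mvn/version "LATEST-VERSION"}}}
```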
We provide a small stable API for the JVM at the moment, but the on-disk schema is not fixed yet. We will provide a migration guide until we have reached a stable on-disk schema. Take a look at the ChangeLog before upgrading.
```clojure
(require '[datahike.api :as d])

;; use the filesystem as storage medium
(def cfg {:store {:backend :file :path "/tmp/example"}})

;; create a database at this place; per default configuration we enforce a
;; strict schema and keep all historical data
(d/create-database cfg)

(def conn (d/connect cfg))

;; the first transaction will be the schema we are using;
;; you may also add this within database creation by adding :initial-tx
;; to the configuration
(d/transact conn [{:db/ident       :name
                   :db/valueType   :db.type/string
                   :db/cardinality :db.cardinality/one}
                  {:db/ident       :age
                   :db/valueType   :db.type/long
                   :db/cardinality :db.cardinality/one}])

;; let's add some data and wait for the transaction
(d/transact conn [{:name "Alice", :age 20}
                  {:name "Bob", :age 30}
                  {:name "Charlie", :age 40}
                  {:age 15}])

;; search the data
(d/q '[:find ?e ?n ?a
       :where
       [?e :name ?n]
       [?e :age ?a]]
     @conn)
;; => #{[3 "Alice" 20] [4 "Bob" 30] [5 "Charlie" 40]}

;; add new entity data using a hash map
(d/transact conn {:tx-data [{:db/id 3 :age 25}]})

;; if you want to work with queries like in
;; https://grishaev.me/en/datomic-query/,
;; you may use a hashmap
(d/q {:query '{:find [?e ?n ?a]
               :where [[?e :name ?n]
                       [?e :age ?a]]}
      :args [@conn]})
;; => #{[5 "Charlie" 40] [4 "Bob" 30] [3 "Alice" 25]}

;; query the history of the data
(d/q '[:find ?a
       :where
       [?e :name "Alice"]
       [?e :age ?a]]
     (d/history @conn))
;; => #{[20] [25]}

;; you might need to release the connection for specific stores
(d/release conn)

;; clean up the database if it is not needed any more
(d/delete-database cfg)
```
The API namespace provides compatibility to a subset of Datomic functionality and should work as a drop-in replacement on the JVM. The rest of Datahike will be ported to core.async to coordinate IO in a platform-neutral manner.
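As a small illustration of this Datomic-style surface, `d/pull` and `d/entity` behave as in Datomic. This is a sketch assuming `conn` and the schema and data from the example above, where the entity id `3` is only illustrative:

```clojure
;; a sketch, assuming `conn` and the example data transacted above
(d/pull @conn '[:name :age] 3)   ; declarative, Datomic-style pull pattern
(:name (d/entity @conn 3))       ; lazy entity view over the same datoms
```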
Refer to the docs for more information:
- backend development
- benchmarking
- garbage collection
- contributing to Datahike
- configuration
- differences to Datomic
- entity spec
- logging and error handling
- schema flexibility
- time variance
- versioning
For simple examples, have a look at the projects in the examples folder.
- Invoice creation, demonstrated at the Dutch Clojure Meetup.
Datahike provides similar functionality to Datomic and can be used as a drop-in replacement for a subset of it. The goal of Datahike is not to provide an open-source reimplementation of Datomic; rather, it is part of the replikativ toolbox aimed at building distributed data management solutions. We have spoken to many backend engineers and Clojure developers who stayed away from Datomic just because of its proprietary nature, and we think Datahike should make an approach to Datomic easier in this regard. Vice versa, people who only want to use the goodness of Datalog in small-scale applications should not have to worry about setting up and depending on Datomic.
Some differences are:
- Datahike runs locally on one peer. A transactor might be provided in the future and can also be realized through any linearizing write mechanism, e.g. Apache Kafka. If you are interested, please contact us.
- Datahike provides the database as a transparent value, i.e. you can directly access the index data structures (hitchhiker-tree) and leverage their persistent nature for replication. These internals are not guaranteed to stay stable, but provide useful insight into what is going on and can be optimized.
- Datahike supports GDPR compliance by allowing to completely remove database entries.
- Datomic has a REST interface and a Java API
- Datomic provides timeouts
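The GDPR point above can be sketched with Datahike's purge operations, which are described in its time-variance documentation. This is a sketch, assuming a connection `conn` to a database created with history enabled and using an illustrative entity id:

```clojure
;; a sketch, assuming `conn` from the example above and a store created
;; with :keep-history? true; :db.purge/entity is described in Datahike's
;; time-variance documentation
(d/transact conn [[:db.purge/entity 3]])
;; afterwards the entity is gone from both the current database and its history
```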
Datomic is a full-fledged scalable database (as a service) built by the authors of Clojure and people with a lot of experience. If you need this kind of professional support, you should definitely stick to Datomic.
Datahike's query engine and most of its codebase come from DataScript. Without the work on DataScript, Datahike would not have been possible. Differences to Datomic with respect to the query engine are documented there.
Pick Datahike if your app has modest requirements towards a typical durable database, e.g. a single machine and a few million entities at most. Similarly, if you want an open-source solution and want to be able to study and tinker with the codebase of your database, Datahike provides a comparatively small and well-composed codebase to tweak to your needs. You should also always be able to migrate to Datomic easily later.
Pick Datomic if you already know that you will need scalability later or if you need a network API for your database. There is also plenty of material about Datomic online already. Most of it applies in some form or another to Datahike, but it might be easier to use Datomic directly when you first learn Datalog.
Pick DataScript if you want the fastest possible query performance and do not have a huge amount of data. You can easily persist the write operations separately and then use the fast in-memory index data structure of DataScript. Note that Datahike at the moment no longer supports ClojureScript, although we plan to restore this functionality.
ClojureScript support is planned and work in progress. Please see Discussions.
The database can be exported to a flat file with:
```clojure
(require '[datahike.migrate :refer [export-db import-db]])

(export-db conn "/tmp/eavt-dump")
```
You must do so before upgrading to a Datahike version that has changed the on-disk format. This can happen until we arrive at version 1.0.0 and will always be communicated through the Changelog. After you have bumped the Datahike version, you can use
```clojure
;; ... set up new-conn (recreate with the correct schema)

(import-db new-conn "/tmp/eavt-dump")
```
to reimport your data into the new format.
The datoms are stored in the CBOR format, enabling migration of binary data, such as the byte array data type now supported by Datahike. You can also use the export as a backup.
If you are upgrading from a version before 0.1.2, where we did not yet have the migration code, just evaluate the datahike.migrate namespace manually in your project before exporting.
Have a look at the change log for recent updates.
Instead of providing a static roadmap, we have moved to working closely with the community to decide what will be worked on next in a dynamic and interactive way.
How does it work?
Go to Discussions and upvote all the ideas for features you would like to see added to Datahike. As soon as we have someone free to work on a new feature, we will address one of those with the most upvotes.
Of course, you can also propose ideas yourself, either by adding them to the Discussions or even by creating a pull request yourself. Please note, though, that due to considerations about incompatibilities with earlier Datahike versions, it might sometimes take a bit more time until your PR is integrated.
We are happy to provide commercial support with lambdaforge. If you are interested in a particular feature, please let us know.
Copyright © 2014–2023 Konrad Kühne, Christian Weilbach, Chrislain Razafimahefa, Timo Kramer, Judith Massa, Nikita Prokopov, Ryan Sundberg
Licensed under Eclipse Public License (see LICENSE).