Julia implementation of Pentti Kanerva's sparse distributed memory.
The essential content of this repository is an IJulia/Jupyter notebook with code to accompany the article "The Mind Wanders," published in July 2018 on bit-player.org. The primary reference for all the underlying ideas is the following book: Kanerva, Pentti. 1988. Sparse Distributed Memory. Cambridge, Mass.: MIT Press.
The notebook file sdm.ipynb was written for Julia version 0.6.3. The file sdmj7.ipynb is updated to run under Julia 0.7 or 1.0.
Sparse distributed memory is all about computing with high-dimensional binary vectors—long sequences of 0s and 1s interpreted as the coordinates of the vertices of a hypercube in a high-dimensional space. In the examples considered here the length of the vectors is generally 1,000 bits, and so there are 2^1000 possible vectors, one for each vertex of the hypercube.
The data structure I have chosen for representing these patterns is the Julia BitVector, an array of single bits, packed into 64-bit words. The standard Julia input and output routines interpret these bits as Boolean values (false and true), but I promise it's safe to think of them as 0s and 1s. A big advantage of using BitVectors is that primitive bitwise operations are executed in parallel on all the bits of a machine word. On a 64-bit processor, xor-ing two 1,000-bit vectors requires only 16 operations instead of 1,000. (As it happens, xor is the operation we're most concerned with.)
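As a minimal sketch of this representation (not code from the notebook itself), here is how two random 1,000-bit BitVectors can be created and compared. The names `u`, `v`, and `d` are illustrative; `bitrand` and broadcast `xor` are standard Julia facilities.

```julia
using Random  # provides bitrand

n = 1000
u = bitrand(n)   # a random 1,000-bit BitVector, packed into 64-bit words
v = bitrand(n)

# xor the two vectors; internally each 64-bit word is handled in one operation
w = u .⊻ v       # equivalently: xor.(u, v)

# Hamming distance: the number of bit positions where u and v differ
d = count(w)
```

For two independently random 1,000-bit vectors, the distance `d` concentrates tightly around 500; that near-orthogonality of random points in the hypercube is one of the properties Kanerva's memory design depends on.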
This code was written with the aim of exploring the ideas behind sparse distributed memory. It does not create a practical system for using sparse distributed memory. (I'm not sure such a system can exist without specialized hardware.)