🤖 GPU accelerated Neural networks in JavaScript for Browsers and Node.js
brain.js is a GPU accelerated library for Neural Networks written in JavaScript.

💡 This is a continuation of the harthur/brain repository, which is not maintained anymore. More info
- Installation and Usage
- Examples
- Training
- Methods
- Failing
- JSON
- Standalone Function
- Options
- Streams
- Utilities
- Neural Network Types
You can install brain.js with npm:

```shell
npm install brain.js
```

Or use it in the browser via a script tag:

```html
<script src="//unpkg.com/brain.js"></script>
```

Download the latest brain.js for browser
Brain.js depends on the native module headless-gl for GPU support. In most cases, installing brain.js from npm should just work. However, if you run into problems, it means prebuilt binaries could not be downloaded from GitHub and you may need to build them yourself.

Please make sure the following dependencies are installed and up to date, then run:

```shell
npm rebuild
```
On Linux:

- A supported version of Python
- A GNU C++ environment (available via the build-essential package on apt)
- libxi-dev
- Working and up-to-date OpenGL drivers
- GLEW
- pkg-config

```shell
sudo apt-get install -y build-essential libglew-dev libglu1-mesa-dev libxi-dev pkg-config
```
On Windows:

- A supported version of Python. See: https://apps.microsoft.com/store/search/python
- Microsoft Visual Studio Build Tools 2022
- Run in cmd: `npm config set msvs_version 2022` (Note: this no longer works in modern versions of npm.)
- Run in cmd: `npm config set python python3` (Note: this no longer works in modern versions of npm.)
- If you are using Build Tools 2017, run `npm config set msvs_version 2017` (Note: this no longer works in modern versions of npm.)
Here's an example showcasing how to approximate the XOR function using brain.js: more info on config here.

💡 A fun and practical introduction to Brain.js
```javascript
// provide optional config object (or undefined). Defaults shown.
const config = {
  binaryThresh: 0.5,
  hiddenLayers: [3], // array of ints for the sizes of the hidden layers in the network
  activation: 'sigmoid', // supported activation types: ['sigmoid', 'relu', 'leaky-relu', 'tanh']
  leakyReluAlpha: 0.01, // supported for activation type 'leaky-relu'
};

// create a simple feed-forward neural network with backpropagation
const net = new brain.NeuralNetwork(config);

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] },
]);

const output = net.run([1, 0]); // [0.987]
```
or, with more info on config here:
```javascript
// provide optional config object, defaults shown
const config = {
  inputSize: 20,
  inputRange: 20,
  hiddenLayers: [20, 20],
  outputSize: 20,
  learningRate: 0.01,
  decayRate: 0.999,
};

// create a simple recurrent neural network
const net = new brain.recurrent.RNN(config);

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] },
]);

let output = net.run([0, 0]); // [0]
output = net.run([0, 1]); // [1]
output = net.run([1, 0]); // [1]
output = net.run([1, 1]); // [0]
```
However, there is no reason to use a neural network to figure out XOR. (-: So, here is a more involved, realistic example: Demo: training a neural network to recognize color contrast.
You can check out this fantastic screencast, which explains how to train a simple neural network using a real-world dataset: How to create a neural network in the browser using Brain.js.
Use train() to train the network with an array of training data. The network has to be trained with all the data in bulk in one call to train(). More training patterns will probably take longer to train, but will usually result in a network that is better at classifying new patterns.

Training is computationally expensive, so you should try to train the network offline (or on a Worker) and use the toFunction() or toJSON() options to plug the pre-trained network into your website.
Each training pattern should have an input and an output, both of which can be either an array of numbers from 0 to 1 or a hash of numbers from 0 to 1. For the color contrast demo, it looks something like this:
```javascript
const net = new brain.NeuralNetwork();

net.train([
  { input: { r: 0.03, g: 0.7, b: 0.5 }, output: { black: 1 } },
  { input: { r: 0.16, g: 0.09, b: 0.2 }, output: { white: 1 } },
  { input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1 } },
]);

const output = net.run({ r: 1, g: 0.4, b: 0 }); // { white: 0.99, black: 0.002 }
```
Here's another variation of the above example. (Note that input objects do not need to be similar.)
```javascript
net.train([
  { input: { r: 0.03, g: 0.7 }, output: { black: 1 } },
  { input: { r: 0.16, b: 0.2 }, output: { white: 1 } },
  { input: { r: 0.5, g: 0.5, b: 1.0 }, output: { white: 1 } },
]);

const output = net.run({ r: 1, g: 0.4, b: 0 }); // { white: 0.81, black: 0.18 }
```
Each training pattern can either:
- Be an array of numbers
- Be an array of arrays of numbers
Example using an array of numbers:
```javascript
const net = new brain.recurrent.LSTMTimeStep();

net.train([[1, 2, 3]]);

const output = net.run([1, 2]); // 3
```
Example using an array of arrays of numbers:
```javascript
const net = new brain.recurrent.LSTMTimeStep({
  inputSize: 2,
  hiddenLayers: [10],
  outputSize: 2,
});

net.train([
  [1, 3],
  [2, 2],
  [3, 1],
]);

const output = net.run([
  [1, 3],
  [2, 2],
]); // [3, 1]
```
Each training pattern can either:

- Be an array of values
- Be a string
- Have an input and an output
  - Either of which can be an array of values or a string

CAUTION: When using an array of values, you can use ANY value; however, each distinct value is represented in the neural network by a single input. So the more distinct values you have, the larger your input layer. If you have hundreds, thousands, or millions of floating-point values, THIS IS NOT THE RIGHT CLASS FOR THE JOB. Also, when deviating from strings, this functionality gets into beta territory.
Example using direct strings: Hello World Using Brainjs
```javascript
const net = new brain.recurrent.LSTM();

net.train(['I am brainjs, Hello World!']);

const output = net.run('I am brainjs');
alert(output);
```
```javascript
const net = new brain.recurrent.LSTM();

net.train([
  'doe, a deer, a female deer',
  'ray, a drop of golden sun',
  'me, a name I call myself',
]);

const output = net.run('doe'); // ', a deer, a female deer'
```
Example using strings with inputs and outputs:
```javascript
const net = new brain.recurrent.LSTM();

net.train([
  { input: 'I feel great about the world!', output: 'happy' },
  { input: 'The world is a terrible place!', output: 'sad' },
]);

const output = net.run('I feel great about the world!'); // 'happy'
```
Each training pattern can either:
- Be an array of numbers
- Be an array of arrays of numbers
Training an autoencoder to compress the values of an XOR calculation:
```javascript
const net = new brain.AutoencoderGPU({ hiddenLayers: [5, 2, 5] });

net.train([
  [0, 0, 0],
  [0, 1, 1],
  [1, 0, 1],
  [1, 1, 0],
]);
```
Encoding/decoding:
```javascript
const input = [0, 1, 1];
const encoded = net.encode(input);
const decoded = net.decode(encoded);
```
Denoise noisy data:
```javascript
const noisyData = [0, 1, 0];
const data = net.denoise(noisyData);
```
Test for anomalies in data samples:
```javascript
const shouldBeFalse = net.likelyIncludesAnomalies([0, 1, 1]);
const shouldBeTrue = net.likelyIncludesAnomalies([0, 1, 0]);
```
train() takes a hash of options as its second argument:

```javascript
net.train(data, {
  // Default values --> expected validation
  iterations: 20000, // the maximum times to iterate the training data --> number greater than 0
  errorThresh: 0.005, // the acceptable error percentage from training data --> number between 0 and 1
  log: false, // true to use console.log, when a function is supplied it is used --> either true or a function
  logPeriod: 10, // iterations between logging out --> number greater than 0
  learningRate: 0.3, // scales with delta to effect training rate --> number between 0 and 1
  momentum: 0.1, // scales with next layer's change value --> number between 0 and 1
  callback: null, // a periodic callback that can be triggered while training --> null or function
  callbackPeriod: 10, // the number of iterations through the training data between callback calls --> number greater than 0
  timeout: Infinity, // the max number of milliseconds to train for --> number greater than 0, default Infinity
});
```
The network will stop training whenever one of the two criteria is met: the training error has gone below the threshold (default 0.005), or the max number of iterations (default 20000) has been reached.
By default, training will not let you know how it's doing until the end, but set log to true to get periodic updates on the current training error of the network. The training error should decrease every time. The updates will be printed to the console. If you set log to a function, this function will be called with the updates instead of printing to the console. However, if you want to use the values of the updates in your own output, the callback can be set to a function to do so instead.
The learning rate is a parameter that influences how quickly the network trains. It's a number from 0 to 1. If the learning rate is close to 0, it will take longer to train. If the learning rate is closer to 1, it will train faster, but training results may be constrained to a local minimum and perform badly on new data (overfitting). The default learning rate is 0.3.
The momentum is similar to learning rate, expecting a value from 0 to 1 as well, but it is multiplied against the next level's change value. The default value is 0.1.
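A single-weight update can illustrate how the two options interact (a hand-rolled sketch of the general idea, not brain.js internals; `updateWeight` is an invented name):

```javascript
// Sketch: each weight change combines the gradient scaled by learningRate
// with a fraction (momentum) of the previous change.
function updateWeight(weight, gradient, prevChange, learningRate = 0.3, momentum = 0.1) {
  const change = learningRate * gradient + momentum * prevChange;
  return { weight: weight + change, change };
}

// With gradient 1 and no previous change, the step equals the learning rate:
const first = updateWeight(0, 1, 0); // change: 0.3
// The next step also carries 10% (momentum) of the previous change:
const second = updateWeight(first.weight, 1, first.change); // change ≈ 0.33
```

Momentum lets consecutive steps in the same direction build up speed, which can help training escape shallow plateaus.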
Any of these training options can be passed into the constructor or into the updateTrainingOptions(opts) method, and they will be saved on the network and used during training. If you save your network to JSON, these training options are saved and restored as well (except for callback and log: callback will be forgotten, and log will be restored using console.log).
A boolean property called invalidTrainOptsShouldThrow is set to true by default. While the option is true, if you enter a training option that is outside the normal range, an error will be thrown with a message about the abnormal option. When the option is set to false, no error is thrown, but a message with the related information will still be sent to console.warn.
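The throw-versus-warn behavior can be sketched as follows (a hypothetical illustration; `checkTrainOption` and its signature are invented for this example, not brain.js's internal API):

```javascript
// Sketch: validate a numeric training option against its expected range,
// throwing when shouldThrow is true and warning otherwise.
function checkTrainOption(name, value, min, max, shouldThrow = true) {
  if (value < min || value > max) {
    const message = `Training option ${name}=${value} is outside [${min}, ${max}]`;
    if (shouldThrow) throw new Error(message);
    console.warn(message);
  }
}

checkTrainOption('learningRate', 0.3, 0, 1); // in range: no complaint
checkTrainOption('learningRate', 5, 0, 1, false); // out of range: warns only
```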
trainAsync() takes the same arguments as train() (data and options). Instead of returning the results object from training, it returns a promise that, when resolved, returns the training results object. It does NOT work with:

- brain.recurrent.RNN
- brain.recurrent.GRU
- brain.recurrent.LSTM
- brain.recurrent.RNNTimeStep
- brain.recurrent.GRUTimeStep
- brain.recurrent.LSTMTimeStep
```javascript
const net = new brain.NeuralNetwork();

net
  .trainAsync(data, options)
  .then((res) => {
    // do something with my trained network
  })
  .catch(handleError);
```
With multiple networks you can train in parallel like this:
```javascript
const net = new brain.NeuralNetwork();
const net2 = new brain.NeuralNetwork();

const p1 = net.trainAsync(data, options);
const p2 = net2.trainAsync(data, options);

Promise.all([p1, p2])
  .then((values) => {
    const res = values[0];
    const res2 = values[1];
    console.log(
      `net trained in ${res.iterations} and net2 trained in ${res2.iterations}`
    );
    // do something super cool with my 2 trained networks
  })
  .catch(handleError);
```
Cross-validation can provide a less fragile way of training on larger data sets. The brain.js API provides cross-validation as in this example:
```javascript
const crossValidate = new brain.CrossValidate(
  () => new brain.NeuralNetwork(networkOptions)
);

crossValidate.train(data, trainingOptions, k); // note: k (or kFolds) is optional

const json = crossValidate.toJSON(); // all stats in json as well as neural networks
const net = crossValidate.toNeuralNetwork(); // get top performing net out of `crossValidate`

// optionally, later:
const restoredNet = crossValidate.fromJSON(json);
```
Use CrossValidate with these classes:
- brain.NeuralNetwork
- brain.RNNTimeStep
- brain.LSTMTimeStep
- brain.GRUTimeStep
An example of using cross-validation can be found in cross-validate.ts
The output of train() is a hash of information about how the training went:

```javascript
{
  error: 0.0039139985510105032, // training error
  iterations: 406, // training iterations
}
```
Supported on classes:

- brain.NeuralNetwork
- brain.NeuralNetworkGPU -> all the functionality of brain.NeuralNetwork, but run on GPU (via gpu.js in WebGL2, WebGL1, or fallback to CPU)
- brain.recurrent.RNN
- brain.recurrent.LSTM
- brain.recurrent.GRU
- brain.recurrent.RNNTimeStep
- brain.recurrent.LSTMTimeStep
- brain.recurrent.GRUTimeStep
Example:
```javascript
// feed forward
const netFeedForward = new brain.NeuralNetwork();
netFeedForward.fromJSON(json);
netFeedForward.run(input);

// time step
const netTimeStep = new brain.LSTMTimeStep();
netTimeStep.fromJSON(json);
netTimeStep.run(input);

// recurrent
const netRecurrent = new brain.LSTM();
netRecurrent.fromJSON(json);
netRecurrent.run(input);
```
Available with the following classes. Outputs an array of predictions, where the predictions are a continuation of the inputs.

- brain.recurrent.RNNTimeStep
- brain.recurrent.LSTMTimeStep
- brain.recurrent.GRUTimeStep
Example:
```javascript
const net = new brain.LSTMTimeStep();
net.fromJSON(json);
net.forecast(input, 3);
```
Serialize neural network to json
Deserialize neural network from json
If the network failed to train, the error will be above the error threshold. This could happen if the training data is too noisy (most likely), if the network does not have enough hidden layers or nodes to handle the complexity of the data, or if it has not been trained for enough iterations.

If the training error is still something huge like 0.4 after 20000 iterations, it's a good sign that the network can't make sense of the given data.
The net instance's maxPredictionLength property (default 100) can be set to adjust the output of the net:
Example:
```javascript
const net = new brain.recurrent.LSTM();

// later in code, after training on a few novels, write me a new one!
net.maxPredictionLength = 1000000000; // Be careful!
net.run('Once upon a time');
```
Serialize or load in the state of a trained network with JSON:
```javascript
const json = net.toJSON();
net.fromJSON(json);
```
You can also get a custom standalone function from a trained network that acts just like run():

```javascript
const run = net.toFunction();
const output = run({ r: 1, g: 0.4, b: 0 });
console.log(run.toString()); // copy and paste! no need to import brain.js
```
NeuralNetwork() takes a hash of options:

```javascript
const net = new brain.NeuralNetwork({
  activation: 'sigmoid', // activation function
  hiddenLayers: [4],
  learningRate: 0.6, // global learning rate, useful when training using streams
});
```
This parameter lets you specify which activation function your neural network should use. There are currently four supported activation functions, sigmoid being the default:
- sigmoid
- relu
- leaky-relu
  - related option: leakyReluAlpha, an optional number, defaults to 0.01
- tanh
Here's a table (thanks, Wikipedia!) summarizing a plethora of activation functions: Activation Function
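Written out in plain JavaScript, the four supported functions have these standard mathematical definitions (a reference sketch; brain.js applies them internally rather than exposing them like this):

```javascript
// Standard definitions of the four supported activation functions.
const sigmoid = (x) => 1 / (1 + Math.exp(-x)); // squashes to (0, 1)
const relu = (x) => Math.max(0, x); // zero for negatives
const leakyRelu = (x, alpha = 0.01) => (x > 0 ? x : alpha * x); // small negative slope
const tanh = (x) => Math.tanh(x); // squashes to (-1, 1)

sigmoid(0); // 0.5
relu(-2); // 0
leakyRelu(-2); // -0.02
tanh(0); // 0
```

The choice matters in practice: relu trains faster on deep networks but can "die" on negative inputs, which is exactly what the leaky-relu variant (with its leakyReluAlpha option) mitigates.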
You can use this to specify the number of hidden layers in the network and the size of each layer. For example, if you want two hidden layers - the first with 3 nodes and the second with 4 nodes, you'd give:
```javascript
hiddenLayers: [3, 4];
```
By default, brain.js uses one hidden layer with a size proportionate to the size of the input array.
Use https://www.npmjs.com/package/train-stream to stream data to a NeuralNetwork.
```javascript
const likely = require('brain/likely');
const key = likely(input, net);
```

For a likely example, see: simple letter detection
```html
<script src="../../src/utilities/svg.js"></script>
```

Renders the network topology of a feedforward network:

```javascript
document.getElementById('result').innerHTML = brain.utilities.toSVG(
  network,
  options
);
```

For a toSVG example, see: network rendering
- brain.NeuralNetwork - Feedforward Neural Network with backpropagation
- brain.NeuralNetworkGPU - Feedforward Neural Network with backpropagation, GPU version
- brain.AutoencoderGPU - Autoencoder with backpropagation and GPU support
- brain.recurrent.RNNTimeStep - Time Step Recurrent Neural Network or "RNN"
- brain.recurrent.LSTMTimeStep - Time Step Long Short Term Memory Neural Network or "LSTM"
- brain.recurrent.GRUTimeStep - Time Step Gated Recurrent Unit or "GRU"
- brain.recurrent.RNN - Recurrent Neural Network or "RNN"
- brain.recurrent.LSTM - Long Short Term Memory Neural Network or "LSTM"
- brain.recurrent.GRU - Gated Recurrent Unit or "GRU"
- brain.FeedForward - Highly Customizable Feedforward Neural Network with backpropagation
- brain.Recurrent - Highly Customizable Recurrent Neural Network with backpropagation
Different neural nets do different things well. For example:

- A Feedforward Neural Network can classify simple things very well, but it has no memory of previous actions and infinite variation of results.
- A Time Step Recurrent Neural Network remembers, and can predict future values.
- A Recurrent Neural Network remembers, and has a finite set of results.
If you are a developer, or if you just care about how an ML API should look, please take part: join the W3C community and share your opinions, or simply support the opinions you like or agree with.

Brain.js is a widely adopted open-source machine learning library in the JavaScript world. There are several reasons for this, but the most notable is its simplicity of usage without sacrificing performance. We would like to keep it simple to learn, simple to use, and performant when it comes to a W3C standard. We think the current brain.js API is quite close to what we could expect to become a standard. Since supporting the effort doesn't require much and can still make a huge difference, feel free to join the W3C community group and support a brain.js-like API.

Get involved in the W3C machine learning standardization process here. You can also join our open discussion about standardization here.
If you have an issue, either a bug or a feature you think would benefit your project, let us know and we will do our best.

Create issues here and follow the template.
Source for brain.js.org is available at the Brain.js.org repository. Built using the awesome vue.js & bulma. Contributions are always welcome.
This project exists thanks to all the people who contribute. [Contribute].
Thank you to all our backers! 🙏 [Become a backer]
Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]