# DeepMind-Teaching-Machines-to-Read-and-Comprehend

Implementation of "Teaching Machines to Read and Comprehend" proposed by Google DeepMind
This repository contains an implementation of the two models (the Deep LSTM and the Attentive Reader) described in [Teaching Machines to Read and Comprehend](https://arxiv.org/abs/1506.03340) by Karl Moritz Hermann et al., NIPS, 2015. This repository also contains an implementation of a Deep Bidirectional LSTM.
The three models implemented in this repository are:
- `deepmind_deep_lstm` reproduces the experimental settings of the DeepMind paper for the LSTM reader
- `deepmind_attentive_reader` reproduces the experimental settings of the DeepMind paper for the Attentive Reader
- `deep_bidir_lstm_2x128` implements a two-layer bidirectional LSTM reader
We trained the three models for 2 to 4 days each on a Titan Black GPU. The following results were obtained on the CNN dataset:
| Model | DeepMind valid | DeepMind test | Us valid | Us test |
|---|---|---|---|---|
| Attentive Reader | 61.6 | 63.0 | 59.37 | 61.07 |
| Deep Bidir LSTM | - | - | 59.76 | 61.62 |
| Deep LSTM Reader | 55.0 | 57.0 | 46 | 47 |
(Figure: an example of the attention weights used by the Attentive Reader model on a sample question.)
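The mechanism behind those attention weights can be sketched as follows. This is a minimal NumPy sketch, not the repository's Theano code; the function name and the parameter matrices `W_d`, `W_q`, `w` are illustrative stand-ins for the paper's attention parameters:

```python
import numpy as np

def attention_weights(doc_states, query_state, W_d, W_q, w):
    """Compute normalized attention weights over document token states.

    doc_states:  (T, H) array, one encoder state per document token
    query_state: (H,)   array, final state of the query encoder
    Returns a (T,) array of positive weights summing to 1 (a softmax).
    """
    # Score each token state against the query with a tanh scoring layer
    scores = np.tanh(doc_states @ W_d + query_state @ W_q) @ w  # shape (T,)
    # Softmax normalization (shift by the max for numerical stability)
    e = np.exp(scores - scores.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, H, A = 5, 8, 6  # number of tokens, encoder state size, attention size
weights = attention_weights(
    rng.normal(size=(T, H)), rng.normal(size=H),
    rng.normal(size=(H, A)), rng.normal(size=(H, A)), rng.normal(size=A))
```

The weighted sum `weights @ doc_states` then gives a query-dependent document representation, which is what the Attentive Reader feeds to its answer layer.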
Software dependencies:

- Theano
- Fuel
- Blocks
Optional dependencies:
- Blocks Extras and a Bokeh server for the plot
We recommend using Anaconda 2 and installing them with the following commands (where `pip` refers to the `pip` command from Anaconda):
```
pip install git+git://github.com/Theano/Theano.git
pip install git+git://github.com/mila-udem/fuel.git
pip install git+git://github.com/mila-udem/blocks.git -r https://raw.githubusercontent.com/mila-udem/blocks/master/requirements.txt
```
Anaconda also includes a Bokeh server, but you still need to install `blocks-extras` if you want to have the plot:
```
pip install git+git://github.com/mila-udem/blocks-extras.git
```
The corresponding dataset is provided by DeepMind, but if the script does not work (or you are tired of waiting) you can check this preprocessed version of the dataset by Kyunghyun Cho.
Set the environment variable `DATAPATH` to the folder containing the DeepMind QA dataset. The training questions are expected to be in `$DATAPATH/deepmind-qa/cnn/questions/training`.
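For example (the path `/data` below is just a placeholder for wherever you unpacked the dataset):

```shell
export DATAPATH=/data
echo "$DATAPATH/deepmind-qa/cnn/questions/training"
```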
Run:
```
cp deepmind-qa/* $DATAPATH/deepmind-qa/
```
This will copy our vocabulary list `vocab.txt`, which contains a subset of all the words appearing in the dataset.
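A vocabulary file of this kind can be read as a word-to-index mapping. This is a minimal sketch, assuming one token per line (the usual format for such files), not code from the repository:

```python
def load_vocab(path):
    """Map each token in the vocabulary file (one token per line) to an integer index."""
    with open(path, encoding="utf-8") as f:
        return {line.strip(): i for i, line in enumerate(f)}

# Demo with a tiny hypothetical vocabulary file
import os
import tempfile

tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
tmp.write("the\ncat\nsat\n")
tmp.close()
vocab = load_vocab(tmp.name)
os.unlink(tmp.name)
```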
To train a model (see list of models at the beginning of this file), run:
```
./train.py model_name
```
Be careful to set your `THEANO_FLAGS` correctly! For instance, you might want to use `THEANO_FLAGS=device=gpu0` if you have a GPU (highly recommended!).
Teaching Machines to Read and Comprehend, by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman and Phil Blunsom, Neural Information Processing Systems, 2015.
We would like to thank the developers of Theano, Blocks and Fuel at MILA for their excellent work.
We thank Simon Lacoste-Julien from the SIERRA team at INRIA for providing us access to two Titan Black GPUs.