ADNC: Advanced Differentiable Neural Computer

This repository contains an implementation of an Advanced Differentiable Neural Computer (ADNC) for more robust and scalable usage in question answering. This work was published at the MRQA workshop at ACL 2018. The ADNC is applied to the 20 bAbI QA tasks with state-of-the-art (SOTA) mean results and to the CNN Reading Comprehension Task with passable results, without any adaptation or hyper-parameter tuning.

The repository contains the following features:

  • Modular implementation of controller and memory unit
  • Fully configurable model/experiment with a yaml-config-file
  • Unit tests for all key parts (memory unit, controller, etc.)
  • Pre-trained models on bAbI task and CNN RC task
  • Plots of the memory unit functionality during sequence inference
  • The following advancements to the DNC (see the sketch after this list):
      • Bypass Dropout: applies dropout to reduce the bypass connectivity and forces an earlier memory usage during training
      • DNC Normalization: normalizes the memory unit's input and increases the model stability during training
      • Content-Based Memory Unit: a memory unit without the temporal linkage mechanism, which reduces memory consumption by up to 70%
      • Bidirectional Controller: allows handling variable requests and rich information extraction
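
For illustration, here is a minimal numpy sketch of the Bypass Dropout idea; all names are hypothetical and this is not the repository's actual code. The DNC output combines a memory read path with a direct bypass path from the controller, and masking bypass units during training pushes the model to rely on the memory unit earlier:

```python
# Minimal sketch of Bypass Dropout (hypothetical names, not the repo's code).
import numpy as np

def dnc_output(controller_out, memory_reads, W_bypass, W_memory,
               dropout_rate=0.1, training=True):
    bypass = controller_out @ W_bypass       # direct controller -> output path
    if training:
        keep = 1.0 - dropout_rate            # the experiments use a 10% rate
        mask = np.random.binomial(1, keep, size=bypass.shape) / keep
        bypass = bypass * mask               # only the bypass path is dropped
    return bypass + memory_reads @ W_memory  # memory path stays untouched
```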

Please find detailed information about the advancements and the experiments in the MRQA workshop paper.

The plot below shows the impact of the different advancements on the word error rate for bAbI task 1.

[Figure: diff_advancements]

Furthermore, it contains a set of rich analysis tools to gain deeper insight into the functionality of the ADNC. For example, the advancements lead to a more meaningful gate usage of the memory cell, as you can see in the following plots:

[Figures: process_dnc, process_adnc]

How to use:

Setup ADNC (on Ubuntu)

To set up a virtual environment and install ADNC:

git clone https://github.com/joergfranke/ADNC.git
cd ADNC/
python3 -m venv venv
source venv/bin/activate
pip install -e .

Inference

The repository contains different pre-trained models in the experiments folder. For bAbI inference, choose a pre-trained model, e.g. adnc, and run:

python scripts/inference_babi_task.py adnc

Possible models are dnc, adnc and biadnc on bAbI task 1, and biadnc-all and biadnc-aug16-all for all bAbI tasks with or without augmentation of task 16. The augmentation provides an equal word distribution during training.

For CNN inference with the pre-trained ADNC, run:

python scripts/inference_cnn_task.py

Training

The configuration file scripts/config.yml contains the full config of the ADNC training. The training script can be run with:

python scripts/start_training.py

It starts a bAbI training run and plots a function plot every epoch to monitor the training progress.
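
Since scripts/config.yml drives the whole experiment, a quick way to explore it is to load it with PyYAML; a minimal sketch (the section names in the comment are an assumption about the schema, not a documented interface):

```python
# Minimal sketch: inspect the experiment configuration with PyYAML.
import yaml

with open("scripts/config.yml") as f:
    config = yaml.safe_load(f)

for section in config:
    print(section)  # e.g. model, training and dataset sections (assumed)
```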

Plots

To create a function plot for the bAbI task, choose a pre-trained model, e.g. adnc, and run:

python scripts/plot_function_babi_task.py adnc

Possible models are dnc, adnc and biadnc on bAbI task 1, and biadnc-all and biadnc-aug16-all for all bAbI tasks with or without augmentation of task 16.

Experiments & Results

20 bAbI QA Tasks

  • Jointly trained on all 20 tasks.
  • Mean results of 5 training runs with different initializations.
  • Similar hyper-parameters as the original DNC.
  • The unidirectional controller has one LSTM layer with 256 hidden units; the bidirectional controller has 172 hidden units in each direction.
  • The memory unit has 192 locations, a width of 64 and 4 read heads.
  • Bypass Dropout is applied with a dropout rate of 10%.
  • The model is optimized with RMSprop with a fixed learning rate of 3e-05 and a momentum of 0.9.
  • Task 16 Augmentation: the task contains a strong local minimum; always answering with the most common color is correct in 50% of the cases. A sketch of the augmentation idea follows this list.
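
A minimal sketch of the augmentation idea, assuming it works by randomly remapping the color words per sample so the answer distribution becomes uniform (the function and word list below are illustrative, not the repository's code):

```python
# Hypothetical task-16 augmentation: permute color words per sample so that
# "always predict the most common color" stops being a 50% local minimum.
import random

COLORS = ["white", "green", "yellow", "gray"]  # example color vocabulary

def augment_task16_sample(tokens):
    """Randomly remap color words so no single color dominates as answer."""
    mapping = dict(zip(COLORS, random.sample(COLORS, len(COLORS))))
    return [mapping.get(tok, tok) for tok in tokens]

# e.g. ["lily", "is", "white"] might become ["lily", "is", "gray"]
```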

bAbI Results

| Task | DNC | EntNet | SDNC | ADNC | BiADNC | BiADNC +aug16 |
|---|---|---|---|---|---|---|
| 1: 1 supporting fact | 9.0 ± 12.6 | 0.0 ± 0.1 | 0.0 ± 0.0 | 0.1 ± 0.0 | 0.1 ± 0.1 | 0.1 ± 0.0 |
| 2: 2 supporting facts | 39.2 ± 20.5 | 15.3 ± 15.7 | 7.1 ± 14.6 | 0.8 ± 0.5 | 0.8 ± 0.2 | 0.5 ± 0.2 |
| 3: 3 supporting facts | 39.6 ± 16.4 | 29.3 ± 26.3 | 9.4 ± 16.7 | 6.5 ± 4.6 | 2.4 ± 0.6 | 1.6 ± 0.8 |
| 4: 2 argument relations | 0.4 ± 0.7 | 0.1 ± 0.1 | 0.1 ± 0.1 | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| 5: 3 argument relations | 1.5 ± 1.0 | 0.4 ± 0.3 | 0.9 ± 0.3 | 1.0 ± 0.4 | 0.7 ± 0.1 | 0.8 ± 0.4 |
| 6: yes/no questions | 6.9 ± 7.5 | 0.6 ± 0.8 | 0.1 ± 0.2 | 0.0 ± 0.1 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| 7: counting | 9.8 ± 7.0 | 1.8 ± 1.1 | 1.6 ± 0.9 | 1.0 ± 0.7 | 1.0 ± 0.5 | 1.0 ± 0.7 |
| 8: lists/sets | 5.5 ± 5.9 | 1.5 ± 1.2 | 0.5 ± 0.4 | 0.2 ± 0.2 | 0.5 ± 0.3 | 0.6 ± 0.3 |
| 9: simple negation | 7.7 ± 8.3 | 0.0 ± 0.1 | 0.0 ± 0.1 | 0.0 ± 0.0 | 0.1 ± 0.2 | 0.0 ± 0.0 |
| 10: indefinite knowledge | 9.6 ± 11.4 | 0.1 ± 0.2 | 0.3 ± 0.2 | 0.1 ± 0.2 | 0.0 ± 0.0 | 0.0 ± 0.1 |
| 11: basic coreference | 3.3 ± 5.7 | 0.2 ± 0.2 | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| 12: conjunction | 5.0 ± 6.3 | 0.0 ± 0.0 | 0.2 ± 0.3 | 0.0 ± 0.0 | 0.0 ± 0.1 | 0.0 ± 0.0 |
| 13: compound coreference | 3.1 ± 3.6 | 0.0 ± 0.1 | 0.1 ± 0.1 | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| 14: time reasoning | 11.0 ± 7.5 | 7.3 ± 4.5 | 5.6 ± 2.9 | 0.2 ± 0.1 | 0.8 ± 0.7 | 0.3 ± 0.1 |
| 15: basic deduction | 27.2 ± 20.1 | 3.6 ± 8.1 | 3.6 ± 10.3 | 0.1 ± 0.1 | 0.1 ± 0.1 | 0.1 ± 0.1 |
| 16: basic induction | 53.6 ± 1.9 | 53.3 ± 1.2 | 53.0 ± 1.3 | 52.1 ± 0.9 | 52.6 ± 1.6 | 0.0 ± 0.0 |
| 17: positional reasoning | 32.4 ± 8.0 | 8.8 ± 3.8 | 12.4 ± 5.9 | 18.5 ± 8.8 | 4.8 ± 4.8 | 1.5 ± 1.8 |
| 18: size reasoning | 4.2 ± 1.8 | 1.3 ± 0.9 | 1.6 ± 1.1 | 1.1 ± 0.5 | 0.4 ± 0.4 | 0.9 ± 0.5 |
| 19: path finding | 64.6 ± 37.4 | 70.4 ± 6.1 | 30.8 ± 24.2 | 43.3 ± 36.7 | 0.0 ± 0.0 | 0.1 ± 0.1 |
| 20: agent's motivation | 0.0 ± 0.1 | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.1 ± 0.1 | 0.1 ± 0.1 | 0.1 ± 0.1 |
| Mean WER | 16.7 ± 7.6 | 9.7 ± 2.6 | 6.4 ± 2.5 | 6.3 ± 2.7 | 3.2 ± 0.5 | 0.4 ± 0.3 |
| Failed tasks (err. > 5%) | 11.2 ± 5.4 | 5.0 ± 1.2 | 4.1 ± 1.6 | 3.2 ± 0.8 | 1.4 ± 0.5 | 0.0 ± 0.0 |

CNN RC Task

  • All hyper-parameters are inspired by related work.
  • The controller is an LSTM with one hidden layer and a layer size of 512; the memory matrix has 256 locations, a width of 128 and four read heads.
  • Bypass Dropout is applied with a dropout rate of 10%.
  • The maximum sequence length during training is limited to 1400 words.
  • The model is optimized with RMSprop with a fixed learning rate of 3e-05 and a momentum of 0.9 (see the sketch after this list).
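
As a minimal sketch of these optimizer settings, expressed in TensorFlow 2 / Keras for illustration (an assumption; the repository itself targets TensorFlow 1):

```python
import tensorflow as tf

# RMSprop with the fixed learning rate and momentum listed above.
optimizer = tf.keras.optimizers.RMSprop(learning_rate=3e-5, momentum=0.9)

MAX_SEQ_LEN = 1400  # training sequences are capped at 1400 words
```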

CNN Results

| Model | valid | test |
|---|---|---|
| Deep LSTM Reader | 55.0 | 57.0 |
| Attentive Reader | 61.6 | 63.0 |
| ADNC | 67.5 | 69.0 |
| AS Reader | 68.6 | 69.5 |
| Stanford AR | 72.2 | 72.4 |
| AoA Reader | 73.1 | 74.4 |
| ReasoNet | 72.9 | 74.7 |
| GA Reader | 77.9 | 77.9 |
