Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering
If you make use of this code, please cite the following paper (and give us a star ^_^):
@InProceedings{Nguyen_2018_CVPR,
  author    = {Nguyen, Duy-Kien and Okatani, Takayuki},
  title     = {Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2018}
}
If you have any suggestions to improve this code, please feel free to contact me at kien@vision.is.tohoku.ac.jp.
This repository contains a PyTorch implementation of the paper "Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering". The network architecture is illustrated in Figure 1.
Figure 1: The Dense Co-Attention Network architecture.

├── preprocess/   - Preprocessing code run before training the network
├── dense_coattn/ - Dense Co-Attention code
├── demo/         - Demo images for the pretrained Dense Co-Attention model
├── train.py      - Train the model
├── answer.py     - Generate answers for the test dataset
└── ensemble.py   - Ensemble results from multiple models
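For orientation before diving into the code: the heart of the model is a dense symmetric co-attention step in which every image region attends to every question word and vice versa through a single affinity matrix. The sketch below is a simplified, single-head illustration of that idea under assumed feature shapes; the actual implementation in dense_coattn/ (parallel attention heads, memory, and fusion layers) is more involved.

```python
import torch
import torch.nn.functional as F

def dense_coattention(V, Q):
    """Simplified symmetric co-attention (illustration only).

    V: visual features,   shape (batch, N, d) for N image regions
    Q: question features, shape (batch, T, d) for T words
    Returns question-attended region features and region-attended word features.
    """
    d = V.size(-1)
    # Affinity between every region and every word: (batch, N, T)
    A = torch.bmm(V, Q.transpose(1, 2)) / d ** 0.5
    # Softmax over words gives each region a distribution over the question;
    # softmax over regions gives each word a distribution over the image.
    attn_v2q = F.softmax(A, dim=2)                  # (batch, N, T)
    attn_q2v = F.softmax(A, dim=1).transpose(1, 2)  # (batch, T, N)
    V_ctx = torch.bmm(attn_v2q, Q)  # question context for each region
    Q_ctx = torch.bmm(attn_q2v, V)  # visual context for each word
    return V_ctx, Q_ctx
```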
Tests were performed with the following library versions:
- Python 3.6.3
- PyTorch >= 0.4
- torchtext for PyTorch >= 0.4 (install via pip)
- TensorboardX
The dataset can be downloaded from http://visualqa.org/.
We provide scripts for training our network from scratch; simply run the train.py script to train the model.
- All arguments are described in the train.py file, so you can easily change the hyper-parameters and training conditions (most of the default hyper-parameters are the ones used in the main paper).
- The pretrained GloVe word embeddings are loaded from torchtext (see the sketch after this list).
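As background, torchtext ships downloaders for pretrained GloVe vectors. A minimal sketch of loading them is shown below; the name and dim values are illustrative assumptions, not necessarily the configuration train.py uses.

```python
from torchtext.vocab import GloVe

# Download (or load from cache) pretrained GloVe vectors.
# name="840B", dim=300 are illustrative choices, not the repo's fixed setting.
glove = GloVe(name="840B", dim=300)

# Look up the embedding of a single word.
idx = glove.stoi["question"]  # word -> vocabulary index
vec = glove.vectors[idx]      # 300-d float tensor
print(vec.shape)              # torch.Size([300])
```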
Run the answer.py script to generate answers for the test set. You can use ensemble.py to combine multiple models' results for evaluation; a sketch of the averaging idea follows.
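For intuition, a common way to ensemble VQA models is to average their per-answer score distributions and take the argmax for each question. The sketch below illustrates that idea under the assumption that each model produces a (num_questions, num_answers) score tensor; it is not necessarily what ensemble.py does internally, and the file names in the usage comment are hypothetical.

```python
import torch

def ensemble_scores(score_tensors):
    """Average per-answer scores from several models and pick the best answer.

    score_tensors: list of tensors, each of shape (num_questions, num_answers).
    Returns a (num_questions,) tensor of chosen answer indices.
    """
    avg = torch.stack(score_tensors, dim=0).mean(dim=0)
    return avg.argmax(dim=1)

# Hypothetical usage with three models' saved score files:
# scores = [torch.load(f"model{i}_scores.pt") for i in range(3)]
# answers = ensemble_scores(scores)
```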
The source code is licensed under the MIT License.