Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering


If you make use of this code, please cite the following paper (and give us a star ^_^):

@InProceedings{Nguyen_2018_CVPR,
  author    = {Nguyen, Duy-Kien and Okatani, Takayuki},
  title     = {Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2018}
}

If you have any suggestions for improving this code, please feel free to contact me at kien@vision.is.tohoku.ac.jp.

Overview

This repository contains the PyTorch implementation of the paper "Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering". The network architecture is illustrated in Figure 1.

Figure 1: Overview of the Dense Co-Attention Network architecture.
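To make the idea concrete, below is a minimal sketch of one symmetric co-attention step in PyTorch. It is a simplification for illustration only: it uses a plain scaled dot-product affinity, whereas the paper's network learns the affinity, uses multiple attention heads and "nowhere-to-attend" memory slots, and stacks several such layers (see the code in dense_coattn/ for the actual implementation).

```python
import torch
import torch.nn.functional as F

def dense_coattention(visual, question):
    """One symmetric co-attention step (simplified sketch).

    visual:   (batch, num_regions, dim) image-region features
    question: (batch, num_words, dim) question-word features
    """
    # Affinity between every question word and every image region.
    affinity = torch.bmm(question, visual.transpose(1, 2)) / visual.size(-1) ** 0.5

    # Normalize in both directions: words attend over regions, and vice versa.
    attn_over_regions = F.softmax(affinity, dim=2)  # (batch, words, regions)
    attn_over_words = F.softmax(affinity, dim=1)    # (batch, words, regions)

    # Each word gathers visual context; each region gathers language context.
    attended_visual = torch.bmm(attn_over_regions, visual)                    # (batch, words, dim)
    attended_question = torch.bmm(attn_over_words.transpose(1, 2), question)  # (batch, regions, dim)
    return attended_visual, attended_question

v = torch.randn(2, 36, 512)       # e.g. 36 image-region features
q = torch.randn(2, 14, 512)       # e.g. a 14-word question
av, aq = dense_coattention(v, q)  # shapes: (2, 14, 512), (2, 36, 512)
```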

Files

├── preprocess/    - Preprocessing code run before training the network
├── dense_coattn/  - Dense Co-Attention code
├── demo/          - Demo images for the pretrained Dense Co-Attention model
├── train.py       - Train the model
├── answer.py      - Generate answers for the test dataset
└── ensemble.py    - Ensemble multiple results from different models

Dependencies

Tests were performed with the following library versions:

  • Python 3.6.3
  • PyTorch >= 0.4
  • torchtext for PyTorch >= 0.4 (install via pip)
  • TensorboardX

Training from Scratch

The dataset can be downloaded from http://visualqa.org/.

We provide scripts for training our network from scratch: simply run the train.py script to train the model.

  • All arguments are described in the train.py file, so you can easily change the hyper-parameters and training conditions (most of the default hyper-parameters are those used in the paper).
  • Pretrained GloVe word embeddings are loaded from torchtext, as shown in the sketch after this list.
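For reference, the snippet below shows how pretrained GloVe vectors can be fetched through torchtext. The vector name and dimension here are illustrative assumptions; check train.py for the exact configuration this repository uses.

```python
import torch
from torchtext.vocab import GloVe

# Downloads and caches the pretrained vectors on first use (~2 GB for 840B).
# name/dim are assumptions; check train.py for the values used in this repo.
glove = GloVe(name="840B", dim=300)

tokens = ["what", "color", "is", "the", "cat"]
# One 300-d vector per token; out-of-vocabulary words fall back to zeros.
unk = torch.zeros(glove.dim)
vectors = torch.stack([glove.vectors[glove.stoi[t]] if t in glove.stoi else unk
                       for t in tokens])
print(vectors.shape)  # torch.Size([5, 300])
```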

Evaluation

Run the answer.py script to generate answers for the test set. You can use ensemble.py to ensemble multiple models' results for evaluation.
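A common way to ensemble VQA models is to average their per-question answer scores and take the argmax. The sketch below illustrates that idea; the file layout and function name are assumptions for illustration, not the actual interface of ensemble.py.

```python
import torch

def ensemble_scores(score_files, answer_vocab):
    """Average per-question answer scores from several models (hypothetical format).

    score_files:  paths to torch-saved dicts mapping question_id -> score tensor
    answer_vocab: list mapping answer index -> answer string
    """
    runs = [torch.load(path) for path in score_files]
    results = []
    for qid in runs[0]:
        # Mean of each model's scores for this question, then pick the best answer.
        avg = torch.stack([run[qid] for run in runs]).mean(dim=0)
        results.append({"question_id": qid,
                        "answer": answer_vocab[avg.argmax().item()]})
    return results
```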

License

The source code is licensed under the MIT License.

