Learning Multi-Domain Convolutional Neural Networks for Visual Tracking


Created by Hyeonseob Nam and Bohyung Han at POSTECH

Project Webpage: http://cvlab.postech.ac.kr/research/mdnet/

News

(May 28, 2017) A Python implementation of MDNet is available! [py-MDNet]

Introduction

MDNet is a state-of-the-art visual tracker based on a CNN trained on a large set of tracking sequences, and the winning tracker of the VOT2015 Challenge.

A detailed description of the system is provided in our paper.

This software is implemented using MatConvNet and parts of R-CNN.

Citation

If you're using this code in a publication, please cite our paper.

    @InProceedings{nam2016mdnet,
      author    = {Nam, Hyeonseob and Han, Bohyung},
      title     = {Learning Multi-Domain Convolutional Neural Networks for Visual Tracking},
      booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {June},
      year      = {2016}
    }

License

This software is being made available for research purposes only. Check the LICENSE file for details.

System Requirements

This code is tested on 64-bit Linux (Ubuntu 14.04 LTS).

Prerequisites

  1. MATLAB (tested with R2014a)
  2. MatConvNet (tested with version 1.0-beta10, included in this repository)
  3. For GPU support, a GPU (~2GB memory) and the CUDA toolkit installed according to the MatConvNet installation guideline.

Installation

  1. Compile MatConvNet according to the installation guideline. An example script is provided in 'compile_matconvnet.m'.
  2. Run 'setup_mdnet.m' to set the environment for running MDNet.
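The two steps above can be run from the MATLAB prompt in the repository root. This is a sketch: the script names come from this repository, but the compile options inside 'compile_matconvnet.m' may need adjusting for your CUDA setup.

```matlab
% From the repository root, in MATLAB:
run('compile_matconvnet.m');  % compiles MatConvNet (edit its options for a GPU build)
run('setup_mdnet.m');         % sets up the MATLAB paths for running MDNet
```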

Online Tracking using MDNet

Pretrained Models

If you only need to run the tracker, you can use the pretrained MDNet models:

  1. models/mdnet_vot-otb.mat (trained on VOT13,14,15 excluding OTB)
  2. models/mdnet_otb-vot14.mat (trained on OTB excluding VOT14)
  3. models/mdnet_otb-vot15.mat (trained on OTB excluding VOT15)

Demo

  1. Run 'tracking/demo_tracking.m'.

The demo performs online tracking on the 'Diving' sequence using the pretrained model 'models/mdnet_vot-otb.mat'.
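The demo can be launched directly from MATLAB. This is a sketch; the script path is taken from this repository, and the sequence and model are configured inside the script itself.

```matlab
% From the repository root, in MATLAB (after running setup_mdnet.m):
cd tracking;
demo_tracking;   % tracks the 'Diving' sequence with models/mdnet_vot-otb.mat
```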

If you run out of GPU memory, decrease opts.batchSize_test in 'tracking/mdnet_init.m'. You can also disable GPU support by setting opts.useGpu in 'tracking/mdnet_init.m' to false (not recommended).
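The relevant settings in 'tracking/mdnet_init.m' would look roughly like this. Only the field names opts.batchSize_test and opts.useGpu come from the text above; the default values shown are illustrative placeholders, not the repository's actual defaults.

```matlab
% In tracking/mdnet_init.m:
opts.useGpu = true;          % set to false to run on CPU (slower, not recommended)
opts.batchSize_test = 256;   % illustrative value; lower it if the GPU runs out of memory
```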

Learning MDNet

Preparing Datasets

You may need the OTB and VOT datasets for learning MDNet models. You can also use other datasets by configuring 'utils/genConfig.m'.

  1. Download the OTB and VOT datasets.
  2. Locate the OTB sequences in 'dataset/OTB' and the VOT201x sequences in 'dataset/VOT/201x', or modify the variable benchmarkSeqHome in 'utils/genConfig.m' accordingly.
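If the datasets live outside the default 'dataset/' folders, the path variable mentioned above can be pointed at them. This is a sketch: the variable name benchmarkSeqHome comes from 'utils/genConfig.m', but the example path is a placeholder and the exact shape of the variable may differ in the actual script.

```matlab
% In utils/genConfig.m: point the benchmark root at your local dataset copies.
benchmarkSeqHome = '/data/benchmarks/';   % placeholder path; adjust to your machine
```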

Demo

  1. Run 'pretraining/demo_pretraining.m'.

The demo trains new MDNet models using OTB or VOT sequences.
