
wendell0218/RAFT

 
 


This repository contains the source code for our paper:

RAFT: Recurrent All-Pairs Field Transforms for Optical Flow
ECCV 2020
Zachary Teed and Jia Deng

Requirements

The code has been tested with PyTorch 1.6 and CUDA 10.1.

conda create --name raft
conda activate raft
conda install pytorch=1.6.0 torchvision=0.7.0 cudatoolkit=10.1 matplotlib tensorboard scipy opencv -c pytorch
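To confirm the environment matches the tested configuration, a quick sanity check (the expected values follow from the install command above; actual output depends on your machine):

```python
import torch
import torchvision

print(torch.__version__)          # expect 1.6.0
print(torchvision.__version__)    # expect 0.7.0
print(torch.version.cuda)         # expect 10.1
print(torch.cuda.is_available())  # must be True to run the demos and training below
```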

Demos

Pretrained models can be downloaded by running

./download_models.sh

or downloaded from Google Drive.

You can demo a trained model on a sequence of frames

python demo.py --model=models/raft-things.pth
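Under the hood, demo.py loads the checkpoint, pads each frame pair to a multiple of 8, and runs the recurrent updates in test mode. Below is a minimal sketch of that inference path, assuming it runs from the repo root with a downloaded checkpoint; the frame tensors are placeholders standing in for real images:

```python
import argparse
import sys

import torch

sys.path.append('core')              # as demo.py does, so the repo's modules import directly
from raft import RAFT
from utils.utils import InputPadder  # pads H and W up to multiples of 8

# demo.py builds the model from command-line flags; a Namespace stands in for them here
args = argparse.Namespace(small=False, mixed_precision=False, alternate_corr=False)

model = torch.nn.DataParallel(RAFT(args))  # checkpoints were saved from a DataParallel wrapper
model.load_state_dict(torch.load('models/raft-things.pth'))
model = model.module.cuda().eval()

# Placeholder frames; demo.py loads real images as (1, 3, H, W) floats in [0, 255]
image1 = torch.zeros(1, 3, 436, 1024, device='cuda')
image2 = torch.zeros(1, 3, 436, 1024, device='cuda')

padder = InputPadder(image1.shape)
image1, image2 = padder.pad(image1, image2)

with torch.no_grad():
    flow_low, flow_up = model(image1, image2, iters=20, test_mode=True)
flow = padder.unpad(flow_up)  # (1, 2, H, W) flow field in pixels
```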

Required Data

To evaluate/train RAFT, you will need to download the required datasets.

By default, datasets.py will search for the datasets in these locations. You can create symbolic links in the datasets folder pointing to wherever the datasets were downloaded (see the sketch after the layout below).

├── datasets
    ├── Sintel
        ├── test
        ├── training
    ├── KITTI
        ├── testing
        ├── training
        ├── devkit
    ├── FlyingChairs_release
        ├── data
    ├── FlyingThings3D
        ├── frames_cleanpass
        ├── frames_finalpass
        ├── optical_flow
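If the datasets were downloaded elsewhere, the expected links can be created in one pass. A minimal sketch, with hypothetical download locations under /data:

```python
import os

# Hypothetical download locations, mapped to the layout datasets.py expects
links = {
    '/data/Sintel': 'datasets/Sintel',
    '/data/KITTI': 'datasets/KITTI',
    '/data/FlyingChairs_release': 'datasets/FlyingChairs_release',
    '/data/FlyingThings3D': 'datasets/FlyingThings3D',
}

os.makedirs('datasets', exist_ok=True)
for src, dst in links.items():
    if not os.path.exists(dst):
        os.symlink(src, dst)  # point the expected folder at the actual download
```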

Evaluation

You can evaluate a trained model using evaluate.py

python evaluate.py --model=models/raft-things.pth --dataset=sintel --mixed_precision
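The Sintel number reported here is the average end-point error (EPE): the Euclidean distance between the predicted and ground-truth flow vector at each pixel, averaged over the image. A minimal sketch of the metric itself (tensor contents are illustrative):

```python
import torch

def end_point_error(flow_pred: torch.Tensor, flow_gt: torch.Tensor) -> torch.Tensor:
    """Mean end-point error between two (2, H, W) flow fields, in pixels."""
    # L2 distance between the predicted and true (u, v) vector at every pixel
    epe = torch.sum((flow_pred - flow_gt) ** 2, dim=0).sqrt()
    return epe.mean()

flow_pred = torch.zeros(2, 4, 4)
flow_gt = torch.ones(2, 4, 4)
print(end_point_error(flow_pred, flow_gt))  # sqrt(1 + 1) ~= 1.4142 at every pixel
```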

Training

We used the following training schedule in our paper (2 GPUs). Training logs will be written to the runs folder and can be visualized using TensorBoard.

./train_standard.sh

If you have an RTX GPU, training can be accelerated using mixed precision. You can expect similar results in this setting (1 GPU).

./train_mixed.sh
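The mixed-precision path uses PyTorch's torch.cuda.amp utilities, which shipped with the tested PyTorch 1.6: forward work runs under autocast and the loss is scaled before backward to avoid fp16 gradient underflow. A generic sketch of one such step (the linear model and MSE loss are stand-ins, not the repo's training loop):

```python
import torch

model = torch.nn.Linear(8, 2).cuda()  # stand-in for RAFT
optimizer = torch.optim.AdamW(model.parameters(), lr=4e-4)
scaler = torch.cuda.amp.GradScaler(enabled=True)

x = torch.randn(16, 8).cuda()
target = torch.randn(16, 2).cuda()

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=True):  # run the forward pass in mixed precision
    loss = torch.nn.functional.mse_loss(model(x), target)
scaler.scale(loss).backward()  # scale the loss so small gradients survive fp16
scaler.step(optimizer)         # unscales gradients, then steps the optimizer
scaler.update()
```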

(Optional) Efficient Implementation

You can optionally use our alternate (efficient) implementation by compiling the provided CUDA extension

cd alt_cuda_corr && python setup.py install && cd ..

and running demo.py and evaluate.py with the --alternate_corr flag. Note: this implementation is somewhat slower than all-pairs, but uses significantly less GPU memory during the forward pass.
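The memory being traded away here is the all-pairs correlation volume: dot products between every pair of feature vectors in the two frames, which the default implementation materializes in full. A sketch of that computation with illustrative shapes (1/8-resolution feature maps, as in the paper):

```python
import torch

# Feature maps from the two frames: (batch, channels, H, W) at 1/8 resolution
b, c, h, w = 1, 256, 55, 128
fmap1 = torch.randn(b, c, h, w)
fmap2 = torch.randn(b, c, h, w)

# All-pairs correlation: every pixel in frame 1 against every pixel in frame 2.
# The (H*W) x (H*W) result is what dominates GPU memory at high resolution.
corr = torch.matmul(
    fmap1.view(b, c, h * w).transpose(1, 2),  # (b, H*W, c)
    fmap2.view(b, c, h * w),                  # (b, c, H*W)
) / c ** 0.5                                  # normalize by sqrt(channels)

print(corr.shape)  # torch.Size([1, 7040, 7040])
```

The alternate kernel avoids storing this volume by computing the needed dot products on the fly inside each lookup window, hence slightly slower but far lighter on memory.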

