End-to-End Object Detection with Fully Convolutional Network

This project provides an implementation of "End-to-End Object Detection with Fully Convolutional Network" in PyTorch.

Experiments in the paper were conducted on an internal framework, so we reimplement them on cvpods and report the details below.

Requirements

Get Started

  • Install cvpods locally (requires CUDA to compile)
    python3 -m pip install 'git+https://github.com/Megvii-BaseDetection/cvpods.git'
    # (add --user if you don't have permission)

    # Or, to install it from a local clone:
    git clone https://github.com/Megvii-BaseDetection/cvpods.git
    python3 -m pip install -e cvpods

    # Or,
    pip install -r requirements.txt
    python3 setup.py build develop
  • Prepare datasets
    cd /path/to/cvpods
    cd datasets
    ln -s /path/to/your/coco/dataset coco
  • Train & Test
    git clone https://github.com/Megvii-BaseDetection/DeFCN.git
    cd DeFCN/playground/detection/coco/poto.res50.fpn.coco.800size.3x_ms  # for example

    # Train
    pods_train --num-gpus 8

    # Test
    pods_test --num-gpus 8 \
        MODEL.WEIGHTS /path/to/your/save_dir/ckpt.pth  # optional
        OUTPUT_DIR /path/to/your/save_dir  # optional

    # Multi node training
    ## sudo apt install net-tools ifconfig
    pods_train --num-gpus 8 --num-machines N --machine-rank 0/1/.../N-1 --dist-url "tcp://MASTER_IP:port"

Results on COCO2017 val set

| model | assignment | with NMS | lr sched. | mAP | mAR | download |
|---|---|---|---|---|---|---|
| FCOS | one-to-many | Yes | 3x + ms | 41.4 | 59.1 | weight \| log |
| FCOS baseline | one-to-many | Yes | 3x + ms | 40.9 | 58.4 | weight \| log |
| Anchor | one-to-one | No | 3x + ms | 37.1 | 60.5 | weight \| log |
| Center | one-to-one | No | 3x + ms | 35.2 | 61.0 | weight \| log |
| Foreground Loss | one-to-one | No | 3x + ms | 38.7 | 62.2 | weight \| log |
| POTO | one-to-one | No | 3x + ms | 39.2 | 61.7 | weight \| log |
| POTO + 3DMF | one-to-one | No | 3x + ms | 40.6 | 61.6 | weight \| log |
| POTO + 3DMF + Aux | mixture* | No | 3x + ms | 41.4 | 61.5 | weight \| log |

* We adopt a one-to-one assignment in POTO and a one-to-many assignment in the auxiliary loss (see the matching sketch after the notes below).

  • The 2x + ms schedule is adopted in the paper, but we adopt the 3x + ms schedule here to achieve higher performance.
  • It's normal to observe ~0.3 AP noise in POTO.
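
As noted in the footnote above, POTO replaces the usual one-to-many label assignment with a prediction-aware one-to-one assignment: every ground-truth box is matched to exactly one prediction by maximizing a quality score that mixes the predicted classification probability with the IoU, under a spatial prior. The snippet below is a minimal, hypothetical sketch of that idea, not the code in this repo; the function name, the `alpha` value, and the inside-box spatial prior are assumptions for illustration.

```python
# Hypothetical sketch of a POTO-style one-to-one assignment (illustration only;
# see the training code in this repo for the actual implementation).
import torch
from scipy.optimize import linear_sum_assignment


def poto_assign(cls_prob, iou, inside_gt, alpha=0.8):
    """cls_prob: (N, M) probability of each GT's class for every prediction;
    iou: (N, M) IoU between the N predictions and the M ground-truth boxes;
    inside_gt: (N, M) bool spatial prior (e.g. prediction center inside the GT box)."""
    quality = cls_prob.pow(1.0 - alpha) * iou.pow(alpha)        # prediction-aware quality
    quality = torch.where(inside_gt, quality, torch.zeros_like(quality))
    # Bipartite matching: maximizing total quality == minimizing its negation.
    pred_idx, gt_idx = linear_sum_assignment(-quality.detach().cpu().numpy())
    return pred_idx, gt_idx                                     # one prediction per GT
```

Only the matched predictions are trained as foreground and everything else as background, which is what allows the one-to-one rows above to drop NMS at inference; the Aux variant additionally applies a one-to-many assignment in its auxiliary loss, as the footnote says.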

Results on CrowdHuman val set

| model | assignment | with NMS | lr sched. | AP50 | mMR | recall | download |
|---|---|---|---|---|---|---|---|
| FCOS | one-to-many | Yes | 30k iters | 86.1 | 54.9 | 94.2 | weight \| log |
| ATSS | one-to-many | Yes | 30k iters | 87.2 | 49.7 | 94.0 | weight \| log |
| POTO | one-to-one | No | 30k iters | 88.5 | 52.2 | 96.3 | weight \| log |
| POTO + 3DMF | one-to-one | No | 30k iters | 88.8 | 51.0 | 96.6 | weight \| log |
| POTO + 3DMF + Aux | mixture* | No | 30k iters | 89.1 | 48.9 | 96.5 | weight \| log |

* We adopt a one-to-one assignment in POTO and a one-to-many assignment in the auxiliary loss, respectively.

  • It's normal to observe ~0.3 AP noise in POTO and ~1.0 mMR noise in all methods.

Ablations on COCO2017 val set

| model | assignment | with NMS | lr sched. | mAP | mAR | note |
|---|---|---|---|---|---|---|
| POTO | one-to-one | No | 6x + ms | 40.0 | 61.9 | |
| POTO | one-to-one | No | 9x + ms | 40.2 | 62.3 | |
| POTO | one-to-one | No | 3x + ms | 39.2 | 61.1 | replace Hungarian algorithm by argmax |
| POTO + 3DMF | one-to-one | No | 3x + ms | 40.9 | 62.0 | remove GN in 3DMF |
| POTO + 3DMF + Aux | mixture* | No | 3x + ms | 41.5 | 61.5 | remove GN in 3DMF |

* We adopt a one-to-one assignment in POTO and a one-to-many assignment in the auxiliary loss, respectively.

  • For one-to-one assignment, more training iterations lead to higher performance.
  • The argmax (also known as top-1) operation is indeed a close approximation of bipartite matching in dense prediction methods (see the sketch after this list).
  • It seems harmless to remove GN in 3DMF, which also leads to higher inference speed.
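
For the second point above, the following hypothetical sketch shows the argmax (top-1) replacement, reusing the `quality` matrix from the POTO sketch in the COCO section; again, the names are assumptions for illustration rather than this repo's implementation.

```python
# Hypothetical sketch of the argmax (top-1) approximation to bipartite matching
# (illustration only). `quality` is the (N predictions, M GTs) matrix from the
# POTO sketch above.
import torch


def argmax_assign(quality):
    # Each ground-truth box independently takes its highest-quality prediction.
    pred_idx = quality.argmax(dim=0)           # (M,) best prediction per GT
    gt_idx = torch.arange(quality.shape[1])    # (M,) one entry per GT
    return pred_idx, gt_idx                    # collisions are possible, but rare
```

Because the predictions are dense, two ground-truth boxes rarely compete for the same top-1 prediction, so the greedy choice stays close to the exact Hungarian solution; this is consistent with the unchanged 39.2 mAP (and only slightly lower mAR) in the ablation table above.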

Acknowledgement

This repo is developed based on cvpods. Please check cvpods for more details and features.

License

This repo is released under the Apache 2.0 license. Please see the LICENSE file for more information.

Citing

If you use this work in your research or wish to refer to the baseline results published here, please use the following BibTeX entries:

@article{wang2020end,
  title   = {End-to-End Object Detection with Fully Convolutional Network},
  author  = {Wang, Jianfeng and Song, Lin and Li, Zeming and Sun, Hongbin and Sun, Jian and Zheng, Nanning},
  journal = {arXiv preprint arXiv:2012.03544},
  year    = {2020}
}

Contributing to the project

Any pull requests or issues about this implementation are welcome. If you have any issues with the underlying library (e.g. installation, environment), please refer to cvpods.
