DDQ: Dense Distinct Query for End-to-End Object Detection
By Shilong Zhang*, Xinjiang Wang*, Jiaqi Wang, Jiangmiao Pang, Chengqi Lyu, Wenwei Zhang, Ping Luo, Kai Chen
21/3/2023: DDQ DETR (12 epochs, 52.1 mAP) has been released on the `ddq_detr` branch. Please run `git checkout ddq_detr` to use it.
20/3/2023: DDQ FCN and DDQ R-CNN have been released on the `main` branch.
One-to-one label assignment in object detection has successfully obviated the need for non-maximum suppression (NMS) as a postprocessing step and makes the pipeline end-to-end. However, it triggers a new dilemma: the widely used sparse queries cannot guarantee a high recall, while dense queries inevitably bring more similar queries and encounter optimization difficulty. As both sparse and dense queries are problematic, what are the expected queries in end-to-end object detection? This paper shows that the solution should be Dense Distinct Queries (DDQ). Concretely, we first lay out dense queries as in traditional detectors and then select distinct ones for one-to-one assignment. DDQ blends the advantages of traditional and recent end-to-end detectors and significantly improves the performance of various detectors, including FCN, R-CNN, and DETRs. Most impressively, DDQ DETR achieves 52.1 AP on the MS-COCO dataset within 12 epochs using a ResNet-50 backbone, outperforming all existing detectors in the same setting. DDQ also shares the benefits of end-to-end detectors in crowded scenes and achieves 93.8 AP on CrowdHuman. We hope DDQ can inspire researchers to consider the complementarity between traditional methods and end-to-end detectors.
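At its core, selecting distinct queries amounts to a class-agnostic NMS over dense queries before one-to-one assignment. Below is a minimal sketch of that idea; the function name, IoU threshold, and query cap are illustrative assumptions, not the repo's exact implementation:

```python
# Illustrative sketch of DDQ's distinct-query selection: start from dense
# queries, then keep only distinct ones via class-agnostic NMS before
# one-to-one label assignment.
import torch
from torchvision.ops import nms

def select_distinct_queries(boxes, scores, iou_thr=0.7, num_queries=300):
    """boxes: (N, 4) dense query boxes; scores: (N,) objectness scores."""
    keep = nms(boxes, scores, iou_thr)  # class-agnostic: near-duplicates removed
    keep = keep[:num_queries]           # cap the number of distinct queries
    return boxes[keep], scores[keep]
```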
DDQ FCN and DDQ R-CNN are implemented on top of MMDetection 2.x and have been released on this `main` branch.
DDQ DETR is implemented on top of MMDetection 3.x; switch to the `ddq_detr` branch of this repo to use it.
DDQ FCN and DDQ R-CNN have been fully tested with MMDetection V2.22.0 and MMCV V1.4.7 under torch 1.9.0+cu111; other versions may not be compatible. We include the necessary source code of MMCV and MMDetection in this repo, so you can build MMCV with the following commands:
```bash
cd mmcv-1.4.7
MMCV_WITH_OPS=1 python setup.py build_ext --inplace
cd ..
ln -s mmcv-1.4.7/mmcv ./
export PYTHONPATH=`pwd`:$PYTHONPATH
```
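After building, a quick sanity check (illustrative, run from the repo root) confirms the bundled packages are the ones being imported:

```python
# Sanity check (illustrative): verify the bundled MMCV/MMDetection are picked up.
import mmcv, mmdet, torch
print(mmcv.__version__)   # expected: 1.4.7
print(mmdet.__version__)  # expected: 2.22.0
print(torch.__version__)  # tested with 1.9.0+cu111
```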
DDQ DETR has been fully tested with MMDetection V3.0.0rc6, MMCV V2.0.0rc4, and MMEngine V0.6.0 under torch 1.9.0+cu111; other versions may not be compatible. We include the necessary source code of MMCV, MMDetection, and MMEngine in the `ddq_detr` branch of this repo; you only need to build MMCV with the following commands:
```bash
git checkout ddq_detr
cd mmcv-2.0.0rc4
MMCV_WITH_OPS=1 python setup.py build_ext --inplace
cd ..
ln -s mmcv-2.0.0rc4/mmcv ./
export PYTHONPATH=`pwd`:$PYTHONPATH
```
Prepare the COCO dataset in the following directory layout:

```text
DDQ
├── data
│   ├── coco
│   │   ├── annotations
│   │   │   ├── instances_train2017.json
│   │   │   ├── instances_val2017.json
│   │   ├── train2017
│   │   ├── val2017
```
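A small check (illustrative) that the expected COCO files are in place before launching training:

```python
# Illustrative check that the COCO files are where the configs expect them.
from pathlib import Path

coco = Path('data/coco')
for p in ['annotations/instances_train2017.json',
          'annotations/instances_val2017.json',
          'train2017', 'val2017']:
    assert (coco / p).exists(), f'missing: {coco / p}'
print('COCO layout looks good')
```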
In the commands below, 8 is the number of GPUs. Use the `slurm_*` scripts on a Slurm cluster and the plain `train.sh`/`test.sh` scripts otherwise.
Train (Slurm):
```bash
GPUS=8 sh tools/slurm_train.sh partition_name job_name projects/configs/ddq_fcn/ddq_fcn_r50_1x.py ./exp/ddq-fcn
```
Test (Slurm):
```bash
GPUS=8 sh tools/slurm_test.sh partition_name job_name projects/configs/ddq_fcn/ddq_fcn_r50_1x.py path_to_checkpoint --eval bbox
```
Train:
```bash
sh tools/train.sh projects/configs/ddq_fcn/ddq_fcn_r50_1x.py 8 ./exp/ddq_fcn
```
Test:
```bash
sh tools/test.sh projects/configs/ddq_fcn/ddq_fcn_r50_1x.py path_to_checkpoint 8 --eval bbox
```
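For single-image inference with a released checkpoint, MMDetection 2.x's standard high-level API can be used. A sketch follows; the config path matches this repo, while the checkpoint and image paths are placeholders:

```python
# Single-image inference sketch using MMDetection 2.x's high-level API.
from mmdet.apis import init_detector, inference_detector

config = 'projects/configs/ddq_fcn/ddq_fcn_r50_1x.py'
checkpoint = 'path_to_checkpoint'  # placeholder: a downloaded ckpt from the table below
model = init_detector(config, checkpoint, device='cuda:0')
result = inference_detector(model, 'demo.jpg')  # placeholder image path
model.show_result('demo.jpg', result, out_file='demo_out.jpg')
```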
We find that performance is slightly unstable and may fluctuate by about 0.2 mAP across runs.
Results for DDQ FCN and DDQ R-CNN (`main` branch):

| Model | Backbone | Lr schd | Augmentation | box AP (val) | Checkpoint | Log |
|---|---|---|---|---|---|---|
| DDQ FCN | R-50 | 12e | Normal | 41.5 | ckpt | log |
| DDQ FCN | R-50 | 36e | DETR | 44.8 | ckpt | log |
| DDQ R-CNN | R-50 | 12e | Normal | 44.6 | ckpt | log |
| DDQ R-CNN | R-50 | 36e | DETR | 48.1 | ckpt | log |
Results for DDQ DETR (`ddq_detr` branch):

| Model | Backbone | Lr schd | Augmentation | box AP (val) | Checkpoint | Log |
|---|---|---|---|---|---|---|
| DDQ DETR-4scale | R-50 | 12e | DETR | 51.3 | ckpt | log |
| DDQ DETR-5scale | R-50 | 12e | DETR | 52.1 | ckpt | log |
| DDQ DETR-4scale | Swin-L | 30e | DETR | 58.7 | ckpt | log |
Please cite our paper in your publications if it helps your research:
```bibtex
@InProceedings{Zhang_2023_CVPR,
  author    = {Zhang, Shilong and Wang, Xinjiang and Wang, Jiaqi and Pang, Jiangmiao and Lyu, Chengqi and Zhang, Wenwei and Luo, Ping and Chen, Kai},
  title     = {Dense Distinct Query for End-to-End Object Detection},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023},
  pages     = {7329-7338}
}
```