
[CVPR'23] Universal Instance Perception as Object Discovery and Retrieval


UNINEXT

This is the official implementation of the paper Universal Instance Perception as Object Discovery and Retrieval.


News

Highlight

  • UNINEXT is accepted by CVPR 2023.
  • UNINEXT reformulates diverse instance perception tasks into a unified object discovery and retrieval paradigm and can flexibly perceive different types of objects by simply changing the input prompts.
  • UNINEXT achieves superior performance on 20 challenging benchmarks using a single model with the same model parameters.

Introduction

(Figure: TASK-RADAR, the 10 sub-tasks arranged on the vertices of a cube)

Object-centric understanding is one of the most essential and challenging problems in computer vision. In this work, we mainly discuss 10 sub-tasks, distributed on the vertices of the cube shown in the above figure. Since all these tasks aim to perceive instances of certain properties, UNINEXT reorganizes them into three types according to the different input prompts:

  • Category Names
    • Object Detection
    • Instance Segmentation
    • Multiple Object Tracking (MOT)
    • Multi-Object Tracking and Segmentation (MOTS)
    • Video Instance Segmentation (VIS)
  • Language Expressions
    • Referring Expression Comprehension (REC)
    • Referring Expression Segmentation (RES)
    • Referring Video Object Segmentation (R-VOS)
  • Target Annotations
    • Single Object Tracking (SOT)
    • Video Object Segmentation (VOS)

Then we propose a unified prompt-guided object discovery and retrieval formulation to solve all the above tasks. Extensive experiments demonstrate that UNINEXT achieves superior performance on 20 challenging benchmarks.
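The three prompt types above can be sketched as a single dispatch interface. This is an illustrative sketch only: the class and function names below are hypothetical and do not reflect the repository's actual API, and the retrieval step is a placeholder for the model's prompt-vs-instance matching.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

# Hypothetical prompt types mirroring the three categories above.
@dataclass
class CategoryPrompt:        # OD, IS, MOT, MOTS, VIS
    names: List[str]         # e.g. ["person", "car"]

@dataclass
class LanguagePrompt:        # REC, RES, R-VOS
    expression: str          # e.g. "the dog on the left"

@dataclass
class TargetPrompt:          # SOT, VOS
    box: Tuple[int, int, int, int]  # (x, y, w, h) annotated on the first frame

Prompt = Union[CategoryPrompt, LanguagePrompt, TargetPrompt]

def perceive(frame, prompt: Prompt) -> list:
    """Unified discovery-and-retrieval sketch: normalize the prompt into
    queries, then (in the real model) score discovered instance embeddings
    against the prompt embeddings. Here we only echo the queries."""
    if isinstance(prompt, CategoryPrompt):
        queries = prompt.names
    elif isinstance(prompt, LanguagePrompt):
        queries = [prompt.expression]
    else:
        queries = [f"target@{prompt.box}"]
    # Placeholder retrieval result: one (empty) instance list per query.
    return [{"query": q, "instances": []} for q in queries]
```

The point of the sketch is the dispatch: the same `perceive` entry point serves all ten sub-tasks, and only the prompt object changes.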

Demo

(Demo video: UNINEXT_DEMO_VID_9M.mp4)

UNINEXT can flexibly perceive various types of objects by simply changing the input prompts, such as category names, language expressions, and target annotations. We also provide a simple demo script, which supports 4 image-level tasks (object detection, instance segmentation, REC, RES).

Results

Retrieval by Category Names

(Figure: OD / IS / MOT / MOTS / VIS results)

Retrieval by Language Expressions

(Figure: REC / RES / R-VOS results)

Retrieval by Target Annotations

(Figure: SOT / VOS results)

Getting started

  1. Installation: Please refer to INSTALL.md for more details.
  2. Data preparation: Please refer to DATA.md for more details.
  3. Training: Please refer to TRAIN.md for more details.
  4. Testing: Please refer to TEST.md for more details.
  5. Model zoo: Please refer to MODEL_ZOO.md for more details.

Citing UNINEXT

If you find UNINEXT useful in your research, please consider citing:

@inproceedings{UNINEXT,
  title={Universal Instance Perception as Object Discovery and Retrieval},
  author={Yan, Bin and Jiang, Yi and Wu, Jiannan and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan},
  booktitle={CVPR},
  year={2023}
}

Acknowledgments

  • Thanks Unicorn for its experience in unifying four object tracking tasks (SOT, MOT, VOS, MOTS).
  • Thanks VNext for its experience in Video Instance Segmentation (VIS).
  • Thanks ReferFormer for its experience in REC, RES, and R-VOS.
  • Thanks GLIP for the idea of unifying object detection and phrase grounding.
  • Thanks Detic for the implementation of multi-dataset training.
  • Thanks detrex for the implementation of the denoising mechanism.
