[CVPR2022] DanceTrack: Multiple Object Tracking in Uniform Appearance and Diverse Motion


DanceTrack is a benchmark for tracking multiple objects in uniform appearance and diverse motion.

DanceTrack provides box and identity annotations. It contains 100 videos: 40 for training (annotations public), 25 for validation (annotations public) and 35 for testing (annotations not public). For evaluation on the test set, please see CodaLab (Old CodaLab). We also have a Project Page for exhibition.


Paper (CVPR2022)

DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion

News

Paper List

Title | Description | Links
SUSHI | Unifying Short and Long-Term Tracking with Graph Hierarchies | [Github]
MOTRv2 | MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors | [Github]
MOT_FCG | Multiple Object Tracking from appearance by hierarchically clustering tracklets | [Github]
OC-SORT | Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking | [Github]
StrongSORT | StrongSORT: Make DeepSORT Great Again | [Github]
MOTR | MOTR: End-to-End Multiple-Object Tracking with TRansformer | [Github]

Dataset

Download the dataset from Hugging Face, Google Drive (deprecated, use Hugging Face instead) or Baidu Drive (code: awew).

Organize as follows:

{DanceTrack ROOT}
|-- dancetrack
|   |-- train
|   |   |-- dancetrack0001
|   |   |   |-- img1
|   |   |   |   |-- 00000001.jpg
|   |   |   |   |-- ...
|   |   |   |-- gt
|   |   |   |   |-- gt.txt
|   |   |   |-- seqinfo.ini
|   |   |-- ...
|   |-- val
|   |   |-- ...
|   |-- test
|   |   |-- ...
|   |-- train_seqmap.txt
|   |-- val_seqmap.txt
|   |-- test_seqmap.txt
|-- TrackEval
|-- tools
|-- ...
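As a quick sanity check after downloading, the layout above can be verified with a short Python sketch (the paths below are assumptions matching the tree above, not part of the toolkit):

import os

root = "dancetrack"          # assumed location of the extracted dataset
split_dir = os.path.join(root, "train")

for seq in sorted(os.listdir(split_dir)):
    seq_dir = os.path.join(split_dir, seq)
    if not os.path.isdir(seq_dir):
        continue
    # Each sequence should contain images, MOT-style ground truth and metadata.
    for required in ("img1", os.path.join("gt", "gt.txt"), "seqinfo.ini"):
        if not os.path.exists(os.path.join(seq_dir, required)):
            print(f"{seq}: missing {required}")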

We align our dataset annotations with MOT, so each line in gt.txt contains:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, 1, 1, 1
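For illustration, here is a minimal sketch of parsing gt.txt into per-frame boxes (the helper name load_gt is hypothetical, not part of this repository):

import csv
from collections import defaultdict

def load_gt(gt_path):
    """Parse a MOT-style gt.txt into {frame: [(track_id, x, y, w, h), ...]}."""
    per_frame = defaultdict(list)
    with open(gt_path) as f:
        for row in csv.reader(f):
            frame, track_id = int(row[0]), int(row[1])
            x, y, w, h = map(float, row[2:6])
            per_frame[frame].append((track_id, x, y, w, h))
    return per_frame

gt = load_gt("dancetrack/train/dancetrack0001/gt/gt.txt")
print(f"{len(gt)} annotated frames")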

Evaluation

We use ByteTrack as an example of using DanceTrack. For training details, please see the instruction. We provide the trained models on Hugging Face, Google Drive (deprecated, use Hugging Face instead) or Baidu Drive (code: awew).

To run evaluation with our provided toolkit, organize the results on the validation set as follows:

{DanceTrack ROOT}
|-- val
|   |-- TRACKER_NAME
|   |   |-- dancetrack000x.txt
|   |   |-- ...
|   |-- ...

where dancetrack000x.txt is the output file of the video episode dancetrack000x, each line of which contains:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, -1, -1, -1
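For illustration, a minimal sketch that writes tracker output in this format (write_results and its inputs are hypothetical, not part of the toolkit):

def write_results(txt_path, results):
    """results: iterable of (frame, track_id, x, y, w, h, conf) tuples."""
    with open(txt_path, "w") as f:
        for frame, track_id, x, y, w, h, conf in results:
            # The trailing -1,-1,-1 keeps the MOT Challenge 10-column layout.
            f.write(f"{frame},{track_id},{x:.2f},{y:.2f},{w:.2f},{h:.2f},{conf:.2f},-1,-1,-1\n")

write_results("val/TRACKER_NAME/dancetrack000x.txt",
              [(1, 1, 100.0, 200.0, 50.0, 120.0, 0.9)])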

Then, simply run the evaluation code:

python3 TrackEval/scripts/run_mot_challenge.py --SPLIT_TO_EVAL val  --METRICS HOTA CLEAR Identity  --GT_FOLDER dancetrack/val --SEQMAP_FILE dancetrack/val_seqmap.txt --SKIP_SPLIT_FOL True   --TRACKERS_TO_EVAL '' --TRACKER_SUB_FOLDER ''  --USE_PARALLEL True --NUM_PARALLEL_CORES 8 --PLOT_CURVES False --TRACKERS_FOLDER val/TRACKER_NAME

Tracker | HOTA | DetA | AssA | MOTA | IDF1
ByteTrack | 47.1 | 70.5 | 31.5 | 88.2 | 51.9

Besides, we also provide a visualization script. The usage is as follows:

python3 tools/txt2video_dance.py --img_path dancetrack --split val --tracker TRACKER_NAME

Competition

Organize the results of the test set as follows:

{DanceTrack ROOT}
|-- test
|   |-- tracker
|   |   |-- dancetrack000x.txt
|   |   |-- ...

Each line of dancetrack000x.txt contains:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, -1, -1, -1

Archive the tracker folder to tracker.zip and submit it to CodaLab. Please note: (1) archive the tracker folder, not the txt files; (2) the folder name must be tracker.
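For example, one way to build such an archive from the parent directory of the tracker folder is shutil.make_archive (a sketch under the layout above, not the only option):

import shutil

# Produces tracker.zip whose top-level entry is the tracker folder itself,
# matching the submission requirement above.
shutil.make_archive("tracker", "zip", root_dir=".", base_dir="tracker")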

The returned scores will be:

Tracker | HOTA | DetA | AssA | MOTA | IDF1
tracker | 47.7 | 71.0 | 32.1 | 89.6 | 53.9

For more detailed metrics, and metrics on each video, click on "download output from scoring step" in CodaLab.

Run the visualization code:

python3 tools/txt2video_dance.py --img_path dancetrack --split test --tracker tracker

Joint-Training

We use joint-training with other datasets to predict mask, pose and depth. CenterNet is provided as an example. For details of joint-training, please see the joint-training instruction. We provide the trained models on Hugging Face, Google Drive (deprecated, use Hugging Face instead) or Baidu Drive (code: awew).

For mask demo, run

cd CenterNet/src
python3 demo.py ctseg --demo ../../dancetrack/val/dancetrack000x/img1 --load_model ../models/dancetrack_coco_mask.pth --debug 4 --tracking
cd ../..
python3 tools/img2video.py --img_file CenterNet/exp/ctseg/default/debug --video_name dancetrack000x_mask.avi

For pose demo, run

cd CenterNet/src
python3 demo.py multi_pose --demo ../../dancetrack/val/dancetrack000x/img1 --load_model ../models/dancetrack_coco_pose.pth --debug 4 --tracking
cd ../..
python3 tools/img2video.py --img_file CenterNet/exp/multi_pose/default/debug --video_name dancetrack000x_pose.avi

For depth demo, run

cd CenterNet/src
python3 demo.py ddd --demo ../../dancetrack/val/dancetrack000x/img1 --load_model ../models/dancetrack_kitti_ddd.pth --debug 4 --tracking --test_focal_length 640 --world_size 16 --out_size 128
cd ../..
python3 tools/img2video.py --img_file CenterNet/exp/ddd/default/debug --video_name dancetrack000x_ddd.avi

Agreement

  • The annotations of DanceTrack are licensed under a Creative Commons Attribution 4.0 License.
  • The dataset of DanceTrack is available for non-commercial research purposes only.
  • All videos and images of DanceTrack are obtained from the Internet and are not the property of HKU, CMU or ByteDance. These three organizations are not responsible for the content or the meaning of these videos and images.
  • The code of DanceTrack is released under the MIT License.

Acknowledgement

The evaluation metrics and code are from MOT Challenge and TrackEval. The inference code is from ByteTrack. The joint-training code is modified from CenterTrack and CenterNet, where the instance segmentation code is from CenterNet-CondInst. Thanks for their wonderful and pioneering works!

Citation

If you use DanceTrack in your research or wish to refer to the baseline results published here, please use the following BibTeX entry:

@article{peize2021dance,
  title   = {DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion},
  author  = {Peize Sun and Jinkun Cao and Yi Jiang and Zehuan Yuan and Song Bai and Kris Kitani and Ping Luo},
  journal = {arXiv preprint arXiv:2111.14690},
  year    = {2021}
}
