ZQPei/deep_sort_pytorch: MOT (Multiple Object Tracking) using DeepSORT and YOLOv3 with PyTorch


Update(1-1-2020)

Changes

  • fix bugs
  • refactor code
  • accelerate detection by adding NMS on the GPU (see the sketch below)
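For reference, here is a minimal sketch of GPU NMS using torchvision.ops.nms. It only illustrates the idea; it is not this repo's own NMS module (which is compiled in a later step of the Quick Start).

```python
# Illustrative only: torchvision's NMS runs on the GPU when its inputs are CUDA tensors.
import torch
from torchvision.ops import nms

device = "cuda" if torch.cuda.is_available() else "cpu"
# Boxes in (x1, y1, x2, y2) format and their confidence scores.
boxes = torch.tensor([[0., 0., 100., 100.],
                      [10., 10., 110., 110.],
                      [200., 200., 300., 300.]], device=device)
scores = torch.tensor([0.9, 0.8, 0.7], device=device)

keep = nms(boxes, scores, iou_threshold=0.5)  # indices of boxes that survive suppression
print(keep)  # tensor([0, 2]): the second box overlaps the first too much and is dropped
```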

Update(07-22)

Changes

  • bug fixes (thanks @JieChen91 and @yingsen1 for reporting).
  • extract appearance features for all detections in a frame as a single batch, which gives a small speed-up (see the sketch below).
  • code improvements.
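A rough sketch of the batching idea (hypothetical function, not this repo's feature extractor class): crop every detection in a frame, stack the crops into one batch, and run the Re-ID CNN once per frame instead of once per detection.

```python
# Illustrative batched feature extraction; names and sizes are assumptions, not the repo's API.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

def extract_features(reid_model: nn.Module, frame: torch.Tensor, boxes, size=(128, 64)):
    """frame: (3, H, W) float tensor; boxes: iterable of (x1, y1, x2, y2) pixel coords."""
    crops = [TF.resize(frame[:, y1:y2, x1:x2], list(size)) for x1, y1, x2, y2 in boxes]
    if not crops:
        return torch.empty(0)
    batch = torch.stack(crops)          # (N, 3, 128, 64): all detections of this frame
    with torch.no_grad():
        return reid_model(batch)        # one forward pass yields N appearance vectors
```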

Further improvement directions

  • Train the detector on the target dataset rather than the official one.
  • Retrain the Re-ID model on a pedestrian dataset for better performance.
  • Replace the YOLOv3 detector with a more advanced one.

Update(23-05-2024)

tracking

  • Added a ResNet network as an appearance feature extraction network in the deep folder.

  • Fixed the NMS bug in preprocessing.py and a covariance calculation bug in kalman_filter.py in the sort folder (the covariance propagation in question is sketched below).
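For context, a schematic of the constant-velocity Kalman predict step used by SORT-style trackers, showing where the covariance is propagated. The matrices below are illustrative only and are not copied from this repo's kalman_filter.py.

```python
# Schematic Kalman predict step: state is [x, y, a, h, vx, vy, va, vh] as in Deep SORT.
import numpy as np

def predict(mean, cov, dt=1.0, q=1e-2):
    F = np.eye(8)
    F[:4, 4:] = dt * np.eye(4)      # position components advance by velocity * dt
    Q = q * np.eye(8)               # process noise (constant here; the real filter scales it)
    mean = F @ mean                 # x' = F x
    cov = F @ cov @ F.T + Q         # P' = F P F^T + Q   <- the covariance calculation
    return mean, cov
```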

detecting

  • Added a YOLOv5 detector with an aligned interface, and added the YOLOv5-related YAML configuration files. Code references this repo: YOLOv5-v6.1.

  • The train.py, val.py and detect.py of the original YOLOv5 were deleted; this repo only needs yolov5x.pt.

deepsort

  • Added the tracking target's category, so both the category and the tracking ID can be displayed simultaneously.

Update(28-05-2024)

segmentation

  • Added a Mask R-CNN instance segmentation model. Code references this repo: mask_rcnn. A visual result is saved in demo/demo2.gif.
  • Similar to YOLOv5, train.py, validation.py and predict.py were deleted; this repo only needs maskrcnn_resnet50_fpn_coco.pth.

deepsort

  • Added the tracking target's mask, so the category, tracking ID and target mask can all be displayed simultaneously.

Latest Update(09-06-2024)

feature extraction network

  • Use torch.nn.parallel.DistributedDataParallel to support multi-GPU training (see the sketch below).
  • Added GETTING_STARTED.md for easier use of train.py and train_multiGPU.py.
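A minimal multi-GPU training sketch with DistributedDataParallel, assuming a torchrun launch. It only illustrates the wrapping pattern; it is not this repo's train_multiGPU.py, and the model here is a placeholder.

```python
# Launch with: torchrun --nproc_per_node=NUM_GPUS this_script.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")               # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 751).cuda(local_rank)    # placeholder for the Re-ID net
    model = DDP(model, device_ids=[local_rank])           # wrap for gradient synchronization
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(32, 512).cuda(local_rank)             # dummy batch for illustration
    y = torch.randint(0, 751, (32,)).cuda(local_rank)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()                                        # gradients are all-reduced here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```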

Updated README.md for the previously updated content (#Update(23-05-2024) and #Update(28-05-2024)).

Any contributions to this repository are welcome!

Introduction

This is an implementation of the MOT tracking algorithm Deep SORT. Deep SORT is basically the same as SORT, but adds a CNN model that extracts appearance features from the image patch of each person bounded by a detector. This CNN model is in fact a Re-ID model; the detector used in the PAPER is Faster R-CNN, and the original source code is HERE.
However, in the original code the CNN model is implemented with TensorFlow, which I'm not familiar with, so I re-implemented the CNN feature extraction model with PyTorch and changed the CNN model a little bit. Also, I use YOLOv3 to generate bboxes instead of Faster R-CNN. A conceptual sketch of the per-frame pipeline follows.
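Conceptually, each frame flows through detector → appearance (Re-ID) CNN → tracker. The sketch below uses hypothetical callables to show that flow; it is not the actual interface of deepsort.py.

```python
# Conceptual per-frame loop (hypothetical objects, for illustration only):
# the detector proposes boxes, the Re-ID CNN embeds each box, and the tracker
# associates detections with existing tracks using motion + appearance cues.
def track_video(frames, detector, reid_model, tracker):
    results = []
    for frame_idx, frame in enumerate(frames):
        boxes, scores = detector(frame)                   # e.g. YOLOv3 boxes + confidences
        features = reid_model(frame, boxes)               # one appearance embedding per box
        tracks = tracker.update(boxes, scores, features)  # Kalman predict + Hungarian matching
        for t in tracks:
            results.append((frame_idx, t.track_id, t.box))
    return results
```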

Dependencies

  • python 3 (python 2 not tested)
  • numpy
  • scipy
  • opencv-python
  • sklearn
  • torch >= 1.9
  • torchvision >= 0.13
  • pillow
  • vizer
  • edict
  • matplotlib
  • pycocotools
  • tqdm

Quick Start

  1. Check that all dependencies are installed
pip install -r requirements.txt

For users in China, you can specify a PyPI mirror to speed up installation:

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
  2. Clone this repository
git clone git@github.com:ZQPei/deep_sort_pytorch.git
  3. Download detector parameters
# if you use YOLOv3 as detector in this repo
cd detector/YOLOv3/weight/
wget https://pjreddie.com/media/files/yolov3.weights
wget https://pjreddie.com/media/files/yolov3-tiny.weights
cd ../../../

# if you use YOLOv5 as detector in this repo
cd detector/YOLOv5
wget https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt
# or
wget https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5m.pt
cd ../../

# if you use Mask RCNN as detector in this repo
cd detector/Mask_RCNN/save_weights
wget https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
cd ../../../
  4. Download the deepsort feature extraction network weights
# if you use the original model in PAPER
cd deep_sort/deep/checkpoint
# download ckpt.t7 from https://drive.google.com/drive/folders/1xhG0kRH1EX5B9_Iz8gQJb7UNnn_riXi6 to this folder
cd ../../../

# if you use resnet18 in this repo
cd deep_sort/deep/checkpoint
wget https://download.pytorch.org/models/resnet18-5c106cde.pth
cd ../../../
  5. (Optional) Compile the NMS module if you use YOLOv3 as the detector in this repo
cd detector/YOLOv3/nms
sh build.sh
cd ../../..

Notice: If compilation fails, the simplest fix is to upgrade your PyTorch to >= 1.1 and torchvision to >= 0.3, which lets you avoid the troublesome compilation problems that are most likely caused by either a gcc version that is too low or missing libraries.

  6. (Optional) Prepare third-party submodules

fast-reid

This library supports bagtricks, AGW and other mainstream ReID methods by providing a fast-reid adapter.

Run `git submodule update --init --recursive` to prepare the bundled fast-reid, then follow the instructions in its README to install it.

Please refer to configs/fastreid.yaml for a sample of using fast-reid. See Model Zoo for the available methods and trained models.

MMDetection

This library supports Faster R-CNN and other mainstream detection methods by providing an MMDetection adapter.

Run `git submodule update --init --recursive` to prepare the bundled MMDetection, then follow the instructions in its README to install it.

Please refer to configs/mmdet.yaml for a sample of using MMDetection. See Model Zoo for the available methods and trained models.
  7. Run demo
usage: deepsort.py [-h]
                   [--fastreid]
                   [--config_fastreid CONFIG_FASTREID]
                   [--mmdet]
                   [--config_mmdetection CONFIG_MMDETECTION]
                   [--config_detection CONFIG_DETECTION]
                   [--config_deepsort CONFIG_DEEPSORT] [--display]
                   [--frame_interval FRAME_INTERVAL]
                   [--display_width DISPLAY_WIDTH]
                   [--display_height DISPLAY_HEIGHT] [--save_path SAVE_PATH]
                   [--cpu] [--camera CAM]
                   VIDEO_PATH

# yolov3 + deepsort
python deepsort.py [VIDEO_PATH] --config_detection ./configs/yolov3.yaml

# yolov3_tiny + deepsort
python deepsort.py [VIDEO_PATH] --config_detection ./configs/yolov3_tiny.yaml

# yolov3 + deepsort on webcam
python3 deepsort.py /dev/video0 --camera 0

# yolov3_tiny + deepsort on webcam
python3 deepsort.py /dev/video0 --config_detection ./configs/yolov3_tiny.yaml --camera 0

# yolov5s + deepsort
python deepsort.py [VIDEO_PATH] --config_detection ./configs/yolov5s.yaml

# yolov5m + deepsort
python deepsort.py [VIDEO_PATH] --config_detection ./configs/yolov5m.yaml

# mask_rcnn + deepsort
python deepsort.py [VIDEO_PATH] --config_detection ./configs/mask_rcnn.yaml --segment

# fast-reid + deepsort
python deepsort.py [VIDEO_PATH] --fastreid [--config_fastreid ./configs/fastreid.yaml]

# MMDetection + deepsort
python deepsort.py [VIDEO_PATH] --mmdet [--config_mmdetection ./configs/mmdet.yaml]

Use --display to show the image of each frame.
Results will be saved to ./output/results.avi and ./output/results.txt (see the parsing sketch below).
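Assuming results.txt follows the common MOTChallenge-style layout (one comma-separated line per tracked box: frame, track id, left, top, width, height, ...; check the repo's output utilities if in doubt), a small parsing sketch:

```python
# Parse a MOTChallenge-style results file into per-track trajectories (layout assumed above).
from collections import defaultdict

def load_results(path="output/results.txt"):
    tracks = defaultdict(list)                    # track_id -> [(frame, (x, y, w, h)), ...]
    with open(path) as f:
        for line in f:
            fields = line.strip().split(",")
            frame, track_id = int(float(fields[0])), int(float(fields[1]))
            x, y, w, h = map(float, fields[2:6])
            tracks[track_id].append((frame, (x, y, w, h)))
    return tracks
```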

All files above can also be downloaded from BaiduDisk.
Link: BaiduDisk, password: fbuw

Training the RE-ID model

Check GETTING_STARTED.md to start the training process using a standard benchmark or a customized dataset. A rough idea of what Re-ID training looks like is sketched below.
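A generic sketch of Re-ID training as identity classification, for orientation only; the actual entry points are this repo's train.py and train_multiGPU.py, and the model, dataset and hyperparameters below are placeholders.

```python
import torch
import torch.nn as nn
import torchvision

num_identities = 751                     # e.g. Market-1501 has 751 training identities
model = torchvision.models.resnet18(num_classes=num_identities)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# One illustrative training step on random data; a real run iterates over a
# pedestrian-crop dataset in which each crop is labeled with its person identity.
images = torch.randn(32, 3, 128, 64)     # batch of pedestrian crops
labels = torch.randint(0, num_identities, (32,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference time the classifier head is dropped and the penultimate features
# serve as the appearance embedding used for matching.
```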

Demo videos and images

demo.avi, demo2.avi

1.jpg, 2.jpg


