Multi Person Skeleton Based Action Recognition and Tracking


News

💥 Added a TensorRT conversion script for the reid models.

💥 Added reid models trained on the Mars and Market1501 datasets.

💥 Added trained weights for the siamesenet network and a training script for the reid model. These are used for cosine metric learning in the deepsort pipeline.

💥 Added a debug_track flag to the demo.py script for visualizing tracker bboxes and keypoint bboxes, so you can easily see how the tracking algorithm works.

Pretrained actions (9 classes in total): ['stand', 'walk', 'run', 'jump', 'sit', 'squat', 'kick', 'punch', 'wave']

| Demo | Debug Demo |
| --- | --- |
| Fight scene demo | Fight scene debug demo |
| Street scene demo | Street scene debug demo |
| Street walk demo | Street walk debug demo |

Table of Contents

  • Overview
  • Inference Speed
  • Installation
  • Run Quick Demo
  • Training
  • References
  • TODO

Overview

This is a three-step multi-person action recognition pipeline. It achieves real-time performance, running at 33 FPS for the whole pipeline on a single-person video. The steps are:

  1. pose estimation with trtpose
  2. people tracking with deepsort
  3. action classification with a dnn classifier

Overview of Action Recognition Pipeline

The action classifier and its dataset are taken from this repo.
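
For orientation, here is a minimal per-frame sketch of how the three stages chain together. The stub functions are illustrative placeholders, not the repo's actual API; the real pipeline is wired up in src/demo.py from the yaml config.

```python
import cv2
import numpy as np

def estimate_poses(frame):
    """Stage 1 stub: trtpose yields one keypoint array per detected person."""
    return [np.zeros((18, 2))]  # hypothetical single 18-keypoint skeleton

def update_tracks(frame, skeletons):
    """Stage 2 stub: deepsort assigns a stable ID to each skeleton."""
    return list(enumerate(skeletons))

def classify_action(skeleton_history):
    """Stage 3 stub: the dnn classifier maps a track's recent skeletons to a label."""
    return "stand"

cap = cv2.VideoCapture("../test_data/fun_theory.mp4")
history = {}  # track_id -> skeletons seen so far
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    skeletons = estimate_poses(frame)          # 1. pose estimation
    tracks = update_tracks(frame, skeletons)   # 2. people tracking
    for track_id, skeleton in tracks:
        history.setdefault(track_id, []).append(skeleton)
        action = classify_action(history[track_id])  # 3. action recognition
cap.release()
```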

Inference Speed

Tested PC specification

  • OS: Ubuntu 18.04
  • CPU: Ryzen 5 3600 @3.766GHz
  • GPU: RTX 2060
  • CUDA: 10.2
  • TensorRT: 7.1.3.4

❗ The table below is based on a single-person video; results may vary for multi-person inputs.

| Pipeline Step | Model | Model Input Size (H, W) | PyTorch FPS | TensorRT FPS |
| --- | --- | --- | --- | --- |
| Pose Estimation | densenet121 | (256x256) | 25 fps | 38 fps |
| Pose Estimation + Tracking | densenet121 + deepsort (siamese reid) | (256x256) + (256x128) | 22 fps | 34 fps |
| Pose Estimation + Tracking | densenet121 + deepsort (wideresnet reid) | (256x256) + (256x128) | 22 fps | 31 fps |
| Pose Estimation + Tracking + Action | densenet121 + deepsort (siamese reid) + dnn | (256x256) + (256x128) + (--) | 21 fps | 33 fps |
| Pose Estimation + Tracking + Action | densenet121 + deepsort (wideresnet reid) + dnn | (256x256) + (256x128) + (--) | 21 fps | 30 fps |

Installation

First, you need Python >= 3.6.

Step 1 - Install Dependencies

Check this installation guide for installing the deep learning packages.

The following packages are required for this project; install each of them:

  1. Nvidia-driver 450
  2. CUDA 10.2 and cuDNN 8.0.5
  3. PyTorch 1.7.1 and Torchvision 0.8.2
  4. TensorRT 7.1.3
  5. ONNX 1.9.0

Step 2 - Install torch2trt

git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install --plugins
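
Once installed, torch2trt can be sanity-checked with its standard conversion API. A minimal sketch, using an arbitrary torchvision model at this pipeline's 256x256 pose input size (requires a CUDA GPU):

```python
import torch
from torch2trt import torch2trt
from torchvision.models import densenet121

# Example model in eval mode on the GPU; fp16 conversion is also
# available via torch2trt(..., fp16_mode=True).
model = densenet121(pretrained=True).eval().cuda()
x = torch.randn(1, 3, 256, 256).cuda()  # dummy input

# Build the TensorRT-backed module and compare outputs with pytorch.
model_trt = torch2trt(model, [x])
print(torch.max(torch.abs(model(x) - model_trt(x))))  # should be near zero
```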

Step 3 - Install trt_pose

git clone https://github.com/NVIDIA-AI-IOT/trt_pose
cd trt_pose
sudo python setup.py install
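
To verify the trt_pose install, you can instantiate one of its model constructors. A minimal sketch assuming the COCO human-pose configuration (18 keypoints, 21 links, so 18 confidence-map and 42 part-affinity-field channels); the constructor name follows trt_pose.models:

```python
import torch
import trt_pose.models

# 18 keypoint channels, 2 * 21 = 42 part-affinity-field channels.
model = trt_pose.models.densenet121_baseline_att(18, 42).eval()
cmap, paf = model(torch.zeros(1, 3, 256, 256))
print(cmap.shape, paf.shape)  # confidence maps and PAFs
```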

Other required Python packages are listed in requirements.txt.

Run the command below to install them:

pip install -r requirements.txt

Run Quick Demo

Step 1 - Download the Pretrained Models

The pretrained action classifier models are already included in the repo under weights/classifier/dnn.

  • Download the pretrained weight files to run the demo.

| Model Type | Name | Trained Dataset | Weight |
| --- | --- | --- | --- |
| Pose Estimation | trtpose | COCO | densenet121 |
| Tracking | deepsort reid | Market1501 | wide_resnet |
| Tracking | deepsort reid | Market1501 | siamese_net |
| Tracking | deepsort reid | Mars | wide_resnet |
| Tracking | deepsort reid | Mars | siamese_net |

  • Then put them into these folders:
    • deepsort weights in weights/tracker/deepsort/
    • trt_pose weights in weights/pose_estimation/trtpose.

Step 2 - TensorRT Conversion (Optional)

If you don't have TensorRT installed on your system, skip this step; just set the PyTorch model weights at the corresponding model path in the config file.

Convert the trtpose model:

# check the I/O weight file in configs/trtpose.yaml
cd export_models
python convert_trtpose.py --config ../configs/infer_trtpose_deepsort_dnn.yaml

‼️ The original densenet121_trtpose model is trained with a 256 input size. If you want to convert a TensorRT model with a bigger input size (like 512), change the size parameter in the configs/infer_trtpose_deepsort_dnn.yaml file.

Convert the deepsort reid model (pytorch >> onnx >> tensorRT):

cd export_models
# 1. torch to onnx
python convert_reid2onnx.py \
    --model_path <your reid model path> \
    --reid_name <siamesenet/wideresnet> \
    --dataset_name <market1501/mars> \
    --check
# 2. onnx to tensorRT
python convert_reid2trt.py \
    --onnx_path <your onnx model path> \
    --mode fp16 \
    --max_batch 100
# 3. check your tensorrt-converted model against the pytorch model
python test_trt_inference.py \
    --trt_model_path <your tensorrt model path> \
    --torch_model_path <your pytorch model path> \
    --reid_name <siamesenet/wideresnet> \
    --dataset_name <market1501/mars>
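
For intuition, the torch-to-onnx step boils down to a standard torch.onnx.export call with the (256, 128) reid crop size and a dynamic batch axis, so the tracker can embed a variable number of detections per frame. A minimal sketch with a stand-in backbone (the actual logic lives in convert_reid2onnx.py):

```python
import torch
import torchvision

# Stand-in reid backbone; the repo's siamesenet/wideresnet weights
# would be loaded here instead.
model = torchvision.models.resnet18(num_classes=128).eval()

# Deepsort reid crops are (H, W) = (256, 128).
dummy = torch.randn(1, 3, 256, 128)
torch.onnx.export(
    model, dummy, "reid.onnx",
    input_names=["images"],
    output_names=["features"],
    dynamic_axes={"images": {0: "batch"}, "features": {0: "batch"}},
    opset_version=11,
)
```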

Step 3 - Run Demo.py

Argument list for demo.py:

  • task [pose, track, action]: inference mode for testing Pose Estimation, Tracking, or Action Recognition.
  • config: inference config file path. (default=../configs/inference_config.yaml)
  • source: video file path for predicting actions or tracking. If not provided, the webcam is used as the default source.
  • save_folder: folder path for saving the result video. The output filename is composed as "{source video name/webcam}{pose network name}{deepsort}{reid network name}{action classifier name}.avi". If not provided, the result video is not saved.
  • draw_kp_numbers: flag to draw each person's keypoint numbers for visualization.
  • debug_track: flag to debug tracking by visualizing the tracker's bbox state and the currently detected bboxes from the tracker's inner process.

‼️ Before running demo.py, you need to change some parameters in the configs/infer_trtpose_deepsort_dnn.yaml file.

Examples:

Then run action recognition:

cd src
# for video, use the --source flag with your video path
python demo.py --task action --source ../test_data/fun_theory.mp4 --save_folder ../output --debug_track
# for webcam, no need to provide the --source flag
python demo.py --task action --save_folder ../output --debug_track

Run pose tracking:

# for video, use the --source flag with your video path
python demo.py --task track --source ../test_data/fun_theory.mp4 --save_folder ../output
# for webcam, no need to provide the --source flag
python demo.py --task track --save_folder ../output

Run pose estimation only:

# for video, use the --source flag with your video path
python demo.py --task pose --source ../test_data/fun_theory.mp4 --save_folder ../output
# for webcam, no need to provide the --source flag
python demo.py --task pose --save_folder ../output

Training

Train Action Classifier Model

cd src && bash ./train_trtpose_dnn_action.sh
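
The classifier trained here is a small dense network over skeleton keypoints. As a rough illustration of the idea (the layer sizes and input encoding are illustrative guesses, not the repo's exact architecture; the 9 class names match the pretrained list above):

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 18  # COCO-style skeleton
NUM_CLASSES = 9     # ['stand', 'walk', 'run', 'jump', 'sit', 'squat', 'kick', 'punch', 'wave']

class ActionDNN(nn.Module):
    """Toy MLP mapping one flattened skeleton to action logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_KEYPOINTS * 2, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, NUM_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

model = ActionDNN()
skeleton = torch.randn(1, NUM_KEYPOINTS * 2)  # normalized (x, y) per keypoint
print(model(skeleton).argmax(dim=1))          # predicted action index
```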

Train reID Model for DeepSort Tracking

To train a different reid network for the cosine metric learning used in deepsort:

  • Download the reid dataset Mars.
  • Prepare the mars dataset with the command below. This splits train/val from the mars bbox-train folder and calculates the mean and std over the train set; use this mean and std for dataset normalization (a sketch of this computation follows after this list).

cd src && python prepare_mars.py --root <your dataset root> --train_percent 0.8 --bs 256

  • Modify the tune_params to run multiple hyperparameter-search trials as needed.

  • Then run the command below to train the reid network.

cd src && python train_reid.py --config ../configs/train_reid.yaml
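
For reference, the per-channel mean and std that prepare_mars.py computes over the train split can be reproduced roughly like this (a sketch; the dataset layout and crop size are assumptions):

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical train split produced by prepare_mars.py, laid out as an
# ImageFolder (one subfolder per person ID); crops resized to the
# tracker's (256, 128) reid input.
train_set = datasets.ImageFolder(
    "mars/train",
    transform=transforms.Compose([transforms.Resize((256, 128)),
                                  transforms.ToTensor()]),
)
loader = DataLoader(train_set, batch_size=256, num_workers=4)

# Accumulate per-channel pixel statistics over the whole train set.
n_pixels = 0
channel_sum = torch.zeros(3)
channel_sq = torch.zeros(3)
for images, _ in loader:
    n_pixels += images.numel() / 3
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq += (images ** 2).sum(dim=[0, 2, 3])

mean = channel_sum / n_pixels
std = (channel_sq / n_pixels - mean ** 2).sqrt()
print(mean, std)  # use these for dataset normalization
```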

References

TODO

  • Add different reid networks for DeepSort
  • Add TensorRT conversion for the reid model
  • Add more pose estimation models
  • Add more tracking methods
  • Add more action recognition models
