Dite-HRNet

[IJCAI 2022] Code for the paper "Dite-HRNet: Dynamic Lightweight High-Resolution Network for Human Pose Estimation"


Introduction

This is an official PyTorch implementation of our IJCAI-ECAI 2022 paper Dite-HRNet: Dynamic Lightweight High-Resolution Network for Human Pose Estimation. We present a Dynamic lightweight High-Resolution Network (Dite-HRNet), which can efficiently extract multi-scale contextual information and model long-range spatial dependency for human pose estimation. Specifically, we propose two methods, dynamic split convolution and adaptive context modeling, and embed them into two novel lightweight blocks, which are named Dynamic Multi-scale Context (DMC) block and Dynamic Global Context (DGC) block. These two blocks, as the basic component units of our Dite-HRNet, are specially designed for the high-resolution networks to make full use of the parallel multi-resolution architecture. Experimental results show that the proposed network achieves superior performance on both COCO and MPII human pose estimation datasets, surpassing the state-of-the-art lightweight networks.

  • The Dite-HRNet architecture (see the architecture figure in the repository)

  • The DMC block and DGC block (see the block diagram in the repository)

Models and results

Results on COCO val2017 with a detector having human AP of 56.4 on the COCO val2017 dataset

| Model | Input Size | #Params | FLOPs | AP | AP50 | AP75 | APM | APL | AR | AR50 | AR75 | ARM | ARL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Dite-HRNet-18 | 256x192 | 1.1M | 209.8M | 0.659 | 0.873 | 0.740 | 0.632 | 0.716 | 0.721 | 0.916 | 0.793 | 0.683 | 0.776 |
| Dite-HRNet-30 | 256x192 | 1.8M | 329.8M | 0.683 | 0.882 | 0.762 | 0.655 | 0.741 | 0.742 | 0.926 | 0.812 | 0.704 | 0.797 |
| Dite-HRNet-18 | 384x288 | 1.1M | 471.8M | 0.690 | 0.880 | 0.760 | 0.655 | 0.755 | 0.750 | 0.922 | 0.814 | 0.708 | 0.810 |
| Dite-HRNet-30 | 384x288 | 1.8M | 741.7M | 0.715 | 0.889 | 0.782 | 0.682 | 0.777 | 0.772 | 0.928 | 0.833 | 0.732 | 0.831 |

Results on the MPII val dataset

| Model | Input Size | #Params | FLOPs | PCKh@0.5 | PCKh@0.1 |
| --- | --- | --- | --- | --- | --- |
| Dite-HRNet-18 | 256x256 | 1.1M | 279.5M | 0.870 | 0.311 |
| Dite-HRNet-30 | 256x256 | 1.8M | 439.5M | 0.876 | 0.317 |

Environment

The code is developed and tested using Python 3.6 and 8 GeForce RTX 3090 GPUs. Other Python versions or GPUs are not fully tested.

Requirements

  • Linux (Windows is not officially supported)
  • Python 3.6+
  • PyTorch 1.3+
  • CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
  • GCC 5+
  • mmcv (Please install the latest version of mmcv-full)
  • Numpy
  • cv2
  • json_tricks
  • xtcocotools

Quick Start

1. Installation

a. Install mmcv. We recommend installing the pre-built mmcv-full as shown below.

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html

Please replace {cu_version} and {torch_version} in the URL with your desired versions. For example, to install the latest mmcv-full with CUDA 11.1 and PyTorch 1.10.0, use the following command:

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html

See the MMCV installation documentation for MMCV versions compatible with different PyTorch and CUDA versions.

Optionally, you can compile mmcv from source with the following commands:

git clone https://github.com/open-mmlab/mmcv.git
cd mmcv
MMCV_WITH_OPS=1 pip install -e .  # package mmcv-full, which contains cuda ops, will be installed after this step
# OR pip install -e .  # package mmcv, which contains no cuda ops, will be installed after this step
cd ..

Important: You need to run pip uninstall mmcv first if you have mmcv installed. If mmcv and mmcv-full are both installed, there will be a ModuleNotFoundError.
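For example, a minimal sequence for switching from a plain mmcv installation to mmcv-full (this reuses the CUDA 11.1 / PyTorch 1.10.0 URL from above; adjust it to your own versions):

# remove any existing plain mmcv build first
pip uninstall -y mmcv
# then install the pre-built mmcv-full matching your CUDA and PyTorch versions
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html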

b. Install build requirements

pip install -r requirements.txt

2. Prepare datasets

It is recommended to symlink the dataset root to $DITE_HRNET/data. If your folder structure is different, you may need to change the corresponding paths in config files.
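For example, the symlink could be created as follows (the /path/to/datasets location is only a placeholder for wherever your datasets actually live):

# link an external dataset directory into the repository as $DITE_HRNET/data
cd $DITE_HRNET
ln -s /path/to/datasets data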

For the COCO dataset, please download from COCO download; 2017 Train/Val is needed for COCO keypoints training and validation. HRNet-Human-Pose-Estimation provides person detection results on COCO val2017 to reproduce our multi-person pose estimation results. Please download them from OneDrive or GoogleDrive. Optionally, to evaluate on COCO test-dev2017, please download the image-info. Download and extract them under $DITE_HRNET/data, and make them look like this:

dite_hrnet
├── configs
├── models
├── tools
`── data
    │── coco
        │-- annotations
        │   │-- person_keypoints_train2017.json
        │   |-- person_keypoints_val2017.json
        │   |-- person_keypoints_test-dev-2017.json
        |-- person_detection_results
        |   |-- COCO_val2017_detections_AP_H_56_person.json
        |   |-- COCO_test-dev2017_detections_AP_H_609_person.json
        │-- train2017
        │   │-- 000000000009.jpg
        │   │-- 000000000025.jpg
        │   │-- 000000000030.jpg
        │   │-- ...
        `-- val2017
            │-- 000000000139.jpg
            │-- 000000000285.jpg
            │-- 000000000632.jpg
            │-- ...

For the MPII dataset, please download from MPII Human Pose Dataset. The original annotation files have been converted into json format. Please download them from mpii_annotations. Extract them under $DITE_HRNET/data, and make them look like this:

dite_hrnet
├── configs
├── models
├── tools
`── data
    │── mpii
        |── annotations
        |   |── mpii_gt_val.mat
        |   |── mpii_test.json
        |   |── mpii_train.json
        |   |── mpii_trainval.json
        |   `── mpii_val.json
        `── images
            |── 000001163.jpg
            |── 000003072.jpg
            │-- ...

Training and Testing

1. Training

All outputs (log files and checkpoints) will be saved to the working directory, which is specified by work_dir in the config file.

By default, we evaluate the model on the validation set after each epoch. You can change the evaluation interval by modifying the interval argument in the training config:

evaluation = dict(interval=5)  # This evaluates the model every 5 epochs.

According to the Linear Scaling Rule, you need to set the learning rate proportional to the batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/gpu and lr=0.08 for 16 GPUs x 4 videos/gpu.
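As a sketch, the learning rate could be overridden from the command line with the --cfg-options flag described below, assuming the config exposes it under the usual optimizer.lr key (an assumption about this codebase's config layout); alternatively, --autoscale-lr applies the scaling automatically:

# hypothetical example: train on 4 GPUs x 2 videos/gpu with the learning rate scaled to 0.01
./tools/dist_train.sh configs/top_down/dite_hrnet/coco/ditehrnet_18_coco_256x192.py 4 \
    --cfg-options optimizer.lr=0.01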

# train with a single GPU
python tools/train.py ${CONFIG_FILE} [optional arguments]

# train with multiple GPUs
./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]

Optional arguments are:

  • --work-dir ${WORK_DIR}: Override the working directory specified in the config file.
  • --resume-from ${CHECKPOINT_FILE}: Resume from a previous checkpoint file.
  • --no-validate: Do not evaluate the checkpoint during training.
  • --gpus ${GPU_NUM}: Number of gpus to use, which is only applicable to non-distributed training.
  • --gpu-ids ${GPU_IDS}: IDs of gpus to use, which is only applicable to non-distributed training.
  • --seed ${SEED}: Seed id for random state in python, numpy and pytorch to generate random numbers.
  • --deterministic: If specified, it will set deterministic options for CUDNN backend.
  • --cfg-options CFG_OPTIONS: Override some settings in the used config; key-value pairs in xxx=yyy format will be merged into the config file. For example, '--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'.
  • --launcher ${JOB_LAUNCHER}: Items for distributed job initialization launcher. Allowed choices are none, pytorch, slurm, mpi. Especially, if set to none, it will run in a non-distributed mode.
  • --autoscale-lr: If specified, it will automatically scale lr with the number of gpus by the Linear Scaling Rule.
  • LOCAL_RANK: ID for local rank. If not specified, it will be set to 0.

Difference between resume-from and load-from: resume-from loads both the model weights and the optimizer status, and the epoch is also inherited from the specified checkpoint. It is usually used for resuming a training process that was interrupted accidentally. load-from only loads the model weights, and the training epoch starts from 0. It is usually used for finetuning.
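For instance (the checkpoint paths below are placeholders), resuming an interrupted run uses the --resume-from flag listed above, while finetuning would instead point load-from at pretrained weights, e.g. via a load_from entry in the config (an assumption about how this option is exposed here):

# resume an interrupted run: weights, optimizer state and epoch counter are all restored
./tools/dist_train.sh configs/top_down/dite_hrnet/coco/ditehrnet_18_coco_256x192.py 8 \
    --resume-from work_dirs/ditehrnet_18_coco_256x192/latest.pth

# finetune from existing weights only (training restarts from epoch 0), with the config containing e.g.:
# load_from = 'checkpoints/SOME_CHECKPOINT.pth'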

Examples:

# train DiteHRNet-18 on COCO dataset with 8 GPUs
./tools/dist_train.sh configs/top_down/dite_hrnet/coco/ditehrnet_18_coco_256x192.py 8

# train DiteHRNet-18 on MPII dataset with 8 GPUs
./tools/dist_train.sh configs/top_down/dite_hrnet/mpii/ditehrnet_18_mpii_256x256.py 8

2. Testing

You can use the following commands to test a dataset.

# single-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--fuse-conv-bn] \
    [--eval ${EVAL_METRICS}] [--gpu_collect] [--tmpdir ${TMPDIR}] [--cfg-options ${CFG_OPTIONS}] \
    [--launcher ${JOB_LAUNCHER}] [--local_rank ${LOCAL_RANK}]

# multi-gpu testing
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--fuse-conv-bn] \
    [--eval ${EVAL_METRICS}] [--gpu_collect] [--tmpdir ${TMPDIR}] [--cfg-options ${CFG_OPTIONS}] \
    [--launcher ${JOB_LAUNCHER}] [--local_rank ${LOCAL_RANK}]

Note that the provided CHECKPOINT_FILE is either the path to a model checkpoint file downloaded in advance, or the URL of the model checkpoint.
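For example, a single-GPU test run against a checkpoint referenced by URL could look like this (the URL is a placeholder, not a real release link):

# single-gpu test using a checkpoint fetched from a URL (placeholder URL)
python tools/test.py configs/top_down/dite_hrnet/coco/ditehrnet_18_coco_256x192.py \
    https://example.com/checkpoints/ditehrnet_18_coco_256x192.pth \
    --eval mAP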

Optional arguments:

  • RESULT_FILE: Filename of the output results. If not specified, the results will not be saved to a file.
  • --fuse-conv-bn: Fuse conv and bn layers; this slightly increases the inference speed.
  • EVAL_METRICS: Items to be evaluated on the results. Allowed values depend on the dataset.
  • --gpu_collect: If specified, results will be collected using gpu communication. Otherwise, it will save the results on different gpus to TMPDIR and collect them by the rank 0 worker.
  • TMPDIR: Temporary directory used for collecting results from multiple workers, available when --gpu_collect is not specified.
  • CFG_OPTIONS: Override some settings in the used config; key-value pairs in xxx=yyy format will be merged into the config file. For example, '--cfg-options model.backbone.depth=18 model.backbone.with_cp=True'.
  • JOB_LAUNCHER: Items for distributed job initialization launcher. Allowed choices are none, pytorch, slurm, mpi. Especially, if set to none, it will test in a non-distributed mode.
  • LOCAL_RANK: ID for local rank. If not specified, it will be set to 0.

Example:

# test DiteHRNet-18 on the COCO dataset (without saving the test results) with 8 GPUs, and evaluate the mAP
./tools/dist_test.sh configs/top_down/dite_hrnet/coco/ditehrnet_18_coco_256x192.py \
    checkpoints/SOME_CHECKPOINT.pth 8 \
    --eval mAP

3. Computing model complexity

You can use the following commands to compute the complexity of one model.

python tools/summary_network.py ${CONFIG_FILE} --shape ${SHAPE}

  • SHAPE: Input size.

Example:

# compute the complexity of DiteHRNet-18 with 256x192 resolution input
python tools/summary_network.py configs/top_down/dite_hrnet/coco/ditehrnet_18_coco_256x192.py \
    --shape 256 192

Citation

If you find this project useful in your research, please consider citing:

@inproceedings{LiZXZB22,
  title     = {{Dite-HRNet}: Dynamic Lightweight High-Resolution Network for Human Pose Estimation},
  author    = {Qun Li and Ziyi Zhang and Fu Xiao and Feng Zhang and Bir Bhanu},
  booktitle = {International Joint Conference on Artificial Intelligence (IJCAI)},
  pages     = {1095-1101},
  year      = {2022}
}

Acknowledgement

Thanks to:
