leoxiaobin/deep-high-resolution-net.pytorch

The project is an official implementation of our CVPR 2019 paper "Deep High-Resolution Representation Learning for Human Pose Estimation".


Introduction

This is an official PyTorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation. In this work, we are interested in the human pose estimation problem with a focus on learning reliable high-resolution representations. Most existing methods recover high-resolution representations from low-resolution representations produced by a high-to-low resolution network. Instead, our proposed network maintains high-resolution representations through the whole process. We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the multi-resolution subnetworks in parallel. We conduct repeated multi-scale fusions such that each of the high-to-low resolution representations receives information from the other parallel representations over and over, leading to rich high-resolution representations. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. We empirically demonstrate the effectiveness of our network through superior pose estimation results on two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset.

Figure: Illustrating the architecture of the proposed HRNet.
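
Conceptually, one fusion step exchanges information between parallel branches: lower-resolution features are projected with a 1x1 convolution and upsampled into the high-resolution branch, while high-resolution features are downsampled with a strided convolution into the low-resolution branch. The following is a minimal two-branch sketch for illustration only; the repository's actual module (see lib/models/pose_hrnet.py) generalizes this to several branches with repeated exchanges:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoBranchFusion(nn.Module):
        """Illustrative two-branch exchange unit: each branch receives
        information from the other, so the high-resolution stream is
        kept alive through the whole network."""
        def __init__(self, c_high=32, c_low=64):
            super().__init__()
            # low -> high: 1x1 conv to match channels, then bilinear upsampling
            self.low_to_high = nn.Conv2d(c_low, c_high, 1, bias=False)
            # high -> low: strided 3x3 conv halves the spatial resolution
            self.high_to_low = nn.Conv2d(c_high, c_low, 3, stride=2,
                                         padding=1, bias=False)

        def forward(self, x_high, x_low):
            up = F.interpolate(self.low_to_high(x_low), size=x_high.shape[-2:],
                               mode='bilinear', align_corners=False)
            down = self.high_to_low(x_high)
            return F.relu(x_high + up), F.relu(x_low + down)

    # e.g. 1/4- and 1/8-resolution feature maps for a 256x192 input
    h, l = torch.randn(1, 32, 64, 48), torch.randn(1, 64, 32, 24)
    h2, l2 = TwoBranchFusion()(h, l)  # per-branch shapes are unchanged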

Main Results

Results on MPII val

| Arch | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean | Mean@0.1 |
|------|------|----------|-------|-------|-----|------|-------|------|----------|
| pose_resnet_50 | 96.4 | 95.3 | 89.0 | 83.2 | 88.4 | 84.0 | 79.6 | 88.5 | 34.0 |
| pose_resnet_101 | 96.9 | 95.9 | 89.5 | 84.4 | 88.4 | 84.5 | 80.7 | 89.1 | 34.0 |
| pose_resnet_152 | 97.0 | 95.9 | 90.0 | 85.0 | 89.2 | 85.3 | 81.3 | 89.6 | 35.0 |
| pose_hrnet_w32 | 97.1 | 95.9 | 90.3 | 86.4 | 89.1 | 87.1 | 83.3 | 90.3 | 37.7 |

Note: Flip test is used. Input size is 256x256.
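
Mean is the standard PCKh@0.5 score, and Mean@0.1 is the much stricter PCKh@0.1: a predicted joint counts as correct only within 0.1 of the head segment length rather than 0.5. A hedged numpy sketch of the metric (names are illustrative, not taken from the repository's evaluation code):

    import numpy as np

    def pckh(pred, gt, head_size, alpha=0.5):
        """pred, gt: (N, K, 2) joint coordinates; head_size: (N,) head
        segment lengths. A joint is correct when its distance to ground
        truth is below alpha * head size; alpha=0.5 gives the Mean column,
        alpha=0.1 the Mean@0.1 column. (Visibility masking is omitted.)"""
        dist = np.linalg.norm(pred - gt, axis=2)       # (N, K)
        correct = dist <= alpha * head_size[:, None]   # (N, K)
        return 100.0 * correct.mean()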

Results on COCO val2017, using a detector with human AP of 56.4 on the COCO val2017 dataset

| Arch | Input size | #Params | GFLOPs | AP | AP .5 | AP .75 | AP (M) | AP (L) | AR | AR .5 | AR .75 | AR (M) | AR (L) |
|------|------------|---------|--------|----|-------|--------|--------|--------|----|-------|--------|--------|--------|
| pose_resnet_50 | 256x192 | 34.0M | 8.9 | 0.704 | 0.886 | 0.783 | 0.671 | 0.772 | 0.763 | 0.929 | 0.834 | 0.721 | 0.824 |
| pose_resnet_50 | 384x288 | 34.0M | 20.0 | 0.722 | 0.893 | 0.789 | 0.681 | 0.797 | 0.776 | 0.932 | 0.838 | 0.728 | 0.846 |
| pose_resnet_101 | 256x192 | 53.0M | 12.4 | 0.714 | 0.893 | 0.793 | 0.681 | 0.781 | 0.771 | 0.934 | 0.840 | 0.730 | 0.832 |
| pose_resnet_101 | 384x288 | 53.0M | 27.9 | 0.736 | 0.896 | 0.803 | 0.699 | 0.811 | 0.791 | 0.936 | 0.851 | 0.745 | 0.858 |
| pose_resnet_152 | 256x192 | 68.6M | 15.7 | 0.720 | 0.893 | 0.798 | 0.687 | 0.789 | 0.778 | 0.934 | 0.846 | 0.736 | 0.839 |
| pose_resnet_152 | 384x288 | 68.6M | 35.3 | 0.743 | 0.896 | 0.811 | 0.705 | 0.816 | 0.797 | 0.937 | 0.858 | 0.751 | 0.863 |
| pose_hrnet_w32 | 256x192 | 28.5M | 7.1 | 0.744 | 0.905 | 0.819 | 0.708 | 0.810 | 0.798 | 0.942 | 0.865 | 0.757 | 0.858 |
| pose_hrnet_w32 | 384x288 | 28.5M | 16.0 | 0.758 | 0.906 | 0.825 | 0.720 | 0.827 | 0.809 | 0.943 | 0.869 | 0.767 | 0.871 |
| pose_hrnet_w48 | 256x192 | 63.6M | 14.6 | 0.751 | 0.906 | 0.822 | 0.715 | 0.818 | 0.804 | 0.943 | 0.867 | 0.762 | 0.864 |
| pose_hrnet_w48 | 384x288 | 63.6M | 32.9 | 0.763 | 0.908 | 0.829 | 0.723 | 0.834 | 0.812 | 0.942 | 0.871 | 0.767 | 0.876 |

Note: Flip test is used. The person detector has human AP of 56.4 on the COCO val2017 dataset. GFLOPs is for convolution and linear layers only.

Results on COCO test-dev2017, using a detector with human AP of 60.9 on the COCO test-dev2017 dataset

| Arch | Input size | #Params | GFLOPs | AP | AP .5 | AP .75 | AP (M) | AP (L) | AR | AR .5 | AR .75 | AR (M) | AR (L) |
|------|------------|---------|--------|----|-------|--------|--------|--------|----|-------|--------|--------|--------|
| pose_resnet_152 | 384x288 | 68.6M | 35.3 | 0.737 | 0.919 | 0.828 | 0.713 | 0.800 | 0.790 | 0.952 | 0.856 | 0.748 | 0.849 |
| pose_hrnet_w48 | 384x288 | 63.6M | 32.9 | 0.755 | 0.925 | 0.833 | 0.719 | 0.815 | 0.805 | 0.957 | 0.874 | 0.763 | 0.863 |
| pose_hrnet_w48\* | 384x288 | 63.6M | 32.9 | 0.770 | 0.927 | 0.845 | 0.734 | 0.831 | 0.820 | 0.960 | 0.886 | 0.778 | 0.877 |

Note: Flip test is used. The person detector has human AP of 60.9 on the COCO test-dev2017 dataset. GFLOPs is for convolution and linear layers only. pose_hrnet_w48\* means additional training data from AI Challenger is used.
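
The AP/AR columns above are the standard OKS-based COCO keypoint metrics computed by the COCO API from a results JSON such as the one tools/test.py writes. A sketch of recomputing them (file paths are illustrative):

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    coco_gt = COCO('data/coco/annotations/person_keypoints_val2017.json')
    coco_dt = coco_gt.loadRes('output/coco/w32_256x192_adam_lr1e-3/results/'
                              'keypoints_val2017_results_0.json')

    evaluator = COCOeval(coco_gt, coco_dt, iouType='keypoints')
    evaluator.evaluate()
    evaluator.accumulate()
    evaluator.summarize()  # prints AP, AP .5, ..., AR (L) as in the tables above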

Environment

The code is developed using Python 3.6 on Ubuntu 16.04. NVIDIA GPUs are needed. The code is developed and tested using 4 NVIDIA P100 GPU cards. Other platforms or GPU cards have not been fully tested.
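
A quick sanity check of the Python/PyTorch/GPU setup before proceeding (a minimal snippet; exact versions will differ on your machine):

    import sys
    import torch

    print(sys.version)                # the authors used Python 3.6
    print(torch.__version__)          # >= 1.0.0 recommended (see Installation)
    print(torch.cuda.is_available())  # NVIDIA GPUs are required
    print(torch.cuda.device_count())  # the paper's models trained on 4 P100s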

Quick start

Installation

  1. Install PyTorch >= v1.0.0 following the official instructions. Note that if you use a PyTorch version < v1.0.0, you should follow the instructions at https://github.com/Microsoft/human-pose-estimation.pytorch to disable cudnn's implementation of the BatchNorm layer. We encourage you to use a newer PyTorch version (>= v1.0.0).

  2. Clone this repo; we'll call the directory that you cloned ${POSE_ROOT}.

  3. Install dependencies:

    pip install -r requirements.txt
  4. Make libs:

    cd ${POSE_ROOT}/lib
    make
  5. Install COCOAPI:

    # COCOAPI=/path/to/clone/cocoapi
    git clone https://github.com/cocodataset/cocoapi.git $COCOAPI
    cd $COCOAPI/PythonAPI
    # Install into global site-packages
    make install
    # Alternatively, if you do not have permissions or prefer
    # not to install the COCO API into global site-packages
    python3 setup.py install --user

    Note that instructions like # COCOAPI=/path/to/clone/cocoapi indicate that you should pick a path where you'd like to have the software cloned and then set an environment variable (COCOAPI in this case) accordingly.

  6. Initialize the output (training model output) and log (TensorBoard log) directories:

    mkdir output
    mkdir log

    Your directory tree should look like this:

    ${POSE_ROOT}
    ├── data
    ├── experiments
    ├── lib
    ├── log
    ├── models
    ├── output
    ├── tools
    ├── README.md
    └── requirements.txt
  7. Download pretrained models from our model zoo (GoogleDrive or OneDrive):

    ${POSE_ROOT}
    `-- models
        `-- pytorch
            |-- imagenet
            |   |-- hrnet_w32-36af842e.pth
            |   |-- hrnet_w48-8ef0771d.pth
            |   |-- resnet50-19c8e357.pth
            |   |-- resnet101-5d3b4d8f.pth
            |   `-- resnet152-b121ed2d.pth
            |-- pose_coco
            |   |-- pose_hrnet_w32_256x192.pth
            |   |-- pose_hrnet_w32_384x288.pth
            |   |-- pose_hrnet_w48_256x192.pth
            |   |-- pose_hrnet_w48_384x288.pth
            |   |-- pose_resnet_101_256x192.pth
            |   |-- pose_resnet_101_384x288.pth
            |   |-- pose_resnet_152_256x192.pth
            |   |-- pose_resnet_152_384x288.pth
            |   |-- pose_resnet_50_256x192.pth
            |   `-- pose_resnet_50_384x288.pth
            `-- pose_mpii
                |-- pose_hrnet_w32_256x256.pth
                |-- pose_hrnet_w48_256x256.pth
                |-- pose_resnet_101_256x256.pth
                |-- pose_resnet_152_256x256.pth
                `-- pose_resnet_50_256x256.pth
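
    To confirm a download is intact, a checkpoint can be opened directly; assuming these model-zoo files are plain state dicts, torch.load is enough (the path follows the tree above):

    import torch

    # map_location avoids needing a GPU just to inspect the file
    state = torch.load('models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth',
                       map_location='cpu')
    print(type(state), len(state))
    for name, tensor in list(state.items())[:3]:
        print(name, tuple(tensor.shape))  # first few parameter tensors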

Data preparation

For MPII data, please download from the MPII Human Pose Dataset. The original annotation files are in MATLAB format; we have converted them into JSON format, which you also need to download from OneDrive or GoogleDrive. Extract them under ${POSE_ROOT}/data, and make them look like this:

${POSE_ROOT}
|-- data
`-- |-- mpii
    `-- |-- annot
        |   |-- gt_valid.mat
        |   |-- test.json
        |   |-- train.json
        |   |-- trainval.json
        |   `-- valid.json
        `-- images
            |-- 000001163.jpg
            |-- 000003072.jpg
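
A quick way to confirm the converted MPII annotations are readable (the printed field names are whatever the downloaded JSON actually contains):

    import json

    with open('data/mpii/annot/valid.json') as f:
        annots = json.load(f)
    print(len(annots), 'annotation records')
    print(sorted(annots[0].keys()))  # field names as present in the download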

For COCO data, please download from COCO download; 2017 Train/Val is needed for COCO keypoints training and validation. We also provide person detection results for COCO val2017 and test-dev2017 to reproduce our multi-person pose estimation results; please download them from OneDrive or GoogleDrive. Download and extract them under ${POSE_ROOT}/data, and make them look like this:

${POSE_ROOT}
|-- data
`-- |-- coco
    `-- |-- annotations
        |   |-- person_keypoints_train2017.json
        |   `-- person_keypoints_val2017.json
        |-- person_detection_results
        |   |-- COCO_val2017_detections_AP_H_56_person.json
        |   |-- COCO_test-dev2017_detections_AP_H_609_person.json
        `-- images
            |-- train2017
            |   |-- 000000000009.jpg
            |   |-- 000000000025.jpg
            |   |-- 000000000030.jpg
            |   |-- ...
            `-- val2017
                |-- 000000000139.jpg
                |-- 000000000285.jpg
                |-- 000000000632.jpg
                |-- ...
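
Once extracted, the COCO API can verify that the keypoint annotations are in place (paths follow the tree above):

    from pycocotools.coco import COCO

    coco = COCO('data/coco/annotations/person_keypoints_val2017.json')
    person_cat = coco.getCatIds(catNms=['person'])
    print(len(coco.getImgIds()), 'images')
    print(len(coco.getAnnIds(catIds=person_cat)), 'person annotations')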

Training and Testing

Testing on MPII dataset using model zoo's models (GoogleDrive or OneDrive):

python tools/test.py \
    --cfg experiments/mpii/hrnet/w32_256x256_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pytorch/pose_mpii/pose_hrnet_w32_256x256.pth

Training on MPII dataset

python tools/train.py \
    --cfg experiments/mpii/hrnet/w32_256x256_adam_lr1e-3.yaml

Testing on COCO val2017 dataset using model zoo's models (GoogleDrive or OneDrive):

python tools/test.py \
    --cfg experiments/coco/hrnet/w32_256x192_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth \
    TEST.USE_GT_BBOX False
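
The trailing KEY VALUE pairs such as TEST.MODEL_FILE and TEST.USE_GT_BBOX are yacs-style overrides applied on top of the YAML config. A minimal sketch of the mechanism (a toy config with only these two keys; the repository's real schema lives in lib/config and is much larger):

    from yacs.config import CfgNode as CN

    cfg = CN()
    cfg.TEST = CN()
    cfg.TEST.MODEL_FILE = ''
    cfg.TEST.USE_GT_BBOX = True

    # tools/test.py forwards the trailing CLI tokens to merge_from_list
    cfg.merge_from_list(['TEST.MODEL_FILE',
                         'models/pytorch/pose_coco/pose_hrnet_w32_256x192.pth',
                         'TEST.USE_GT_BBOX', 'False'])
    print(cfg.TEST.USE_GT_BBOX)  # False: evaluate with detector boxes, not GT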

Training on COCO train2017 dataset

python tools/train.py \
    --cfg experiments/coco/hrnet/w32_256x192_adam_lr1e-3.yaml

Visualization

Visualizing predictions on COCO val

python visualization/plot_coco.py \
    --prediction output/coco/w48_384x288_adam_lr1e-3/results/keypoints_val2017_results_0.json \
    --save-path visualization/results
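
Each record in the predictions file follows the standard COCO keypoint results format: an image_id, a flat keypoints list of 17 (x, y, score) triples, and an overall score. A simplified drawing sketch, separate from the repository's plot_coco.py (paths and the confidence threshold are illustrative):

    import json
    import matplotlib.pyplot as plt
    import numpy as np

    with open('output/coco/w48_384x288_adam_lr1e-3/results/'
              'keypoints_val2017_results_0.json') as f:
        preds = json.load(f)
    p = preds[0]                                    # one detected person
    kpts = np.array(p['keypoints']).reshape(17, 3)  # (x, y, score) per joint

    img = plt.imread('data/coco/images/val2017/%012d.jpg' % p['image_id'])
    plt.imshow(img)
    keep = kpts[:, 2] > 0.2                         # arbitrary confidence cutoff
    plt.scatter(kpts[keep, 0], kpts[keep, 1], s=20, c='r')
    plt.savefig('visualization/results/example.png')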

Other applications

Many other dense prediction tasks, such as segmentation, face alignment, and object detection, have benefited from HRNet. More information can be found at High-Resolution Networks.

Other implementations

mmpose
ModelScope (Chinese)
timm

Citation

If you use our code or models in your research, please cite with:

@inproceedings{sun2019deep,
  title={Deep High-Resolution Representation Learning for Human Pose Estimation},
  author={Sun, Ke and Xiao, Bin and Liu, Dong and Wang, Jingdong},
  booktitle={CVPR},
  year={2019}
}

@inproceedings{xiao2018simple,
  author={Xiao, Bin and Wu, Haiping and Wei, Yichen},
  title={Simple Baselines for Human Pose Estimation and Tracking},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2018}
}

@article{WangSCJDZLMTWLX19,
  title={Deep High-Resolution Representation Learning for Visual Recognition},
  author={Jingdong Wang and Ke Sun and Tianheng Cheng and Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao},
  journal={TPAMI},
  year={2019}
}


