a solid and strong baseline of pedestrian attribute recognition


tonylibing/Strong_Baseline_of_Pedestrian_Attribute_Recognition

 
 


The code for our paper Rethinking of Pedestrian Attribute Recognition: Realistic Datasets with Efficient Method.

Considering the large performance gaps among the baselines of various SOTA methods, we provide a solid and strong baseline for fair comparison.

Update

  • 20200901: added infer.py

Dependencies

  • pytorch 1.4.0
  • torchvision 0.5.0
  • tqdm 4.43.0
  • easydict 1.9

Tricks

  • sample-wise loss instead of label-wise loss
  • large learning rate combined with clip_grad_norm
  • augmentation: Pad combined with RandomCrop
  • BN added after the classifier layer
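The tricks above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the repository's actual code: the attribute count (35, as in PETA), the learning rate, and the clipping threshold are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaselineHead(nn.Module):
    """Classifier head with BatchNorm added after the linear layer."""
    def __init__(self, in_dim, num_attrs):
        super().__init__()
        self.fc = nn.Linear(in_dim, num_attrs)
        self.bn = nn.BatchNorm1d(num_attrs)  # BN after the classifier

    def forward(self, feats):
        return self.bn(self.fc(feats))

def sample_wise_bce(logits, targets):
    # Sum binary cross-entropy over attributes for each sample, then average
    # over the batch, rather than averaging over every label at once.
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return loss.sum(dim=1).mean()

# One training step: a large learning rate stabilised by gradient clipping.
head = BaselineHead(in_dim=2048, num_attrs=35)  # 2048 = ResNet-50 feature dim
opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)
feats = torch.randn(8, 2048)                    # stand-in backbone features
targets = torch.randint(0, 2, (8, 35)).float()  # binary attribute labels
loss = sample_wise_bce(head(feats), targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(head.parameters(), max_norm=10.0)
opt.step()
```

The Pad-plus-RandomCrop augmentation would correspond to torchvision's transforms.Pad followed by transforms.RandomCrop in the input pipeline; the exact padding size used by the paper is not stated here.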

Performance Comparison

Baseline Performance

  • Compared with the baseline performance of MsVAA, VAC, and ALM, our baseline achieves a large improvement.
  • Compared with our reimplementations of MsVAA, VAC, and ALM, our baseline is still better.
  • We did our best to reimplement MsVAA and VAC, and thank the authors for their code.
  • We also did our best to reimplement ALM and tried to contact the authors, but received no reply.

[Figure: baseline performance comparison]

SOTA Performance

  • Compared with performance of recent state-of-the-art methods, the performance of our baseline is comparable, even better.

[Figure: comparison with state-of-the-art methods]

  • DeepMAR (ACPR15) Multi-attribute Learning for Pedestrian Attribute Recognition in Surveillance Scenarios.
  • HPNet (ICCV17) Hydraplus-net: Attentive deep features for pedestrian analysis.
  • JRL (ICCV17) Attribute recognition by joint recurrent learning of context and correlation.
  • LGNet (BMVC18) Localization guided learning for pedestrian attribute recognition.
  • PGDM (ICME18) Pose guided deep model for pedestrian attribute recognition in surveillance scenarios.
  • GRL (IJCAI18) Grouping Attribute Recognition for Pedestrian with Joint Recurrent Learning.
  • RA (AAAI19) Recurrent attention model for pedestrian attribute recognition.
  • VSGR (AAAI19) Visual-semantic graph reasoning for pedestrian attribute recognition.
  • VRKD (IJCAI19) Pedestrian Attribute Recognition by Joint Visual-semantic Reasoning and Knowledge Distillation.
  • AAP (IJCAI19) Attribute aware pooling for pedestrian attribute recognition.
  • MsVAA (ECCV18) Deep imbalanced attribute classification using visual attention aggregation.
  • VAC (CVPR19) Visual attention consistency under image transforms for multi-label image classification.
  • ALM (ICCV19) Improving Pedestrian Attribute Recognition With Weakly-Supervised Multi-Scale Attribute-Specific Localization.

Dataset Info

PETA: Pedestrian Attribute Recognition At Far Distance [Paper][Project]

PA100K [Paper][Github]

RAP : A Richly Annotated Dataset for Pedestrian Attribute Recognition

Zero-shot Protocol

Realistic datasets of PETA and RAPv2 are provided at Google Drive.

You can simply replace 'dataset.pkl' with 'peta_new.pkl' or 'rapv2_new.pkl' to run experiments under the new protocol.
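Assuming the PETA pickle lives at data/PETA/dataset.pkl (as in the directory layout under Get Started), the swap is a plain file copy. The demo below runs in a scratch directory, so all paths and file contents are placeholders:

```shell
# Scratch-directory demo of swapping in the zero-shot split.
mkdir -p demo/data/PETA
echo "default split" > demo/data/PETA/dataset.pkl   # stands in for the original pickle
echo "zero-shot split" > demo/peta_new.pkl          # stands in for the downloaded file
cp demo/data/PETA/dataset.pkl demo/data/PETA/dataset.pkl.bak  # keep a backup
cp demo/peta_new.pkl demo/data/PETA/dataset.pkl               # run under the new protocol
cat demo/data/PETA/dataset.pkl
```

Keeping a .bak copy makes it easy to switch back to the original protocol later.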

Pretrained Models

Pretrained models are now provided at Google Drive.

Because we re-ran the experiments, there may be subtle differences in performance.
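Loading one of these checkpoints follows the usual PyTorch pattern. The checkpoint key, the filename, and the tiny Linear stand-in model below are all assumptions for illustration; inspect the downloaded file for the actual layout:

```python
import torch

# Stand-in model; the real baseline is a ResNet-50 with a classifier head.
model = torch.nn.Linear(4, 2)

# Create a fake checkpoint so the snippet is self-contained; with the real
# download you would skip this and point torch.load at the released file.
torch.save({"state_dicts": model.state_dict()}, "demo_ckpt.pth")

ckpt = torch.load("demo_ckpt.pth", map_location="cpu")
model.load_state_dict(ckpt["state_dicts"])  # key name is an assumption
model.eval()  # switch BN/dropout to inference mode before evaluating
```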

Get Started

  1. Run git clone https://github.com/valencebond/Strong_Baseline_of_Pedestrian_Attribute_Recognition.git
  2. Create a directory to download the above datasets:
    cd Strong_Baseline_of_Pedestrian_Attribute_Recognition
    mkdir data
  3. Prepare the datasets to have the following structure:
    ${project_dir}/data
        PETA
            images/
            PETA.mat
            README
        PA100k
            data/
            annotation.mat
            README.txt
        RAP
            RAP_dataset/
            RAP_annotation/
        RAP2
            RAP_dataset/
            RAP_annotation/
  4. Run the format_xxxx.py scripts to generate dataset.pkl for each dataset:
    python ./dataset/preprocess/format_peta.py
    python ./dataset/preprocess/format_pa100k.py
    python ./dataset/preprocess/format_rap.py
    python ./dataset/preprocess/format_rap2.py
  5. Train the baseline (ResNet-50 backbone):
    CUDA_VISIBLE_DEVICES=0 python train.py PETA

Acknowledgements

Codes are based on the repositories from Dangwei Li and Houjing Huang. Thanks for their released code.

Citation

If you use this method or this code in your research, please cite as:

@misc{jia2020rethinking,
    title={Rethinking of Pedestrian Attribute Recognition: Realistic Datasets with Efficient Method},
    author={Jian Jia and Houjing Huang and Wenjie Yang and Xiaotang Chen and Kaiqi Huang},
    year={2020},
    eprint={2005.11909},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
