This is the PyTorch version repo for CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes (CVPR 2018), which delivered a state-of-the-art, straightforward and end-to-end architecture for crowd counting tasks.
ShanghaiTech Dataset: Google Drive
We strongly recommend Anaconda as the environment.
- Python: 2.7
- PyTorch: 0.4.0
- CUDA: 9.2
Please follow the make_dataset.ipynb notebook to generate the ground truth. It may take some time to generate the dynamic ground truth. Note that you need to generate your own json file.
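For orientation, here is a minimal sketch of what that step produces: a density map per image whose integral equals the head count, saved as an .h5 file, plus a json list of image paths for train.py. The directory layout, the fixed Gaussian width (sigma=15), and the 'density' key are illustrative assumptions; the notebook itself uses a geometry-adaptive kernel for ShanghaiTech Part A.

```python
# Sketch only: generate density-map ground truth and a json file list.
import glob
import json
import h5py
import numpy as np
import scipy.io as io
from scipy.ndimage import gaussian_filter
from PIL import Image

# Assumed ShanghaiTech layout; adjust paths to your copy of the dataset.
img_paths = sorted(glob.glob('ShanghaiTech/part_B/train_data/images/*.jpg'))

for img_path in img_paths:
    # ShanghaiTech ships one .mat file of head coordinates per image.
    mat = io.loadmat(img_path.replace('images', 'ground_truth')
                             .replace('IMG_', 'GT_IMG_')
                             .replace('.jpg', '.mat'))
    points = mat['image_info'][0, 0][0, 0][0]        # (N, 2) head locations (x, y)

    w, h = Image.open(img_path).size
    density = np.zeros((h, w), dtype=np.float32)
    for x, y in points:
        if int(y) < h and int(x) < w:
            density[int(y), int(x)] = 1.0             # one impulse per annotated head
    density = gaussian_filter(density, sigma=15)      # spread impulses into a density map

    # Save next to the ground truth; sum(density) ~= head count.
    with h5py.File(img_path.replace('.jpg', '.h5')
                           .replace('images', 'ground_truth'), 'w') as hf:
        hf['density'] = density

# train.py is expected to read the json as a plain list of image paths.
with open('train.json', 'w') as f:
    json.dump(img_paths, f)
```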
Try `python train.py train.json val.json 0 0` to start the training process.
Follow the val.ipynb notebook to try the validation. You can modify the notebook to see the output of each image.
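If you prefer a script to the notebook, the sketch below shows the basic validation loop. It assumes model.py defines the CSRNet module, the checkpoint stores weights under a 'state_dict' key, the ground truth .h5 files contain a 'density' dataset (as in the sketch above), and the checkpoint filename is yours to substitute; the normalization constants are the standard ImageNet statistics.

```python
# Sketch only: compute MAE over a validation list of image paths.
import json
import h5py
import torch
from PIL import Image
from torchvision import transforms
from model import CSRNet   # assumes the CSRNet class from this repo's model.py

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = CSRNet().cuda()
checkpoint = torch.load('model_best.pth.tar')           # path to your checkpoint
model.load_state_dict(checkpoint['state_dict'])
model.eval()

with open('val.json') as f:
    img_paths = json.load(f)

mae = 0.0
with torch.no_grad():
    for img_path in img_paths:
        img = transform(Image.open(img_path).convert('RGB')).cuda()
        output = model(img.unsqueeze(0))                 # predicted density map
        with h5py.File(img_path.replace('.jpg', '.h5')
                               .replace('images', 'ground_truth'), 'r') as hf:
            gt_count = hf['density'][()].sum()           # count = integral of density
        mae += abs(output.sum().item() - gt_count)

print('MAE: {:.3f}'.format(mae / len(img_paths)))
```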
- ShanghaiA MAE: 66.4 (Google Drive)
- ShanghaiB MAE: 10.6 (Google Drive)
If you find CSRNet useful, please cite our paper. Thank you!
@inproceedings{li2018csrnet,
  title={CSRNet: Dilated convolutional neural networks for understanding the highly congested scenes},
  author={Li, Yuhong and Zhang, Xiaofan and Chen, Deming},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={1091--1100},
  year={2018}
}

Please cite the Shanghai datasets and other works if you use them.
@inproceedings{zhang2016single,
  title={Single-image crowd counting via multi-column convolutional neural network},
  author={Zhang, Yingying and Zhou, Desen and Chen, Siqin and Gao, Shenghua and Ma, Yi},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={589--597},
  year={2016}
}