# (BMVC 2020 Oral) Neighbourhood-Insensitive Point Cloud Normal Estimation Network
Project Page | Paper | Video | Supp | Data | Pretrained Models
Zirui Wang and Victor Adrian Prisacariu. Active Vision Lab, University of Oxford. BMVC 2020 (Oral Presentation).
**Update 30 June 2025:** Data and pretrained checkpoints are now available on HuggingFace (link).
**Update:** We use our university's OneDrive to store our pretrained models and the preprocessed dataset. The university recently changed its access policy, which prevents us from sharing data through a public link, so the data and pretrained-model links above are broken. The easiest fix for now is to send your email address to ryan[AT]robots.ox.ac.uk and we'll share the files through email. We will try to find a proper way to share them via a link later.
## Environment

- Python == 3.7
- PyTorch >= 1.1.0
- CUDA >= 9.0
- h5py == 2.10

We tried PyTorch 1.1/1.3/1.4/1.5 and CUDA 9.0/9.2/10.0/10.1/10.2, so the code should run as long as you have a modern PyTorch and CUDA installed.
Setup env:
```bash
conda create -n ninormal python=3.7
conda activate ninormal
conda install pytorch=1.1.0 torchvision cudatoolkit=9.0 -c pytorch  # you can change the PyTorch and CUDA versions here
conda install tqdm future
conda install -c anaconda scikit-learn
pip install h5py==2.10
pip install tensorflow  # this is the CPU version, we just need the TensorBoard
```

(Optional) Install open3d for visualisation. You might need a physical monitor to install this library.
```bash
conda install -c open3d-admin open3d
```

Clone this repo:
```bash
git clone https://github.com/ActiveVisionLab/NINormal.git
```

## Data

We use the PCPNet dataset in our paper. The official PCPNet dataset is available here. We pre-processed the official PCPNet dataset using scikit-learn's KDTree and wrapped the processed knn point patches in h5 files. For the best training efficiency, we produce an h5 file for each k and each train/test/eval split.
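As a rough illustration of this pre-processing step (a minimal sketch under assumed array shapes and dataset keys, not the actual script in `./utils`):

```python
import h5py
import numpy as np
from sklearn.neighbors import KDTree

def build_knn_h5(points, k, out_path):
    """Query the k nearest neighbours of every point and store the
    resulting knn patches in an h5 file (illustrative only)."""
    tree = KDTree(points)               # KD-tree over the whole point cloud
    _, idx = tree.query(points, k=k)    # (N, k) neighbour indices per point
    patches = points[idx]               # (N, k, 3) knn point patches
    with h5py.File(out_path, 'w') as f:
        f.create_dataset('patches', data=patches.astype(np.float32))

# Example: 10000 random points, k=20
pts = np.random.rand(10000, 3).astype(np.float32)
build_knn_h5(pts, k=20, out_path='knn_patches_k_20.h5')
```

The real h5 files additionally pack the patches into subgroups of 2000 (see the batch-size note in the Training section below).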
To simply reproduce our paper results with k=20 or k=50, we provide a subset of our full pre-processed dataset here.
To fully reproduce our results from k=3 to k=50 (paper Fig. 2), the full pre-processed dataset is available here.
Untar the dataset.
```bash
tar -xvf path/to/the/tar.gz
```

## Training

We train all our models (except k=40 and k=50) using 3 Nvidia 1080Ti GPUs. For k=40 and k=50, we use 3 Nvidia Titan-RTX GPUs. All models are trained with batch size 6. To reproduce our paper results, set `--batchsize_train=6` and `--batchsize_eval=6`. Reduce the batch size if you run out of memory.
Train with 20 neighbours:
```bash
python train.py \
    --datafolder='path/to/the/folder/contains/h5/files' \
    --batchsize_train=6 \
    --batchsize_eval=6
```

Train with 50 neighbours:
```bash
python train.py \
    --datafolder='path/to/the/folder/contains/h5/files' \
    --train_dataset_name='train_patchsize_2000_k_50.h5' \
    --eval_dataset_name='eval_patchsize_2000_k_50.h5' \
    --batchsize_train=6 \
    --batchsize_eval=6
```

Alternatively, you can create a symlink that points to the downloaded dataset:
```bash
cd NINormal  # our repo
mkdir dataset_dir
cd dataset_dir
ln -s path/to/the/folder/contains/h5/files ./pcp_knn_patch_h5_files
```

and train with:
```bash
python train.py
```

The batch size 6 is the batch size that the `Conv2D()` function processes. Our network can be implemented using `Conv1D()` or `Linear()`, but we use the 1x1 `Conv2D()` along with our pre-processed dataset to achieve the best balance between data loading and training. When the batch size is set to 6, the actual batch size our network processes is 6 x 2000 = 12000, as mentioned at the end of Sec. 3 in our paper. The number 2000 is the number of knn patches we packed into a subgroup in an h5 file. See the pre-processing script in `./utils` and the `PcpKnnPatchesDataset` for more details.
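To illustrate why a 1x1 `Conv2D()` is interchangeable with `Linear()` here, a self-contained PyTorch sketch (the channel sizes are assumptions for illustration, not taken from this repo):

```python
import torch
import torch.nn as nn

B, P, K, C_in, C_out = 6, 2000, 20, 3, 64   # batch, patches per h5 subgroup, neighbours, channels

x = torch.randn(B, C_in, P, K)              # input laid out for Conv2d: (batch, channels, patches, neighbours)

conv = nn.Conv2d(C_in, C_out, kernel_size=1, bias=True)
linear = nn.Linear(C_in, C_out, bias=True)

# copy the conv weights into the linear layer so the two ops are numerically comparable
linear.weight.data = conv.weight.data.view(C_out, C_in)
linear.bias.data = conv.bias.data

y_conv = conv(x)                                            # (B, C_out, P, K)
y_lin = linear(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)   # same result via Linear

print(torch.allclose(y_conv, y_lin, atol=1e-5))  # True: a 1x1 conv is a per-point linear map
# Note the conv sees B * P = 6 * 2000 = 12000 patches at once,
# which is the effective batch size mentioned above.
```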
## Testing

Similar to the dataset, we provide a tar file that contains models trained with k=20 and k=50 here.
To evaluate all models that we present in Fig. 2 (k=3 to k=50), download all models here.
Untar the downloaded checkpoint file.
```bash
tar -xvf path/to/the/ckpts/tar.gz
```

**IMPORTANT NOTE:** The k of a trained checkpoint and the k of a dataset must match, e.g. use the nb20 checkpoint with the nb20 dataset:
```bash
python test.py \
    --ckpt_path='/path/to/the/ckpts/nb_20' \
    --test_dataset_name='test_patchsize_2000_k_20.h5'
```

Like the dataset, you can also create a symlink that points to the downloaded checkpoint folder:
```bash
cd NINormal  # our repo
ln -s path/to/the/folder/just/extracted ./paper_ckpts
```

and run the test with just:
```bash
python test.py
```

## Visualising Attention Weights

We recommend visualising attention weights with k=50 (using the model trained with k=50, of course) to see how our network pays extra attention to the boundary of a patch.
Install the open3d library. You might need a PC with a physical monitor to install it.
```bash
conda install -c open3d-admin open3d
```

Similar to the testing procedure, once you have the datasets, run:
```bash
python test_vis_attn_map_3d.py --ckpt_path='/path/to/the/ckpts/nb_50'
```

We aim to release it soon.
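For intuition, colouring a point patch by per-point attention weights in open3d looks roughly like this (a hypothetical sketch; `test_vis_attn_map_3d.py` is the actual entry point, and the weight source below is made up):

```python
import numpy as np
import open3d as o3d

def show_attention(patch_xyz, attn):
    """Colour a knn patch by its attention weights (illustrative only).

    patch_xyz: (K, 3) point coordinates; attn: (K,) attention weights.
    """
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)  # normalise to [0, 1]
    colors = np.stack([attn, np.zeros_like(attn), 1.0 - attn], axis=1)  # blue (low) -> red (high)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(patch_xyz)
    pcd.colors = o3d.utility.Vector3dVector(colors)
    o3d.visualization.draw_geometries([pcd])

# Example: a random 50-neighbour patch, using distance from the patch centre
# as a stand-in for the learned attention weights
pts = np.random.rand(50, 3)
weights = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
show_attention(pts, weights)
```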
## Acknowledgements

The authors would like to thank Min Chen, Tengda Han, Shuda Li, Tim Yuqing Tang and Shangzhe Wu for insightful discussions and proofreading.
## Citation

```bibtex
@inproceedings{wang2020ninormal,
    title={Neighbourhood-Insensitive Point Cloud Normal Estimation Network},
    author={Wang, Zirui and Prisacariu, Victor Adrian},
    booktitle={BMVC},
    year={2020}
}
```