sair-lab/PLNet

This is the official implementation of PLNet, the feature detector of our AirSLAM. PLNet is a convolutional neural network (CNN) designed to simultaneously detect keypoints and structural lines. It leverages a shared backbone and specialized heads for keypoint and line detection. The shared-backbone design makes PLNet highly efficient.
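To make the shared-backbone idea concrete, here is a minimal PyTorch sketch. This is not the actual PLNet code: the module names, layer counts, and channel widths below are invented purely for illustration.

import torch
import torch.nn as nn

class SharedBackboneDetector(nn.Module):
    # Sketch of a shared-backbone point/line detector (NOT PLNet itself).
    def __init__(self):
        super().__init__()
        # Shared backbone: run once per image; its features feed both heads.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Keypoint head: per-pixel keypoint probability map.
        self.keypoint_head = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1), nn.Sigmoid(),
        )
        # Line head: dense line fields (4 placeholder output channels).
        self.line_head = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 4, 1),
        )

    def forward(self, image):
        features = self.backbone(image)  # computed once, reused by both heads
        return self.keypoint_head(features), self.line_head(features)

Because the backbone is evaluated once and both heads reuse the same feature map, detecting lines adds only the small cost of the extra head.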

Data Downloading

  • The training and testing data (including the Wireframe dataset and the YorkUrban dataset) can be downloaded via Google Drive. Many thanks to the authors of these two excellent datasets!

  • You can also use gdown to download the data in the terminal (a Python alternative is sketched after this list):

    gdown 134L-u9pgGtnzw0auPv8ykHqMjjZ2claO
    unzip data.zip
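If you prefer to script the download, gdown also exposes a Python API. A minimal sketch, assuming the same Google Drive file ID as above and using the standard-library zipfile module for extraction:

import zipfile

import gdown

# Same Google Drive file ID as the gdown command above.
gdown.download(id="134L-u9pgGtnzw0auPv8ykHqMjjZ2claO", output="data.zip")

# Extract the archive into the current directory.
with zipfile.ZipFile("data.zip") as zf:
    zf.extractall(".")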

Installation

Anaconda
  • Clone the code repo: git clone https://github.com/sair-lab/PLNet.git.
  • Install ninja-build by sudo apt install ninja-build.
  • Create a conda environment by
conda create -n plnet python==3.9
conda activate plnet
pip install -e .
  • Run the following command lines to install the dependencies of PLNet
# Install PyTorch; make sure the wheel matches the CUDA version on your machine
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116
# Install other dependencies
pip install -r requirement.txt
  • Verify the installation (a slightly more detailed check is sketched after this list).
python -c "import torch; print(torch.cuda.is_available())"  # Check that the installed PyTorch supports CUDA.
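For a more detailed check, the snippet below (plain PyTorch, nothing PLNet-specific) also prints the installed versions, which should match the cu116 wheels installed above:

import torch

print(torch.__version__)          # expected: 1.12.0+cu116
print(torch.version.cuda)         # expected: 11.6
print(torch.cuda.is_available())  # expected: True on a CUDA-capable machine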
Docker

We also provide a Dockerfile. You can build the Docker image by running the following command.

sudo docker build - < Dockerfile --tag plnet:latest

Training

Run the following command to train PLNet on the Wireframe dataset.

python -m hawp.fsl.train configs/plnet.yaml --logdir outputs

Evaluation

We provide a pre-trained model that can be downloaded via OneDrive.

  • Test using the Wireframe dataset:

    python -m hawp.fsl.benchmark configs/plnet.yaml \
        --ckpt /path/to/your/model \
        --dataset wireframe
  • Test using the YorkUrban dataset:

    python -m hawp.fsl.benchmark configs/plnet.yaml \
        --ckpt /path/to/your/model \
        --dataset york

Citations

If you find our work useful in your research, please consider citing:

@article{xu2024airslam,
  title   = {{AirSLAM}: An Efficient and Illumination-Robust Point-Line Visual SLAM System},
  author  = {Xu, Kuan and Hao, Yuefan and Yuan, Shenghai and Wang, Chen and Xie, Lihua},
  journal = {arXiv preprint arXiv:2408.03520},
  year    = {2024},
  url     = {https://arxiv.org/abs/2408.03520},
  code    = {https://github.com/sair-lab/AirSLAM},
}

This code builds on HAWP and SuperPoint. Please consider citing:

@article{HAWP-journal,
  title   = {Holistically-Attracted Wireframe Parsing: From Supervised to Self-Supervised Learning},
  author  = {Nan Xue and Tianfu Wu and Song Bai and Fu-Dong Wang and Gui-Song Xia and Liangpei Zhang and Philip H.S. Torr},
  journal = {IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI)},
  year    = {2023}
}

@inproceedings{detone2018superpoint,
  title     = {Superpoint: Self-supervised interest point detection and description},
  author    = {DeTone, Daniel and Malisiewicz, Tomasz and Rabinovich, Andrew},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
  pages     = {224--236},
  year      = {2018}
}
