Weakly Supervised 3D Object Detection from Point Clouds (VS3D), ACM MM 2020
Created by Zengyi Qin, Jinglu Wang and Yan Lu. The repository contains an implementation of this ACM MM 2020 paper. Readers are strongly recommended to create and enter a virtual environment with Python 3.6 before running the code.
Clone this repository:
```
git clone https://github.com/Zengyi-Qin/Weakly-Supervised-3D-Object-Detection.git
```
Enter the main folder and run installation:
```
pip install -r requirements.txt
```
Download the demo data to the main folder and run `unzip vs3d_demo.zip`. Readers can try out the quick demo with Jupyter Notebook:
```
cd core
jupyter notebook demo.ipynb
```
Download the KITTI Object Detection Dataset (image, calib and label) and place them into `data/kitti`. Download the ground planes and front-view XYZ maps from here and run `unzip vs3d_train.zip`. Download the pretrained teacher network from here and run `unzip vs3d_pretrained.zip`. The data folder should be in the following structure:
```
├── data
│   ├── demo
│   ├── kitti
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── sphere
│   │   │   ├── planes
│   │   │   └── velodyne
│   │   ├── train.txt
│   │   └── val.txt
│   └── pretrained
│       ├── student
│       └── teacher
```
The `sphere` folder contains the front-view XYZ maps converted from `velodyne` point clouds using the script in `./preprocess/sphere_map.py`.
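The actual conversion is implemented in `./preprocess/sphere_map.py`; the sketch below only illustrates the general idea of projecting a velodyne scan onto a spherical front-view image whose pixels store the point coordinates. The function name, image resolution, and vertical field of view are assumptions for illustration, not the repository's actual parameters.

```python
# Minimal sketch of turning a KITTI velodyne scan into a front-view XYZ map.
# NOTE: the function name, resolution and field-of-view values are illustrative
# assumptions; the repository's own conversion lives in ./preprocess/sphere_map.py.
import numpy as np

def velodyne_to_xyz_map(bin_path, height=64, width=512, fov_up=2.0, fov_down=-24.9):
    # KITTI velodyne scans are stored as float32 (x, y, z, reflectance) tuples.
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)[:, :3]
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-6

    azimuth = np.arctan2(y, x)     # horizontal angle (x points forward in KITTI)
    elevation = np.arcsin(z / r)   # vertical angle

    # Map the two angles to pixel coordinates of an H x W spherical image.
    fov_up_rad, fov_down_rad = np.radians(fov_up), np.radians(fov_down)
    cols = 0.5 * (1.0 - azimuth / np.pi) * width
    rows = (fov_up_rad - elevation) / (fov_up_rad - fov_down_rad) * height
    cols = np.clip(np.floor(cols), 0, width - 1).astype(np.int64)
    rows = np.clip(np.floor(rows), 0, height - 1).astype(np.int64)

    # Fill each pixel with the (x, y, z) of its point; write far points first
    # so that closer points overwrite them when several fall into one pixel.
    order = np.argsort(-r)
    xyz_map = np.zeros((height, width, 3), dtype=np.float32)
    xyz_map[rows[order], cols[order]] = points[order]
    return xyz_map
```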
After data preparation, readers can train VS3D from scratch by running:
```
cd core
python main.py --mode train --gpu GPU_ID
```
The models are saved in `./core/runs/weights` during training. Readers can refer to `./core/main.py` for other training options.
Readers can run inference on the KITTI validation set by running:
```
cd core
python main.py --mode evaluate --gpu GPU_ID --student_model SAVED_MODEL
```
Readers can also directly use the pretrained model for inference by passing `--student_model ../data/pretrained/student/model_lidar_158000`. Predicted 3D bounding boxes are saved in `./output/bbox` in KITTI format.
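For reference, each line of a KITTI-format result file describes one detection: object type, truncation, occlusion, observation angle alpha, 2D bounding box (left, top, right, bottom in pixels), 3D dimensions (height, width, length in meters), 3D location (x, y, z in camera coordinates), rotation around the Y axis, and a confidence score. The values below are illustrative, not actual output of the model:

```
Car 0.00 0 -1.57 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59 0.92
```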
If you find this work useful, please consider citing:
```
@article{qin2020vs3d,
  title={Weakly Supervised 3D Object Detection from Point Clouds},
  author={Zengyi Qin and Jinglu Wang and Yan Lu},
  journal={ACM Multimedia},
  year={2020}
}
```