skanti/Scan2CAD

[CVPR'19] Dataset and code used in the research project Scan2CAD: Learning CAD Model Alignment in RGB-D Scans


We present Scan2CAD, a novel data-driven method that learns to align 3D CAD models from a shape database to 3D scans.

Scan2CAD

Download Paper (.pdf)

See YouTube Video

Link to the annotation webapp source code

Scan2CAD Benchmark Link

Download Dataset

Thank you for your interest in Scan2CAD. Please use the following link to download the dataset (.zip): Download Link

By downloading the data, you agree to the terms of this Google Form.

Inside the .zip file, you will find the following files:

unzip_root/
  - 'full_annotations.json': the actual dataset with the Scan2CAD alignments and keypoints on the public dataset
  - 'cad_appearances.json': a helper file that lists how often each unique ShapeNet model appears per scene
  - 'unique_cad.csv': a helper file that lists every unique ShapeNet model used in this dataset
  - benchmark_input_files/
    - 'cad_appearances_hidden_testset.json': how often each CAD model appears per scene in the hidden test set
    - 'scannet_hidden_testset.json': a list of all scenes in the hidden test set that are used in this benchmark

Some useful info:

To visualize the dataset, please use the visualizer in this repo. In this work we used 3D scans from the ScanNet dataset and CAD models from ShapeNet (version 2.0).

Good luck!

Demo samples

Scan2CAD Alignments

Oriented Bounding Boxes for Objects

Scan2CAD

Description

Dataset used in the research project: Scan2CAD: Learning CAD Model Alignment in RGB-D Scans

For the public dataset, we provide annotations with:

  • 97607 keypoint correspondences between Scan and CAD models
  • 14225 objects between Scan and CAD
  • 1506 scans

An additional annotated hidden test set, used for our Scan2CAD benchmark, contains:

  • 7557 keypoint correspondences between Scan and CAD models
  • 1160 objects between Scan and CAD
  • 97 scans

Benchmark

We published a new benchmark for CAD model alignment in 3D scans (and more tasks to come) here.

Get started

  1. Clone repo:

git clone https://github.com/skanti/Scan2CAD.git

  2. Ask for the datasets (see sections below; you will need ScanNet, ShapeNet and Scan2CAD).

  3. Copy the dataset content into ./Routines/Script/.

  4. Visualize the data:

python3 ./Routines/Script/Annotation2Mesh.py

  5. Compile the C++ programs:

cd {Vox2Mesh, DFGen, CropCentered}
make

  6. Voxelize the CADs (ShapeNet):

python3 ./Routines/Script/CADVoxelization.py

  7. Generate data (correspondences):

python3 ./Routines/Script/GenerateCorrespondences.py

  8. Start PyTorch training for heatmap prediction:

cd ./Network/pytorch
./run.sh

  9. Run the alignment algorithm:

cd Routines/Scripts
python3 Alignment9DoF.py --projectdir /Network/pytorch/output/dummy

  10. Mesh and view the alignment result:

cd Routines/Scripts
python3 Alignment2Mesh.py --alignment ./tmp/alignments/dummy/scene0470_00.csv --out ./

Download Scan2CAD Dataset (Annotation Data)

If you would like to download the Scan2CAD dataset, please fill out this Google form.

A download link will be provided to download a .zip file (approx. 8 MB) that contains the dataset.

Format of the Datasets

Format of "full_annotations.json"

The file contains 1506 entries; the fields of one entry are described as:

[
  {
    "id_scan": "scannet scene id",
    "trs": {                             // <-- transformation from scan space to world space
      "translation": [tx, ty, tz],       // <-- translation vector
      "rotation": [qw, qx, qy, qz],      // <-- rotation quaternion
      "scale": [sx, sy, sz]              // <-- scale vector
    },
    "aligned_models": [                  // <-- list of aligned models for this scene
      {
        "sym": "(__SYM_NONE, __SYM_ROTATE_UP_2, __SYM_ROTATE_UP_4 or __SYM_ROTATE_UP_INF)", // <-- symmetry property, only one applies
        "catid_cad": "shapenet category id",
        "id_cad": "shapenet model id",
        "trs": {                         // <-- transformation from CAD space to world space
          "translation": [tx, ty, tz],   // <-- translation vector
          "rotation": [qw, qx, qy, qz],  // <-- rotation quaternion
          "scale": [sx, sy, sz]          // <-- scale vector
        },
        "keypoints_scan": {              // <-- scan keypoints
          "n_keypoints": "(int) number of keypoints",
          "position": [x1, y1, z1, ..., xN, yN, zN] // <-- scan keypoint positions in world space
        },
        "keypoints_cad": {               // <-- cad keypoints
          "n_keypoints": "(int) number of keypoints",
          "position": [x1, y1, z1, ..., xN, yN, zN] // <-- cad keypoint positions in world space
        }
        // NOTE: n_keypoints (scan) = n_keypoints (CAD) always holds
      }
    ]
  },
  { ... },
  { ... }
]
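
To make the trs block concrete, here is a minimal numpy sketch that composes translation, rotation quaternion (qw, qx, qy, qz), and scale into a single 4x4 homogeneous matrix. The helper names and the numeric values in the example entry are illustrative, not part of the dataset:

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a (qw, qx, qy, qz) quaternion to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def make_trs_matrix(trs):
    """Compose a trs dict (as stored in the JSON) into a 4x4 matrix
    that applies scale, then rotation, then translation."""
    M = np.eye(4)
    M[:3, :3] = quat_to_rotmat(trs["rotation"]) @ np.diag(trs["scale"])
    M[:3, 3] = trs["translation"]
    return M

# Hypothetical trs block with the same fields as one entry of full_annotations.json
trs = {
    "translation": [1.0, 2.0, 3.0],
    "rotation": [1.0, 0.0, 0.0, 0.0],  # identity quaternion
    "scale": [1.0, 1.0, 1.0],
}
M = make_trs_matrix(trs)
```

Applied to a homogeneous point, this matrix scales first, then rotates, then translates, which is the usual reading of a trs decomposition.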

Format of "cad_appearances.json"

This file is merely a helper file, as its information is deducible from "full_annotations.json". The file contains 1506 entries; the fields of one entry are described as:

{
  "scene00001_00": {                   // <-- scan id as key
    "00000001_000000000000abc": 2,     // <-- catid_cad + "_" + id_cad as key; the value is the number of appearances of that CAD in the scene
    "00000003_000000000000def": 1,
    "00000030_000000000000mno": 1,
    ...
  },
  "scene00002_00": {
    ...
  },
}
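
Since each key encodes catid_cad + "_" + id_cad, per-scene and per-model totals fall out with a few lines of Python. A small sketch (the scene ids and model ids below are made up, and the real file would be loaded with json.load):

```python
# Hypothetical fragment shaped like cad_appearances.json;
# in practice: cad_appearances = json.load(open("cad_appearances.json"))
cad_appearances = {
    "scene00001_00": {"00000001_000000000000abc": 2, "00000003_000000000000def": 1},
    "scene00002_00": {"00000030_000000000000mno": 3},
}

# Total number of aligned CAD instances per scene
per_scene = {scan_id: sum(counts.values())
             for scan_id, counts in cad_appearances.items()}

# Total appearances of each unique CAD model across all scenes
per_model = {}
for counts in cad_appearances.values():
    for cad_key, n in counts.items():
        per_model[cad_key] = per_model.get(cad_key, 0) + n
```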

Visualization of the Dataset + BBoxes

Once you have downloaded the dataset files, you can run ./Routines/Script/Annotation2Mesh.py to preview the annotations as seen here (toggle scan/CADs/BBox):

Data Generation for Scan2CAD Alignment

Scan and CAD Repository

In this work we used 3D scans from the ScanNet dataset and CAD models from ShapeNetCore (version 2.0). If you want to use them too, you have to send an email and ask for access; the maintainers usually respond quickly.

Here is a sample (see ./Assets/scannet-sample/ and ./Assets/shapenet-sample/):

ScanNet Color | ScanNet Labels
ShapeNet Trashbin | ShapeNet Chair | ShapeNet Table

Voxelization of Data as Signed Distance Function (sdf) and Unsigned Distance Function (df) Files

The data must be processed such that scans are represented as sdf and CADs as df voxel grids, as illustrated here (see ./Assets/scannet-voxelized-sdf-sample/ and ./Assets/shapenet-voxelized-df-sample/):

ShapeNet Trashbin Vox | ShapeNet Chair Vox | ShapeNet Table Vox

In order to create sdf voxel grids from the scans, volumetric fusion is performed to fuse depth maps into a voxel grid containing the entire scene. For the sdf grid we used a voxel resolution of 3cm and a truncation distance of 15cm.
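
The per-voxel update at the heart of volumetric fusion can be sketched in a few lines. This is a hedged illustration of the general technique (Curless-and-Levoy-style weighted averaging), not the repo's actual implementation; the function names and the example depths are made up, and only the 15cm truncation distance comes from the text above:

```python
TRUNC = 0.15  # truncation distance in meters, as stated in the text

def truncated_sdf(surface_depth, voxel_depth, trunc=TRUNC):
    """Signed distance of a voxel to the observed surface along the camera ray,
    clamped to [-trunc, +trunc] (positive = in front of the surface)."""
    return max(-trunc, min(trunc, surface_depth - voxel_depth))

def fuse(tsdf, weight, new_sdf, new_weight=1.0):
    """Weighted running average that integrates one new observation into a voxel."""
    fused = (tsdf * weight + new_sdf * new_weight) / (weight + new_weight)
    return fused, weight + new_weight

# A voxel 5 cm in front of the observed surface, fused into an empty grid cell:
sdf = truncated_sdf(surface_depth=1.00, voxel_depth=0.95)
value, w = fuse(tsdf=0.0, weight=0.0, new_sdf=sdf)
```

Repeating the fuse step for every depth map is what accumulates the whole scene into one sdf grid.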

In order to generate the df voxel grids for the CADs we used a modification (see CADVoxelization.py) of this repo (thanks to @christopherbatty).

Creating Training Samples

In order to generate training samples for your CNN, you can run ./Routines/Script/GenerateCorrespondences.py. From the Scan2CAD dataset this will generate the following:

  1. Centered crops of the scan
  2. Heatmaps on the CAD (= correspondence to the scan)
  3. Scale (x,y,z) for the CAD
  4. Match (0/1) indicates whether both inputs match semantically

The generated data totals approximately 500GB. Here is an example of the data generation (see ./Assets/training-data/scan-centers-sample/ and ./Assets/training-data/CAD-heatmaps-sample/):

Scan Center Vox | CAD Heatmap Vox (to be Gaussian blurred)
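
As a rough illustration of what a Gaussian-blurred keypoint heatmap looks like on a voxel grid: the sketch below places a 3D Gaussian blob at a keypoint voxel. The grid size and sigma are made-up values for the example, not the ones used in the paper:

```python
import numpy as np

def keypoint_heatmap(grid_dim, center, sigma=2.0):
    """3D Gaussian blob centered on a keypoint voxel, values in (0, 1]."""
    zz, yy, xx = np.meshgrid(np.arange(grid_dim), np.arange(grid_dim),
                             np.arange(grid_dim), indexing="ij")
    d2 = (zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2
    return np.exp(-d2 / (2.0 * sigma**2))

# Hypothetical 32^3 CAD grid with a keypoint at the center voxel
hm = keypoint_heatmap(32, (16, 16, 16))
```

Such a heatmap gives the network a smooth regression target instead of a single hot voxel.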

Citation

If you use this dataset or code please cite:

@InProceedings{Avetisyan_2019_CVPR,
  author    = {Avetisyan, Armen and Dahnert, Manuel and Dai, Angela and Savva, Manolis and Chang, Angel X. and Niessner, Matthias},
  title     = {Scan2CAD: Learning CAD Model Alignment in RGB-D Scans},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}
