# Differentiable Point-based Inverse Rendering


This repository contains the implementation of the paper:

Differentiable Point-based Inverse Rendering

Hoon-Gyu Chung, Seokjun Choi, Seung-Hwan Baek

CVPR, 2024

## Abstract

We present differentiable point-based inverse rendering, DPIR, an analysis-by-synthesis method that processes images captured under diverse illuminations to estimate shape and spatially-varying BRDF. To this end, we adopt point-based rendering, eliminating the need for multiple samples per ray, typical of volumetric rendering, thus significantly enhancing the speed of inverse rendering. To realize this idea, we devise a hybrid point-volumetric representation for geometry and a regularized basis-BRDF representation for reflectance. The hybrid geometric representation enables fast rendering through point-based splatting while retaining the geometric details and stability inherent to SDF-based representations. The regularized basis-BRDF mitigates the ill-posedness of inverse rendering stemming from limited light-view angular samples. We also propose an efficient shadow detection method using point-based shadow map rendering. Our extensive evaluations demonstrate that DPIR outperforms prior works in terms of reconstruction accuracy, computational efficiency, and memory footprint. Furthermore, our explicit point-based representation and rendering enable intuitive geometry and reflectance editing.
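To make the basis-BRDF idea above concrete, here is a minimal, hypothetical PyTorch sketch: each point stores mixing weights over a small set of shared basis lobes, so per-point reflectance is constrained to the span of a few bases, which is what regularizes the otherwise ill-posed per-point estimation. The names and the simple specular lobe are illustrative assumptions, not DPIR's exact formulation.

```python
import torch

# Hypothetical sketch: per-point reflectance as a weighted combination of a
# few shared basis BRDF lobes (not the paper's exact model).
K = 4  # number of shared basis BRDFs (illustrative choice)

def basis_brdf(point_weights, normals, light_dir, view_dir, spec_exponents):
    """point_weights: (N, K) per-point mixing logits.
    normals, light_dir, view_dir: (N, 3) unit vectors.
    spec_exponents: (K,) specular exponents shared by all points."""
    weights = torch.softmax(point_weights, dim=-1)  # positive, sums to 1 per point
    half = torch.nn.functional.normalize(light_dir + view_dir, dim=-1)
    cos_h = (normals * half).sum(dim=-1, keepdim=True).clamp(min=1e-6)  # (N, 1)
    lobes = cos_h ** spec_exponents.view(1, K)  # (N, K) simple specular lobes
    return (weights * lobes).sum(dim=-1)        # (N,) reflectance per point

# Example: evaluate 1000 points under one light/view configuration.
N = 1000
w = torch.randn(N, K)
n = torch.nn.functional.normalize(torch.randn(N, 3), dim=-1)
l = torch.nn.functional.normalize(torch.randn(N, 3), dim=-1)
v = torch.nn.functional.normalize(torch.randn(N, 3), dim=-1)
rho = basis_brdf(w, n, l, v, torch.tensor([1.0, 10.0, 50.0, 200.0]))
```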

## Installation

We recommend using a Conda environment. Install PyTorch3D following its INSTALL.md.

```
conda create -n DPIR python=3.9
conda activate DPIR
conda install pytorch=1.13.0 torchvision pytorch-cuda=11.6 -c pytorch -c nvidia
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install numpy matplotlib tqdm imageio
pip install scikit-image plotly opencv-python pyhocon open3d lpips kornia icecream
conda install pytorch3d -c pytorch3d
```
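After installation, a quick sanity check (a suggestion, not part of the repository) can confirm that the key packages import and a CUDA device is visible:

```python
# Quick environment check: verify the pinned PyTorch build and PyTorch3D
# import correctly and that CUDA is available.
import torch
import pytorch3d

print("torch:", torch.__version__)            # expected: 1.13.0
print("pytorch3d:", pytorch3d.__version__)
print("CUDA available:", torch.cuda.is_available())
```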

## Dataset

We use a multi-view multi-light image dataset (the DiLiGenT-MV dataset) and a photometric image dataset.

The multi-view multi-light dataset was preprocessed following PS-NeRF and contains 5 objects.

The photometric dataset was rendered with Blender and contains 4 objects.

You can download the datasets from Google Drive and put them in the corresponding folders.
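The exact layout is defined by the Google Drive archives; based on the `--datadir` flags used below, one plausible arrangement (hypothetical, verify against the downloaded archives) is:

```
DPIR/
├── code_diligent/
│   └── DiLiGenT-MV/     # multi-view multi-light data (e.g., buddha)
└── code_photometric/
    └── Photometric/     # Blender-rendered photometric data (e.g., maneki)
```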

## Train and Evaluation

You can train on the multi-view multi-light dataset (DiLiGenT-MV) or the photometric dataset.

To train and evaluate on the DiLiGenT-MV dataset:

```
cd code_diligent
python main.py --conf confs/buddha.conf --datadir DiLiGenT-MV --dataname buddha --basedir output
python evaluation.py --conf confs/buddha.conf --datadir DiLiGenT-MV --dataname buddha --basedir output
```

To train and evaluate on the photometric dataset:

```
cd code_photometric
python main.py --conf confs/maneki.conf --datadir Photometric --dataname maneki --basedir output
python evaluation.py --conf confs/maneki.conf --datadir Photometric --dataname maneki --basedir output
```
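To sweep all five DiLiGenT-MV objects, a small driver script could look like the sketch below. It assumes each object ships with a conf file named like `confs/buddha.conf` (an assumption worth checking in `confs/`):

```python
# Hypothetical batch driver over the five DiLiGenT-MV objects. Assumes one
# conf file per object (confs/bear.conf, ...), mirroring confs/buddha.conf.
import subprocess

OBJECTS = ["bear", "buddha", "cow", "pot2", "reading"]  # DiLiGenT-MV objects

for obj in OBJECTS:
    for script in ["main.py", "evaluation.py"]:
        subprocess.run(
            ["python", script,
             "--conf", f"confs/{obj}.conf",
             "--datadir", "DiLiGenT-MV",
             "--dataname", obj,
             "--basedir", "output"],
            check=True,
            cwd="code_diligent",  # run from the DiLiGenT code folder
        )
```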

## Result

## Citation

If you find this work useful in your research, please consider citing:

```
@inproceedings{chung2024differentiable,
  title={Differentiable Point-based Inverse Rendering},
  author={Chung, Hoon-Gyu and Choi, Seokjun and Baek, Seung-Hwan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}
```

## Acknowledgement

Part of our code is based on previous works: point-radiance, PS-NeRF, and PhySG.

