
HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting

(Demo renderings: scenes Bathroom, Chair, Dog, Bear, Desk, Sponza)

 

Introduction

This is the official implementation of our NeurIPS 2024 paper "HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting". We have run the SfM algorithm to recalibrate the data. If you find this repo useful, please give it a star ⭐ and consider citing our paper. Thank you.

News

  • 2024.12.01 : We provide code for directly loading a trained model to run testing and render a spiral demo video. Welcome to have a try! 🤗
  • 2024.11.30 : We set up a leaderboard on the paper-with-code website! Welcome to submit your entry! 🏆
  • 2024.11.26 : Code, data recalibrated to the OpenCV convention, and training logs have been released. Feel free to check and have a try! 🤗
  • 2024.07.01 : Our HDR-GS has been accepted by NeurIPS 2024! Code will be released before the start date of the conference (2024.12.10). Stay tuned. 🚀
  • 2024.05.24 : Our paper is on arXiv now. Code, data, and training logs will be released. Stay tuned. 💫

Performance

Synthetic Datasets

(quantitative results figures)

Real Datasets

(quantitative results figures)

 

Interactive Results

(Interactive comparisons: scenes Bathroom, Chair, Diningroom, Dog, Sofa, Sponza)

 

1. Create Environment

We recommend using Conda to set up the environment.

```shell
# clone our repo
git clone https://github.com/caiyuanhao1998/HDR-GS --recursive

# Windows only
SET DISTUTILS_USE_SDK=1

# install the official environment of 3DGS
cd HDR-GS
conda env create --file environment.yml
conda activate hdr_gs
```

 

2. Prepare Dataset

Download our recalibrated and reorganized datasets from Google Drive or Baidu Disk (code: cyh2). Then put the downloaded datasets into the folder `data_hdr/` as follows:

```
|--data_hdr
    |--synthetic
        |--bathroom
            |--exr
                |--0.exr
                |--1.exr
                ...
            |--images
                |--0_0.png
                |--0_1.png
                ...
            |--sparse
                |--0
                    |--cameras.bin
                    |--images.bin
                    |--points3D.bin
                    |--points3D.ply
                    |--project.ini
        |--bear
        ...
    |--real
        |--flower
            |--input_images
                |--000_0.jpg
                |--000_1.jpg
                ...
            |--poses_bounds_exps.npy
            |--sparse
                |--0
                    |--cameras.bin
                    |--images.bin
                    |--points3D.bin
                    |--points3D.ply
                    |--project.ini
        |--computer
        ...
```
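Before training, it can help to verify that a downloaded scene folder matches the layout above. A minimal sketch (the helper name is ours; it checks only the top-level entries shown in the tree, since file names inside `exr/` and `images/` vary per scene):

```python
from pathlib import Path

def check_synthetic_scene(scene_root):
    """Return a list of entries missing from one synthetic scene folder."""
    expected = [
        "exr",                   # ground-truth HDR frames (*.exr)
        "images",                # multi-exposure LDR frames (*_*.png)
        "sparse/0/cameras.bin",  # SfM camera intrinsics
        "sparse/0/images.bin",   # SfM camera extrinsics
        "sparse/0/points3D.bin", # initial point cloud (binary)
        "sparse/0/points3D.ply", # initial point cloud (PLY)
    ]
    root = Path(scene_root)
    return [e for e in expected if not (root / e).exists()]
```

An empty return value means the scene folder has all the entries the tree above requires.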

Note: The original datasets were collected by HDR-NeRF. However, their camera poses follow normalized device coordinates, which are not suitable for 3DGS. Besides, HDR-NeRF does not provide initial point clouds. We therefore ran a Structure-from-Motion algorithm to recalibrate the camera poses and generate the initial point clouds. We also organized the datasets according to the description in the HDR-NeRF paper, which differs from its official repo.
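The note above is about pose conventions: NeRF-style data uses an OpenGL-like camera frame (x right, y up, z pointing backward), while 3DGS pipelines follow the OpenCV convention (x right, y down, z pointing forward). The standard conversion between the two negates the camera y and z axes of each camera-to-world matrix. A sketch (this illustrates the generic axis flip only, not HDR-GS's exact recalibration, which re-runs SfM from scratch):

```python
import numpy as np

def nerf_to_opencv_pose(c2w):
    """Convert a 4x4 camera-to-world pose from the NeRF/OpenGL axis
    convention to the OpenCV convention by negating the camera's
    y and z axes (columns 1 and 2 of the rotation block)."""
    c2w = np.asarray(c2w, dtype=float).copy()
    c2w[:3, 1:3] *= -1  # flip the y and z columns
    return c2w
```

The flip is its own inverse, so applying it twice returns the original pose.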

 

3. Testing

We provide code for directly loading a trained model to run testing and render a spiral video. Please download our pre-trained weights for bathroom from Google Drive or Baidu Disk (code: cyh2) and then put them into the folder `pretrained_weights`.

```shell
# For synthetic scenes
python3 train_synthetic.py --config config/bathroom.yaml --eval --gpu_id 0 --syn --load_path pretrained_weights/bathroom --test_only
```

Besides, if you train a model with the config `bathroom.yaml`, you will get an output directory structured as:

```
|--output
    |--mlp
        |--bathroom
            |--exp-time
                |--point_cloud
                    |--interation_x
                        |--point_cloud.ply
                        |--tone_mapper.pth
                    ...
                |--test_set_vis
                |--videos
                |--cameras.json
                |--cfg_args
                |--input.ply
                |--log.txt
```

Then `--load_path` should be set to `output/mlp/bathroom/exp-time/point_cloud/interation_x`.
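The `--load_path` string can also be assembled programmatically when scripting over many scenes. A minimal sketch (the helper name and defaults are ours; the folder name `interation_x` is spelled exactly as it appears in the output tree above):

```python
import os

def make_load_path(scene="bathroom", exp_dir="exp-time", iteration=1):
    """Build the --load_path argument for a trained checkpoint,
    mirroring the output layout shown above."""
    return os.path.join("output", "mlp", scene, exp_dir,
                        "point_cloud", f"interation_{iteration}")
```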

4. Training

We provide training logs for your convenience in debugging. Please download them from Google Drive or Baidu Disk (code: cyh2).

You can run the `.sh` files by:

```shell
# For synthetic scenes
bash train_synthetic.sh

# For real scenes
bash train_real.sh
```

Or you can directly train on specific scenes as

```shell
# For synthetic scenes
python3 train_synthetic.py --config config/sponza.yaml --eval --gpu_id 0 --syn
python3 train_synthetic.py --config config/sofa.yaml --eval --gpu_id 0 --syn
python3 train_synthetic.py --config config/bear.yaml --eval --gpu_id 0 --syn
python3 train_synthetic.py --config config/chair.yaml --eval --gpu_id 0 --syn
python3 train_synthetic.py --config config/desk.yaml --eval --gpu_id 0 --syn
python3 train_synthetic.py --config config/diningroom.yaml --eval --gpu_id 0 --syn
python3 train_synthetic.py --config config/dog.yaml --eval --gpu_id 0 --syn
python3 train_synthetic.py --config config/bathroom.yaml --eval --gpu_id 0 --syn

# For real scenes
python3 train_real.py --config config/flower.yaml --eval --gpu_id 0
python3 train_real.py --config config/computer.yaml --eval --gpu_id 0
python3 train_real.py --config config/box.yaml --eval --gpu_id 0
python3 train_real.py --config config/luckycat.yaml --eval --gpu_id 0
```

 

5. Citation

```
@inproceedings{hdr_gs,
  title={HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting},
  author={Yuanhao Cai and Zihao Xiao and Yixun Liang and Minghan Qin and Yulun Zhang and Xiaokang Yang and Yaoyao Liu and Alan Yuille},
  booktitle={NeurIPS},
  year={2024}
}
```
