bebeal/mipnerf-pytorch

A reimplementation of mip-NeRF in PyTorch.

[Animation: NeRF → mip-NeRF]

Not exactly 1-to-1 with the official repo: we organized the code to our own liking (mostly how the datasets are structured, plus hyperparameter changes so the code runs on a consumer-level graphics card), made it more modular, and removed some repetitive code, but it achieves the same results.

Features

  • Can use Spherical or Spiral poses to generate videos for all 3 datasets (see the pose sketch after this list)
    • Spherical: video.mp4
    • Spiral: spiral.mp4
  • Depth and normals video renderings:
    • Depth: depth.mp4
    • Normals: normals.mp4
  • Can extract meshes: mesh_.mp4, mesh.mp4
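
For context, a spherical render path typically places the camera on a circle of fixed elevation around the scene, with every pose looking at the origin. Below is a minimal NumPy sketch of that idea; the function names and the look-at/axis conventions are illustrative assumptions, not the actual pose_utils.py API.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    # Camera-to-world matrix for a camera at `eye` looking at `target`
    # (OpenGL/NeRF convention: the camera looks down its -z axis).
    forward = normalize(target - eye)
    right = normalize(np.cross(forward, up))
    new_up = np.cross(right, forward)
    c2w = np.eye(4)
    c2w[:3, 0] = right
    c2w[:3, 1] = new_up
    c2w[:3, 2] = -forward
    c2w[:3, 3] = eye
    return c2w

def spherical_poses(radius=4.0, elevation_deg=30.0, n_frames=120):
    # Evenly spaced azimuths at a fixed elevation, all aimed at the origin.
    elev = np.deg2rad(elevation_deg)
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False):
        eye = radius * np.array([np.cos(theta) * np.cos(elev),
                                 np.sin(theta) * np.cos(elev),
                                 np.sin(elev)])
        poses.append(look_at(eye))
    return np.stack(poses)  # (n_frames, 4, 4)
```

A spiral path is the same idea with the radius and height varied smoothly over the frames instead of held fixed.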

Future Plans

In the future we plan on implementing/changing:

Installation/Running

  1. Create a conda environment using mipNeRF.yml
  2. Get the training data
    1. Run bash scripts/download_data.sh to download all 3 datasets: LLFF, Blender, and Multicam.
    2. Or individually run the bash script corresponding to a single dataset:
      • bash scripts/download_llff.sh to download LLFF
      • bash scripts/download_blender.sh to download Blender
      • bash scripts/download_multicam.sh to download Multicam (note this also downloads the Blender dataset, since Multicam is derived from it)
  3. Optionally change config parameters: edit the defaults in config.py or specify them with command line arguments
    • The default config is set up to run on a high-end consumer graphics card (~8-12GB)
  4. Run python train.py to train
    • python -m tensorboard.main --logdir=log to start tensorboard
  5. Run python visualize.py to render a video from the trained model
  6. Run python extract_mesh.py to extract a mesh from the trained model (a sketch of this step follows the list)
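
For reference on step 6: mesh extraction from a NeRF-style model usually means sampling the trained density field on a regular 3D grid and running marching cubes on the resulting volume. Here is a minimal sketch under that assumption; density_fn, the grid bound, and the iso-level threshold are illustrative, not the script's actual interface.

```python
import torch
from skimage import measure  # marching cubes from scikit-image

@torch.no_grad()
def extract_mesh(density_fn, resolution=256, bound=1.2, threshold=50.0):
    # Sample the trained density field on a regular grid over [-bound, bound]^3.
    xs = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"), dim=-1)
    sigma = density_fn(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    # Run marching cubes at the chosen density iso-level.
    verts, faces, normals, _ = measure.marching_cubes(sigma.cpu().numpy(), level=threshold)
    # Vertices come back in grid-index units; map them into world coordinates.
    verts = verts / (resolution - 1) * (2.0 * bound) - bound
    return verts, faces, normals
```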

Code Structure

I explain the specifics of the code in more detail here, but here is a basic rundown.

  • config.py: Specifies hyperparameters.
  • datasets.py: Base generic Dataset class + 3 default dataset implementations.
    • NeRFDataset: Base class that all datasets should inherit from.
    • Multicam: Used for multicam data as in the original mip-NeRF paper.
    • Blender: Used for the synthetic dataset as in the original NeRF.
    • LLFF: Used for the LLFF dataset as in the original NeRF.
  • loss.py: mip-NeRF loss; essentially just MSE, but also computes PSNR (see the sketch after this list).
  • model.py: mip-NeRF model; not as modular as the original authors' version, but its structure is easier to understand when laid out explicitly like this.
  • pose_utils.py: Various functions used to generate poses.
  • ray_utils.py: Various functions involving the rays the model takes as input; most are used within the model's forward function.
  • scheduler.py: mip-NeRF learning rate scheduler.
  • train.py: Trains a mip-NeRF model.
  • visualize.py: Creates the videos using a trained mip-NeRF.
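
Since loss.py amounts to MSE plus a PSNR readout, here is a minimal sketch of that pairing as mip-NeRF uses it; the 0.1 coarse-loss weight follows the paper, while the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def mipnerf_loss(coarse_rgb, fine_rgb, target, coarse_weight=0.1):
    # mip-NeRF trains one MLP at both levels, down-weighting the coarse term.
    return coarse_weight * F.mse_loss(coarse_rgb, target) + F.mse_loss(fine_rgb, target)

def mse_to_psnr(mse):
    # PSNR for images in [0, 1]: -10 * log10(MSE).
    return -10.0 * torch.log10(mse)
```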

mip-NeRF Summary

Here's a summary of how NeRF and mip-NeRF work that I wrote while originally writing this code.
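
The core change from NeRF is that mip-NeRF reasons about the conical frustum each pixel's ray sweeps out: the frustum is approximated by a Gaussian, and the usual positional encoding is replaced by its expectation under that Gaussian, the integrated positional encoding (IPE). A minimal sketch of IPE with a diagonal covariance, as in the paper (the function name is illustrative):

```python
import torch

def integrated_pos_enc(mean, var, min_deg=0, max_deg=16):
    # Expected sin/cos features of x ~ N(mean, diag(var)):
    # E[sin(s * x)] = sin(s * mean) * exp(-0.5 * s^2 * var), likewise for cos,
    # which smoothly zeroes out frequencies finer than the Gaussian's extent.
    scales = 2.0 ** torch.arange(min_deg, max_deg, dtype=mean.dtype)  # (L,)
    scaled_mean = mean[..., None, :] * scales[:, None]                # (..., L, 3)
    damping = torch.exp(-0.5 * var[..., None, :] * scales[:, None] ** 2)
    feats = torch.cat([torch.sin(scaled_mean) * damping,
                       torch.cos(scaled_mean) * damping], dim=-1)     # (..., L, 6)
    return feats.flatten(-2)                                          # (..., 6L)
```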

Results

All reported PSNRs are the average of the coarse and fine PSNR.

LLFF - Trex

[result images]

Video:
video_.mp4

Depth:
depth_.mp4

Normals:
normals_.mp4

Blender - Lego

[result images]

Video:

video.mp4

Depth:
depth.mp4

Normals:
normals.mp4

Multicam - Mic

[result images]

Video:

video_.mp4

Depth:
depth_.mp4

Normals:
normals.mp4

References/Contributions

