NeuRay
Rendered video without training on the scene.
Project page | Paper
Todo List
- Generalization models and rendering codes.
- Training of generalization models.
- Finetuning codes and finetuned models.
Usage
Setup
```shell
git clone git@github.com:liuyuan-pal/NeuRay.git
cd NeuRay
pip install -r requirements.txt
```
Dependencies:
- torch==1.7.1
- opencv_python==4.4.0
- tensorflow==2.4.1
- numpy==1.19.2
- scipy==1.5.2
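The pins above can be checked against the active environment with the standard library; a minimal sketch (the `REQUIRED` dict simply mirrors the list above and is not part of the repo):

```python
from importlib.metadata import version, PackageNotFoundError

# Pinned versions mirrored from requirements.txt (for illustration only)
REQUIRED = {
    "torch": "1.7.1",
    "opencv_python": "4.4.0",
    "tensorflow": "2.4.1",
    "numpy": "1.19.2",
    "scipy": "1.5.2",
}

def check_pins(required):
    """Return {package: (expected, installed-or-None)} for missing or mismatched pins."""
    problems = {}
    for pkg, expected in required.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            installed = None  # package not installed at all
        if installed != expected:
            problems[pkg] = (expected, installed)
    return problems
```

Running `check_pins(REQUIRED)` returns an empty dict when the environment matches the pins exactly.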
Download datasets and pretrained models
- Download the processed datasets: DTU-Test / LLFF / NeRF Synthetic.
- Download the pretrained models NeuRay-Depth and NeuRay-CostVolume.
- Organize the datasets and models as follows:
```
NeuRay
|-- data
    |-- model
        |-- neuray_gen_cost_volume
        |-- neuray_gen_depth
    |-- dtu_test
    |-- llff_colmap
    |-- nerf_synthetic
```
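A quick way to confirm the layout before rendering is a small check script; a sketch (the `EXPECTED` paths mirror the tree above, and the `root` argument is our own convention):

```python
import os

# Expected sub-directories, mirrored from the tree above
EXPECTED = [
    "data/model/neuray_gen_cost_volume",
    "data/model/neuray_gen_depth",
    "data/dtu_test",
    "data/llff_colmap",
    "data/nerf_synthetic",
]

def missing_dirs(root="."):
    """Return the expected sub-directories that do not exist under root."""
    return [p for p in EXPECTED if not os.path.isdir(os.path.join(root, p))]
```

An empty return value means the layout matches the tree above.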
Render
```shell
# render on lego of the NeRF synthetic dataset
# (use nerf_synthetic/lego/black_400 for the lower resolution)
python render.py --cfg configs/gen/neuray_gen_depth.yaml \
                 --database nerf_synthetic/lego/black_800 \
                 --pose_type eval

# render on snowman of the DTU dataset
# (use dtu_test/snowman/black_400 for the lower resolution)
python render.py --cfg configs/gen/neuray_gen_depth.yaml \
                 --database dtu_test/snowman/black_800 \
                 --pose_type eval

# render on fern of the LLFF dataset
# (use llff_colmap/fern/low for the lower resolution)
python render.py --cfg configs/gen/neuray_gen_depth.yaml \
                 --database llff_colmap/fern/high \
                 --pose_type eval
```
The rendered images are saved in `data/render/<database_name>/<renderer_name>-pretrain-eval/`.
If `pose_type` is `eval`, ground-truth images are also generated in `data/render/<database_name>/gt`.
Explanation of the parameters of `render.py`:
- `cfg` is the path to the renderer config file; `configs/gen/neuray_gen_cost_volume.yaml` can also be used.
- `database` is a database name of the form `<dataset_name>/<scene_name>/<scene_setting>`.
  - `nerf_synthetic/lego/black_800` means the scene "lego" from the "nerf_synthetic" dataset, rendered with a "black" background at 800x800 resolution.
  - `dtu_test/snowman/black_800` means the scene "snowman" from the "dtu_test" dataset, rendered with a "black" background at 800x600 resolution.
  - `llff_colmap/fern/high` means the scene "fern" from the "llff_colmap" dataset at "high" resolution (1008x756). We may also use `llff_colmap/fern/low`, which renders at "low" resolution (504x378).
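The `<dataset_name>/<scene_name>/<scene_setting>` convention can be split mechanically; a minimal sketch (the function name is ours, not part of the repo's API):

```python
def parse_database(name):
    """Split '<dataset_name>/<scene_name>/<scene_setting>' into its three parts."""
    dataset, scene, setting = name.split("/")
    return {"dataset": dataset, "scene": scene, "setting": setting}
```

For example, `parse_database("nerf_synthetic/lego/black_800")` yields the dataset "nerf_synthetic", the scene "lego", and the setting "black_800".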
Evaluation
```shell
# psnr/ssim/lpips will be printed on screen
python eval.py --dir_pr data/render/<database_name>/<renderer_name>-pretrain-eval \
               --dir_gt data/render/<database_name>/gt
```
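Of the reported metrics, PSNR reduces to a log of the mean squared error between rendered and ground-truth images. A minimal NumPy sketch (not the repo's implementation; assumes images scaled to [0, 1]):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM and LPIPS require windowed statistics and a pretrained network respectively, so `eval.py` relies on dedicated libraries for those.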
```shell
# example of evaluation on "fern";
# the images must already be rendered in "dir_pr"
python eval.py --dir_pr data/render/llff_colmap/fern/high/neuray_gen_depth-pretrain-eval \
               --dir_gt data/render/llff_colmap/fern/high/gt
```
Render on custom scenes
To render on custom scenes, please refer to this
Generalization model training
Download training sets
- Download Google Scanned Objects, RealEstate10K, the Spaces Dataset, and the LLFF released scenes from IBRNet.
- Download COLMAP depth for the forward-facing scenes here.
- Download the DTU training images here.
- Download COLMAP depth for the DTU training images here.
Rename the directories and organize the datasets like:
```
NeuRay
|-- data
    |-- google_scanned_objects
    |-- real_estate_dataset    # RealEstate10K subset
    |-- real_iconic_noface
    |-- spaces_dataset
    |-- colmap_forward_cache
    |-- dtu_train
    |-- colmap_dtu_cache
```
Train generalization model
Train the model with NeuRay initialized from depths estimated by COLMAP:
```shell
python run_training.py --cfg configs/train/gen/neuray_gen_depth_train.yaml
```
Train the model with NeuRay initialized from constructed cost volumes:
```shell
python run_training.py --cfg configs/train/gen/neuray_gen_cost_volume_train.yaml
```
Models will be saved at `data/model`. Every 10k steps, the model is validated and the validation images are saved at `data/vis_val/<model_name>-<val_set_name>`.
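The every-10k-steps validation cadence amounts to a modulus check inside the training loop; a schematic sketch (names are illustrative, not the repo's code):

```python
VAL_INTERVAL = 10_000  # validate every 10k steps, per the note above

def validation_steps(total_steps, interval=VAL_INTERVAL):
    """Return the steps at which validation (and image dumping) would trigger."""
    return [step for step in range(1, total_steps + 1) if step % interval == 0]
```

For a 25k-step run this yields validations at steps 10000 and 20000.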
Render with trained models
```shell
python render.py --cfg configs/gen/neuray_gen_depth_train.yaml \
                 --database llff_colmap/fern/high \
                 --pose_type eval
```
Scene-specific finetuning
Finetuning
```shell
# finetune on lego from the NeRF synthetic dataset
python run_training.py --cfg configs/train/ft/neuray_ft_depth_lego.yaml

# finetune on fern from the LLFF dataset
python run_training.py --cfg configs/train/ft/neuray_ft_depth_fern.yaml

# finetune on birds from the DTU dataset
python run_training.py --cfg configs/train/ft/neuray_ft_depth_birds.yaml

# finetune the model initialized from cost volumes
python run_training.py --cfg configs/train/ft/neuray_ft_cv_lego.yaml
```
The finetuned models will be saved at `data/model`.
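The finetuning runs above differ only in the config path, so the command lines can be assembled programmatically; a small sketch (the scene list and helper name mirror the examples above and are our own, not repo API):

```python
SCENES = ["lego", "fern", "birds"]  # example scenes from above

def finetune_cmd(scene, init="depth"):
    """Build the run_training.py command line for one scene.

    init is "depth" (COLMAP-depth init) or "cost_volume" (cost-volume init),
    matching the neuray_ft_depth_* / neuray_ft_cv_* config naming above.
    """
    prefix = "cv" if init == "cost_volume" else "depth"
    cfg = f"configs/train/ft/neuray_ft_{prefix}_{scene}.yaml"
    return ["python", "run_training.py", "--cfg", cfg]
```

Each returned list can be passed to `subprocess.run` to launch a finetuning job.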
Finetuned models
We provide models finetuned on the NeRF synthetic dataset here.
Download the models and organize the files like:
```
NeuRay
|-- data
    |-- model
        |-- neuray_ft_lego_pretrain
        |-- neuray_ft_chair_pretrain
        ...
```
Render with finetuned models
```shell
# render on lego of the NeRF synthetic dataset
python render.py --cfg configs/ft/neuray_ft_lego_pretrain.yaml \
                 --database nerf_synthetic/lego/black_800 \
                 --pose_type eval \
                 --render_type ft
```
Code explanation
We provide an explanation of the variable naming conventions here to make the code more readable.
Acknowledgements
In this repository, we have used code and datasets from the following repositories. We thank all the authors for sharing their great code and datasets.
- IBRNet
- MVSNet-official and MVSNet-kwea123
- BlendedMVS
- NeRF-official and NeRF-torch
- MVSNeRF
- PixelNeRF
- COLMAP
- IDR
- RealEstate10K
- DeepView
- Google Scanned Objects
- LLFF
- DTU
Citation
```
@inproceedings{liu2022neuray,
  title={Neural Rays for Occlusion-aware Image-based Rendering},
  author={Liu, Yuan and Peng, Sida and Liu, Lingjie and Wang, Qianqian and Wang, Peng and Theobalt, Christian and Zhou, Xiaowei and Wang, Wenping},
  booktitle={CVPR},
  year={2022}
}
```

