The hash encoder will be compiled on the fly when running the code.
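To illustrate what the hash encoder computes, here is a minimal NumPy sketch of a multi-resolution hash grid lookup in the style of Instant-NGP. It is a simplified, hypothetical version (nearest-vertex lookup instead of trilinear interpolation); the actual CUDA encoder may use different table sizes and parameters:

```python
import numpy as np

# Large primes for spatial hashing, as in the Instant-NGP paper.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_grid_features(x, tables, base_res=16, growth=1.5):
    """Look up per-level features for a single 3D point x in [0, 1]^3.

    `tables` is a list of (T, F) feature arrays, one per resolution level.
    Uses nearest-vertex lookup (no trilinear interpolation) to keep the
    sketch short.
    """
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        # Integer grid coordinate of the nearest vertex at this resolution.
        coord = np.minimum(np.round(x * res).astype(np.uint64), res)
        # Spatial hash: XOR of coordinates times primes, folded into table size T.
        h = np.bitwise_xor.reduce(coord * PRIMES) % np.uint64(len(table))
        feats.append(table[h])
    return np.concatenate(feats)

rng = np.random.default_rng(0)
tables = [rng.standard_normal((2**14, 2)).astype(np.float32) for _ in range(4)]
f = hash_grid_features(np.array([0.3, 0.7, 0.1]), tables)
print(f.shape)  # (8,) -- 4 levels x 2 features each
```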
## Dataset
To download the preprocessed data, run the following script. The data for DTU, Replica, and Tanks and Temples are adapted from [VolSDF](https://github.com/lioryariv/volsdf), [Nice-SLAM](https://github.com/cvg/nice-slam), and [Vis-MVSNet](https://github.com/jzhangbs/Vis-MVSNet), respectively.
## Replica

We also provide a script for evaluating all Replica scenes:
```
cd replica_eval
python evaluate.py
```
Please check the script for more details.
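Replica-style surface evaluation typically reports a Chamfer distance between the reconstructed and ground-truth geometry. The following is a hedged sketch of that metric over sampled point clouds, not the repo's exact protocol (the actual script likely samples points from the meshes and uses a KD-tree):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    Brute-force pairwise distances for clarity; real evaluation code
    would use a KD-tree for nearest neighbours.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M)
    acc = d.min(axis=1).mean()   # accuracy: reconstruction -> ground truth
    comp = d.min(axis=0).mean()  # completeness: ground truth -> reconstruction
    return (acc + comp) / 2.0

rng = np.random.default_rng(0)
gt = rng.uniform(size=(500, 3))
noisy = gt + rng.normal(scale=0.01, size=gt.shape)
print(chamfer_distance(gt, noisy))  # small positive value
```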
## ScanNet
```
cd scannet_eval
python evaluate.py
```
Please check the script for more details.
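ScanNet-style evaluations also commonly report an F-score at a fixed distance threshold. A minimal sketch of that metric (assumed; check `scannet_eval/evaluate.py` for the exact threshold and sampling protocol used here):

```python
import numpy as np

def f_score(pred, gt, tau=0.05):
    """F-score at distance threshold tau between point sets (N, 3) and (M, 3)."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = (d.min(axis=1) < tau).mean()  # predicted points near ground truth
    recall = (d.min(axis=0) < tau).mean()     # ground-truth points near a prediction
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

pts = np.random.default_rng(0).uniform(size=(200, 3))
print(f_score(pts, pts))  # 1.0 for identical clouds
```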
## Tanks and Temples
You need to submit the reconstruction results to the [official evaluation server](https://www.tanksandtemples.org); please follow their guidance. We also provide an example of our submission [here](https://drive.google.com/file/d/1Cr-UVTaAgDk52qhVd880Dd8uF74CzpcB/view?usp=sharing) for reference.
# Custom dataset
We provide an example of how to preprocess ScanNet data into the MonoSDF format. First, run the script to subsample training images, normalize camera poses, and so on:
```
cd preprocess
python scannet_to_monosdf.py
```
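The pose-normalization step above can be sketched as follows: center the camera positions and scale them into a unit sphere. This is a simplified, hypothetical version of what a script like `scannet_to_monosdf.py` might do, not its exact implementation:

```python
import numpy as np

def normalize_poses(c2w, radius=1.0):
    """Center camera-to-world poses and scale the camera centers into a sphere.

    c2w: (N, 4, 4) camera-to-world matrices. Returns the normalized poses
    and the 4x4 similarity transform that was applied to the centers.
    """
    centers = c2w[:, :3, 3]
    offset = centers.mean(axis=0)
    scale = radius / np.linalg.norm(centers - offset, axis=1).max()
    out = c2w.copy()
    out[:, :3, 3] = (centers - offset) * scale  # rotations left unchanged
    transform = np.eye(4)
    transform[:3, :3] *= scale
    transform[:3, 3] = -offset * scale
    return out, transform

rng = np.random.default_rng(1)
c2w = np.tile(np.eye(4), (10, 1, 1))
c2w[:, :3, 3] = rng.uniform(-5.0, 5.0, size=(10, 3))
out, T = normalize_poses(c2w)
print(np.linalg.norm(out[:, :3, 3], axis=1).max())  # 1.0
```

Keeping the similarity transform around is useful later for mapping the reconstructed mesh back into the original world coordinates.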
Then, we can extract monocular depths and normals (please install [omnidata model](https://github.com/EPFL-VILAB/omnidata) before running the command):
# Acknowledgements

This project is built upon [VolSDF](https://github.com/lioryariv/volsdf). We use pretrained [Omnidata](https://omnidata.vision) models for monocular depth and normal extraction. The CUDA implementation of multi-resolution hash encoding is based on [torch-ngp](https://github.com/ashawkey/torch-ngp). Evaluation scripts for DTU, Replica, and ScanNet are taken from [DTUeval-python](https://github.com/jzhangbs/DTUeval-python), [Nice-SLAM](https://github.com/cvg/nice-slam), and [manhattan-sdf](https://github.com/zju3dv/manhattan_sdf), respectively. We thank all the authors for their great work and repos.
# Citation
If you find our code or paper useful, please cite:
```bibtex
@article{Yu2022MonoSDF,
author = {Yu, Zehao and Peng, Songyou and Niemeyer, Michael and Sattler, Torsten and Geiger, Andreas},
title = {MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction},
journal = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2022},
}
```