final project update
BachiLi committed Jan 5, 2024
1 parent 6bb559e commit 10402de
Showing 2 changed files with 14 additions and 1 deletion.
5 changes: 4 additions & 1 deletion handouts/final_project.tex
@@ -63,6 +63,9 @@ \section{Implementation ideas}
\paragraph{Faster null-scattering using space partitioning.}
Implement Yue et al.'s kd-tree data structure~\cite{Yue:2010:UAS} or uniform grid~\cite{Yue:2011:TOS} to obtain tighter upper bounds on the extinction coefficient. As a bonus, implement progressive null tracking~\cite{Misso:2023:PNT}.
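
To make the idea concrete, here is a minimal sketch of delta tracking against a uniform grid of per-voxel majorants (Python; names such as \texttt{build\_majorant\_grid} are illustrative, not from the papers, and the point-sampled bounds below are not strictly conservative, unlike the analytic per-voxel bounds you would use in practice):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def build_majorant_grid(sigma_t, lo, hi, res, n=16):
    # Estimate a per-voxel majorant by point sampling; a real
    # implementation would bound sigma_t analytically per voxel.
    grid = np.empty((res, res, res))
    cell = (hi - lo) / res
    for idx in np.ndindex(res, res, res):
        corner = lo + np.array(idx) * cell
        pts = corner + rng.random((n, 3)) * cell
        grid[idx] = 1.05 * max(sigma_t(p) for p in pts)
    return grid

def sample_collision(sigma_t, grid, lo, hi, res, o, d, t_max):
    # Delta tracking with per-voxel majorants: each tentative step is
    # sampled against the local bound, clamped to the voxel boundary.
    cell = (hi - lo) / res
    t, eps = 1e-6, 1e-6
    while t < t_max:
        p = o + t * d
        idx = np.clip(((p - lo) / cell).astype(int), 0, res - 1)
        maj = grid[tuple(idx)]
        # Distance to the exit of the current voxel along d.
        faces = lo + (idx + (d > 0)) * cell
        t_axis = np.where(d != 0, (faces - p) / np.where(d != 0, d, 1.0), np.inf)
        t_exit = t + max(np.min(t_axis), 0.0) + eps
        if maj <= 0.0:
            t = t_exit  # empty voxel: skip it entirely
            continue
        t_cand = t - np.log(1.0 - rng.random()) / maj
        if t_cand >= min(t_exit, t_max):
            t = t_exit  # no tentative collision inside this voxel
            continue
        t = t_cand
        if rng.random() * maj < sigma_t(o + t * d):
            return t    # real collision
        # null collision: keep tracking against the local majorant
    return None         # the ray escaped the medium

# Example: a Gaussian density blob inside the unit cube.
sigma = lambda p: 5.0 * np.exp(-20.0 * np.sum((p - 0.5) ** 2))
grid = build_majorant_grid(sigma, np.zeros(3), np.ones(3), res=8)
t = sample_collision(sigma, grid, np.zeros(3), np.ones(3), 8,
                     np.array([0.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0]), 1.0)
\end{verbatim}
Clamping each tentative step to the voxel boundary is what keeps the local majorant valid for the sampled segment; the memorylessness of the exponential makes restarting at the boundary unbiased.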

\paragraph{Emissive volumes.}
Add emissive volumes to your Homework 2 code. You might want to read the Pixar volume sampling paper by Villemin and Hery~\cite{Villemin:2013:PIF}.
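
As a starting point, here is a minimal sketch (not Villemin and Hery's sampling method; the densities and parameters are illustrative) of how emission enters a delta-tracking estimator in a purely absorbing, emissive medium:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def emitted_radiance(o, d, t_max, sigma_a, Le, majorant, L_bg):
    # Delta tracking in an absorbing, emissive medium: a real collision
    # returns the local emission, escaping returns the background. This
    # estimates  int_0^tmax T(t) sigma_a(t) Le(t) dt + T(t_max) L_bg.
    t = 0.0
    while True:
        t -= np.log(1.0 - rng.random()) / majorant
        if t >= t_max:
            return L_bg                       # escaped the medium
        p = o + t * d
        if rng.random() * majorant < sigma_a(p):
            return Le(p)                      # real (absorption) collision
        # null collision: keep tracking

# Example: a hot spherical core whose absorption and emission fall off
# radially (a crude stand-in for a flame).
sigma_a = lambda p: 4.0 * np.exp(-10.0 * (p @ p))
Le      = lambda p: np.array([1.0, 0.5, 0.1]) * np.exp(-5.0 * (p @ p))
L = emitted_radiance(np.array([-2.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                     4.0, sigma_a, Le, majorant=4.0, L_bg=np.zeros(3))
\end{verbatim}
With scattering added, the same recipe applies: at a real collision, split between emission/absorption and scattering with probabilities $\sigma_a/\sigma_t$ and $\sigma_s/\sigma_t$.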

\paragraph{Efficient transmittance estimation.}
Implement Kettunen et al.'s unbiased ray-marching transmittance estimator~\cite{Kettunen:2021:URT}.
You can extend your volumetric path tracer from Homework 2 for this.
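
For intuition, here is a toy power-series sketch in that spirit: ray march a control optical depth, then correct the residual with an unbiased Poisson-series estimator. Kettunen et al.'s actual estimator chooses the expansion and sample allocation far more carefully, so treat everything below as illustrative:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def transmittance(sigma, t_max, n_march=16, rate=2.0):
    # Deterministic ray-marched control variate for the optical depth tau.
    ts = (np.arange(n_march) + 0.5) * (t_max / n_march)
    tau_cv = sum(sigma(t) for t in ts) * (t_max / n_march)
    # Poisson-series correction of the residual: with N ~ Poisson(rate) and
    # X_i = tau_cv - t_max * sigma(u_i) unbiased for (tau_cv - tau),
    # exp(rate) * prod_i (X_i / rate) is unbiased for exp(tau_cv - tau).
    prod = 1.0
    for _ in range(rng.poisson(rate)):
        u = rng.random() * t_max
        prod *= (tau_cv - t_max * sigma(u)) / rate
    return np.exp(-tau_cv) * np.exp(rate) * prod  # unbiased for exp(-tau)

# Example: heterogeneous extinction along the ray.
sigma = lambda t: 0.8 + 0.5 * np.sin(3.0 * t) ** 2
print(np.mean([transmittance(sigma, 2.0) for _ in range(100_000)]))
\end{verbatim}
Individual estimates are signed and can fall outside $[0, 1]$; only the mean is meaningful.
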
@@ -171,7 +174,7 @@ \section{Research project ideas}
Current rendering denoisers are usually trained separately from the rendering algorithms (in particular, they are usually trained on the output of a standard path tracer). Can we train the denoiser and the sampling algorithm jointly so that they compensate for each other? You might want to read Bako et al.'s Offline Deep Importance Sampling~\cite{Bako:ODI:2019}.

\paragraph{Learned multiple importance sampling.}
- While optimal importance sampling~\cite{Kondapaneni:2019:OMI} has been shown to be optimal variance-wise, it has a few drawbacks: 1) it is computationally expensive, 2) it is only optimal under linear combinations of existing contributions, and 3) it is only optimal for unbiased rendering -- it does not optimize the mean squared error (bias squared plus variance). Therefore, it is tempting to simply learn multiple importance sampling: take a few features as input (e.g., PDFs, roughness, curvature, ambient occlusion) and train a parametric function (it does not need to be a neural network) to combine your rendering samples. For this project, you don't need to implement it in a renderer to start; you can just try it on some test functions.
+ While optimal multiple importance sampling~\cite{Kondapaneni:2019:OMI} has been shown to be optimal variance-wise, it has a few drawbacks: 1) it is computationally expensive, 2) it is only optimal under linear combinations of existing contributions, and 3) it is only optimal for unbiased rendering -- it does not optimize the mean squared error (bias squared plus variance). Therefore, it is tempting to simply learn multiple importance sampling: take a few features as input (e.g., PDFs, roughness, curvature, ambient occlusion) and train a parametric function (it does not need to be a neural network) to combine your rendering samples. For this project, you don't need to implement it in a renderer to start; you can just try it on some test functions.
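
Here is a toy version of that experiment (an illustrative setup, not a prescribed method): two techniques sample a 1D integrand, and a two-parameter logistic over the log pdf ratio plays the role of the learned combination weight, compared against the balance heuristic:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

# Integrand and two sampling techniques for a 1D toy experiment.
f  = lambda x: np.exp(-0.5 * ((x - 1.0) / 0.3) ** 2)
p1 = lambda x: np.exp(-0.5 * ((x - 1.2) / 0.4) ** 2) / (0.4 * np.sqrt(2 * np.pi))
s1 = lambda n: 1.2 + 0.4 * rng.standard_normal(n)
p2 = lambda x: np.where(x >= 0.0, np.exp(-x), 0.0)
s2 = lambda n: rng.exponential(1.0, n)

def weight1(x, a, b):
    # Learned weight for technique 1: a logistic over the log pdf ratio.
    z = a * np.log(p1(x) / np.maximum(p2(x), 1e-12)) + b
    w = 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))
    # Technique 2 cannot sample where p2 = 0, so force w1 = 1 there.
    return np.where(p2(x) > 0.0, w, 1.0)

def learned_mis(theta, n):
    a, b = theta
    x1, x2 = s1(n), s2(n)
    return (weight1(x1, a, b) * f(x1) / p1(x1)
            + (1.0 - weight1(x2, a, b)) * f(x2) / np.maximum(p2(x2), 1e-12))

def balance_mis(n):
    x1, x2 = s1(n), s2(n)
    return (p1(x1) / (p1(x1) + p2(x1)) * f(x1) / p1(x1)
            + p2(x2) / (p1(x2) + p2(x2)) * f(x2) / np.maximum(p2(x2), 1e-12))

# "Training": a coarse grid search over the two parameters, minimizing
# empirical variance (gradient descent would work just as well).
best = min(((a, b) for a in np.linspace(-2, 2, 21)
            for b in np.linspace(-3, 3, 21)),
           key=lambda th: learned_mis(th, 4096).var())

print("learned var:", learned_mis(best, 100_000).var())
print("balance var:", balance_mis(100_000).var())
\end{verbatim}
Any weights that sum to one pointwise keep the estimator unbiased, so the fit can focus purely on variance.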

\paragraph{Neural mutation for Metropolis light transport.}
Neural networks have been shown to be helpful for importance sampling in a Monte Carlo path tracer~\cite{Muller:2019:NIS}.
10 changes: 10 additions & 0 deletions handouts/refs.bib
@@ -300,6 +300,16 @@ @inproceedings{Burley:2012:PBS
year = {2012}
}

@article{Villemin:2013:PIF,
author = {Villemin, Ryusuke and Hery, Christophe},
title = {Practical Illumination from Flames},
journal = {Journal of Computer Graphics Techniques},
volume = {2},
number = {2},
pages = {142--155},
year = {2013}
}

@inproceedings{Aila:2013:QMB,
title = {On quality metrics of bounding volume hierarchies},
author = {Aila, Timo and Karras, Tero and Laine, Samuli},
