
Proposed RFC Suggestion: Ground truth provider #42

Closed
Bindless-Chicken opened this issue May 18, 2022 · 8 comments
Labels
rfc-suggestion Request for Comments for a Suggestion

Comments

@Bindless-Chicken

Motivation

Currently, visual fidelity in O3DE is driven by existing state-of-the-art implementations. To grow O3DE to full maturity and use it as a platform to lead research and foster innovative development in the industry, we will need a way to validate our results against established ground truth solutions.

Today this requires tedious manual processing and precise mirroring of the setup in external packages to reproduce the result. During this process, small errors can be introduced, generating false negatives or positives in the results.

O3DE

Cornell box rendered in O3DE with diffuse direct illumination enabled

Mitsuba

The same Cornell box rendered using Mitsuba with one light bounce

Overview

The system proposed here will seamlessly integrate an offline renderer into O3DE, allowing quick and easy comparison of the rendered visuals.

Theory of operation

Scenes are created directly in O3DE and are exported from within the engine to the (offline) ground truth provider. The system is designed to operate with minimal user input and provide repeatable results to be used as ground truth comparison points to validate visual results.

The ground truth provider: Mitsuba

The system proposed here is designed around Mitsuba. Mitsuba is a well-established open source rendering system (https://github.com/mitsuba-renderer/mitsuba), which makes it a prime candidate for our purpose.

Mitsuba is a research-oriented rendering system in the style of PBRT, from which it derives much inspiration.

In comparison to other open source renderers, Mitsuba places a strong emphasis on experimental rendering techniques, such as path-based formulations of Metropolis Light Transport and volumetric modeling approaches. Thus, it may be of genuine interest to those who would like to experiment with such techniques that haven't yet found their way into mainstream renderers, and it also provides a solid foundation for research in this domain.

In many cases, physically-based rendering packages force the user to model scenes with the underlying algorithm (specifically: its convergence behavior) in mind. For instance, glass windows are routinely replaced with light portals […]. One focus of Mitsuba will be to develop path-space light transport algorithms, which handle such cases more gracefully.
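
For reference, producing a ground truth render from an exported scene can be scripted in a few lines. The snippet below is a minimal sketch assuming the Mitsuba 3 Python bindings (rather than the 0.x code base linked above); the file names are placeholders for whatever the O3DE exporter emits.

```python
# Minimal sketch: render an exported scene with Mitsuba 3's Python bindings
# and write the result to an EXR for later comparison.
import mitsuba as mi

mi.set_variant("scalar_rgb")                # CPU variant; GPU variants such as cuda_rgb also exist
scene = mi.load_file("exported_scene.xml")  # placeholder: scene produced by the O3DE exporter
image = mi.render(scene, spp=512)           # samples per pixel controls residual noise
mi.util.write_bitmap("ground_truth.exr", image)
```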

The O3DE scene exporter

The system is designed to function with minimal user input and process scenes automatically.

It will present itself as a scene-level component in which the user can set up export properties such as the camera configuration and the export path.

For the few Atom components where an automatic conversion cannot be achieved, specialized components could be implemented to provide the required properties.
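
To illustrate what the camera portion of such an export could look like, here is a sketch in the Mitsuba 3 scene-dictionary format. The input values are hypothetical placeholders standing in for what the scene-level component would collect from the O3DE camera; this is not an actual Atom API.

```python
# Sketch: build the Mitsuba sensor entry from exporter properties such as the
# camera transform, field of view, and output resolution.
import mitsuba as mi

mi.set_variant("scalar_rgb")

camera_position = [0.0, 1.0, 5.0]   # placeholder values read from the O3DE camera entity
camera_target   = [0.0, 1.0, 0.0]
fov_degrees     = 60.0
width, height   = 1920, 1080

sensor = {
    "type": "perspective",
    "fov": fov_degrees,
    "to_world": mi.ScalarTransform4f.look_at(
        origin=camera_position, target=camera_target, up=[0.0, 1.0, 0.0]
    ),
    "film": {"type": "hdrfilm", "width": width, "height": height},
    "sampler": {"type": "independent", "sample_count": 256},
}
```

This dictionary could then be embedded in a full scene dictionary passed to mi.load_dict(), or serialized to Mitsuba's XML scene format by the exporter.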

Discussion points

User defined materials

Most existing materials can be trivially represented by a model already available in Mitsuba, but representing user-defined materials can be quite complex. Evaluating the shader could be done by transpiling the AZSL into its C++ counterpart, or by requiring the user to provide a custom Mitsuba implementation for their material.
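
For the common case, the mapping could look roughly like the sketch below, which assumes Mitsuba 3's principled BSDF is a close enough match for standard metallic/roughness materials; the input dictionary is a hypothetical stand-in for values read from an O3DE material.

```python
# Sketch: translate a standard metallic/roughness material into a Mitsuba 3
# 'principled' BSDF dictionary suitable for mi.load_dict().
def to_mitsuba_bsdf(o3de_material: dict) -> dict:
    return {
        "type": "principled",
        "base_color": {"type": "rgb", "value": o3de_material["base_color"]},
        "roughness": o3de_material["roughness"],
        "metallic": o3de_material["metallic"],
    }

example = to_mitsuba_bsdf({"base_color": [0.8, 0.1, 0.1], "roughness": 0.4, "metallic": 0.0})
```

User-defined materials would still need one of the two routes above (AZSL transpilation or a hand-written Mitsuba implementation).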

LookDev library

In many cases, comparing a whole scene side by side is too broad a comparison. A specialized LookDev testing environment that can be exported to validate material properties independently of the other effects in a scene should exist, so users can test lights, GI bounce behavior, materials, or camera properties (maybe even post-processes such as DoF) in isolation.

Built-in metrics

A range of metrics exists for validating comparison results, a simple example being PSNR. However, an algorithm may be deliberately biased while retaining the highest perceptual quality. Showing several of these indicators gives the user an overview of whether an algorithm in O3DE has different strengths or weaknesses depending on the context in which it is measured.
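
As an example of one such metric, here is a minimal PSNR implementation using NumPy (a sketch, not tied to any existing O3DE tooling):

```python
# Minimal PSNR: compares two images of identical shape.
# 'peak' is the maximum possible pixel value (1.0 for normalized images, 255.0 for 8-bit).
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((peak * peak) / mse)
```

Perceptual metrics (e.g. SSIM or FLIP) could be reported alongside it, since a deliberately biased algorithm can score poorly on PSNR while still looking perceptually correct.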

@galibzon
Collaborator

Need to follow up with the TSC.

@santorac
Contributor

santorac commented May 18, 2022

If license differences with Mitsuba are a concern, possibly consider LuxCore as an alternative, because it is Apache 2.0 as well. (Suggested by @rgba16f)

@RoaldFre

I have looked into LuxCoreRender and tinkered with some 'white furnace' scenes as an initial validation check. These are scenes where every object has a purely diffuse material with no absorption (i.e. 'matte white'). If you put a diffuse area light or environment light in such a scene, the analytical solution is known: it should be as bright as that light source every way you look (think of a path tracer with perfect importance sampling of the diffuse BRDF: light keeps bouncing around on the surfaces without absorption and with a Monte Carlo weight of 1 until it hits a light source and returns its emitted radiance).

This works OK in simple scenes with only a constant environment light, but I cannot get it to give the correct results in scenes with area lights -- it sometimes even breaks energy conservation by having areas that are brighter than the light source(s) themselves. (Note: I couldn't seem to set an unlimited max path depth in LuxCoreRender; these results are with a max path depth of 1k.)
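
For reference, a furnace check of this kind is easy to automate once the render is available as an array; the sketch below uses NumPy, with the emitter radiance and tolerance as placeholder inputs.

```python
# Sketch of a white furnace check: in a closed, lossless, purely diffuse scene
# every pixel should converge to the emitter radiance, and no region should end
# up noticeably brighter than it (that would indicate an energy conservation problem).
import numpy as np

def furnace_check(image: np.ndarray, emitter_radiance: float, tol: float = 0.01) -> None:
    rel_error = np.abs(image - emitter_radiance) / emitter_radiance
    print(f"max relative deviation: {rel_error.max():.4f}")
    if (image > emitter_radiance * (1.0 + tol)).any():
        print("energy conservation violated: pixels brighter than the light source")
```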

Inside a closed Cornell box with spherical light source (correct solution: entire image as bright as the light source):
luxcorerender_insideClosedBox_1kPathLength

Classical 'open' Cornell box (without front wall) with box-shaped area light and constant environment light (there seems to be a dark view-dependent artefact on the light source, and the interior of the box is brighter than the light sources):
luxcorerender_openBox_envLightAndExtrudedLightsource

Similar setup, but with a spherical light source inside the Cornell box (energy violation is more pronounced here):
luxcorerender_openBox_envLightAndSpherMeshLight

This GH issue also talks about incorrect lighting results: LuxCoreRender/LuxCore#580
Someone asks about the physical correctness here: LuxCoreRender/LuxCore#588

IMO, this currently makes LuxCoreRender unsuitable as a ground truth provider: you don't want a situation where you are second guessing the validity of your ground truth validation result itself.

@rgba16f
Contributor

rgba16f commented May 27, 2022

Is there any possibility that we could make a plugin for Mitsuba so it could load O3DE scenes?

@galibzon
Collaborator

galibzon commented Jun 1, 2022

@RoaldFre thanks a lot for your feedback. It seems, at the moment, using Mitsuba is not completely ruled out, as we already use GPL projects like gcc, etc. I'm still navigating the legal discussion.

@thefranke

The outcome from the last meeting was that as long as Mitsuba code does not go into O3DE and it remains an exporter that you can use with Mitsuba, it should be fine. Please correct me if I'm wrong, @galibzon.

@galibzon
Collaborator

@thefranke, correct. I got confirmation from the legal department that as long as we don't modify code in Mitsuba, we are good to go. I'll announce our plan to use Mitsuba as the ground truth provider in the Technical Steering Committee meeting, which occurs tomorrow, 6/28, at 8am PT on Discord.

@galibzon
Collaborator

Closing this issue as approved. We'll use Mitsuba as the ground truth provider. This also means that, at this time, we cannot contribute code to Mitsuba because of its LGPL 3.0 license. But we can create scripts, etc., that can export scene data in a format compatible with Mitsuba.
