B3D Upscaling Algorithm Test Suite

iroboto

Daft Funk
Hello all,

As we move into more AI-based rendering, I think it's becoming clear that we need new ways to break down and test how these algorithms work. We are looking to have a community discussion on ideas and eventually build out a git repo with code to get it going. This will be created under the B3D banner; we will need to set some things up, but please get involved if you can help in any way.

Context:
Build a suite of test tools to decompose how well each algorithm operates - DLSS4, 3, 2, 1, FSR, PSSR, etc.
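As a starting point for "how well each algorithm operates", here is a minimal sketch of one possible per-frame score against a ground-truth render, using plain PSNR. The frame format (float arrays in [0, 1], shape H x W x 3) and the synthetic data are assumptions for illustration; a real suite would also want perceptual and temporal metrics.

```python
import numpy as np

def psnr(ground_truth: np.ndarray, upscaled: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to ground truth."""
    mse = np.mean((ground_truth - upscaled) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(1.0 / mse)  # peak value is 1.0 for [0, 1] frames

# Synthetic data standing in for a real ground-truth / upscaled frame pair:
rng = np.random.default_rng(0)
gt = rng.random((1080, 1920, 3))
noisy = np.clip(gt + rng.normal(0.0, 0.01, gt.shape), 0.0, 1.0)
print(f"PSNR: {psnr(gt, noisy):.1f} dB")
```

A single scalar like this obviously won't capture ghosting or flicker, but it gives the repo a baseline number to regress against.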
 
The question of how to get ground-truth data came up earlier; let's take a stab at that first.

Off the top of my head, I would say we have these options:
* write your own deterministic renderer, integrate the technologies, and do the comparison on the fly
* acquire a VFX project, render out the frames, and build a small app replaying the "movie" with the technologies
* pick a game supporting DirectSR, hack the DLL and redirect its inputs to disk once, then proceed as above
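For the second and third options, the replay side could be as simple as the sketch below: frames (and, for the DLL route, the redirected upscaler inputs) have been dumped to disk, and we replay them through each technique and score against the ground-truth render. The file naming scheme, `.npy` dump format, and the toy nearest-neighbour "upscaler" are all assumptions for illustration.

```python
from pathlib import Path
import tempfile
import numpy as np

def replay(dump_dir: Path, upscale):
    """Score each dumped low-res frame against its ground-truth pair (MSE)."""
    scores = []
    for lo in sorted(dump_dir.glob("lowres_*.npy")):
        hi = dump_dir / lo.name.replace("lowres", "truth")
        up = upscale(np.load(lo))  # plug in DLSS / FSR / PSSR / ... here
        scores.append(float(np.mean((np.load(hi) - up) ** 2)))
    return scores

def nearest_2x(img):
    """Stand-in upscaler: 2x nearest-neighbour."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

# Toy demo: dump two frame pairs, then replay them through the stand-in.
with tempfile.TemporaryDirectory() as d:
    d = Path(d)
    rng = np.random.default_rng(1)
    for i in range(2):
        truth = rng.random((8, 8, 3)).astype(np.float32)
        np.save(d / f"truth_{i}.npy", truth)
        np.save(d / f"lowres_{i}.npy", truth[::2, ::2])  # naive downsample
    scores = replay(d, nearest_2x)
    print(scores)  # one MSE per frame
```

The point of keeping the upscaler as a plain callable is that each technology under test becomes a drop-in, so the same dumped "movie" feeds every algorithm identically.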

The custom renderer has the advantage that we can display arbitrary cases at arbitrary time steps; the disadvantage is that some things (bleeding-edge techniques) would be unlikely or time-consuming to implement.
The VFX project has the advantage of potentially being extremely high quality, even more than games, and the disadvantages of acquisition and that it's likely to be frozen.
The DLL hack is the quickest to do, but simulation time vs. render time (which includes capturing the data) at ground-truth quality might mess up the temporal progression of events in the in-game world, and there might be no way to fudge it to produce ground truth.
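The timing problem above goes away if the game steps its simulation by recorded per-frame timesteps rather than wall-clock time, which is why the capture-once-replay-later approach is attractive. A trivial sketch of the idea (the 1-D simulation is purely illustrative):

```python
def simulate(dts):
    """Advance a trivial 1-D simulation by the recorded timesteps."""
    pos, vel = 0.0, 2.0
    for dt in dts:
        pos += vel * dt  # state depends only on dt, never on render cost
    return pos

# Two seconds of 60 Hz gameplay, recorded once during the fast capture run:
recorded_dts = [1 / 60] * 120

# Replaying with the recorded dts yields the same world state no matter how
# slowly the ground-truth frames are rendered or captured afterwards:
print(simulate(recorded_dts))
```

A game that derives its simulation step from wall-clock time instead gives no such guarantee, which is the "no way to fudge it" risk with the DLL route.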

I personally would probably just pick glTF Cauldron, add the upscalers, and make sure the parameters needed to produce ground truth are exposed. It supports animation and transparency, and there are many glTF scenes/projects available, including various Sponzas, San Miguel, etc.
 