B3D Upscaling Algorithm Test Suite

Hello all,

As we move into more AI-based rendering, I think it's becoming clear we need new ways to break down and test how these algorithms work. We are looking to have a community discussion on ideas and eventually build out a git repo and code to get it going. This will be created under the B3D banner. We will need some things to set up, but please get involved if you can help in any way.

Context:
Build a suite of test tools to decompose how well each algorithm operates: DLSS 4, 3, 2, 1, FSR, PSSR, etc.
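
To make the "test tools" part concrete, here is a minimal sketch of the kind of per-frame comparison the suite would need: PSNR between an upscaler's output and a ground-truth frame. The Image struct and the linear float RGB layout are assumptions for illustration, not tied to any existing code.

```cpp
// Minimal sketch: per-frame PSNR between an upscaled frame and ground truth.
// "Image" is hypothetical scaffolding; in the real suite this would come from
// whatever capture format we settle on (EXR, raw dumps, etc.).
#include <cmath>
#include <cstddef>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<float> rgb;  // width * height * 3, linear, in [0, 1]
};

// Mean squared error over all channels; assumes identical dimensions.
double Mse(const Image& a, const Image& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.rgb.size(); ++i) {
        const double d = a.rgb[i] - b.rgb[i];
        sum += d * d;
    }
    return sum / static_cast<double>(a.rgb.size());
}

// PSNR in dB against a peak value of 1.0 (linear float images).
double Psnr(const Image& upscaled, const Image& groundTruth) {
    const double mse = Mse(upscaled, groundTruth);
    if (mse == 0.0) return INFINITY;  // identical frames
    return 10.0 * std::log10(1.0 / mse);
}
```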
 
The question of getting ground-truth data was mentioned earlier; let's take a stab at that first.

Off the top of my head, I would say we have these options:
* make your own deterministic renderer, integrate the technologies, and do the comparison on the fly
* acquire a VFX project, render out the frames and make a small app replaying the "movie" with the technologies
* pick a game supporting DirectSR, hack the DLL and redirect its inputs to disk once; the rest is the same as above

The custom renderer has the advantage that we can display arbitrary cases at arbitrary time steps, and the disadvantage that some things (bleeding-edge techniques) are unlikely or time-consuming to implement.
The VFX project has the advantage of potentially being extremely high quality, more so than games even, and the disadvantages of acquisition and of likely being frozen.
The DLL hack is the quickest to do, but simulation time vs. render time (which includes capturing the data) at ground-truth quality might mess up the temporal progression of what's going on in the in-game world, and there might be no way to fudge it to produce ground truth.
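
For anyone unfamiliar with how the DLL hack would look, the usual shape is a shim DLL that exports the same entry points as the real upscaler DLL, dumps the inputs to disk, then forwards the call. A rough sketch follows; the exported function name and signature are invented placeholders, since a real shim would have to mirror the actual DirectSR/upscaler exports exactly, and error handling is omitted.

```cpp
// Sketch of a shim DLL (Windows). "UpscaleFrame" and its signature are invented
// placeholders; a real shim would mirror the actual exports of the DLL being
// intercepted. Error handling omitted for brevity.
#include <windows.h>

using UpscaleFrameFn = int (*)(const void* inputs, void* output);

static UpscaleFrameFn g_real = nullptr;

static void DumpInputsToDisk(const void* inputs) {
    // Placeholder: serialize color, depth, motion vectors, jitter, etc.
    // In practice this means GPU->CPU readback plus a file format decision.
    (void)inputs;
}

extern "C" __declspec(dllexport) int UpscaleFrame(const void* inputs, void* output) {
    if (!g_real) {
        // Load the original DLL (renamed on disk) and resolve the real entry point.
        HMODULE real = LoadLibraryA("upscaler_real.dll");  // hypothetical name
        g_real = reinterpret_cast<UpscaleFrameFn>(
            GetProcAddress(real, "UpscaleFrame"));
    }
    DumpInputsToDisk(inputs);       // capture once, replay offline later
    return g_real(inputs, output);  // forward so the game keeps running
}
```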

I personally would probably just pick glTF Cauldron, add the upscalers and make sure parameters are available to produce ground truth. It supports animation and transparency, and there are many glTF scenes/projects available, including various Sponzas, San Miguel, etc.
 
Is it possible to leverage UE5 to do these? It seems like it has everything we need.

We make a scene. Doll it up. Drag a camera around, render out with various technologies?
 
Sure, that should be doable on a fundamental level. It's a good suggestion. I suppose we could clone the Unreal repository too.

We would have to investigate whether we can artificially throttle/control game time. First, to hide the added latency of copying frames out: you could then spend hundreds of milliseconds exporting to disk instead of stealing large amounts of VRAM from the "game". Second, to simulate CPU/GPU-limited situations with regard to the frame-to-frame time progression delta. It would also allow us to test motion artifacts, as the distance elements travel on screen from frame to frame becomes larger, especially with respect to frame generation.
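
A minimal sketch of that throttling idea, assuming a custom capture loop rather than real UE5 API (the function names are placeholders): decouple the simulation clock from wall-clock time, so each captured frame advances the world by an exact, chosen delta no matter how long the export takes.

```cpp
// Sketch: drive the simulation with a fixed, chosen delta per captured frame,
// independent of how long rendering and disk export actually take.
// TickWorld / RenderFrame / ExportFrameToDisk are invented placeholders.
#include <cstdio>

void TickWorld(double dt) { /* advance game/animation state by dt seconds */ (void)dt; }
void RenderFrame() { /* render at ground-truth quality */ }
void ExportFrameToDisk(int index) { std::printf("wrote frame %d\n", index); }

int main() {
    const double kSimDelta = 1.0 / 60.0;  // pretend the "game" runs at 60 fps
    const int kFrameCount = 600;          // 10 seconds of simulated time

    for (int i = 0; i < kFrameCount; ++i) {
        TickWorld(kSimDelta);   // world advances exactly 1/60 s per captured frame
        RenderFrame();
        ExportFrameToDisk(i);   // wall-clock cost is invisible to the simulation
    }
    // To simulate CPU/GPU-limited situations, vary kSimDelta (e.g. 1/30, 1/120)
    // and watch how the larger frame-to-frame motion affects each upscaler.
    return 0;
}
```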

I'll try to read up a bit and skim the code. Any Unreal "programmers" interested in this project?
 
I guess one thing to keep an eye on is whether motion vectors are created for particles. I'm not sure how motion vectors work in UE5, or whether you can control when they're on or off. You may want to test a case where you have moving particles that do not have motion vectors. It seems to be one of the ugly failure cases for TAA and all of these upscalers.
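
If we end up with a custom harness, that failure case is easy to express as paired test cases, e.g. the same moving particles rendered with and without velocity data. A tiny sketch with invented names:

```cpp
// Sketch: paired test cases for the particle motion-vector failure mode.
// All names are invented; the point is that the harness should render the
// same moving particles both with and without velocity data.
#include <string>
#include <vector>

struct TestCase {
    std::string name;
    bool particlesWriteMotionVectors;  // false = the known-ugly TAA/upscaler case
};

std::vector<TestCase> MotionVectorCases() {
    return {
        {"particles_with_mvs", true},     // baseline: correct velocity buffer
        {"particles_without_mvs", false}, // failure case: moving pixels, zero MVs
    };
}
```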
 
Is the DLSS programming manual still NDA'd?
Only the ray reconstruction material is under explicit NDA; the rest can be found here:
 
I wonder if old accumulation-buffer methods would be good for rendering ground-truth images. (Render the scene n times and average the result.)
This allows custom sample patterns and, if needed, motion blur and DoF. (I'm quite sure it's used in Gran Turismo's camera mode.)

AA is also separate from the post effects, so it should work with them without additional work. (And it preserves their problems while allowing great oversampling of the image.)
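
A minimal sketch of that accumulation approach, assuming a hypothetical RenderSceneWithJitter pass (stubbed so it compiles): sum n jittered renders in a float buffer and divide by n.

```cpp
// Sketch: accumulation-buffer ground truth. Render the scene n times with
// per-pass jitter, sum into a float buffer, then average.
#include <cstddef>
#include <vector>

// One render pass with a sub-pixel offset (and optionally a jittered shutter
// time / lens sample for motion blur and DoF). Stubbed so the sketch compiles.
std::vector<float> RenderSceneWithJitter(int pass, int width, int height) {
    (void)pass;
    return std::vector<float>(static_cast<std::size_t>(width) * height * 3, 0.5f);
}

std::vector<float> AccumulateGroundTruth(int n, int width, int height) {
    std::vector<float> accum(static_cast<std::size_t>(width) * height * 3, 0.0f);
    for (int pass = 0; pass < n; ++pass) {
        const std::vector<float> frame = RenderSceneWithJitter(pass, width, height);
        for (std::size_t i = 0; i < accum.size(); ++i)
            accum[i] += frame[i];
    }
    for (float& v : accum)
        v /= static_cast<float>(n);  // average of n jittered renders
    return accum;
}
```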
 