I originally posted about this in the next-gen console thread (PS6 / next MS box),
but since I consider DF to currently be the most visible public experts on spatial and temporal
up-scaling techniques, I would love to hear what they think about my idea...
Basically, is there any chance game engines might start rendering partial frames that only include accurate
information for the up-scaling part of the rendering process? I understand existing game engines already contribute
a lot of accurate info, but as I understand it, no render engine provides anything in the way of dedicated temporal up-scaling data?
My comment...
I wouldn't be surprised if we start seeing game engines outputting "mid-frames",
which are just motion data and other low-level info, perhaps some raw texture data,
and then having a temporal up-scaling system use this as input to generate intermediate frames (see the sketch below).
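To make the "mid-frame" idea concrete, here's a rough sketch of what such a packet might contain. Everything here (the MidFrame name, the fields) is hypothetical and just for illustration; it's not any real engine's or DLSS's data structure:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical "mid-frame" payload: no shaded pixels, just the per-pixel
// data a temporal up-scaler would otherwise have to derive from two
// completed frames. All names are made up for illustration.
struct MidFrame {
    uint32_t width  = 0;
    uint32_t height = 0;

    // Engine-side motion vectors in screen space (x, y per pixel),
    // computed from the actual object/camera transforms rather than
    // estimated optically after the fact.
    std::vector<float> motionVectors;  // size: width * height * 2

    // Per-pixel depth from the updated geometry pass.
    std::vector<float> depth;          // size: width * height

    // Optional: coarse lighting / raw texture samples the up-scaler
    // could use to disambiguate disocclusions.
    std::vector<float> coarseLighting; // size: width * height * 3 (may be empty)
};
```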
Basically, instead of having the GPU derive all your motion vectors, optical flow, and depth data from two different completed frames,
the engine creates that data for you, giving a much better quality temporal upscale as the result.
E.g. the GPU calculates the complete render and rasterizes/RTs the entire frame to completion = Frame 0.
The GPU calculates just the updated geometry, and possibly some basic lighting information, then provides a temporal up-scaling
system with the motion vectors, optical flow, updated geometry and light(ing) locations = Frame 1.
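And a minimal sketch of that alternating Frame 0 / Frame 1 loop, reusing the MidFrame sketch above. Engine, TemporalUpscaler, Image, and present() are all stand-in interfaces I've invented to show the flow, not real APIs:

```cpp
#include <cstdint>

// Stand-in types, assuming the MidFrame struct sketched earlier.
// None of this is a real engine or up-scaler API.
struct Image { /* pixel data */ };

struct Engine {
    void updateSimulation();
    Image renderFullFrame();    // full rasterization / ray tracing
    MidFrame renderMidFrame();  // updated geometry + basic lighting data only
};

struct TemporalUpscaler {
    // Synthesizes an intermediate image from the last complete frame plus
    // exact engine-provided motion/depth, instead of estimating optical
    // flow from two finished frames.
    Image generate(const Image& lastFull, const MidFrame& mid);
};

void present(const Image&);

void runFrameLoop(Engine& engine, TemporalUpscaler& upscaler) {
    Image previousFull;  // last fully rendered frame (Frame 0, 2, 4, ...)
    for (uint64_t frameIndex = 0; ; ++frameIndex) {
        engine.updateSimulation();
        if (frameIndex % 2 == 0) {
            // Frame 0: render everything to completion and show it.
            previousFull = engine.renderFullFrame();
            present(previousFull);
        } else {
            // Frame 1: no full shade; hand the up-scaler the raw data.
            MidFrame mid = engine.renderMidFrame();
            present(upscaler.generate(previousFull, mid));
        }
    }
}
```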
This may in fact place MORE burden on the CPU, but it could, over time, develop into a much better system for the resulting image stream.
Essentially applying some of the original render data to the temporal up-scaling system,
or applying AI earlier in the render pipeline.
In-engine super-TAA, I guess?
Thoughts?
Looking through the current DLSS SDK on GitHub:
it ain't that helpful as it's still 2.4, so I'm not sure how much, if any, of the DLSS 3 temporal stuff would be exposed..?
Might be a good side project for anyone that's interested...