Going back to the original question though:
The one term that's appropriate for all techniques that aren't simple spatial interpolations is 'image reconstruction', of which 'temporal upscaling' is a subset, no? DLSS2 is even described as such by Nvidia: DLSS 2.0 - Image Reconstruction for Real-time Rendering with Deep Learning
Yes, both temporal upscaling and frame interpolation are forms of image generation.
Technically, frame interpolation is frame construction, as you are creating something entirely new that does not exist. You're inserting a made-up (interpolated) frame into the stream of actual frames, using data from the previous and next frame (for video), or from previous frame(s) combined with a prediction of what the next "real" frame might be. How well that works depends on the algorithm used to create the new frame.
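To make that concrete, here's a minimal sketch (Python/NumPy, with illustrative names, not any shipping algorithm) of the crudest possible interpolator: it just averages the previous and next real frames to make up the new one. Real interpolators estimate motion between the two frames instead of blending blindly, which is exactly what determines how well the made-up frame holds up.

```python
import numpy as np

def interpolate_midpoint(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    """Construct a brand-new frame that never existed in the source stream."""
    # Crudest possible approach: a straight 50/50 blend of the surrounding
    # real frames. With fast or erratic motion this produces ghosting, which
    # is why practical algorithms estimate per-pixel motion first.
    mid = (prev_frame.astype(np.float32) + next_frame.astype(np.float32)) * 0.5
    return mid.astype(prev_frame.dtype)
```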
Temporal reconstruction generally uses information from previous frames and the current frame to add detail to the current frame, i.e., it reconstructs the current frame using data from previous frames, adding data to it rather than creating something entirely new. Some of the added detail might be synthesized, but it's generally blended in from previous frames. Basically, you're constantly adding information to the current frame, but you aren't generally making up an entirely new frame.
As such, since temporal reconstruction is generally adding information to the current frame in order to present the final frame, it's upscaling the resolution of the final frame. You're quite literally taking the current frame and reconstructing it.
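As a rough illustration of that accumulation idea, the sketch below (Python/NumPy, hypothetical names, assuming float images and per-pixel motion vectors are available) reprojects a history buffer along the motion vectors and mixes a small amount of the new frame into it each frame. This exponential accumulation is the basic mechanism behind most temporal reconstruction schemes; real implementations add clamping, rejection heuristics and so on.

```python
import numpy as np

def temporal_accumulate(current: np.ndarray, history: np.ndarray,
                        motion: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Blend detail carried in the history buffer into the current frame."""
    h, w = current.shape[:2]
    ys, xs = np.indices((h, w))
    # Follow the per-pixel motion vectors back to where each pixel was
    # last frame, so the history lines up with the current frame.
    src_x = np.clip(xs - motion[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys - motion[..., 1], 0, h - 1).astype(int)
    reprojected = history[src_y, src_x]
    # Keep most of the accumulated detail and mix in a little of the new
    # sample each frame; over time the current frame gains information
    # from many previous frames without ever inventing a whole new frame.
    return alpha * current + (1.0 - alpha) * reprojected
```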
Frame interpolation can more accurately be thought of as a form of temporal construction: rather than adding detail to the current frame, you are creating an entirely new frame (at the same resolution as the previous and next frames) that roughly follows what came before and may not accurately predict what comes in the next real frame (for games or real-time video without a delay). Thus you can end up with some VERY weird anomalies during frame interpolation when there is fast motion, erratic motion, or, worst case, fast and erratic motion.
With motion video you can instead use two real frames (or more; this is one of the reasons TVs doing interpolation add a lot of latency, because at a minimum they must have the data from the next frame before generating the intermediate frame) and create an intermediate frame using information from those two frames. So: display a real frame, read in the next real frame, interpolate (create/construct) an intermediate frame using the previous real frame plus the real frame that was just read in, display the intermediate frame, display the real frame that was read in... rinse and repeat for a simple TV-based video interpolation stream, as sketched below. Anomalies related to quick, erratic motion can still be a problem, but they can be mitigated by looking far enough ahead in the video stream to better predict an appropriate middle frame. That's much harder to do with a game, where you want a new "intermediate" frame to be generated before the next "real" frame is rendered.
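Here's a rough sketch of that playback ordering (Python, with placeholder display/blend functions and illustrative names; a real TV would do motion-compensated interpolation and buffering in hardware). It mainly shows why the intermediate frame can't be shown until the next real frame has already been read in, which is where the added latency comes from.

```python
import numpy as np

def blend(prev_frame, next_frame):
    # Stand-in for a real motion-compensated interpolator (see the earlier
    # midpoint sketch); any better algorithm slots in here.
    mid = (prev_frame.astype(np.float32) + next_frame.astype(np.float32)) * 0.5
    return mid.astype(prev_frame.dtype)

def display(frame):
    # Placeholder for actually presenting a frame on screen.
    pass

def playback_with_interpolation(frames):
    """Simple TV-style ordering: real, made-up, real, made-up, ..."""
    it = iter(frames)
    prev = next(it)                 # buffer the first real frame
    display(prev)
    for real in it:                 # wait for the next real frame: this is the added latency
        display(blend(prev, real))  # construct and show the intermediate frame
        display(real)               # then show the real frame that was read in
        prev = real
```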
That's where you can get into things such as asynchronous reprojection from the VR world, which reprojects the image based on camera (head) movement and sort of fills in the blanks with solid colors at the periphery.
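As a very loose illustration of that idea, the toy sketch below (Python/NumPy, hypothetical names) shifts the last rendered frame by however many pixels the head movement corresponds to and leaves a solid fill where no data exists. Actual asynchronous reprojection works with rotations, projection matrices and sometimes depth rather than a flat 2-D shift, so treat this purely as a sketch of the "move the old image, fill the exposed edges" concept.

```python
import numpy as np

def reproject_2d(last_frame: np.ndarray, dx: int, dy: int, fill=0) -> np.ndarray:
    """Shift the previous frame to follow camera movement; fill exposed edges."""
    h, w = last_frame.shape[:2]
    out = np.full_like(last_frame, fill)  # solid color where no data exists
    # Overlapping source/destination windows for the shifted copy.
    x0_dst, x1_dst = np.clip([dx, w + dx], 0, w)
    y0_dst, y1_dst = np.clip([dy, h + dy], 0, h)
    x0_src, x1_src = np.clip([-dx, w - dx], 0, w)
    y0_src, y1_src = np.clip([-dy, h - dy], 0, h)
    out[y0_dst:y1_dst, x0_dst:x1_dst] = last_frame[y0_src:y1_src, x0_src:x1_src]
    return out
```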
Regards,
SB