Can you explain that more? It goes way over my head but sounds interesting
Yeah, I thought it was obvious.
I'll explain using the outdated example of OpenGL accumulation buffers, which were used back in the last century, I would say.
The typical applications were AA and motion blur, and it worked by rendering many frames. Each frame has the same kind of subpixel jitter we now use for TAA to achieve AA, and each frame is also rendered at its own point in time to subdivide the duration between two shown frames.
The accumulation buffer was then used to sum up all those (sub)frames and compute the averaged result, which was then displayed (or stored, since this was more useful for offline rendering than for realtime).
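In code it looked roughly like this. This is a minimal sketch using GLUT with an accumulation buffer; the moving quad, the crude jitter pattern, and the 60 fps shutter are just made-up placeholders, not how any particular renderer did it:

```c
#include <math.h>
#include <GL/glut.h>

#define SUBFRAMES 10            /* subframes averaged into one shown frame */
#define SHUTTER   (1.0 / 60.0)  /* time span covered by one displayed frame */

static double frameTime = 0.0;

/* Stand-in scene: just a quad moving from left to right over time. */
static void drawScene(double t, double jx, double jy)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* subpixel jitter: shift the projection window by a fraction of a pixel
       (2.0 / 800 = 0.0025 units per pixel horizontally at 800x600) */
    glOrtho(-1.0 + jx * 0.0025, 1.0 + jx * 0.0025,
            -1.0 + jy * 0.0033, 1.0 + jy * 0.0033, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslated(-0.9 + fmod(t, 2.0) * 0.9, 0.0, 0.0);
    glRectd(-0.1, -0.1, 0.1, 0.1);
}

static void display(void)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < SUBFRAMES; ++i) {
        /* each subframe gets its own point in time within the shutter */
        double t = frameTime + (i / (double)SUBFRAMES) * SHUTTER;
        /* crude jitter pattern; a real renderer would use e.g. Halton points */
        drawScene(t, (i % 4) / 4.0 - 0.5, (i % 5) / 5.0 - 0.5);
        glAccum(GL_ACCUM, 1.0f / SUBFRAMES); /* add subframe, pre-scaled by 1/N */
    }
    glAccum(GL_RETURN, 1.0f); /* write the averaged result to the color buffer */
    glutSwapBuffers();
    frameTime += SHUTTER;
    glutPostRedisplay();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_ACCUM);
    glutInitWindowSize(800, 600);
    glutCreateWindow("accumulation buffer sketch");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```

The key calls are glAccum(GL_ACCUM, 1.0/N), which adds the current back buffer pre-scaled by 1/N to the accumulation buffer, and glAccum(GL_RETURN, 1.0), which writes the accumulated average back to the color buffer.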
I have also used the same method for offline renderings at work. 3ds Max had a very fast renderer based on rasterization.
So it could not jitter time per pixel like a raytracer potentially can, and to get motion blur, the same method of accumulating whole frames at subdivided times was used.
Usually I used 10 subframes to get a single final frame, which was smooth enough. It causes banding between the subframes for fast-moving objects, which was visible but not a real issue.
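The banding is easy to quantify: with N discrete time samples, a fast object leaves N sharp copies instead of a continuous smear. A toy calculation, with a made-up object speed:

```c
#include <stdio.h>

int main(void)
{
    int    subframes = 10;    /* as used for the offline renders */
    double speed     = 120.0; /* hypothetical speed in pixels per final frame */

    /* gap between two successive ghost copies of the object */
    printf("ghost spacing: %.1f px\n", speed / subframes);
    /* a 12 px gap reads as banding; more subframes shrink it linearly */
    return 0;
}
```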
Nowadays, using high-end GPUs, we often see frame rates of 200-300 fps, at least for older games. New ones still often run above 120.
And on a 60 Hz display there is no benefit from that.
But if we accumulated 2-4 frames, each one having fake post-processing motion blur and TAA, we would get smoother motion and better AA on our low-Hz displays for the games where the GPU can do this.
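Since the accumulation buffer is long gone, the same averaging could be done with additive blending into an offscreen target. A rough sketch, not a definitive implementation: accumFbo/accumTex and sceneFbo/sceneTex are assumed to exist already, and renderSceneToFbo(), drawFullscreenQuad(), and presentToScreen() are hypothetical stand-ins for the engine's own render and blit passes:

```c
/* Average N subframes per displayed frame without glAccum.
 * renderSceneToFbo(), drawFullscreenQuad() and presentToScreen()
 * are hypothetical stand-ins for the engine's own passes. */
const int    N       = 4;          /* subframes per 60 Hz refresh */
const double refresh = 1.0 / 60.0;

glBindFramebuffer(GL_FRAMEBUFFER, accumFbo);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);

for (int i = 0; i < N; ++i) {
    /* subdivide the refresh interval, like the old subframe times */
    double t = displayTime + i * refresh / N;
    renderSceneToFbo(sceneFbo, t); /* normal frame: TAA jitter, motion blur */

    /* composite the subframe into the accumulator, pre-scaled by 1/N */
    glBindFramebuffer(GL_FRAMEBUFFER, accumFbo);
    glEnable(GL_BLEND);
    glBlendFunc(GL_CONSTANT_COLOR, GL_ONE); /* accum += scene * 1/N */
    glBlendColor(1.0f / N, 1.0f / N, 1.0f / N, 1.0f / N);
    drawFullscreenQuad(sceneTex);
    glDisable(GL_BLEND);
}

presentToScreen(accumTex); /* blit the average to the screen and swap */
```

With an 8-bit accumulation target the 1/N pre-scaling costs some precision, so a float render target would be the safer choice.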
It's low effort, and people like me would be happy, since I won't replace my old display before it's broken.
And in general I want a constant frame rate first, and only secondly a high constant frame rate. I'm not really convinced about VRR, though I have not yet seen it in action at all.