I'm clear on your stance against the _condition_, which is fine and interesting, but you also seem to imply that Nyquist prevents one from using previous-frame info to predict parts of the next frame. Wasn't that the point of that ugly graph? Assume that you can; I don't dispute the conditional.
I dispute the claim itself, though; that's what I'm getting at.
Regarding obtaining even columns of the ideal rendering (that is, ignoring the temporal issues of the technique), I don't think pixel shading, or any form of sampling that does not require neighboring sample values, would prevent one from accurately "half rendering".

That's a good argument: the less detail there is in the original scene, the better you can reconstruct, because there's less to reconstruct. I actually don't dispute this, but isn't that skipping a more accurate model and trading it for more artificial detail?
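To make the "half rendering" point concrete, here's a toy numpy sketch (my own illustration, not GG's actual pipeline, and all the names here are made up): render only the even columns this frame and fill the odd ones from the previous frame. Under the most charitable assumption, a static camera and scene, the combination is exactly the native render.

```python
import numpy as np

# Toy interlaced reconstruction: "render" even columns this frame,
# reuse last frame's odd columns. With a static scene, the previous
# frame's odd columns are exactly what we would have rendered anyway.
H, W = 4, 8
rng = np.random.default_rng(0)
ideal = rng.random((H, W))           # stand-in for the full native render

prev_frame = ideal.copy()            # static scene: last frame == ideal
half = np.zeros_like(ideal)
half[:, 0::2] = ideal[:, 0::2]       # render only the even columns
half[:, 1::2] = prev_frame[:, 1::2]  # fill odd columns from history

print(np.allclose(half, ideal))      # → True: exact for a static scene
```

The whole debate is about what happens when that static-scene assumption breaks: as soon as the camera or anything in the scene moves, `prev_frame` no longer matches the ideal odd columns and the reconstruction is only approximate.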
Texture filtering is just one dimension. When you change the camera, even ever so slightly, don't you change the light as observed on the pixels, at least in some extreme cases like near the tangent? It's probably not detectable with human eyes, which is why it works pretty well, but that's not to say that you can recover it pixel-for-pixel perfectly.
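A quick sketch of why that's true for view-dependent shading. This uses a standard Blinn-Phong specular term with geometry I made up purely for illustration: near a grazing angle, nudging the camera by a single degree shifts the observed intensity by several percent, so last frame's pixel isn't an exact stand-in for this frame's.

```python
import numpy as np

# Blinn-Phong specular term as a function of the viewing angle.
# The light sits along the surface normal; only the camera moves.
def specular(theta_deg, shininess=8):
    t = np.radians(theta_deg)
    view = np.array([np.sin(t), 0.0, np.cos(t)])   # unit view direction
    light = np.array([0.0, 0.0, 1.0])              # light along the normal
    normal = np.array([0.0, 0.0, 1.0])
    h = (view + light) / np.linalg.norm(view + light)  # half vector
    return max(h @ normal, 0.0) ** shininess

a = specular(85.0)          # near-tangent view
b = specular(86.0)          # camera nudged by one degree
print(abs(a - b) / a)       # relative change: a few percent
```

Whether that's visible is a separate question, but it means pixel values genuinely differ between frames even for a perfectly reprojected surface point.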
I agree, though, that the likes of postprocessing and texture filtering make the problem non-trivial.
I don't dispute the fact that this is quite a feat given the quality obtained and the amount of computation saved. I dispute the claim that this is somehow exactly equal to what you get with native 1080p60 (maybe it's obvious; it just seems to me that a few people are still yammering about it).
Yeah, it is obvious.
The practical question, though, is how much better it is than 960x1080 (or how much worse than 1080p). GG has the engine; let's hope we see some quantitative data at GDC.
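If anyone wants to put a rough number on the gap themselves, PSNR against the native render is the obvious first metric. A minimal sketch, using random data as a stand-in (real comparisons would of course need actual engine output, and a naive column-double is a strawman for what the reconstruction actually does):

```python
import numpy as np

# PSNR of a candidate image against a reference render.
def psnr(ref, img, peak=1.0):
    mse = np.mean((ref - img) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(1)
native = rng.random((1080, 8))           # stand-in for a native-width render
half = native[:, 0::2]                   # keep only even columns (960x1080-style)
stretched = np.repeat(half, 2, axis=1)   # naive horizontal upscale
print(psnr(native, stretched))           # low for random (high-detail) content
```

Note this also illustrates the earlier point: the less detail in the content, the higher the PSNR of the reconstruction, with worst-case numbers on noise-like detail.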