Doesn't timewarp work by internally rendering a larger FoV than what the player can see and then using the headset motion to change the player view within that already rendered frame to produce a new frame for the user based on their new head position?
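For reference, rotation-only timewarp boils down to resampling the oversized source frame along re-rotated view rays. A minimal numpy sketch of the per-pixel lookup, assuming a square pinhole projection and using my own made-up names (this is not any headset SDK's actual API):

```python
import numpy as np

def timewarp_lookup(x, y, out_res, out_fov, src_fov, R_delta):
    """Rotation-only timewarp: map an output pixel (x, y) to a sample
    position in the already rendered, wider-FoV source frame.
    FoVs are in radians; R_delta is the head rotation accumulated since
    the source frame was rendered (3x3 matrix, source-from-display)."""
    # Output pixel -> view ray in the display camera (pinhole model).
    t = np.tan(out_fov / 2)
    ray = np.array([(2 * x / out_res - 1) * t,
                    (2 * y / out_res - 1) * t,
                    -1.0])
    # Rotate the ray into the source camera's frame.
    ray = R_delta @ ray
    # Project into the wider source frame; thanks to the larger FoV the
    # sample usually still lands inside the rendered border.
    ts = np.tan(src_fov / 2)
    u = (ray[0] / -ray[2]) / ts * 0.5 + 0.5
    v = (ray[1] / -ray[2]) / ts * 0.5 + 0.5
    return u, v  # normalized coords to sample the source frame at
```

Since this handles rotation only, there is no disocclusion; it's translation that creates the holes discussed below.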
I guess so. But if you can change the camera position from a static image, and have a way to deal with the resulting disocclusion artifacts well enough, you can just add in motion vectors to do the same thing to extrapolate in time, or to interpolate and blend between two source frames.
Two sources would also help with disocclusion, as one source likely always has the information missing from the other.
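A toy version of that extrapolation, assuming per-pixel motion vectors and ignoring proper occlusion handling (all names here are illustrative, not from any shipping implementation):

```python
import numpy as np

def extrapolate_frame(frame, motion, alpha):
    """Naive frame extrapolation: scatter each pixel of `frame` forward
    along its motion vector, scaled by `alpha` (e.g. 0.5 for a frame
    halfway to the next one). The holes left behind are exactly the
    disocclusions mentioned above.
    frame: (H, W, 3) colors, motion: (H, W, 2) in pixels per frame."""
    H, W, _ = frame.shape
    out = np.zeros_like(frame)
    filled = np.zeros((H, W), dtype=bool)
    ys, xs = np.mgrid[0:H, 0:W]
    tx = np.clip((xs + alpha * motion[..., 0]).round().astype(int), 0, W - 1)
    ty = np.clip((ys + alpha * motion[..., 1]).round().astype(int), 0, H - 1)
    out[ty, tx] = frame            # colliding writes: a real version would depth-test
    filled[ty, tx] = True
    out[~filled] = frame[~filled]  # crude hole fill: fall back to the source pixel
    return out
```

For interpolation you'd warp frame N forward by alpha and frame N+1 backward by (1 - alpha) and blend, which is where the second source covering the first one's holes comes in.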
I have no doubt a non-ML solution is possible, and if devs take a hand in it, this would also eliminate problems like figuring out which elements are HUD, for example.
If so, that's very different from Nvidia's frame generation. DLSS 3 in performance mode is already rendering only 1 out of 8 pixels, with the rest being AI generated.
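The 1-in-8 figure follows from simple arithmetic, assuming performance mode renders at half resolution per axis and frame generation inserts one generated frame per rendered one:

```python
# Share of displayed pixels actually rendered by DLSS 3 in performance mode,
# assuming: half resolution per axis, one generated frame per rendered frame.
upscale_share = (1 / 2) * (1 / 2)  # 1/4 of the pixels of each rendered frame
frame_gen_share = 1 / 2            # only every other displayed frame is rendered
print(upscale_share * frame_gen_share)  # 0.125, i.e. 1 out of 8 pixels
```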
Yes, but the argument of improved IQ does not hold if we look at it from another angle: raytracing, which is exactly the primary reason those solutions exist at all.
To get IQ from RT, we need many samples. DLSS reduces those samples to 1/8th, and it does not 'invent' the missing information with ML to compensate for the lack.
Thus, to me that's all a tailored marketing campaign in the first place, with contradictions hard to spot even for experts.
Basically, you could reduce RT resolution and resample it using traditional (re)projection methods onto the frame rendered at (almost) native resolution and FPS.
Quality would be the same or better, but then you have no selling point for proprietary features.
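What I mean is the usual temporal accumulation: trace few samples at reduced resolution, reproject last frame's accumulated result via motion vectors, and blend. A minimal sketch, with illustrative names and no depth/normal rejection:

```python
import numpy as np

def accumulate_rt(rt_low, prev_accum, motion_low, blend=0.1):
    """Temporal accumulation of low-resolution RT samples, the
    'traditional (re)projection' route: gather last frame's accumulated
    lighting along the motion vectors and blend in this frame's fresh,
    noisy samples. All buffers are at the reduced RT resolution."""
    H, W, _ = rt_low.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Gather: where was this pixel last frame?
    px = np.clip((xs - motion_low[..., 0]).round().astype(int), 0, W - 1)
    py = np.clip((ys - motion_low[..., 1]).round().astype(int), 0, H - 1)
    history = prev_accum[py, px]
    # Exponential moving average: effective sample count per pixel grows
    # over time, even though each frame traces only a few new samples.
    return (1 - blend) * history + blend * rt_low
```

The (almost) native resolution frame then just upsamples this accumulated buffer (bilinear, or bilateral guided by depth/normals) instead of tracing per native pixel.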
We've already had nAo on here talking about that being the direction of the industry, up to and including 100% AI-generated frames. There's no way AMD is getting away with ignoring this paradigm shift.
I'm not sure a NV researcher claiming neural rendering is the future for games is objective proof this will indeed happen anytime soon.
But just saying; no disrespect from my side. I just think we need to be careful when a single company claims ownership over the graphics innovation of a whole industry.
We need to remain critical and objective. Currently, DLSS solves a problem which does not really exist: 1080p is still the most widely used resolution, upscaling to 1440p is fine even with trivial filters, and it's not clear whether a 22fps game blown up to look smooth is indeed better than a native 60fps game with somewhat fewer gfx effects.
But again: no disrespect. And it's not that I'm against neural rendering; I'm happy I do not have to work on upscaling on my side.
As said before, personally I'd love to see ML motion blur, for example. Maybe that's one of the next features.
And I'd like some other AI applications even more, e.g. dynamic AI conversations with NPCs, or practical large-scene fluid simulations, etc. All of that is in the works, with promising results already shown.
It's just that, to convince me, we still need more than just DLSS, and it has to come from more sources than a single company (which currently sells at too high prices), to prove the actual, real demand.
And then HW acceleration is welcome.