Good post, except it's missing a significant perspective that contrasts with this theory.
If you throw a neural net at antialiasing a black line on a white background, you could train and run a model that gets great results. But if you use Xiaolin Wu's algorithm, you can draw that line perfectly with very little processing, because the problem can be distilled down to a mathematical solution.
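To make that concrete, here's a minimal, simplified sketch of the Wu approach in Python (integer endpoints only, endpoint weighting left out, black ink on a white float canvas). It's only meant to show how cheap the analytical answer is, not to be a production rasteriser:

```python
import math

def plot(img, x, y, coverage):
    # Blend black ink into a white (1.0) float canvas at the given coverage.
    if 0 <= y < len(img) and 0 <= x < len(img[0]):
        img[y][x] = min(img[y][x], 1.0 - coverage)

def wu_line(img, x0, y0, x1, y1):
    """Anti-aliased line between integer endpoints, Wu-style (simplified)."""
    steep = abs(y1 - y0) > abs(x1 - x0)
    if steep:                                   # walk along the longer axis
        x0, y0, x1, y1 = y0, x0, y1, x1
    if x0 > x1:
        x0, y0, x1, y1 = x1, y1, x0, y0

    gradient = (y1 - y0) / (x1 - x0) if x1 != x0 else 1.0
    y = float(y0)
    for x in range(x0, x1 + 1):
        iy = math.floor(y)
        frac = y - iy               # split coverage between the two rows the line straddles
        if steep:
            plot(img, iy, x, 1.0 - frac)
            plot(img, iy + 1, x, frac)
        else:
            plot(img, x, iy, 1.0 - frac)
            plot(img, x, iy + 1, frac)
        y += gradient

# 64x64 white canvas, one anti-aliased line across it.
canvas = [[1.0] * 64 for _ in range(64)]
wu_line(canvas, 2, 5, 60, 40)
```

That's a handful of adds and multiplies per pixel touched; no training, no inference, no Tensor cores.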
I think this is one of those scenarios where you don't use a nuclear power plant to boil a cup of tea. Using the right tool for the job still applies.
It's not at all a given that ML is the best way to solve upscaling. No one has performed a decent comparison of nVidia's system versus Insomniac's, and Insomniac's upscaling is running on a poky APU after rendering the entire game. There's a good case to be made that conventional algorithm-based upscaling on compute could get the same results as (or better than) ML-based solutions at a fraction of the silicon cost.
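To be clear about what I mean by "conventional algorithm-based upscaling", here's the dumbest possible sketch of one: a fixed bilinear resample plus an unsharp mask, no trained weights anywhere. This is nothing like Insomniac's actual technique, just an illustration that a purely analytical pipeline runs fine on plain compute:

```python
import numpy as np

def upscale_2x(img):
    """Toy analytical 2x upscale: bilinear resample then an unsharp mask.

    `img` is an (H, W) float array in [0, 1]. Not anyone's shipping
    technique - just a fixed mathematical pipeline with no learned weights.
    """
    h, w = img.shape
    # Map output pixel centres back to input coordinates (bilinear resample).
    ys = (np.arange(2 * h) + 0.5) / 2 - 0.5
    xs = (np.arange(2 * w) + 0.5) / 2 - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    up = (img[np.ix_(y0, x0)] * (1 - wy) * (1 - wx)
          + img[np.ix_(y0, x1)] * (1 - wy) * wx
          + img[np.ix_(y1, x0)] * wy * (1 - wx)
          + img[np.ix_(y1, x1)] * wy * wx)
    # Cheap unsharp mask to restore some apparent edge contrast.
    blur = (np.roll(up, 1, 0) + np.roll(up, -1, 0)
            + np.roll(up, 1, 1) + np.roll(up, -1, 1) + up) / 5.0
    return np.clip(up + 0.5 * (up - blur), 0.0, 1.0)
```

Real reconstruction techniques add temporal reprojection and edge-aware filtering on top, but it's all still closed-form maths that maps straight onto compute shaders.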
It's not, but it's certainly very effective, and I'm willing to bet that with Tensor cores it would be extremely effective.
Which of the RTX demos wasn't using nVidia's denoising but was running it on compute instead? The fact that I can't readily remember shows it didn't look substantially worse than the ML solution.
All the RTX demos were using algorithmic denoising. Nvidia only showcased AI denoising at SIGGRAPH for OptiX. No DirectML, no realtime neural networks.
So perhaps, given two 300 mm² GPUs, on one you'd have 150 mm² of compute plus 150 mm² of machine learning and BVH hardware, doing noisy tracing followed by ML denoising and upscaling; on the other you'd have 300 mm² of compute that is slower at the ray tracing but solves lighting differently and upscales with reconstructive algorithms. The end results on screen may be difficult for people to tell apart.
Perhaps. But you haven't seen realtime ML denoising coupled with ML upscaling yet. No one has, because that takes a tremendous amount of effort on both the drivers and an API that isn't ready for prime time yet. Discounting it before it's even out is fair, but two years from now I'm not certain that position will hold.
A GPU 2x the power of the PS4 (3.6 TF) could render a PS4-quality game (HZD, Spider-Man, GoW) from a secondary position to create reflections, which would look a lot closer to ray-traced reflections without needing any RT or denoising at all...
And before the pro-RT side says, "but that perspective wouldn't be right," of course it wouldn't, but the question is would it be close enough? Side by side, RT console versus non-RT console, would gamers know the difference? Enough to buy the RT console, all other things being equal?
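For flat surfaces (floors, water, mirrors), that "secondary position" trick is just a planar reflection: mirror the camera across the surface plane and render the scene again. A rough sketch of only the matrix maths, in Python/numpy with hypothetical helper names:

```python
import numpy as np

def reflection_matrix(plane_normal, plane_point):
    """4x4 matrix that mirrors world space across a plane.

    Fold it into the view matrix (and flip triangle winding) to re-render
    the scene from the mirrored camera for a planar reflection - no ray
    tracing, no denoising.
    """
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)                # unit plane normal
    d = float(np.dot(n, plane_point))     # plane equation: dot(n, x) = d

    m = np.eye(4)
    m[:3, :3] -= 2.0 * np.outer(n, n)     # mirror directions across the plane
    m[:3, 3] = 2.0 * d * n                # keep points on the plane fixed
    return m

# Example: a mirror-flat floor at y = 0.
mirror = reflection_matrix([0.0, 1.0, 0.0], [0.0, 0.0, 0.0])
# A camera 2 units above the floor ends up 2 units below it:
print(mirror @ np.array([0.0, 2.0, 0.0, 1.0]))   # -> [0. -2. 0. 1.]
```

The cost is a second scene render, which is exactly where that extra GPU power would go.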
We have games with planar, cube-map, screen-space etc. type reflections, and we've seen a game with voxel cone tracing. I'm not sure what you're trying to imply here. Do they work?
Sure.
But we're talking about getting to next-generation graphics, not just more of the same, right?
When you compare the reflections in this demo with RTX on versus RTX off, it's such a massive difference right here.
And when it's off, that's what we're used to seeing. We're just so used to accepting low-quality reflections and shadow maps that our brains probably outright ignore them.
We're so used to baked lighting and non-GI solutions that we probably think they look correct.
Once you get used to correctness, I don't think you'll be able to go back.
This is a very similar debate to when a user switches to 4K gaming. I had everyone under the sun tell me that you couldn't tell the difference between 1080p and 4K. But once you get used to 4K, or even F4K, you can't go back to 1080p. It's murder.
And it's exactly the same with HDR: if you don't have an HDR screen, you don't know what you're missing.
Same thing with frame rates: if you haven't played at the higher frame rate, you don't know what you're missing.
So I get that it's easy to take the position that you can't notice these differences.
Except.
If you had 4K, HDR, and ray tracing at a great frame rate,
all these little things that people seemingly can't notice would become such an apparent leap in graphical fidelity.
With 4K and HDR, you're talking about an insane amount of detail, since the individual pixels can pop on their own. Particle effects and everything else just have so much more detail. Now we can get AO, reflections and shadows at significantly higher quality, working together with the 4K and HDR? My mind is already blown just thinking about what that might look like.

I can't change your mind on what constitutes looking better, just like I can't change some people's minds that 4K looks better than 1080p. I don't have a counter-argument, because you've already made up your mind on the topic - and you've anchored yourself on some really early pre-release footage using incomplete drivers and APIs on a less-than-ideal viewing setup.
We're still very early on in RT. And as we've seen with consoles, launch titles are not very representative of the graphics at the end of a generation. No reason to treat this any differently.