> Agree on your post, nicely written. I wasn't debating software vs hardware, because obviously software is better if possible, but we don't have 200 TF GPUs just yet. It's like PS2 vs GF4 Ti4600...

They aren't.
Again, what I didn't agree on is the claim that 'console RT is better than PC RT', which is the first time I've heard someone claim this, btw, notwithstanding what the actual results show.
> I don't think people are against customization at the lowest possible levels. I think the debate is around the perspective of what needs to arrive first: faster generic speed, at the cost of customization, or slower generic speed with customization.

Faster generic speed? All the stars aligned with Ampere versus Turing when it comes to ray tracing performance. NVidia seemed to indicate that raw ray tracing performance in Ampere is 2x+ that of Turing, and I believe there were some non-gaming benchmarks that demonstrated this.
> It's worth remembering that AMD with a compute-SIMD "slow" approach has equalled NVidia's dedicated-MIMD in Turing.

Nope, once you push the RT workload upwards, Turing gets even faster than RDNA2, as demonstrated in Minecraft, Quake 2, Call of Duty: Cold War, Cyberpunk, Control, etc.
> I don't know if there was ever an in-depth analysis of how NVidia gained 2x+ raw ray tracing performance in Ampere. Links? Such an analysis would be the most productive baseline for a discussion of where NVidia can go from here, hence "a roadmap of generic speed".

Here, almost a 2x increase from 2080 Ti to 3090 in many pro RT apps.
> Double FP32 throughput, better L1 Cache, double triangle intersection, 50% more bandwidth with GDDR6X and 50% more transistors.

Out of that list, only "triangle intersection rate" (did you mean ray-traversal rate, or is this a specific part of ray traversal you're referring to?) and bandwidth are on topic for Ampere's increased performance over Turing.
It's worth remembering that AMD with a compute-SIMD "slow" approach has equalled NVidia's dedicated-MIMD in Turing. With Ampere, NVidia gained 40% on Turing in games, but it seems likely there are no major gains to be had from "better MIMD" in Lovelace.
I don't know if there was ever an in-depth analysis of how NVidia gained 2x+ raw ray tracing performance in Ampere. Links? Such an analysis would be the most productive baseline for a discussion of where NVidia can go from here, hence "a roadmap of generic speed".
Why is 6900XT nearly twice as fast as 3090 here:
[TweakTown.com benchmark image]
with no DLSS/CAS? 10.6fps versus 6.6fps.
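For what it's worth, a quick sanity check on those two figures (my own arithmetic, nothing more) puts the gap closer to 1.6x than 2x:

```python
# Quick check of the quoted figures: 10.6 fps (6900XT) vs 6.6 fps (3090).
fps_6900xt = 10.6
fps_3090 = 6.6
ratio = fps_6900xt / fps_3090
print(round(ratio, 2))  # ~1.61x, so "nearly twice" is a bit generous
```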
Is the rate a side-effect solely of more ray tracing cores?
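To put rough numbers on how much any single factor in that list can buy: a simple Amdahl-style model (my own illustrative sketch, not NVidia's numbers) shows that doubling the triangle-intersection rate only approaches a 2x overall gain if the frame is almost entirely intersection-bound:

```python
# Amdahl-style estimate: if a fraction f of frame time is limited by triangle
# intersection and that rate doubles, overall speedup = 1 / ((1 - f) + f / 2).
def speedup(f, rate_gain=2.0):
    return 1.0 / ((1.0 - f) + f / rate_gain)

print(round(speedup(0.5), 2))  # half the frame intersection-bound -> 1.33x
print(round(speedup(0.9), 2))  # heavily intersection-bound -> 1.82x
```

That's one reason games (which spend lots of time shading) see smaller RT gains than pro renderers do.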
> Why is 6900XT nearly twice as fast as 3090 here:
> [TweakTown.com benchmark image]
> with no DLSS/CAS? 10.6fps versus 6.6fps.

Faulty testing for sure.
> I don't know if there was ever an in-depth analysis of how NVidia gained 2x+ raw ray tracing performance in Ampere. Links? Such an analysis would be the most productive baseline for a discussion of where NVidia can go from here, hence "a roadmap of generic speed".

Those figures were quoted with regard to professional apps; the closest game, at 1.8x, was Quake 2 RTX:
> So they did some real work between Gen1 and Gen2.

I almost forgot about the added MB (motion blur) support. It's an unexpected improvement over Turing.
> Faster generic speed? All the stars aligned with Ampere versus Turing when it comes to ray tracing performance. NVidia seemed to indicate that raw ray tracing performance in Ampere is 2x+ that of Turing, and I believe there were some non-gaming benchmarks that demonstrated this.

For gaming, Nvidia only ever said to expect up to 2x 2080 performance.
> I almost forgot about the added MB support. It's an unexpected improvement over Turing.
I assume they target offline rendering applications, not games? Does DXR even have support for it?
And if offline is the target, I think one serious limitation here was the limited number of instancing levels. So are there improvements there too, and are all those things exposed through OptiX, maybe?
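Conceptually, hardware motion blur just means each ray carries a time stamp and the triangle is rebuilt at that time before the hit test. A minimal sketch (my illustration, not OptiX's actual API):

```python
# Illustrative only: motion-blur intersection interpolates triangle vertices
# between two keyframes (shutter open/close) at the ray's time stamp.
def lerp(a, b, t):
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def triangle_at_time(verts_t0, verts_t1, ray_time):
    # Rebuild the triangle at the ray's time before any hit test.
    return [lerp(v0, v1, ray_time) for v0, v1 in zip(verts_t0, verts_t1)]

open_pos  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # shutter open
close_pos = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]  # shutter close
print(triangle_at_time(open_pos, close_pos, 0.5))  # triangle at mid-exposure
```

Ampere's RT cores reportedly do this vertex interpolation in hardware, whereas on Turing it would have to fall back to shader code.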