> The hardware would have to be broken for it to reach only 3090 Ti for ray tracing...
Why do you say that? That would still be a 2x improvement over a 6900 XT in RT performance. Expecting more than that is extremely unrealistic.

> Why do you say that? That would still be a 2x improvement over a 6900 XT in RT performance. Expecting more than that is extremely unrealistic.
Why?

> Why?
Because it hasn't ever happened in the history of modern GPUs. AMD also gains less from the node improvement since they were starting from a better place than Nvidia last gen.

If AMD has traversal acceleration inside RDNA3, the improvement will be huge, far beyond what the node alone brings. They are currently missing an important feature for performance.

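For context, here is a minimal sketch of the traversal loop being discussed, assuming a simple binary BVH of axis-aligned boxes (the structs and names below are made up for illustration, not any vendor's actual data layout). On RDNA2 a loop like this runs as ordinary shader code, with only the ray/box and ray/triangle intersection tests hardware-assisted; "traversal acceleration" means moving the whole loop into dedicated hardware, as Nvidia's RT cores and Intel's RTUs do.

```cpp
// Minimal sketch of a BVH traversal loop (illustrative only; the layout and
// names are assumptions, not any vendor's real implementation).
#include <cstdio>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

struct Ray {
    Vec3 origin;
    Vec3 invDir;  // 1 / direction component, precomputed for the slab test
    float tMax;   // maximum hit distance
};

struct Node {
    Vec3 boundsMin, boundsMax;  // axis-aligned bounding box of this node
    int left, right;            // child node indices, -1 if this is a leaf
    int primitive;              // primitive index for a leaf, else -1
};

// Classic slab test: does the ray enter the node's box before tMax?
static bool hitBox(const Ray& r, const Node& n) {
    float t0 = 0.0f, t1 = r.tMax;
    const float lo[3]   = { n.boundsMin.x, n.boundsMin.y, n.boundsMin.z };
    const float hi[3]   = { n.boundsMax.x, n.boundsMax.y, n.boundsMax.z };
    const float orig[3] = { r.origin.x, r.origin.y, r.origin.z };
    const float inv[3]  = { r.invDir.x, r.invDir.y, r.invDir.z };
    for (int axis = 0; axis < 3; ++axis) {
        float tNear = (lo[axis] - orig[axis]) * inv[axis];
        float tFar  = (hi[axis] - orig[axis]) * inv[axis];
        if (tNear > tFar) std::swap(tNear, tFar);
        if (tNear > t0) t0 = tNear;
        if (tFar  < t1) t1 = tFar;
        if (t0 > t1) return false;
    }
    return true;
}

// The traversal loop itself: pop a node, test its box, push children or record
// a leaf. This bookkeeping is the part "traversal acceleration" would take off
// the shader cores; RDNA2 only accelerates the box/triangle tests themselves.
static int countLeafHits(const std::vector<Node>& nodes, const Ray& ray) {
    int hits = 0;
    int stack[64];
    int top = 0;
    stack[top++] = 0;  // start at the root
    while (top > 0) {
        const Node& n = nodes[stack[--top]];
        if (!hitBox(ray, n)) continue;
        if (n.left < 0) { ++hits; continue; }  // leaf: a real tracer tests triangles here
        stack[top++] = n.left;
        stack[top++] = n.right;
    }
    return hits;
}

int main() {
    // Tiny two-level BVH: a root box with two leaf children.
    std::vector<Node> nodes = {
        { {0, 0, 0}, {4, 4, 4},  1,  2, -1 },
        { {0, 0, 0}, {2, 2, 2}, -1, -1,  0 },
        { {2, 2, 2}, {4, 4, 4}, -1, -1,  1 },
    };
    // Ray shooting along +Z through the first leaf (1e9 stands in for 1/0).
    Ray ray{ {1, 1, -1}, {1e9f, 1e9f, 1.0f}, 100.0f };
    std::printf("leaf boxes hit: %d\n", countLeafHits(nodes, ray));
    return 0;
}
```
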
> Because it hasn't ever happened in the history of modern GPUs. AMD also gains less from the node improvement since they were starting from a better place than Nvidia last gen.
What is the equivalent feature to RT in past GPUs? (Can we talk about the DX9 improvement from FX 5900 to 6800 Ultra, or AMD's tessellation performance improvement?)

> Same for ML acceleration.
No, you're not getting MFMA there, ever.

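(Quick aside for anyone unfamiliar: MFMA is the matrix fused multiply-add instruction family in AMD's CDNA compute GPUs, where one instruction roughly accumulates D = A*B + C over a small matrix tile. The scalar emulation below, with a made-up 4x4 tile size, is only meant to show the operation itself, not how the instruction is encoded or scheduled.)

```cpp
// Rough scalar illustration of what an MFMA-style matrix instruction computes:
// D = A * B + C over a small tile. Tile size here is arbitrary for brevity.
#include <array>
#include <cstdio>

constexpr int N = 4;
using Tile = std::array<std::array<float, N>, N>;

Tile mfma_like(const Tile& a, const Tile& b, const Tile& c) {
    Tile d{};
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            float acc = c[i][j];           // start from the accumulator tile
            for (int k = 0; k < N; ++k)
                acc += a[i][k] * b[k][j];  // fused multiply-add chain
            d[i][j] = acc;
        }
    return d;
}

int main() {
    Tile a{}, b{}, c{};
    for (int i = 0; i < N; ++i) { a[i][i] = 2.0f; b[i][i] = 3.0f; c[i][i] = 1.0f; }
    Tile d = mfma_like(a, b, c);
    std::printf("d[0][0] = %.1f\n", d[0][0]);  // 2*3 + 1 = 7
    return 0;
}
```
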
> What is the equivalent feature to RT in past GPUs? (Can we talk about the DX9 improvement from FX 5900 to 6800 Ultra, or AMD's tessellation performance improvement?)
Sure, certain features offered over a 2x improvement when benchmarked in isolation as a synthetic test, but this was not reflected in real-world games, where tons of other processes are in flight and bottlenecks shift all around.

Anyway, the point is that 3090-level performance in RT alongside 4090-level performance in rasterization would mean that the enhanced ray tracing capabilities AMD has referred to effectively do nothing: RT would merely have scaled in line with the overall raster/compute uplift, with no gain from the RT hardware itself.

> Sure, certain features offered over a 2x improvement when benchmarked in isolation as a synthetic test, but this was not reflected in real-world games, where tons of other processes are in flight and bottlenecks shift all around.
The sole fact that Ada is anywhere from 4 to 8 times faster in RT than RDNA2 proves that it is indeed possible to achieve.

> So you don't have any good reasons?
You're expecting AMD to make a performance jump multiple times bigger than Nvidia was able to, with a smaller improvement in manufacturing over their previous GPU.

What does Nvidia have to do with anything?

> You're expecting AMD to make a performance jump multiple times bigger than Nvidia was able to, with a smaller improvement in manufacturing over their previous GPU.
I really think you don't know how to count.

> It's rumored that there will be dedicated BVH acceleration in RDNA3 GPUs, akin to Intel and NV. Same for ML acceleration. We will know soon.
And if they can further add something like SER in Nvidia GPUs, then they might even be competitive in ray tracing performance. Let's hope for the best.

> Sure, certain features offered over a 2x improvement when benchmarked in isolation as a synthetic test, but this was not reflected in real-world games, where tons of other processes are in flight and bottlenecks shift all around.
So which past features were equivalent to RT?

> You're expecting AMD to make a performance jump multiple times bigger than Nvidia was able to, with a smaller improvement in manufacturing over their previous GPU.
With SER, Nvidia is already quoting a 44% improvement in Cyberpunk's new ray tracing mode and a 29% improvement in Portal RTX. So the premise that more than 2x is impossible may be wrong even on Nvidia hardware.

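As a rough mental model of what SER does (this is not Nvidia's actual API, which as far as I know is exposed through NVAPI HLSL extensions): the GPU is allowed to regroup ray-hit work by a sort key, typically derived from the hit shader or material about to run, so that a SIMD group shades coherent work instead of diverging. The CPU-side sketch below, with made-up hit data, just illustrates that regrouping step.

```cpp
// Conceptual illustration of Shader Execution Reordering: regroup ray-hit work
// by a sort key (here the material id) so that work shaded together takes the
// same code path. Not Nvidia's API; the data and key choice are made up.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Hit {
    int ray;       // which ray produced this hit
    int material;  // shader/material the hit wants to run
};

int main() {
    // Incoherent arrival order, as rays bounce around a scene.
    std::vector<Hit> hits = {
        {0, 3}, {1, 1}, {2, 3}, {3, 0}, {4, 1}, {5, 0}, {6, 3}, {7, 1},
    };

    // The "reorder" step: group hits by material before shading. With SER this
    // regrouping happens on the GPU between tracing and shading.
    std::stable_sort(hits.begin(), hits.end(),
                     [](const Hit& a, const Hit& b) { return a.material < b.material; });

    // After sorting, each consecutive run of hits shares a material, so a SIMD
    // group shading them no longer diverges across shader code paths.
    for (const Hit& h : hits)
        std::printf("shade ray %d with material %d\n", h.ray, h.material);
    return 0;
}
```
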
> If AMD doesn't have dedicated cores for RDNA 3 ray tracing... it will lose to Ampere, be embarrassed by Ada, and waste 5nm node density.
Why are you hung up on 'dedicated cores'? Whether something is dedicated or part of some bigger unit doesn't dictate the performance.

> Why are you hung up on 'dedicated cores'? Whether something is dedicated or part of some bigger unit doesn't dictate the performance.
Doesn't make sense. Nvidia and Intel incorporate their RT cores into "some bigger unit". So yes, it dictates performance when your competition has more resources to solve a problem.

> AMD never set out to become the leader in RTRT. I don't think they'll ever waste consumer silicon on dedicated RT hardware. In their view, Global Illumination is the only true feature that makes a visual difference, and in one of their older slides, that's supposed to be accelerated via the cloud.
No, it's not. The slide you're talking about has three "levels" - software, hardware and cloud - but nowhere does it suggest Global Illumination is tied to any specific "level". Software is ProRender etc., Hardware is for 'Select effects for real time gaming', which is exactly what we currently have, and Cloud is for 'Full Scene Ray Tracing', aka everything is traced (porting ancient games to be fully traced now is different, obviously).

> Doesn't make sense. Nvidia and Intel incorporate their RT cores into "some bigger unit". So yes, it dictates performance when your competition has more resources to solve a problem.
Having dedicated units (what Intel and Nvidia have at the moment) or something that's part of a bigger unit (what AMD has) doesn't dictate performance by itself. One doesn't necessarily have more resources than the other; what's inside those units is what dictates the performance.