AMD: RDNA 3 Speculation, Rumours and Discussion

Because it hasn't ever happened in the history of modern GPUs. AMD also gains less from the node improvement, since they were starting from a better node last generation (TSMC 7 nm) than Nvidia (Samsung 8 nm).
What is the equivalent feature to RT in past GPUs? (Can we talk about the DX9 improvement from the FX 5900 to the 6800 Ultra, or AMD's tessellation performance improvement?)

Anyway, the point is that 3090-level performance in RT alongside 4090-level performance in rasterization would mean that the enhanced ray tracing capabilities AMD has referred to effectively do nothing.
 
Sure, certain features offered over a 2x improvement when benchmarked in isolation as a synthetic test, but this was not reflected in real-world games, where tons of other processes are in flight and bottlenecks shift all around.
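To put rough numbers on that bottleneck argument, here is a minimal Amdahl's-law sketch; the frame-time fractions are assumed purely for illustration:

```python
# Amdahl's-law sketch: a feature that is 2x faster in isolation only
# speeds up the fraction of the frame that actually uses it.
# (The frame fractions below are made up for illustration.)

def frame_speedup(fraction: float, feature_gain: float) -> float:
    """Overall frame speedup when only `fraction` of frame time
    benefits from a feature that is `feature_gain` times faster."""
    return 1.0 / ((1.0 - fraction) + fraction / feature_gain)

print(f"{frame_speedup(0.2, 2.0):.2f}x")  # ~1.11x when 20% of the frame uses it
print(f"{frame_speedup(0.5, 2.0):.2f}x")  # ~1.33x even at 50% of the frame
```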
 
The sole fact that Ada is anywhere from 4 to 8 times faster in RT than RDNA 2 proves that such a jump is indeed possible to achieve.
 
AMD never set out to become the leader in RTRT. I don't think they'll ever waste consumer silicon on dedicated RT hardware. In their view, Global Illumination is the only true feature that makes a visual difference, and in one of their older slides, that's supposed to be accelerated via Cloud.
 
You're expecting AMD to make a performance jump multiple times bigger than Nvidia managed, and with a smaller improvement in manufacturing over their previous GPU.
I really think you don't know how to count.

So explain Vega 20 on TSMC 7 nm vs. N21: a ~60% bigger die for ~150% more performance.
Or N22 being the same size on the same 7 nm process, with half the memory bandwidth, while delivering 50% more performance.

By your logic AMD must be gods among men. How could they make such a performance jump, if only process matters, right?

Now they get approximately double the logic transistors (7 nm to 5 nm) with approximately the same die area dedicated to ALU/ROP/TMU/PCIe/display etc. when comparing N21 to N31. They'll be walking it in, yeah?
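As a back-of-the-envelope sketch of the above, using the rough figures quoted in this post (assumed, not measured):

```python
# Rough figures from the post above: N21 vs Vega 20 on the same 7 nm
# node is a ~60% bigger die for ~150% more performance.
vega20_area, vega20_perf = 1.0, 1.0
n21_area, n21_perf = 1.6, 2.5

# Performance per unit area gained from architecture alone, same node.
arch_gain = (n21_perf / n21_area) / (vega20_perf / vega20_area)
print(f"Same-node perf/area gain: {arch_gain:.2f}x")  # ~1.56x

# N31 vs N21: assume ~2x logic density moving from 7 nm to 5 nm with a
# similar area budget for the compute portion of the die. Multiplying
# the two gives an optimistic upper bound, not a prediction.
density_gain = 2.0
print(f"Naive combined headroom: {density_gain * arch_gain:.2f}x")  # ~3.12x
```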
 
Sure, certain features offered over a 2x improvement when benchmarked in isolation as a synthetic test, but this was not reflected in real-world games, where tons of other processes are in flight and bottlenecks shift all around.
So which past features were equivalent to RT?
You're expecting AMD to make a performance jump multiple times bigger than Nvidia managed, and with a smaller improvement in manufacturing over their previous GPU.
With SER (Shader Execution Reordering), Nvidia is already quoting a 44% improvement in Cyberpunk's new ray tracing mode and a 29% improvement in Portal RTX. So the premise that a >2X jump is impossible may be wrong even on Nvidia hardware.

However, Nvidia already accelerates more of the RT pipeline and is already 2X faster or more in RT alone. All AMD needs to do is close the gap a little, for example by adding hardware acceleration of ray traversal, which Intel managed on its first generation of RT hardware! Again, if they are 2X faster in rasterization, *any* reduction in the relative cost of ray tracing will increase performance by more than 2X. So essentially your claim must be that it's simply impossible for AMD to improve its ray tracing capabilities, and that Intel has them beat.
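A minimal sketch of that last claim, modelling an RT-enabled frame as raster time plus a relative ray tracing overhead; all numbers are assumed for illustration:

```python
# Model an RT-enabled frame as: frame_rt = raster_ms * (1 + overhead),
# where `overhead` is the relative cost ray tracing adds to the frame.

def rt_speedup(raster_gain: float, old_overhead: float, new_overhead: float) -> float:
    """Overall RT-workload speedup for a given raster gain and a change
    in the relative frame-time overhead of ray tracing."""
    return raster_gain * (1.0 + old_overhead) / (1.0 + new_overhead)

# Assume RT currently adds ~50% to frame time (made-up figure).
# With 2x raster and an unchanged relative RT cost, RT perf scales exactly 2x:
print(f"{rt_speedup(2.0, 0.5, 0.5):.2f}x")   # 2.00x
# Any reduction in the relative RT cost pushes the result past 2x:
print(f"{rt_speedup(2.0, 0.5, 0.4):.2f}x")   # ~2.14x
print(f"{rt_speedup(2.0, 0.5, 0.25):.2f}x")  # ~2.40x
```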
 
AMD never set out to become the leader in RTRT. I don't think they'll ever waste consumer silicon on dedicated RT hardware. In their view, Global Illumination is the only true feature that makes a visual difference, and in one of their older slides, that's supposed to be accelerated via Cloud.
No, it's not. The slide you're talking about has three "levels": software, hardware and cloud, but nowhere does it suggest Global Illumination is tied to any specific "level". Software is ProRender etc., Hardware is for "Select effects for real time gaming", which is exactly what we currently have, and Cloud is for "Full Scene Ray Tracing", i.e. everything is traced (porting ancient games to be fully traced now is different, obviously).
Doesn't make sense. Nvidia and Intel incorporate their RT cores into "some bigger unit" too. So yes, it does dictate performance when your competition has more resources to solve a problem.
Having dedicated units (what Intel and Nvidia have at the moment) or something that is part of a bigger unit (what AMD has) doesn't dictate performance. One doesn't necessarily have more resources than the other; what's inside those units is what dictates the performance.
 