Speculation: GPU Performance Comparisons of 2020 *Spawn*

If power consumption is not a limiting factor: I made some calculations based on actual frame rates and scaling at 4K, using the TechPowerUp numbers for the 5500 XT, 5600 XT, 5700 and 5700 XT.
A theoretical Navi 21 at 2.2 GHz, with a 384-bit bus at 16 Gbps, 80 CUs and 2x the ROP throughput, should be at least 2.2x a 5700 XT in actual frame rates, judging by the scaling of the four cards listed above at 4K. In rasterization, of course.
That's not factoring in any architectural (IPC) improvement, only the doubling of DCUs according to the RDNA1 block diagram and the doubled ROP count.
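As a sanity check on that estimate, here's a quick back-of-the-envelope sketch in Python. The Navi 21 figures are the hypothetical specs from the post above, not confirmed numbers; the 5700 XT reference values are from public spec sheets.

```python
# Back-of-the-envelope scaling check. Navi 21 figures are the hypothetical
# specs from the post above (NOT confirmed); 5700 XT reference values are
# public specs (40 CUs, ~1.905 GHz boost, 256-bit @ 14 Gbps, 64 ROPs).
ref = {"cus": 40, "ghz": 1.905, "bus_bits": 256, "mem_gbps": 14.0, "rops": 64}
big = {"cus": 80, "ghz": 2.2,   "bus_bits": 384, "mem_gbps": 16.0, "rops": 128}

def tflops(g):
    # FP32 throughput: CUs * 64 lanes * 2 ops per FMA * clock (GHz) / 1000
    return g["cus"] * 64 * 2 * g["ghz"] / 1000

def bandwidth_gbs(g):
    # Memory bandwidth: bus width in bytes * effective data rate in Gbps
    return g["bus_bits"] / 8 * g["mem_gbps"]

print(f"compute:   {tflops(big) / tflops(ref):.2f}x")                # ~2.31x
print(f"bandwidth: {bandwidth_gbs(big) / bandwidth_gbs(ref):.2f}x")  # ~1.71x
print(f"ROPs:      {big['rops'] / ref['rops']:.2f}x")                # 2.00x
```

Raw compute lands around 2.3x, so the "at least 2.2x" frame-rate claim implicitly assumes near-linear scaling and that the roughly 1.7x bandwidth increase doesn't become the bottleneck at 4K.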
 
I don't think we'll see a doubling of ROPs, though I would gladly be wrong.
 
Obviously it will be about the benefit of ray tracing, and after that about the "abused" usage of ray tracing.

I don't think abuse will be a problem. It's not like tessellation where Nvidia had performance to spare. RT is a bitch even on Ampere. It will be optimized to hell and back.
 
Depends. If you look at software RT like in Crytek's Neon Noir, Turing seems much better suited to the requirements. Now, I have no idea how well or badly that benchmark is optimized for the respective architectures, how much optimization RDNA2 has for this (I'd guess a lot), or how much of the most painful work would be removed by the dedicated hardware blocks in Ampere and RDNA2.
 

That's an interesting data point. Navi is well ahead of Vega and Turing is well ahead of comparable Navi cards.

Not sure we can read too much into that, aside from it seeming to be better optimized for Turing. It's all shader code, so it's not really relevant to the hardware RT discussion. It would be super interesting, though, to see how Crytek's shader-based RT workload maps to each architecture.
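For context on what "all shader code" means here: software RT like Neon Noir runs its ray intersection and traversal math in ordinary compute/pixel shaders on the general ALUs, while Turing/Ampere RT cores and RDNA2 ray accelerators offload BVH traversal and triangle intersection to fixed-function blocks. Below is a generic illustrative sketch of that kind of intersection math (ray vs. sphere, in Python for readability; it assumes a normalized ray direction and is not Crytek's actual code):

```python
import math

# Illustrative shader-style intersection math: the kind of ALU work a
# software RT path performs per ray, which dedicated RT hardware would
# offload (for triangles/BVH nodes rather than spheres).
def ray_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * e for d, e in zip(direction, oc))  # dot(dir, origin - center)
    c = sum(e * e for e in oc) - radius * radius   # |origin - center|^2 - r^2
    disc = b * b - c                               # assumes |direction| == 1
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

# One ray from the origin along +Z, unit sphere centered 5 units away.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```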
 
That one is a 4SA part.
Yeah... It's 1 shader engine with 4 shader arrays...
The 5700 XT is 2 shader engines, each with 2 shader arrays.
Big Navi could be 2 shader engines with 4 shader arrays each, or 4 shader engines with 2 shader arrays each. Or they could keep the two-engine, two-array layout and double the CUs per array. Which do you think is more likely?
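For reference, a little sketch of how all three of those layouts reach 80 CUs. The Navi 10 numbers (2 SEs x 2 SAs x 5 DCUs per SA) match the RDNA1 block diagram; the Big Navi options are pure speculation, per the post above:

```python
# Hypothetical RDNA layouts (Big Navi entries are speculation, not specs).
layouts = {
    "Navi 10 (5700 XT)":    {"se": 2, "sa_per_se": 2, "dcu_per_sa": 5},
    "Big Navi, option A":   {"se": 2, "sa_per_se": 4, "dcu_per_sa": 5},
    "Big Navi, option B":   {"se": 4, "sa_per_se": 2, "dcu_per_sa": 5},
    "Big Navi, option C":   {"se": 2, "sa_per_se": 2, "dcu_per_sa": 10},
}
for name, l in layouts.items():
    dcus = l["se"] * l["sa_per_se"] * l["dcu_per_sa"]  # each DCU = 2 CUs
    print(f"{name}: {dcus} DCUs = {dcus * 2} CUs")
```

All three options land at 40 DCUs (80 CUs), so the question is really about how wide the front end and ROP partitions scale with the engine count, not the CU total.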

As for Turing being well ahead of comparable Navi cards: in IPC they are pretty much identical.
 