Benetanegia
I'm sure you didn't forget the part where you inflated Vega's clocks, you're only pretending you did.
I didn't inflate anything. I went with official specs. You know the ones given by your beloved company.
Nah, they're just typical load frequencies when playing games, the same ones seen in the benchmarks in your TechPowerUp reference.
ModEdit: Removed unnecessary lingo
Sure, those are coincidentally the only two graphics cards in existence that boost the same in every game. *redacted* In your link they were literally stress-testing the Vega card...
What exactly is "all performance metrics!!!111oneone"?
Vega 56 (typical 1.3GHz clock) vs. RX 580 (typical >1.4GHz clock):
1.43x FP32 FLOPs (9.3 vs. 6.5 TFLOPs)
1.6x memory BW (410 vs. 256 GB/s)
1.84x pixel fillrate (83 vs. 45 GPixel/s)
1.45x texel fillrate (291 vs. 201 GTexel/s)
1.38x higher performance at 1440p, 1.45x higher performance at 4K
So where is your 21% between all performance metrics here?
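For anyone who wants to check the arithmetic, here's a minimal Python sketch that recomputes those ratios from the spec-sheet unit counts and the typical clocks above (small differences from the quoted ratios are just rounding):

```python
# Quick check of the ratios above. Unit counts are from the public
# spec sheets; the clocks are the "typical load" figures in question.
def gpu_metrics(shaders, rops, tmus, clock_ghz, mem_bw_gbs):
    """Headline throughput figures for a GPU at a given clock."""
    return {
        "TFLOPs":   shaders * 2 * clock_ghz / 1000,  # FP32, 2 ops/clock (FMA)
        "GPixel/s": rops * clock_ghz,
        "GTexel/s": tmus * clock_ghz,
        "GB/s":     mem_bw_gbs,
    }

vega56 = gpu_metrics(shaders=3584, rops=64, tmus=224, clock_ghz=1.3, mem_bw_gbs=410)
rx580  = gpu_metrics(shaders=2304, rops=32, tmus=144, clock_ghz=1.4, mem_bw_gbs=256)

for k in vega56:
    print(f"{k}: {vega56[k]:.1f} vs {rx580[k]:.1f} -> {vega56[k] / rx580[k]:.2f}x")
# TFLOPs: 9.3 vs 6.5 -> 1.44x;  GPixel/s: 83.2 vs 44.8 -> 1.86x
# GTexel/s: 291.2 vs 201.6 -> 1.44x;  GB/s: 410.0 vs 256.0 -> 1.60x
```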
In the Vega 64, which by your own admission was as fast as the Vega 56 at the same clocks. You forget we are talking about scaling. Your deliberately choosing a card with less FP32, less pixel fillrate, less texel fillrate, less of everything to make the comparison, basically proving the very scaling issues on Vega that you are so desperately trying to disprove, is hilarious. You're literally saying that Vega 64 had too much of everything == scaling issues. You don't disprove scaling issues by choosing a GPU with less of "everything" needing to scale. *redacted*
BTW, would you like to make a similar comparison between e.g. the RTX 3090 and the RTX 2060? We would all like to see how "Ampere is broken" after looking at that comparison.
Sure. But I warn you, the results are going to be very different than what you expect, so:
4.09x TOPs (36 vs. 8.8 TOPs) (TOPs being FLOPs + 36 INT per 100 FP32; the arithmetic is sketched below)
2.79x memory BW (936 vs. 336 GB/s)
2.77x texel fillrate (556 vs. 201 GTexel/s)
2.03x pixel fillrate (162 vs. 80 GPixel/s)
2.94x higher performance at 4K
So performance scales even better relative to those metrics than in the 3080 vs. 2080 comparison. Uh oh... that's not what we wanted to find. Oops!
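For completeness, a minimal sketch of that TOPs adjustment, assuming NVIDIA's quoted mix of roughly 36 INT32 ops per 100 FP32 ops in games (the TFLOPs inputs are the spec-sheet figures):

```python
# The "TOPs" adjustment: Turing executes INT32 alongside FP32, so the
# 2060's FP32 TFLOPs are scaled up by the ~36 INT ops per 100 FP32 ops
# NVIDIA quotes for games. Ampere's FP32 figure already counts the
# second (FP32/INT32) datapath, so it needs no adjustment.
INT_PER_100_FP32 = 36

rtx3090_tops = 35.6                                 # FP32 TFLOPs, both datapaths
rtx2060_tops = 6.5 * (1 + INT_PER_100_FP32 / 100)   # 6.5 TFLOPs FP32 -> ~8.8 TOPs

print(f"{rtx3090_tops:.1f} vs {rtx2060_tops:.1f} -> {rtx3090_tops / rtx2060_tops:.2f}x")
# 35.6 vs 8.8 -> 4.03x (~4.09x with the rounded 36 vs. 8.8 figures above)
```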
So Vega 10 was bad because it needed "full refactoring of a renderer to make use of its FP32 capabilities". Ampere is good because "it needs a new game engine".
Got it.
To make use of its FP32, pixel- and texel-crunching, and enormous bandwidth capabilities, you mean. In Vega, if you used more FP32, you would just have that many more ROPs and TMUs, and that much more bandwidth, sitting around doing nothing, which is again bad scaling: you just moved the inefficiency from FP32 to the ROPs or TMUs, you didn't make better use of the silicon. In Ampere the only things sitting around doing nothing are the FP32 units; everything else is forcefully being used to its full capabilities.
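That is, at bottom, a bottleneck argument: frame rate is gated by whichever unit saturates first, and everything else idles in proportion. A toy model of it, with capacities roughly matching the cards above and per-frame demands invented purely for illustration:

```python
# Toy bottleneck model: fps is set by whichever resource saturates
# first; utilisation of the rest follows from that fps. Capacities
# roughly match the cards above; the demands are made-up numbers.
def fps_and_utilisation(capacity, demand_per_frame):
    fps = min(capacity[r] / demand_per_frame[r] for r in capacity)
    util = {r: fps * demand_per_frame[r] / capacity[r] for r in capacity}
    return fps, util

workload    = {"fp32": 0.08, "pixel": 0.5,   "bw": 3.0}    # per-frame demand
vega_like   = {"fp32": 10.0, "pixel": 83.0,  "bw": 410.0}  # TFLOPs, GPixel/s, GB/s
ampere_like = {"fp32": 36.0, "pixel": 162.0, "bw": 936.0}

for name, cap in [("Vega-like", vega_like), ("Ampere-like", ampere_like)]:
    fps, util = fps_and_utilisation(cap, workload)
    print(name, f"{fps:.0f} fps", {r: f"{u:.0%}" for r, u in util.items()})
# Vega-like: FP32-bound at 100%, ROPs/bandwidth partly idle.
# Ampere-like: bandwidth/ROPs near saturation, FP32 has the idle headroom.
```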