Sure let's compare RX580 at maximum boost vs non maximum Vega... Seems fair.
No, you compare both cards at their average clock speeds, because those are the clocks used in your TPU performance comparisons.
The Vega cards were reviewed using AMD's reference blower coolers, which caused thermal throttling at 85°C. The RX580 was launched as a partner-only GPU with no reference cooler, meaning most OEMs paired it with open-air coolers that prevented thermal throttling. That said, most RX580 cards actually averaged above 1400MHz, i.e. well above the 1340MHz "boost frequency" you used to get the 6.2 TFLOPs number in your post.
Using the reference coolers without undervolting, Vega 64's average clocks settle close to 1400MHz after a while, and Vega 56's around 1300MHz.
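For reference, here's the back-of-the-envelope math (FP32 TFLOPs = shaders × 2 ops per FMA × clock), using the shader counts from the spec sheets and the sustained clocks above:

```python
# FP32 throughput: each shader does one FMA (= 2 ops) per cycle
def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(tflops(2304, 1340))  # RX580 at the 1340MHz spec boost   -> ~6.17
print(tflops(2304, 1400))  # RX580 at a real sustained clock   -> ~6.45
print(tflops(4096, 1400))  # Vega 64 at ~1400MHz sustained     -> ~11.47
print(tflops(3584, 1300))  # Vega 56 at ~1300MHz sustained     -> ~9.32
```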
Vega 10's lower-than-expected performance came from its power/frequency curve being pretty terrible compared to Pascal's, not from the architecture being broken. If the Vega architecture were that bad, AMD wouldn't have kept using it to deliver the fastest mobile iGPUs on the market.
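To illustrate the power/frequency point: a crude model where dynamic power scales with V² × f shows why Vega's stock voltage points cost so much and why undervolting recovers so much efficiency. The voltage/clock numbers below are purely illustrative, not measurements from any specific card:

```python
# Toy model: dynamic power ~ V^2 * f, normalized to a reference point.
# Illustrative numbers only, to show the shape of the tradeoff.
def rel_power(voltage, clock_mhz, v0=1.20, f0=1500):
    return (voltage / v0) ** 2 * (clock_mhz / f0)

print(rel_power(1.20, 1500))  # stock-ish operating point -> 1.00
print(rel_power(1.00, 1400))  # undervolted               -> ~0.65
# ~35% less power for ~7% less clock: that's a bad stock V/F curve,
# not a broken architecture.
```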
And it doesn't even make a difference, because clocks affect all metrics equally, so it doesn't change a thing compared to Ampere.
Except clocks affect compute and texture/pixel fillrate throughput, which you used in your post to try to prove that Vega is more broken than Ampere and worse than Polaris. Your own comparison is what made it relevant.
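Fillrates scale linearly with clock just like compute does, so picking the wrong clock skews those numbers too (RX580: 32 ROPs, 144 TMUs):

```python
# Pixel and texture fillrates scale 1:1 with clock speed
def fillrates(rops, tmus, clock_mhz):
    gpix = rops * clock_mhz / 1000  # GPixels/s
    gtex = tmus * clock_mhz / 1000  # GTexels/s
    return gpix, gtex

print(fillrates(32, 144, 1340))  # RX580 at spec boost       -> (42.9, 193.0)
print(fillrates(32, 144, 1400))  # RX580 at sustained clocks -> (44.8, 201.6)
```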
Basically people who think nvidia lied about the number of tflops just don't understand how gpus work.
No one made this claim.
The claim is that Ampere's very high FP32 throughput may never translate into significantly higher performance in future games, because the architecture isn't designed to use all of that throughput in games anyway (e.g. it can't really sustain 30 TFLOPs unless the TMUs, ROPs, geometry processors, etc. sit unused).
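A crude sketch of why that is: in each Ampere SM partition, one datapath is FP32-only while the other is shared between FP32 and INT32, so any INT32 work directly eats into the "spec" FP32 rate. The ~36 INT32 instructions per 100 FP32 figure is NVIDIA's own average for games from the Turing whitepaper; the scheduling model here is obviously simplified:

```python
# Toy model of an Ampere SM partition: datapath A issues FP32 only,
# datapath B issues FP32 *or* INT32. Peak = 2 FP32 ops/cycle per pair,
# but only when there is zero INT32 work.
def effective_fp32(fp_ops, int_ops):
    # B must absorb all INT work; the rest of the FP work is split
    # to balance both paths
    cycles = max(int_ops, (fp_ops + int_ops) / 2)
    return fp_ops / cycles  # FP32 ops per cycle (peak = 2.0)

print(effective_fp32(100, 0))   # pure FP32 shader -> 2.00 (full spec TFLOPs)
print(effective_fp32(100, 36))  # typical game mix -> ~1.47, i.e. ~74% of peak
```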
And just like Vega 10 was a chip developed to compete in too many market segments (gaming + productivity + compute), Ampere / GA102 might have also been developed to increase nvidia's competitiveness on non-CUDA compute workloads, where it apparently had nothing to write home about compared to Vega.
The reality is that claiming "Vega / GCN5 / GFX9 architecture is broken" makes just as much sense as claiming "Games aren't ready for Ampere".