It's fair to adjust GPU die size estimates to compensate for differences in functionality. You are not comparing like for like unless you take functionality into account. Perhaps all the extra transistors are needed for that additional functionality. How do you know the competitors wouldn't end up with even more transistors to hit the same advanced functionality?
At launch, a 360 mm² Tahiti performed worse than a 300 mm² GK104, but at least the former had far better FP64 and a 50% wider memory system. And if that had given AMD any traction in the commercial compute space, the area cost would have been justified.
But for Vega I just don't see it. Area is consumed by the units with the highest multiples: shaders primarily, then texture units, ROPs, memory controllers, and caches.
A control or management unit that doesn't manipulate data, like the HBCC, isn't the kind of thing that's going to explain a massive die size difference.
So you are betting on some unknown feature to explain the difference. I don't think that makes sense, and it surprises me that after more than a decade on this board, you're saying it's unfair to compare the die sizes of GPUs as similar as an FP16-adjusted GP102 and Vega.
If Vega turns out to have a large amount of FP64 after all, you'd have a point.
If AMD decided to spend 40% more area on a speculative, currently unused feature, betting that it will start making them boatloads of money soon (enough to justify being a year later than the competition), then they made a huge mistake, IMO.
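To put a rough number on why 40% extra area hurts: die cost scales a bit worse than linearly with area, because fewer candidate dies fit on a wafer (and yield drops too, which is ignored here). A quick sketch using the classic dies-per-wafer approximation, with hypothetical die areas of 350 mm² vs. 40% larger, on a 300 mm wafer:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic dies-per-wafer approximation.

    Ignores defect yield, scribe lines, and edge exclusion, so it
    overestimates usable dies; good enough for a relative comparison.
    """
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Hypothetical areas for illustration (not the actual Vega/Pascal figures):
small = 350.0          # baseline chip, mm^2
large = small * 1.4    # a chip spending 40% more area -> 490 mm^2

n_small = dies_per_wafer(small)   # 166 candidate dies
n_large = dies_per_wafer(large)   # 114 candidate dies
print(n_small, n_large, round(n_small / n_large, 2))  # 166 114 1.46
```

So before yield even enters the picture, 40% more area means roughly 45% fewer dies per wafer, i.e. each die costs that much more to produce, which is why the feature had better pay off.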
But I don't think any of that is true: Vega's pathetic results can best be explained by an obscure architectural mistake or corner-case bug that could not be fixed without a full base spin. They had no choice but to release one of their worst-performing chips ever.
I believe future Vegas will return to a performance ratio that's more or less in line with Pascal.