I can't see in any way how GCN should be a given, either versus an older architecture that has proved brilliant or versus something really custom (changing too much of a proven, well-performing design sounds risky, depending on how many resources you throw at the design).
GCN has less raw performance/mm^2 than the VLIW architecture, that's a fact. A 28nm VLIW chip the size of Tahiti could probably have packed 30% more ALUs.
But performance is much more consistent than before, with a much higher minimum framerate, and overall I don't think real-world perf/mm^2 has declined, considering the diminishing returns they were getting as they kept increasing the number of ALUs.
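As a rough back-of-the-envelope sketch (the numbers below are purely illustrative assumptions, not measurements), GCN only needs a modest utilization advantage to make up for the lost ALU density:

#include <iostream>

int main() {
    // Illustrative assumptions only: same-size dies, the VLIW design packs
    // ~30% more ALUs, but GCN keeps its ALUs busy a larger fraction of the time.
    const double vliw_alu_density = 1.30; // relative ALU count per mm^2
    const double gcn_alu_density  = 1.00;
    const double vliw_utilization = 0.60; // assumed average VLIW slot occupancy
    const double gcn_utilization  = 0.85; // assumed GCN SIMD occupancy

    std::cout << "VLIW effective throughput/mm^2: "
              << vliw_alu_density * vliw_utilization << "\n"; // 0.78
    std::cout << "GCN  effective throughput/mm^2: "
              << gcn_alu_density * gcn_utilization << "\n";   // 0.85
    // GCN pulls ahead once its utilization is more than ~1.3x the VLIW one.
    return 0;
}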
In the future, real perf/mm^2 for GCN will greatly surpass the old architecture as more and more engines move to compute shaders. Look at the performance gains AMD achieved in Civ 5, a game that makes heavy use of compute shaders: GCN is more than 65% faster there than the older architecture.
Even if architectures are exploited differently in the console space, so a VLIW chip may reach more sustained performance closer to its peak, compute is not only bound by FLOPs but also by cache architecture, internal bandwidth and so on. And GCN (or Kepler/Fermi) offers much more on this side.
Microsoft knows where graphics is heading. DirectX 11 introduced DirectCompute, they are working on C++ AMP (an extension for high-performance computing) and many other things. I'll go as far as to say that they won't have GCN in their console, but probably an even more compute-oriented architecture, for example Sea Islands or, if they launch in 2013, something similar to that year's architecture.
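To give an idea of what C++ AMP looks like, here is a minimal sketch (assuming Visual Studio's amp.h implementation; the array size and data are arbitrary). The restrict(amp) lambda handed to parallel_for_each is compiled as a compute kernel and runs on the GPU:

#include <amp.h>
#include <vector>
#include <iostream>

int main() {
    using namespace concurrency;

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);

    array_view<const float, 1> av(1024, a);
    array_view<const float, 1> bv(1024, b);
    array_view<float, 1> cv(1024, c);
    cv.discard_data(); // c is write-only, no need to copy it to the GPU

    // One kernel invocation per element of cv.extent.
    parallel_for_each(cv.extent, [=](index<1> idx) restrict(amp) {
        cv[idx] = av[idx] + bv[idx];
    });

    cv.synchronize(); // copy the result back to the host vector
    std::cout << c[0] << std::endl; // prints 3
    return 0;
}

The point is that the GPU work is expressed directly in C++, so the same compute-heavy code paths the consoles would rely on map straight onto a compute-oriented architecture.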
A fully HSA console would greatly benefit the entire AMD business.