SemiAccurate published the transcript of their interview with Andrew Richard and Tim Sweeney.
Putting aside their disagreement over software rendering, they still make some pretty interesting points.
One point they pretty much agree on is that having a discrete CPU and GPU doesn't make sense. Andrew Richard goes as far as stating that if you had to have two chips, he would favor two "fusion" chips over a CPU plus a discrete GPU.
Since architectures like Larrabee may not be a valid option for at least another five years, as usual some "random" questions come to mind (not all of them are console-related; it's a bit more generic, about the needs of personal computing).
Soon we will have, on the same chip, a CPU, a GPU, a video decoder, and an encoder (in Sandy Bridge, for instance). The burden of quite a few tasks is being removed from the CPU. Pretty much everything the "average Joe" does with his computer will be accelerated in some way (even browsing will be GPU-accelerated soon).
So the question that arises in my mind is: do SIMD units (like SSE, AltiVec, AVX) still make sense?
In the x86 realm you can't pass on them for compatibility reasons, but in a console (or in the embedded space, see Tegra 2) does it make sense to invest silicon in this type of unit?
They put quite some constraints on CPU designs and end up taking a significant part of a CPU die (see here or here).
Looking at the Bobcat floor plan, for example, it looks like (huge assumption, I know) giving up the SIMD units could allow a Bobcat to be "bulldozerized" as far as die size is concerned.
Could the option be valuable for a console?