http://pixelstoomany.wordpress.com/ (about the response to Tim Sweeney's rant about the death of fixed-function APIs)
This is what I understand from the "one chip does everything" part:
1- A heterogeneous design like Cell (harder to manage?)
2- A multi-core design, each core being multi-purpose, like Larrabee (maximum efficiency traded off against maximum peak performance?)
3- Merge CPU and GPU into one chip (or is it one die?), like AMD's Fusion (easier to manage than Cell, and at the same time cheaper than Larrabee?)
In every topic on ray tracing vs rasterization, different types of AA implementations, different types of HDR implementations, all I see is how one technique offers advantages / disadvantages over another. From my point of view, a utopian, ideal world would be one where some intrinsic functions are all automated. Take a single house with an outdoor garden, for instance: there is one sun, one atmosphere, one grass; the real-world physics are all the same whether you are in the house or behind a tree. But in the computational world there are something like 10 different formats of HDR, 10 different ways of calculating indirect lighting, different ways of culling, and so on.
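As a toy illustration of that fragmentation, here is a minimal sketch (my own, not taken from any particular engine) of just one of those many HDR encodings: RGBM8, where a shared 8-bit multiplier extends the range of three 8-bit colour channels. The `max_range` constant is an arbitrary choice for the sketch; every engine that uses a scheme like this picks its own, which is exactly the kind of incompatibility I mean.

```python
import math

def encode_rgbm(rgb, max_range=6.0):
    # Pack an HDR colour into four bytes: RGB plus a shared multiplier M.
    # max_range is the largest representable channel value (my assumption here).
    m = max(max(rgb), 1e-6) / max_range
    m = min(max(m, 1e-6), 1.0)
    # Quantise the multiplier to 8 bits, rounding up so channels never clip.
    m = math.ceil(m * 255.0) / 255.0
    r, g, b = (round(c / (m * max_range) * 255.0) for c in rgb)
    return (r, g, b, round(m * 255.0))

def decode_rgbm(rgbm, max_range=6.0):
    # Recover the HDR colour: each channel is scaled back up by M * max_range.
    r, g, b, m = rgbm
    scale = (m / 255.0) * max_range
    return tuple(c / 255.0 * scale for c in (r, g, b))
```

A colour like (2.0, 1.0, 0.5) round-trips through four bytes with only small quantisation error, but an asset baked this way is meaningless to a renderer expecting half-float, shared-exponent RGB9E5, or any of the other formats in use.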
I really don't see *that* much room for graphical leaps, but rather "trade-off" leaps. In some of the most heated debates about trade-offs that had to be made, like lower resolution with more objects, or no AA with better fps, there seems to be some kind of frustration that sometimes two things are desired, but one cancels / limits the other.
I expect the next generation of consoles to be the last to take advantage of 2D silicon-based transistors.