Despite how early it is, work on each successive hardware generation is always underway, often before the next immediate iteration has even launched - so it's only natural that Sony is working on PS4 as we speak, that MS is working on concepts for the next iteration of Xbox, and that Nintendo is working on the successor to the Wii... What will we call that... Wii-ii?
That much is a given; what is not a given is what form this hardware will take.
In another five or six years, where will we be, and what will realistically be possible? Right now we're seeing the shift to 65nm, and I can say with a pretty high degree of certainty that we'll be coming out of the 45nm era when these things go into production - maybe these machines will include 32nm parts. How many processor cores can you fit on a 32nm chip the size of CELL or the 360's CPU?
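Some napkin math on that question - a minimal sketch assuming ideal area scaling, so treat the numbers as upper bounds (real designs lose area to wiring, caches, and yield margin):

```python
# Napkin math: ideal transistor density scales with the square of the
# ratio between process nodes. Starting point is Cell as shipped:
# 90nm, ~235 mm^2, 1 PPE + 8 SPEs.

def density_gain(old_nm: float, new_nm: float) -> float:
    """Ideal density multiplier when shrinking from old_nm to new_nm."""
    return (old_nm / new_nm) ** 2

spes_today = 8
for node in (65, 45, 32):
    gain = density_gain(90, node)
    print(f"90nm -> {node}nm: ~{gain:.1f}x density, "
          f"room for ~{int(spes_today * gain)} SPE-class cores in the same area")
```

By that reckoning, a 32nm die the size of today's CELL could hold sixty-odd SPE-class cores - before you spend any of that area on caches or a built-in GPU.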
Will another five or six years see CPU-GPU convergence? Will we see homogeneity in code, with common command types and accepted programming practices shared between CPUs and GPUs? Will they both be capable of switching "hats", so to speak? Will my GPU do CPU work if my CPU becomes bogged down, and vice-versa?
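To make the "hat-switching" idea concrete, here's a toy sketch - hypothetical Python, standing in for no real console API - of a scheduler that hands any job, graphics or otherwise, to whichever unit is least loaded:

```python
import heapq

class Processor:
    """A CPU or GPU that can accept any kind of task (the hypothesis)."""
    def __init__(self, name: str):
        self.name = name
        self.busy_until = 0.0  # simulated time at which this unit frees up

    def run(self, task: str, cost: float) -> None:
        start = self.busy_until
        self.busy_until = start + cost
        print(f"{self.name} takes '{task}' at t={start:.1f}")

def dispatch(tasks, processors):
    # Min-heap keyed on when each unit next becomes free.
    heap = [(p.busy_until, i, p) for i, p in enumerate(processors)]
    heapq.heapify(heap)
    for task, cost in tasks:
        _, i, proc = heapq.heappop(heap)  # least-loaded unit, CPU or GPU
        proc.run(task, cost)
        heapq.heappush(heap, (proc.busy_until, i, proc))

# If the CPU bogs down, the next job simply lands on the GPU, and vice-versa.
dispatch([("physics", 2.0), ("draw", 3.0), ("AI", 1.0), ("draw", 3.0)],
         [Processor("CPU"), Processor("GPU")])
```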
Will the next hardware iteration be little more than a cluster of extremely flexible GPUs? Will a cluster of shader units do all the work for the whole system? Will they process all the physics, and all the AI, and draw all the pictures?
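Part of why that isn't crazy: physics (and a fair amount of AI) is one small kernel applied independently to thousands of elements, which is exactly the shape of work shader units are built for. A made-up illustration in plain sequential Python - on real shader hardware every particle would be updated at once:

```python
def integrate(pos, vel, dt, g=-9.8):
    """Per-particle kernel: one Euler position/velocity step."""
    vx, vy = vel[0], vel[1] + g * dt
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)

positions  = [(0.0, 10.0), (1.0, 12.0), (2.0, 8.0)]
velocities = [(1.0,  0.0), (0.5,  0.0), (2.0, 0.0)]

# A shader cluster would run the kernel on all particles in parallel;
# here we just map it over the list.
positions, velocities = map(list, zip(*(
    integrate(p, v, 0.016) for p, v in zip(positions, velocities))))
print(positions)
```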
This is all hard to say.
What is not hard to say is this - the partnerships formed right now are not likely to go anywhere for the foreseeable future. Sony is likely to stick with Nvidia, and MS is likely to stick with AMD-ATI, which adds a new dynamic to the whole thing... Nvidia is likely to try to match the flexibility of their next-next-next-gen GPUs to whatever hybridised monster AMD-ATI comes up with, and my own theory is that PS4 and the next Xbox will have either hybridised CPUs (CPUs first and foremost, with hardware-accelerated 3D cores built directly onto their silicon), hybridised GPUs (GPUs first and foremost, with SPE-like multi-purpose cores built onto their silicon), or both... maybe even multiples... perhaps a two-by-two hybrid-CPU, hybrid-GPU grouping.
Xbox III with an IBM CPU (either their own design or an evolution of the one inside the 360 today) and an AMD-ATI hybrid GPU sounds plausible. It's doubtful AMD-ATI will allow their tech to be piggy-backed onto an IBM CPU, though, so maybe it'll be two AMD-ATI hybrid CPU-GPUs, with one acting as the designated "CPU" while the other is the designated "GPU"... If the general performance of these plausible AMD-ATI hybrids is competitive with an IBM design around the same time, it may go that way.
PS4 with an IBM CPU (Cell-like... STI's baby) with Nvidia GPU tech built onto the silicon seems plausible, with a possible matching "C-GPU"... While AMD-ATI would have something to lose by allowing their tech to be installed in IBM designs, Nvidia would have only something to gain by it: the flexibility inherent to a hybrid design, which AMD-ATI - their only real graphics-chip competitor - will soon have.
A question I have is this - would these GPUs be clocked 1:1 with the CPU cores?
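Some rough arithmetic on what's at stake - the 3.2 GHz CPU and 500 MHz Xenos figures are today's published clocks, but the 128-bit path width is purely my own assumption for illustration:

```python
def peak_bytes_per_s(clock_hz: float, width_bytes: int) -> float:
    """Peak throughput of a bus moving width_bytes every clock."""
    return clock_hz * width_bytes

path = 16  # assumed 128-bit data path, for illustration only
print(f"500 MHz GPU clock: {peak_bytes_per_s(500e6, path) / 1e9:.1f} GB/s")
print(f"clocked 1:1 at 3.2 GHz: {peak_bytes_per_s(3.2e9, path) / 1e9:.1f} GB/s")
```

1:1 clocking would also spare the designers a clock-domain crossing between CPU and GPU - though whether shader logic can run that fast on a console power budget is another question.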
Very interesting times lie ahead...
Maybe we'll see no hybridisation whatsoever - maybe we'll see only forward-looking iterations of what we already have... Maybe we'll see CELL and Xenon mk IIs with four times their current processing units and twice the effective speed or efficiency, and GPUs that are equally as dense - but I don't think so. The benefits of hybridisation are too great to overlook, I think.
Besides the exotic stuff listed above, the standard list of perks is pretty much a given: much more RAM, maybe as much as eight times the memory footprint, and the equivalent of GDDR6 by then, which should be in line with the internal CPU data-bus speeds we have today on CELL, only off-chip. Processor caches are likely to swell to several megabytes (ten megabytes is really not that far out, but it'll add bulk to the die), and internal bus bandwidth is likely to go into the terabytes-per-second range. Much of this bandwidth may be used to prepare data for the built-in GPU-like cores.
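The arithmetic behind those figures, for what it's worth - today's 512 MB consoles and CELL's ~204.8 GB/s EIB peak are the published baselines; the multipliers are my projection:

```python
ram_today_mb = 512  # total RAM in PS3/360 today
print(f"8x footprint: {ram_today_mb * 8 / 1024:.0f} GB of RAM")

eib_peak_gb_s = 204.8  # CELL's Element Interconnect Bus, theoretical peak
for factor in (2, 4, 8):
    print(f"{factor}x EIB: {eib_peak_gb_s * factor / 1000:.2f} TB/s")
```

So an 8x jump in internal bus bandwidth already lands in the terabytes-per-second range.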
But as I said... Very interesting times.
Dio