That is far too conservative. And I don't understand why we should have expected full utilization of these consoles in year 2 or 3 of their lives; that has never happened in any console generation, so I don't see why the expectation should be any different now.
I'm not pointing at the state of the art today and saying 'that's it' in terms of how far developers have exploited the system..it'll obviously get better. But I do think uptake is slower on the CPU side this generation than last..and that ultimately, perhaps very few games will come close to using the available CPU resources to the full extent, in a way that's actually appreciable to the game experience.
Even with PS2, mid-cycle, it was obvious what most developers might do with yet more processing power.
But now with PS3, I don't know if there's such an obvious roadmap? Maybe I'm wrong. But that's how it seems.
It would be interesting to quiz developers on that. I'm sure Sony, MS et al are.
If you still need to program SPUs, and your content pipeline still needs to be able to generate high quality HD assets, then getting less processing power or less RAM to dump data into isn't a help to your cause at all. The complexity of the task has not been lowered, yet the task must be accomplished with less -- which only complicates things further.
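For what it's worth, here's roughly what 'programming the SPUs' means in practice even for trivial work -- a minimal double-buffered streaming sketch, assuming the standard Cell SDK MFC intrinsics from spu_mfcio.h; spu_stream, process_chunk and CHUNK_SIZE are hypothetical placeholders, not anyone's actual engine code. The point is that this explicit data movement has to be written and tuned regardless of how much or how little silicon sits behind it:

```c
/* Hypothetical SPU-side streaming loop: double-buffered DMA from main
   memory into the 256 KB local store. Assumes the Cell SDK SPU intrinsics
   (spu_mfcio.h); spu_stream/process_chunk/CHUNK_SIZE are illustrative only. */
#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK_SIZE 16384  /* 16 KB: the maximum size of a single DMA transfer */

/* Two buffers in local store so DMA and computation can overlap. */
static char buf[2][CHUNK_SIZE] __attribute__((aligned(128)));

/* Stand-in for whatever per-chunk work the game would actually do. */
static void process_chunk(char *data, unsigned size)
{
    for (unsigned i = 0; i < size; i++)
        data[i] ^= 0xFF;
}

void spu_stream(uint64_t ea, unsigned nchunks)
{
    unsigned cur = 0;

    /* Kick off the first DMA from main memory into local store (tag = cur). */
    mfc_get(buf[cur], ea, CHUNK_SIZE, cur, 0, 0);

    for (unsigned i = 0; i < nchunks; i++) {
        unsigned next = cur ^ 1;

        /* Prefetch the next chunk into the other buffer while we wait. */
        if (i + 1 < nchunks)
            mfc_get(buf[next], ea + (uint64_t)(i + 1) * CHUNK_SIZE,
                    CHUNK_SIZE, next, 0, 0);

        /* Block until the current chunk's DMA tag completes, then work on it. */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();
        process_chunk(buf[cur], CHUNK_SIZE);

        cur = next;
    }
}
```

None of that boilerplate gets any simpler if you shrink the box; you still have to partition the work, size it to local store, and hide the latency yourself.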
Are developers on average really making tangibly beneficial use of the SPUs, to a degree that actually makes the processor sweat? Some might be, but I think they're probably in the minority..
Again, I might be wrong, but it's just the impression I get right now..that there's something of a slowdown in how quickly developers are consuming processing resources.
Lesser hardware would help Sony's bottom line but not developers at all. What will help developers most are concrete expectations (for planning and early execution), little need to re-invent the wheel (so the focus is on making games, not on re-learning how to make them), and more to work with than they have now in "relative" terms (HW truly capable of handling what is being marketed - HD - and please don't invent some ULTRA HD crap that once again places expectations beyond what the "common" developer can deliver on the HW).
A mid or even low range GPU in 2011/2012 would likely be very capable of doing 'nice' HD. I don't see them upping resolution..but things like 3D may become the next '1080p'. I don't think they'll hinge their 'novelty' on 3D though..unless they find a way to make it work as well on existing displays. I'm personally really excited by 3D, and I wish I could see those CES demos for myself, but I have to be realistic about it becoming a standard feature of games if it requires a display upgrade.
I also think that on the GPU side, processing resources are becoming so abundant that developers don't really know how to make 'non-lazy' use of them. On the PC side, software seems to be lagging hardware more than ever. Consumers are left to up resolution, AA and AF to try and make use of their cards. How many developers know what they'd do with 1Tflop of processing power on the GPU, where all that processing makes a tangible difference to a (720p/1080p) image onscreen (i.e. not just doing things inefficiently to consume resources)?
I'd love to hear developers' thoughts on this, and love for someone to lay out a roadmap for compelling use of 10 or 20 or 30 times the processing power of PS3 or 360, in the same way PS3 was 35x PS2 or whatever. But I feel like in some ways, particularly CPU-wise, PS3 has even overshot a typical developer's capabilities..if the next generation were to move the bar up to a similar degree, I think it'd overshoot even further.
Again though, it's just the impression I get right now..maybe in the next couple of years, if we see many more developers using the CPU well and hurting for CPU resources, my mind might change on that..
And quite aside from developer capability to consume these resources, there's the question of how much the consumer notices. I know we had these arguments at the beginning of this gen, and I still think they weren't too valid then..I do think Wii undershot in that regard..the consumer does see a difference there because of processing power vs the other systems..but in terms of what's simply on the screen, that's an argument that may become more convincing next generation. As a matter of strategy, I think both MS and Sony have to be considering how they will make the experiences on their consoles universally different from the other systems, and I'm not sure how much that'll have to do with what's on your screen..