I do absolutely agree with that, but the fact that you liken programming practices on Cell to Larrabee supports my underlying point that at some point you will be doing all your heavily parallel work on the GPU (Larrabee et al. going forward)
Written in what language?! This theory assumes that in the future, writing for the GPU will be a lot easier than writing for Cell, but that requires faith that there will be a dramatic improvement in GPGPU development that leapfrogs Cell development. Larrabee doesn't count as a GPU because it's more a Cell-like CPU with an x86 instruction set, just with GPU features added to facilitate that aspect. You'll still need Intel to provide some amazing tools, and you'll still be faced with the same sorts of problems as writing for Cell, only starting from a zero code base versus 5+ years of Cell development and reusable code.
So we have the options of:
- Basic PC CPU and hefty GPU for heavy processing
Requires complex GPGPU code, and standard PC code with the problems of parallelising tasks.
- Cell2 and GPU
Uses existing Cell know-how, existing standard GPU use, with the issues being how you parallelise your tasks.
- Larrabee only
Requires a whole new, unused development method, with solid roots in x86, and an unknown set of tools.
- Basic PC CPU and Larrabee as a GPU
A combination of 1 and 3, making the 'GPGPU' aspect easier, but still with Larrabee development unknowns and the general task-parallelisation problems.
All of these have the issue of developing parallel code, and unless one system can offer a magic bullet, that's not a deciding factor for any of them. Only one of these options says 'take everything you know now and carry on with it, with a progressive advancement', whereas the others all have some element of 'learn something completely new'. Now, if a developer who hasn't developed on Cell at all is faced with a new console, the option to work with what they know will be more appealing, but I dare say it's no easier. Being able to write x86 instead of SPU code is no advantage, as no-one goes that low-level any more (see the quick sketch below). Not having to worry about memory management is a plus, but if you're writing GPGPU code you have all sorts of other concerns. I think if people drew up an honest, comprehensive assessment of the different issues faced by the different hardware setups, such as exactly what is involved in writing heavy-lifting code for GPUs, they'd see there are no easy choices. But one choice has years of solid experience behind it. Those who shied away from getting that experience this gen will be faced with it, unavoidably, next-gen. There's really no point trying to resist change! When it happens, get in quick!
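To put that in concrete terms, here's a rough sketch (hypothetical function names, trivial example) of the same four-wide multiply-add written against the Cell SDK's SPU intrinsics and against SSE intrinsics on x86. Neither one looks anything like everyday high-level PC code, which is the point: the x86 label buys you very little once you're working at this level.

    /* Hypothetical example: multiply-add over arrays of 4-float vectors.
       The first version needs the Cell SDK (spu-gcc defines __SPU__);
       the second needs any x86 compiler with xmmintrin.h. */

    #ifdef __SPU__
    #include <spu_intrinsics.h>

    void madd_spu(vec_float4 *dst, const vec_float4 *a,
                  const vec_float4 *b, const vec_float4 *c, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = spu_madd(a[i], b[i], c[i]);   /* dst = a*b + c */
    }

    #else
    #include <xmmintrin.h>

    void madd_sse(__m128 *dst, const __m128 *a,
                  const __m128 *b, const __m128 *c, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = _mm_add_ps(_mm_mul_ps(a[i], b[i]), c[i]);   /* dst = a*b + c */
    }
    #endif

And that's before the genuinely Cell-specific part, DMAing your data in and out of the SPU's local store, which is the memory-management headache the x86 options spare you.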
If you recruit someone out of college or out of the PC dev world, it's a small step to then develop for the X360 or its likely successor, which will still probably use a dev-friendly and relatively easy-to-develop-for base.
What you're talking about here is high-level development. Anyone who can come out of college and write low-level, high-performance engine code for multicore x86 CPUs and GPUs is going to have what it takes to learn Cell. What we think of as PC code is just throwing any old thing at the compiler and having the CPU turn it into something fairly quick, but that does not get the best performance from the silicon. The only way that approach will work next-gen is if efficiency goes out the window and all devs care about is cost-effectiveness, which may be the case. The end result would be a bit like getting current-gen performance, but really easily. That is, back in the 16-bit days you'd need to optimise like mad to get a high-performance platformer; now one could be knocked up by a teenager in a couple of days in high-level languages, because there's so much processing headroom that a lot of it can be given over to easy development. Developers will be able to release KZ2-quality games using off-the-shelf engines, just filling in the blocks. And as Wii demonstrates, that may be enough. However, if you want to advance things and get better-looking games, where NFL looks like EA's CG trailer instead of what we have now, you'll need to hit the metal, and that's something more intrinsic to the coder than anything taught.
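As a crude illustration of that gap (made-up function, assumptions noted in the comments), here's the 'throw any old thing at it' version of a simple gain loop next to a hand-vectorised SSE version. The first is what cheap, quick development looks like; the second is the kind of thing you end up writing all over an engine if you actually want the silicon's throughput, on any of these architectures.

    #include <xmmintrin.h>

    /* High-level version: clear, quick to write, leaves the compiler
       to do whatever it can. */
    void scale_scalar(float *px, int n, float gain)
    {
        for (int i = 0; i < n; i++)
            px[i] *= gain;
    }

    /* 'Hit the metal' version: explicit SSE, four floats per iteration,
       assumes px is 16-byte aligned and n is a multiple of 4. */
    void scale_sse(float *px, int n, float gain)
    {
        __m128 g = _mm_set1_ps(gain);
        for (int i = 0; i < n; i += 4) {
            __m128 v = _mm_load_ps(px + i);
            _mm_store_ps(px + i, _mm_mul_ps(v, g));
        }
    }

A decent compiler may auto-vectorise the scalar loop in a case this simple, but spread that gap across branchy, cache-unfriendly game code and it's exactly the difference between 'current-gen performance really easily' and hitting the metal.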
It's why, even though 3rd parties have gotten a better grasp on Cell, they still wish Cell had never existed and hope Sony will drop it in favor of a more friendly architecture.
Which is the more friendly architecture, and how? Larrabee, that great architecture that no-one has used, with unknown tools and all the parallelisation problems of Cell? GPUs, with their funky languages and limited data structures that have to fit their access patterns?
None of the options are friendly!