MfA said:
JF_Aidan_Pryde said:
I intend to ask him something regarding Cell. But I guess the big question is whether IBM will have the tools to generate parallel code.
Those kinds of tools don't get you very far; you need large-granularity parallelism. If the developer does not take care not to introduce dependencies in his code, that kind of parallelism isn't present to begin with, and even if it were, it is hell on a compiler trying to analyze whether there is any dependence at that scale. You'd have to put a restrict modifier on every pointer.
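To illustrate the restrict point: a minimal sketch (function and variable names are made up) of why aliasing blocks automatic parallelization. Without the restrict qualifiers, the compiler must assume dst and src might overlap, so it cannot prove the loop iterations are independent.

```c
#include <stddef.h>

/* With restrict, the programmer promises dst and src do not alias,
   so the compiler may vectorize or parallelize the loop freely.
   Remove the qualifiers and it has to assume possible overlap. */
void scale_noalias(float *restrict dst, const float *restrict src,
                   float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = k * src[i];   /* independent iterations */
}
```

This is exactly the kind of annotation burden being described: the dependence information has to come from the programmer, not the analyzer.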
I have to protest a bit against your initial assertion. Well-optimised libraries can help a lot, and I'd imagine that most vanilla tasks will be readily available in that form.
Generally speaking though, programming for machines with higher levels of parallelism does require a bit of a shift in mental gears, even with auto-parallelising/vectorising tools. Most programmers think algorithmically in an Algol/Fortran/Pascal/C++ tradition, which isn't 100% appropriate. However, unless programmers have grown thickheaded over the years they should be able to adapt, and by appearances this first iteration of the Cell concept won't be massively parallel anyway, so Amdahl's law won't be as punishing.