Demirug said:
I think it's still too early to tell who will win the power-increase challenge. Maybe in a few years a diagram that compares the number of shading units with the number of CPU cores over time could be interesting.

Well, I think they'll be roughly similar. CPUs have the advantage that they've only very recently started to implement thread-level parallelism (for consumer processors), and leveraging this further could prove a tremendous advantage. GPUs have the advantage that they haven't yet been engineered as carefully for high clock speeds, so there's headroom there. It could go either way, I think, but in very rough overall terms I expect the gains to be similar.
Demirug said:
I was about to make a joke about porting the whole thing to Red Storm. Unfortunately, the typical player doesn't have such a system in the attic. And if they have the money, investing it in a large grid of multi-GPU machines could give them more bang for the buck.

Yes! Play UT2007 on Red Storm! Hehe
More seriously, though, it could possibly be useful. For GPGPU apps, CPUs come much closer to GPU performance, even in the GPU's best cases, than they would for normal software rendering. With the CPUs in a typical system becoming more and more powerful, being able to send threads to both the CPU and the GPU could potentially be useful for supercomputing.
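To make the CPU+GPU split concrete, here's a minimal sketch in CUDA (which postdates this thread; the 50/50 split and the trivial scaling kernel are my own illustrative assumptions, not anything from the post). The kernel launch is asynchronous, so a host thread can chew through its share of the array while the GPU processes the rest.

[code]
// Sketch: split one workload between the GPU and a CPU thread.
#include <cstdio>
#include <thread>
#include <vector>
#include <cuda_runtime.h>

__global__ void scale_gpu(float* data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;                    // GPU's half of the array
}

static void scale_cpu(float* data, int n, float k) {
    for (int i = 0; i < n; ++i) data[i] *= k;   // CPU's half of the array
}

int main() {
    const int n = 1 << 20, gpu_n = n / 2;       // arbitrary 50/50 split
    std::vector<float> host(n, 1.0f);

    float* dev = 0;
    cudaMalloc(&dev, gpu_n * sizeof(float));
    cudaMemcpy(dev, host.data(), gpu_n * sizeof(float), cudaMemcpyHostToDevice);

    // The launch returns immediately, so the CPU thread below runs
    // concurrently with the kernel.
    scale_gpu<<<(gpu_n + 255) / 256, 256>>>(dev, gpu_n, 2.0f);
    std::thread cpu(scale_cpu, host.data() + gpu_n, n - gpu_n, 2.0f);

    cpu.join();                                  // wait for the CPU's share
    cudaMemcpy(host.data(), dev, gpu_n * sizeof(float),
               cudaMemcpyDeviceToHost);          // implicitly waits on the kernel
    cudaFree(dev);
    printf("host[0]=%f, host[n-1]=%f\n", host[0], host[n - 1]);
    return 0;
}
[/code]

A real split would be tuned to the relative throughput of the two processors rather than fixed at half and half.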
One might envision, for example, a future supercomputer that has, on each node, two GPUs and four CPUs. The GPUs could be there for people who can make their algorithms run well within OpenGL 3.0 (Windows on a supercomputer seems unlikely), likely through third-party libraries, and the CPUs for those who can't. One might then gain the maximum amount of power by leveraging both.
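As a purely hypothetical sketch of that node-level arrangement (every name below is invented for illustration; a real system would sit on top of MPI and a batch scheduler rather than anything this simple): each job provides a CPU implementation and, where the algorithm maps well to graphics hardware, a GPU one, and the node dispatches accordingly.

[code]
// Hypothetical per-node dispatch for a node with 4 CPUs and 2 GPUs.
#include <functional>
#include <vector>

struct Task {
    std::function<void()>    cpu_impl;   // always available
    std::function<void(int)> gpu_impl;   // empty if the algorithm doesn't
                                         // map to the GPU; arg = GPU index
};

// Send GPU-capable tasks round-robin to the node's GPUs; everything
// else falls back to the CPUs. (Sequential here for clarity; a real
// scheduler would run tasks concurrently across all six processors.)
void run_on_node(std::vector<Task>& tasks, int num_gpus = 2) {
    int next_gpu = 0;
    for (auto& t : tasks) {
        if (t.gpu_impl) {
            t.gpu_impl(next_gpu);
            next_gpu = (next_gpu + 1) % num_gpus;
        } else {
            t.cpu_impl();
        }
    }
}
[/code]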
P.S. No quotes on links in UBB