Dunno if you guys spotted it yet, but the snippet on the page mentions 20 GFLOPS of performance, roughly equivalent to a 10 GHz P4 (though it doesn't say which GPU produced that number).
However, if GPUs are ever going to be usable for running general code, surely we HAVE to get rid of all these god-damned cheating/"optimizing" drivers. It's one thing if the graphics end up with a little less 'shiny shine' than the game programmers intended after the driver has "optimized" a shader, but if you're running *real* code, that could be pretty much fatal.
Hopefully, the "optimizations" are limited to specific titles, and not general in nature so that not EVERY sequence of xxx, yyy, zzz (etc) instructions are replaced with something fairly similar but not equal.
I saw something a couple of months ago about M$ tightening up the WHQL certification process to put an end to Nvidia's antics re. "optimization" of various software. Did they actually do this, and what changed? Is Nvidia (or anyone else) finding ways to circumvent the tighter specs - if that's the case - or did the changes have the desired effect?
Anyone have anything interesting to say 'bout this (general code on DX9 GPUs), and the potential issues with "optimization"?