Not delusional at all. Software can do many things; in fact, software IS the thing that determines whether a card runs at 100% or at 1%. It all depends on how the software works.
We can break GPU performance up into multiple pieces: on the silicon side, memory bandwidth and texture filtering throughput are fixed function, so little can be done there. Shader performance is pretty much the only variable over which you have major control. The other part is the driver.
In existing GPUs, at reasonably high resolutions, we have seen performance scale almost linearly with the number of hardware pipes/ALUs/etc. That is only possible if the driver is a small part of the performance equation, but let's assume that, worst case, the GPU has to sit idle 20% of the time, waiting for the driver.
Even with an infinitely fast driver, you'd still only gain 25% in performance.
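To put a number on that (a back-of-the-envelope sketch using my assumed 20% idle figure, not a measurement):

    # Upper bound on what a "perfect" driver could gain you.
    # Assumption (mine, not measured): the GPU does useful work 80% of the
    # time and idles the remaining 20% waiting on the driver.
    busy_fraction = 0.80
    # With a zero-overhead driver, the same work fills 100% of the time,
    # so the speedup is capped at 1 / busy_fraction (an Amdahl-style bound).
    max_speedup = 1.0 / busy_fraction
    print(f"Max gain from the driver alone: {(max_speedup - 1.0) * 100:.0f}%")  # -> 25%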
So your magical 30-40% increase simply has to come from the compiler. Let's not forget, you were not talking about a specific shader here or there, but about an across-the-board performance increase. Basically, you're talking about a compiler that, up until a couple of weeks ago, just completely, totally, absolutely sucked, and that nobody seemed to realize it. ATI has had very good shader compilers in the past. Are you suggesting that their entire, very competent compiler team resigned and was replaced by a bunch of fumbling idiots?
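For a sense of scale, here is a toy sketch (my own illustration, nothing to do with ATI's actual compiler) of the kind of win a shader compiler typically lands late in the game: fusing a multiply and a dependent add into a single MAD. Even then, only shaders that contain the pattern, and that are ALU-bound to begin with, see any benefit, which is why compiler work shows up as gradual gains in specific titles rather than a uniform 30-40% jump.

    # Toy peephole pass: fuse a MUL feeding the next ADD into a single MAD.
    # The instruction format is made up for illustration: (op, dest, src1, src2).
    shader = [
        ("TEX", "r0", "t0", "s0"),
        ("TEX", "r1", "t1", "s1"),
        ("MUL", "r2", "r0", "r1"),
        ("ADD", "r2", "r2", "r3"),   # r0*r1 + r3 -> one MAD
        ("RSQ", "r4", "r2"),
        ("MUL", "r5", "r4", "r2"),
        ("MIN", "r5", "r5", "c0"),
        ("MOV", "o0", "r5"),
    ]

    def fuse_mad(instrs):
        out, i = [], 0
        while i < len(instrs):
            cur = instrs[i]
            nxt = instrs[i + 1] if i + 1 < len(instrs) else None
            # Fuse only when the MUL's destination feeds straight into the next ADD.
            if cur[0] == "MUL" and nxt and nxt[0] == "ADD" and cur[1] in (nxt[2], nxt[3]):
                other = nxt[3] if nxt[2] == cur[1] else nxt[2]
                out.append(("MAD", nxt[1], cur[2], cur[3], other))
                i += 2
            else:
                out.append(cur)
                i += 1
        return out

    optimized = fuse_mad(shader)
    print(f"{len(shader)} -> {len(optimized)} instructions")  # 8 -> 7, roughly 12% fewer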
But could it be that Vista drivers suddenly demand many more CPU cycles than XP drivers?
It's possible that DX9 doesn't fit very well into the Vista driver model, but wasn't everybody raving about the high quality of ATI's Vista drivers? Aren't they supposed to be unified anyway, so that only a small part is hardware specific? I have yet to see complaints about an overall 40% performance loss from people switching to Vista. If ATI can make efficient Vista drivers for DX9, they should be even more efficient for DX10, unless all the praise for the much higher efficiency of DX10 was just one big lie. Unlikely, don't you think?
Especially with Vista still being finalized, etc., I highly doubt ATI had their R600 card working at 100% efficiency at the design stage 3 years ago.
Strawman argument. Their compiler had to be efficient enough 2 years ago to start validation of expected performance. 40% off theoretical peak rate is not 'enough' in my book.
During those 2 years, the compiler can be gradually improved to fix corner cases, a process that will continue, as we have seen in previous generations.
If this were the case, why do almost all driver improvements increase performance over the life of the card? The cards aren't changing.
Exactly my point. In the past, we've never seen across-the-board 30-40% performance jumps. They were always gradual.
Thank you, you just proved my point and contradicted yourself at the same time.
O tempora! O mores!
Why do you think Nvidia is having such problems with Vista? By your reasoning they shouldn't be experiencing any problems at all, and everything should be rosy.
Another strawman.
Also, let me clarify something. I think ATI's R600 design was made to beat, or be on par with, what an 8900GTX would be (performance-wise) from the beginning. I never said a software tweak would put them 40% past what their card was originally intended for. For that you'd need some hardware changes, of course.
Summarizing my arguments above: that automatically implies horrible compiler performance and staggering incompetence. Yes, I suppose it's possible.