DaveBaumann said:
Simon, read a few posts back - I think that is out the window since work on the compiler has stopped, i.e. it's already optimal for "mathematically equivalent" shaders. It's probable that the hand-coded shaders are now wholly mathematically equivalent.
Dave, I sincerely question the answer you got from Derek Perez regarding NVIDIA halting work on the compiler.
My understanding of the situation is that a few things are still going to be worked on, just not all of them. The reason for this is the unified nature of the compiler. If NVIDIA is telling the truth, the whole thing has been engineered with reuse in mind, which means that if the NV40 has certain similar problems, it'll be possible to reuse the general NV30 algorithms with perhaps a few different factors taken into account.
A good example of this is register usage. AFAIK, the register usage penalty still exists, although to a lesser extent, on the NV4x. I believe the NV4x is also based around Xx2, so the idea of putting TEX instructions together is also still a good one. And so on.
So making the algorithms for these things more efficient is a good idea, and I sincerely cannot believe they're already optimal... (there's a rough sketch of the two ideas right below)
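To make the register usage and TEX-pairing points a bit more concrete, here's a toy sketch in Python. To be clear, this has nothing to do with NVIDIA's actual compiler: the instruction format, register names, and the crude dependency check are all made up for illustration. It just shows the two things such a compiler would care about on this kind of hardware: keeping the peak number of live temporary registers low, and hoisting TEX instructions next to each other so they can be co-issued on a 2-wide pipeline.

[code]
from collections import namedtuple

# Made-up instruction format: op, destination register, source operands.
Instr = namedtuple("Instr", ["op", "dst", "srcs"])

def peak_live_registers(program):
    """Max number of temp registers ("r*") live at any point.

    A register is live from its write until its last read. NV3x-style
    hardware loses throughput as this number grows, which is why the
    compiler tries to schedule so the peak stays small.
    """
    last_read = {}
    for i, ins in enumerate(program):
        for s in ins.srcs:
            if s.startswith("r"):
                last_read[s] = i
    live, peak = set(), 0
    for i, ins in enumerate(program):
        if ins.dst.startswith("r"):
            live.add(ins.dst)
        peak = max(peak, len(live))
        live -= {r for r, last in last_read.items() if last == i}
    return peak

def pair_tex_ops(program):
    """Greedily hoist a later TEX up next to an earlier one when no
    obvious dependency is broken, so the pair can issue together."""
    prog = list(program)
    i = 0
    while i < len(prog) - 1:
        if prog[i].op == "TEX" and prog[i + 1].op != "TEX":
            for j in range(i + 2, len(prog)):
                cand = prog[j]
                if cand.op != "TEX":
                    continue
                # Crude dependency check against the instructions the
                # candidate would jump over (RAW / WAW / WAR).
                skipped = prog[i + 1:j]
                written = {k.dst for k in skipped}
                read = {s for k in skipped for s in k.srcs}
                if not (set(cand.srcs) & written) and cand.dst not in written | read:
                    prog.insert(i + 1, prog.pop(j))
                    break
        i += 1
    return prog

if __name__ == "__main__":
    shader = [
        Instr("TEX", "r0", ["t0"]),
        Instr("MUL", "r1", ["r0", "c0"]),
        Instr("TEX", "r2", ["t1"]),
        Instr("MAD", "r3", ["r1", "r2", "c1"]),
    ]
    print("peak live temps:", peak_live_registers(shader))
    for ins in pair_tex_ops(shader):
        print(ins.op, ins.dst, ",".join(ins.srcs))
[/code]

On this toy input the second TEX gets hoisted next to the first one, which is exactly the kind of reordering that would still pay off on an Xx2-style NV4x, so the algorithm itself wouldn't be wasted work.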
Certain things will most likely never be "perfect" on the NV3x and won't be developed any further than they are now, since the focus should be on the NV4x part by this point: after all, we're just 3 or 4 months away from launch.
But yes, the idea that they could keep improving the compiler until it becomes that good is insane. You'll never get 900 points with the little work NVIDIA is likely to dedicate to the NV30 UC path.
Uttar