Pete said:
ondaedg said:
As far as I know, no applications have been proven to be app-specifically optimized by ATI, or NVidia for that matter. That is the reason for my posts in this thread. There is no conclusive proof (yet) that any IHV is doing app-specific optimizations.
nV has admitted this to Derek at AT, and I believe it was Unwinder who found tons of references to specific games in nV's drivers. So they are doing it.
I also have text from an MSN conversation with Derek Perez, who openly stated that part of their compiler optimiser's task is to "replace shaders where they feel they need to" (I wasn't going to get into a discussion with him about what a compiler actually
is), and that they would keep replacing shaders in their drivers every time 3DMark issued another patch to defeat their detections. We also have the case where we found that, in the initial 5900 drivers, merely renaming the UT executable would restore full trilinear. ATI have also openly stated to me that they have some detections in the R200 drivers - there are things like vertex buffers that get reassigned in order to better suit how they handle the vertex shaders.
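To make that concrete, here is a minimal sketch of what executable-name detection looks like - the kind of mechanism the renamed-UT test exposed. Everything in it (the profile table, the names, the filtering flag) is hypothetical and only illustrates why renaming the executable defeats the match:

```c
/* A minimal sketch (all names hypothetical) of driver-side detection
 * keyed to the executable name. Real drivers can also key off shader
 * contents and other fingerprints; this only shows why renaming the
 * .exe falls through to the generic path. */
#include <stdio.h>
#include <string.h>

/* Hypothetical per-application profile. */
struct app_profile {
    const char *exe_name;         /* process image name to match */
    int         reduce_trilinear; /* 1 = substitute reduced filtering */
};

static const struct app_profile profiles[] = {
    { "ut2003.exe", 1 },          /* hypothetical entry */
};

/* Return the matching profile, or NULL for the generic path. */
static const struct app_profile *lookup_profile(const char *exe_name)
{
    for (size_t i = 0; i < sizeof(profiles) / sizeof(profiles[0]); i++)
        if (strcmp(profiles[i].exe_name, exe_name) == 0)
            return &profiles[i];
    return NULL;
}

int main(void)
{
    /* A renamed copy of the same game no longer matches the profile,
     * so it gets the generic path - full trilinear. */
    printf("ut2003.exe  -> %s\n",
           lookup_profile("ut2003.exe") ? "reduced filtering" : "full trilinear");
    printf("renamed.exe -> %s\n",
           lookup_profile("renamed.exe") ? "reduced filtering" : "full trilinear");
    return 0;
}
```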
Over the course of many looks into IQ there were also issues with the lighting being off in Halo in certain driver releases, and Lars did an article pointing out the IQ issues in Aquamark - now, these may or may not have been the result of "optimisations", but because of what had already been seen, and Unwinder's work, mud was sticking. And because of the retraction Futuremark had to make, the word "optimisation" was applied to all of it and hence developed a nasty connotation.
Optimisation, per se, is never a bad thing - ask Intel or AMD, as they have been optimising their CPUs and compilers for years. However, "optimisation" has a dark side associated with it, and it may be all too tempting to stray off and do things you shouldn't necessarily do in order to improve performance where you think people might not notice - but where does that line get drawn? Without people looking into these things, consumers would just have to have faith that what they are looking at is what the developer intended them to look at - and who can really say they have that faith in any of them?
As for mathematically equivalent shader optimisations, while I'm fairly ambivalent about their use, I would always point out that these tend to be more fragile - they're not generic, so a user may find good performance one day, download a patch with some innocuous shader change, and find their performance sucks. Alternatively, a shader that's used in an engine may be altered in another title built on that engine, which means the IHV will need to optimise it again to get back to similar performance to the initial implementation - and if the title is not a benchmark, the question is whether that would get done. If you think about it, though, an optimal shader compiler should be able to compile to the best-case hand-tuned shader anyway, or at least very close to it, and while quick, short-term fixes via replaced shaders may be good, overall it would be better to put the main effort into getting your shader compiler to hit as close to that theoretical maximum on a generic basis.
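For illustration, here is a minimal sketch of why exact-match shader replacement is fragile, assuming a scheme where the driver fingerprints the incoming shader and substitutes a hand-tuned, mathematically equivalent version on a hit. The hash and the shader strings are purely illustrative, not any vendor's actual mechanism:

```c
/* A minimal sketch of hash-keyed shader replacement and its fragility.
 * Assumed scheme: the driver hashes incoming shader text and swaps in a
 * hand-tuned equivalent (e.g. pow(x, 2) lowered to x * x) on an exact
 * match. Any patch that touches the shader changes the hash, so the
 * title silently falls back to the generic compile path. */
#include <stdio.h>
#include <stdint.h>

/* Toy FNV-1a hash standing in for whatever fingerprint a driver might use. */
static uint64_t fnv1a(const char *s)
{
    uint64_t h = 1469598103934665603ULL;
    while (*s)
        h = (h ^ (uint8_t)*s++) * 1099511628211ULL;
    return h;
}

int main(void)
{
    /* Shader as shipped, and the same shader after an innocuous
     * one-constant change in a game patch. Both strings are made up. */
    const char *shipped = "r0 = pow(r1, 2.0); r0 = r0 * c0;";
    const char *patched = "r0 = pow(r1, 2.0); r0 = r0 * c1;";
    uint64_t known_hash = fnv1a(shipped); /* fingerprint in the driver */

    printf("shipped: %s\n",
           fnv1a(shipped) == known_hash ? "hand-tuned replacement used"
                                        : "generic compile");
    printf("patched: %s\n",
           fnv1a(patched) == known_hash ? "hand-tuned replacement used"
                                        : "generic compile");
    return 0;
}
```

One innocuous change and the fingerprint no longer matches, so the performance of the replacement quietly disappears - which is exactly the fragility described above; a generic compiler improvement survives the patch.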
Both ATi and nV have admitted to optimizing for 3DM01SE Nature, but I *believe* those optimizations increase speed without noticeably impacting IQ in a legitimate way (i.e., no custom clip planes or omitted z clears).
IIRC both used compressed textures in Nature, which will have some effect on IQ.