Clearly our compiler has gotten much better: image quality remains exactly the same, and the only thing that happens is a 10-15% drop in performance.
We're not sure why anyone would want to reduce their performance by 10-15% for the same image quality, but apparently Futuremark feels that is something relevant.
The ATI user in question replied--no matter whether he re-runs GT2 without quitting 3DMark03, after quitting 3DMark03, or even a complete reboot, he never got two identical pictures, smoke-wise. Must be something other, then.

andypski said:
Sounds like a pseudo-random number reseeding issue. They're probably seeding the generator once and not restoring the seed value between runs to ensure identical output.

nggalai said:
Correction: it's not different smoke from 330 to 340, but with 340, two different runs. i.e. one run 340, take screenshot, do another run, take screenshot -> different smoke.
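To illustrate andypski's suggestion, here is a minimal Python sketch (entirely hypothetical -- the class and seed value are made up, not Futuremark's code) of how seeding a particle RNG once per session, rather than re-seeding it per run, gives different smoke on every run:

```python
import random

class SmokeEmitter:
    """Stand-in for a particle system whose randomness should be reproducible."""

    def __init__(self, seed=1234):
        self.seed = seed
        self.rng = random.Random(seed)   # seeded once, at load time

    def run_once_buggy(self, n=3):
        # Continues wherever the generator left off -> differs from run to run.
        return [self.rng.random() for _ in range(n)]

    def run_once_reproducible(self, n=3):
        # Restoring the seed at the start of each run gives identical output.
        self.rng.seed(self.seed)
        return [self.rng.random() for _ in range(n)]

emitter = SmokeEmitter()
print(emitter.run_once_buggy(), emitter.run_once_buggy())                  # different
print(emitter.run_once_reproducible(), emitter.run_once_reproducible())    # identical
```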
The unified compiler is a collection of techniques that are not specific to any particular application but expose the full power of GeForce FX. These techniques are applied with a fingerprinting mechanism which evaluates shaders and, in some cases, substitutes hand-tuned shaders.
How can a hand tuned shader be not specific to a particular application?

chavvdarrr said:
http://www.theinquirer.net/?article=12657
The unified compiler is a collection of techniques that are not specific to any particular application but expose the full power of GeForce FX. These techniques are applied with a fingerprinting mechanism which evaluates shaders and, in some cases, substitutes hand-tuned shaders.
A benchmark is worthless to me if overnight the results can change by 15% without proper explanation as to why exactly that happened. Futuremark knows very well what exactly has changed with their benchmark but has not filled in the public that pays attention to their tool. That is inexcusable.
What we expect will happen is that we'll be forced to expend more engineering effort to update our compiler's fingerprinter to be more intelligent, specifically to make it intelligent in its ability to optimize code even when application developers are trying to specifically defeat compilation and optimal code generation.
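For illustration, here is a minimal Python sketch of what an exact-match "fingerprinting" scheme of the kind described above could look like. The hashing choice, shader text, and replacement string are all assumptions -- nobody outside NVIDIA knows how the 52.16 driver actually recognises shaders. The point is only that an exact match fails as soon as the application edits the shader, even trivially, which would be consistent with the patch 340 slowdown:

```python
import hashlib

def fingerprint(shader_source: str) -> str:
    # Exact hash of the shader text -- the crudest possible "fingerprint".
    return hashlib.md5(shader_source.encode()).hexdigest()

# Stand-in text for the original benchmark shader; not the real 3DMark03 code.
ORIGINAL_SHADER = "ps_2_0\ntexld r0, t0, s0\nmul r1, r0, c0\nmov oC0, r1"

HAND_TUNED = {
    fingerprint(ORIGINAL_SHADER): "ps_2_0  ; hypothetical hand-tuned replacement",
}

def compile_or_replace(shader_source: str) -> str:
    key = fingerprint(shader_source)
    if key in HAND_TUNED:
        return HAND_TUNED[key]                        # substitute the hand-tuned shader
    return "; generic compile of:\n" + shader_source  # fall back to the normal path

print(compile_or_replace(ORIGINAL_SHADER))                      # replacement fires
print(compile_or_replace(ORIGINAL_SHADER.replace("r0", "r5")))  # trivially edited shader: misses
```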
Because it's not specific to a particular application but to a particular shader ... note the difference

madshi said:
How can a hand tuned shader be not specific to a particular application?

chavvdarrr said:
http://www.theinquirer.net/?article=12657
Marc said:
Because it's not specific to a particular application but to a particular shader ... note the difference
NVIDIA said:
What we expect will happen is that we'll be forced to expend more engineering effort to update our compiler's fingerprinter to be more intelligent, specifically to make it intelligent in its ability to optimize code even when application developers are trying to specifically defeat compilation and optimal code generation.
Have a look at that NV document called "Unified Compiler Technology" (IIRC) and search for "intrinsic code". And weep.

CorwinB said:
Marc said:
Because it's not specific to a particular application but to a particular shader ... note the difference
Stop giving them ideas, please...
"Please note that this shader-based optimization is consistent with our guidelines, since it would benefit any application (including games) using the exact same shader."
If I remember correctly, isn't Luciano Alibrandi an ex-3dfx employee?
Guess where I got it from--right, this thread here. Sorry, should have added a link.

zeckensack said:
I've plucked this from another forum where nggalai posted it. Blame him
Agreed. I wouldn't be surprised if, in the end, it all came down to petty semantics ...

zeckensack said:
The real disturbing thing here is that NVIDIA's putting the name "compiler" onto something that, in reality, is the same old static application specific shader replacement.
This, ladies and gentlemen, is not a compiler.
nggalai said:
Hi zeckensack,
Guess where I got it from--right, this thread here. Sorry, should have added a link.

zeckensack said:
I've plucked this from another forum where nggalai posted it. Blame him
Too late. The damage has already been done. Those twisted f**ks :?

nggalai said:
Agreed. I wouldn't be surprised if, in the end, it all came down to petty semantics ...

zeckensack said:
The real disturbing thing here is that NVIDIA's putting the name "compiler" onto something that, in reality, is the same old static application specific shader replacement.
This, ladies and gentlemen, is not a compiler.
93,
-Sascha.rb
zeckensack said:
I've plucked this from another forum where nggalai posted it. Blame him
Assuming this is accurate (extracted with 3DAnalyze, I guess), the changes are absolutely trivial. Only register numbers have been swapped.
Any piece of software that deserves to be called a compiler would just proceed as usual, it wouldn't matter at all.
The real disturbing thing here is that NVIDIA's putting the name "compiler" onto something that, in reality, is the same old static application specific shader replacement.
This, ladies and gentlemen, is not a compiler.
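A quick sketch of zeckensack's argument, assuming the patched shaders really do differ only in register numbering: a compiler working on a normalised internal form produces the same output either way, whereas an exact string or hash match used for shader substitution is defeated immediately. The normalisation function and shader text below are illustrative only, not taken from any driver or from 3DMark03:

```python
import re

def normalise(asm: str) -> str:
    """Rename temp registers in first-use order so equivalent shaders compare equal."""
    mapping, out = {}, []
    for token in re.split(r"(\br\d+\b)", asm):
        if re.fullmatch(r"r\d+", token):
            mapping.setdefault(token, f"r{len(mapping)}")
            token = mapping[token]
        out.append(token)
    return "".join(out)

old = "ps_2_0\ntexld r0, t0, s0\nmul r1, r0, c0\nmov oC0, r1"
new = "ps_2_0\ntexld r3, t0, s0\nmul r2, r3, c0\nmov oC0, r2"   # registers renumbered, patch-340 style

print(old == new)                          # False -> exact-match replacement misses
print(normalise(old) == normalise(new))    # True  -> a real compiler doesn't care
```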
An official from NVIDIA Corporation confirmed Mr. Tismer's accusation that "patch 340 disables the GPU compiler. The compiler has to run on the CPU instead, resulting in code harder to digest and taking away 20% of the performance." "Yes, that is actually the case with the new patch 340 that Futuremark posted," said an NVIDIA spokesperson on Wednesday.
"Few weeks ago we released our 52.16 driver that includes our brand new unified compiler technology. With the new patch the benchmark, our unified compiler gets not used by the app so it goes to CPU and we are definitely slower," Luciano Alibrandi, NVIDIA's European Product PR Manager, added.