Help me understand Kyle

Ante P

Veteran
http://www.hardocp.com/article.html?art=NTQ5

Is it OK for Futuremark to say, we like the way ATI does things with their drivers so we are going to leave that alone? Then look at NVIDIA and decide that NVIDIA is doing it "wrong"?

My main problem with the statement is obvious: ATi aren't doing anything. Ergo Futuremark can't produce any "counter measures" against them the way they do to defeat nVidia's optimizations.
Thus his whole argument (or rather, his question) is simply void.

Is it not?

One thing I would like to know is in what games and to what degree does nVidia replace existing shaders with their own.
If this is something nVidia has the time and resources to do for each and every game out there, then sure: go ahead and do it in 3DMark03 too. It would give a valid impression of how nVidia hardware performs.

The problem as I see it is that they most probably don't sit down and replace each shader by hand in anything except 3DMark03, which in my mind makes the optimizations invalid in every respect.
Soon enough we'll have a truckload of DX9 games, and they surely can't have time to "hand optimize" more than a slight fraction of them, right?
 
Isn't the critical issue 'Does any putative replacement shader do the same job that the original did?'
 
Dio said:
Isn't the critical issue 'Does any putative replacement shader do the same job that the original did?'

In my opinion: No.

If nVidia only replaces shaders in 3DMark03 then obviously 3DMark03 when using drivers with application specific optimizations won't reflect the general performance of the FX boards and nVidia would be "the bad guys"TM.

But if nVidia does indeed have time to replace shaders in each and every game out there then the performance with the application specific optimizations would indeed reflect general performance and Futuremark would be "the bad guys"TM.

It's fully up to nVidia to provide us with information on how common individual shader replacement is. But as long as we just see that they're doing it to 3DMark03, I think Futuremark are 100% fair in "defeating" these optimizations.
 
Ante P said:
http://www.hardocp.com/article.html?art=NTQ5

Is it OK for Futuremark to say, we like the way ATI does things with their drivers so we are going to leave that alone? Then look at NVIDIA and decide that NVIDIA is doing it "wrong"?

My main problem with the statement is obvious: ATi aren't doing anything. Ergo Futuremark can't produce any "counter measures" against them the way they do to defeat nVidia's optimizations.
Thus his whole argument (or rather, his question) is simply void.

Same as usual. Kyle lacks understanding of the situation, but still feels he needs to comment. He's probably still stuck in the loop of hating 3DMark, because that's the stance he has held since Nvidia fed him all their BS at the beginning of the year. He's not capable of re-evaluating his position based on new evidence, because he thinks that would make him look weak. Unless of course you turn from friend to enemy, in which case he acts like a sulky child.

As you say, the difference is simple. Futuremark is not targeting Nvidia, but is targeting drivers that cheat. Nvidia is trying to cheat the benchmark scores, and ATI is not, ergo all the cheat blockers only address Nvidia drivers, not ATI's.
If Nvidia drivers stopped cheating in 3DMark, they would no longer be targeted.

Parhelia drivers are also marked as "FM-approved" without the need to block cheats.

Now you could argue that what Nvidia does should not be labelled as a cheat (which may be what Kyle is actually trying to argue), but we've been over this ground before, and the consensus is that the only people who believe that what Nvidia does constitutes "valid optimisations" are Nvidia and their apologists.

Ultimately it is up to Futuremark to set the rules of using their benchmarking products, and they've made it pretty clear that they consider what Nvidia does to be cheating, and that is all that matters when it comes to using 3DMark within their guidelines.
 
Kyle clearly doesn't understand the issue (at least, that's if we give him the benefit of the doubt); Futuremark has been very clear on why scores change with v340.


Kyle said:
A benchmark is worthless to me if overnight the results can change by 15% without proper explanation as to why exactly that happened. Futuremark knows very well what exactly has changed with their benchmark but has not filled in the public that pays attention to their tool.

This is simply not true:

Futuremark said:
Parts of the program code have been changed so that possible 3DMark03 specific optimizations in current drivers will not work.

Futuremark said:
The only change in build 340 is the order of some instructions in the shaders or the registers they use. This means that new shaders are mathematically equivalent with previous shaders.

So Futuremark has told us what they have done (prevented application detection), and they have also told us how they did it. If the scores change, it is because the driver has 3DMark03-specific optimizations. But instead of posting the facts, he goes on repeating the whole "3DMark has disabled our compiler" lie.

Kyle said:
"Is it OK for Futuremark to say, we like the way ATI does things with their drivers so we are going to leave that alone? Then look at NVIDIA and decide that NVIDIA is doing it "wrong"?

Futuremark has very clear guidelines on what is legal and what is not. Application-specific optimizations are illegal, and they clearly state that the only thing affected by the change is application-specific optimizations.
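
To make the "mathematically equivalent" point concrete, here's a tiny sketch of what build 340's kind of change looks like (hypothetical pseudo-shader arithmetic written out as Python, nothing from the actual benchmark): reordering instructions and renaming registers leaves the computed result untouched, so any score drop after the patch can only come from the driver recognizing the old shader text.

Code:
def shader_330(t0, t1, c0, c1):
    r0 = t0 * c0          # mul r0, t0, c0
    r1 = t1 * c1          # mul r1, t1, c1
    r2 = r0 + r1          # add r2, r0, r1
    return r2             # mov oC0, r2

def shader_340(t0, t1, c0, c1):
    r4 = t1 * c1          # mul r4, t1, c1  (instructions reordered, registers renamed)
    r5 = t0 * c0          # mul r5, t0, c0
    r5 = r5 + r4          # add r5, r5, r4
    return r5             # mov oC0, r5

# Same math, different text: if the score changes between builds, the driver
# must have been treating the two texts differently.
assert shader_330(0.25, 0.5, 2.0, 4.0) == shader_340(0.25, 0.5, 2.0, 4.0)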
 
My god, why are we still debating over Kyle? I'm seriously starting to think he says asanine things just to get his hits up for a couple of days.
 
It's "asinine", not "assinine", "assignine", "assanine", etc.

Just wanted to get that off my chest.

We now return to our scheduled programming...
 
Daliden said:
It's "asinine", not "assinine", "assignine", "assanine", etc.

Just wanted to get that off my chest.

We now return to our scheduled programming...

you forgot asanine ;)
 
Dio said:
Isn't the critical issue 'Does any putative replacement shader do the same job that the original did?'

Yes, iff it's applied universally through optimizing code, and not by replacing known shaders with hand-optimized versions.
 
Humus said:
Dio said:
Isn't the critical issue 'Does any putative replacement shader do the same job that the original did?'

Yes, iff it's applied universally through optimizing code, and not by replacing known shaders with hand-optimized versions.

Right.

In other words, Dio, if the shader is only "replaced" because the driver is made aware of specific shaders via app detection or empirical shader detection, then while the end result might be OK, the "optimization path" has major issues which make the overall situation bad.

Are developers going to have to take care not to change register names during their development cycle for fear of plummeting performance?

As a developer, do you want to be concerned that any minute alteration to your shader is going to cause problems?
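
For what it's worth, here's a rough sketch of why that worry is real (illustrative Python, obviously not anyone's actual driver code): a substitution table keyed on a fingerprint of the shader text only fires for shaders the driver has already seen, so even a cosmetic register rename makes the lookup miss and the hand-tuned path silently disappears.

Code:
import hashlib

original   = "mul r0, t0, c0\nmul r1, t1, c1\nadd r2, r0, r1\nmov oC0, r2"
hand_tuned = "mul r1, t1, c1\nmad r2, t0, c0, r1"   # stand-in for a hand-written replacement

def fingerprint(shader_text):
    return hashlib.md5(shader_text.encode()).hexdigest()

substitutions = {fingerprint(original): hand_tuned}

def compile_shader(shader_text):
    # If the fingerprint matches a known shader, the hand-tuned version is used;
    # otherwise the shader goes down the ordinary (slower) path untouched.
    return substitutions.get(fingerprint(shader_text), shader_text)

# The developer renames one register during normal development...
edited = original.replace("r2", "r7")

print(compile_shader(original) is hand_tuned)   # True  -> fast path taken
print(compile_shader(edited) is hand_tuned)     # False -> fast path silently lost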
 
Ante P said:
http://www.hardocp.com/article.html?art=NTQ5

Is it OK for Futuremark to say, we like the way ATI does things with their drivers so we are going to leave that alone? Then look at NVIDIA and decide that NVIDIA is doing it "wrong"?

My main problem with the statement is obvious: ATi aren't doing anything. Ergo Futuremark can't produce any "counter measures" against them the way they do to defeat nVidia's optimizations.
Thus his whole argument (or rather, his question) is simply void.

Is it not?

One thing I would like to know is in what games and to what degree does nVidia replace existing shaders with their own.
If this is something nVidia has the time and resources to do for each and every game out there, then sure: go ahead and do it in 3DMark03 too. It would give a valid impression of how nVidia hardware performs.

The problem as I see it is that they most probably don't sit down and replace each shader by hand in anything except 3DMark03, which in my mind makes the optimizations invalid in every respect.
Soon enough we'll have a truckload of DX9 games, and they surely can't have time to "hand optimize" more than a slight fraction of them, right?

In my opinion, Kyle flip-flops as much as he does because he doesn't understand what he's talking about, and hasn't since the beginning. That's the only rational explanation for it.

You can catch him using phrases like "the way ATi does things in its drivers" and "the way nVidia does things in its drivers." The operative and key word here is "things." Kyle has no understanding of what the "things" being discussed are, and so he is incapable of understanding the nature of the situation. And so it is that he vigorously contradicts himself, often without ever realizing he has done so.

The problem with nVidia PR's "FM's patch breaks our Unified Compiler" line is a very simple one to understand.

The first thing that FM did with the patch was to change the 3dMK03 memory-use footprint ever so slightly--just enough to foil driver-detection routines based on specific memory-use footprints for specific applications. Such a change would have no bearing on 3dmk03's performance or IQ characteristics otherwise. The second thing they did was to make very minor changes in the 3dMK03 shader code--just enough to foil *shader-substitution* code in a set of drivers that depended on the recognition of specific shader code in an application in order to function. Again, those changes also had no effect on IQ or performance in the 3dmk03 benchmark. So, what the "Unified Compiler" actually does is run 3dmk03-specific driver code when it recognizes that 3dmk03 is the application being run.

Which leads us to the fatal problem in Perez's explanation of the "Unified Compiler" as it is detailed here:

http://www.driverheaven.net/showthread.php?threadid=30794&s=

I'll go ahead and quote it if you aren't registered over there:

Derek Perez said:
With the introduction of the GeForce FX - we built a sophisticated real-time compiler called the Unified Compiler technology. This compiler does real-time optimizations of code in applications to take full advantage of the GeForce FX architecture.

Game developers LOVE this - they work with us to make sure their code is written in a way to fully exploit the compiler.

The end result - a better user experience.

One of the questions we always get is what does this compiler do? The unified compiler does things like instruction reordering and register allocation. The unified compiler is carefully architected so as to maintain perfect image quality while significantly increasing performance. The unified compiler is a collection of techniques that are not specific to any particular application but expose the full power of GeForce FX. These techniques are applied with a fingerprinting mechanism which evaluates shaders and, in some cases, substitutes hand tuned shaders, but increasingly generates optimal code in real-time.

Futuremark does not consider their application a "game". They consider it a "synthetic benchmark". The problem is that the primary use of 3DMark03 is as a proxy for game play. A website or magazine will run it as a general predictor of graphics application performance. So it is vital that the benchmark reflect the true relative performance of our GPUs versus competitors.

And, while they admit that our unified compiler is behaving exactly the way it behaves in games and that it produces accurate image quality, they do not endorse the optimizations for synthetic use. Hence, Futuremark released a patch that intentionally handicapped our unified compiler.

So, we advocate that when reviewers are using 3DMark as a game proxy, they must run with the unified compiler fully enabled. All games run this way. That means running with the previous version of 3DMark, or running with a version of our drivers that behave properly.

Derek Perez
Director of Nvidia PR

The problem (apart from Perez being congenitally unable to distinguish between a game and a benchmark) is that what happened to the "unified compiler" will of course not be restricted to 3dMK03, but will happen as well whenever any game served by nVidia's "unified compiler" (in the same way that the 330 version of 3dmk03 was served by it) is itself *patched* for changes to memory footprint or internal shader code (which is not uncommon at all in 3d game-engine patches)...! :D Those games, too, will break the "unified compiler" just as surely as the 340 patch for 3dmk03 has broken it.

In other words, nVidia's "unified compiler" as expressed and described by nVidia PR simply does not exist at all. What it actually consists of are the very same application-specific optimizations within nVidia's drivers that they've been talking about and defending all year long. This is a repeat of the situation which occurred when FM issued the 330 patch. It is an *exact* repeat of that general situation. All that is different are the specific application-detection optimizations in the nVidia drivers which have been defeated.

The difference is only superficial--nVidia PR has come up with a new buzzword for "application-specific optimizations" and that buzzword is now "the Unified Compiler." Ah, it's so very transparent.

If indeed the "Unified Compiler" were actually a single, global driver optimization, then there is no way that a 3dmk03 patch could have broken it. Likewise, there would be no way that shader-code changes to games issued via patches might ever break it. But such is not the case, and so we know now beyond doubt that the "unified compiler" nVidia PR has described simply does not exist in the drivers, and that what is actually occurring is more of the same old application-specific optimizations done for specific applications (3dmk03 330 being only one of them), which *depend entirely* on the ability of the nVidia drivers to recognize and detect *which application is being run* in order for the "unified compiler" to function.
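
To put that distinction in concrete terms: a genuinely global optimization is a transformation applied to whatever code it is handed, so no patch to the application can switch it off. Here's a toy sketch of the difference (illustrative Python, not a real shader compiler): a pass that folds a mul followed by a dependent add into a mad keeps working no matter how the registers are named or where the pattern appears.

Code:
import re

def fold_mul_add(shader_text):
    # Collapse "mul rX, a, b" followed by "add rY, rX, c" into "mad rY, a, b, c".
    lines = shader_text.splitlines()
    out, i = [], 0
    while i < len(lines):
        m = re.match(r"mul (\S+), (\S+), (\S+)", lines[i])
        if m and i + 1 < len(lines):
            a = re.match(r"add (\S+), %s, (\S+)" % re.escape(m.group(1)), lines[i + 1])
            if a:
                out.append("mad %s, %s, %s, %s" % (a.group(1), m.group(2), m.group(3), a.group(2)))
                i += 2
                continue
        out.append(lines[i])
        i += 1
    return "\n".join(out)

# Works on the old shader text, and keeps working after registers are renamed
# or the shader is edited--exactly what a fingerprint-and-substitute scheme
# cannot do, because it only recognizes the texts it was written against.
print(fold_mul_add("mul r1, t0, c0\nadd r2, r1, c1"))
print(fold_mul_add("mul r7, t0, c0\nadd r9, r7, c1"))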

Of course, ATi's drivers do not presently employ application-specific optimizations relative to 3dMk03, and so the 340 patch has no effect on ATi driver performance or IQ.

It also can be safely assumed that the vast majority of 3d games are simply not served at all by the "unified compiler," which of course is the same exact case with nVidia's older term for its driver detection activities--"application-specific optimizations." In short, all that has changed are the buzzwords. Kyle's problem is that he cannot distinguish between the buzzwords and the realities.

While I give nVidia PR an A+ for creativity and imagination, I also have to give them an F for veracity...:)

EDIT: I wanted to add here that I find Perez's comments about 3dmk03 being a "proxy" for game play to be highly amusing...:) If anything, it is nVidia which views 3d games as "proxies" for benchmarks. In fact, you could build a very convincing case that nVidia doesn't care about "3d games" at all (especially considering what they've done with trilinear filtering in recent drivers) and is only concerned with benchmarks, and how "3d games" may be used as benchmarks in order to sell and promote its products. It's really entertaining to watch nVidia PR dance around these issues.
 
I hate reading stuff over there.


Do you throw away your hammer because you can't use it to put in screws?
A synthetic benchmark is just another tool that can provide some very useful IDEAS of how cards will perform in certain situations, given a few caveats.
To simply throw out that possible information for no reason other than this:

I'm a gamer, and I want the best damn video card that can give me the fastest performance with the best image quality. What better way to see which cards provide that by actually testing it in those games and seeing which ones are better.
So I guess that means gamers are stupid people who aren't allowed to be forward-looking at all?
The funniest part about this is that the "gamers" claim that synthetic benchmarks are useless because they don't represent games. But they 100% fail to see that games don't represent other games either, and throwing away any information about card performance is stupid! You have to make a judgement at some level - it is impossible to test every game on each card. Ergo, you either assume gameA will play like gameB, or that syntheticA gives a rough idea of shader performance, so shader-intensive games will give results similar to syntheticA. Now, which makes more sense?
I don't get these guys at all.
 
*sigh* Two steps forward, three steps backward...

I love how he just accepts the "nvidia IQ is the same" comment at face value and doesn't challenge it, despite the obvious differences (e.g. B3D's screenshots). Seems more like an unsupported rant to me.

His conclusion might have some merit if there were anything suggesting that FM was actively sabotaging anybody. There's been plenty of proof to the contrary.
 
Is it just me or has anyone noticed that all threads with the word "Kyle" in the header tend to grow up to 10 pages in less than a week? :LOL:
 
K.I.L.E.R said:
Is it just me or has anyone noticed that all threads with the word "Kyle" in the header tend to grow up to 10 pages in less than a week? :LOL:

That's the "Guy with a website" syndrome.
 
K.I.L.E.R said:
Is it just me or has anyone noticed that all threads with the word "Kyle" in the header tend to grow up to 10 pages in less than a week? :LOL:

Must be something about fertilizer begetting fertilizer...
 