nVidia discrepancies in FarCry benchmarks?

digitalwanderer

Moose over at Rage3D pointed out a German article (English translation via Google) that purports to use 3D-Analyze to change the device ID on a 9800 XT to trick FarCry into thinking it's an NV40... and it uses PS 1.1, and it's awfully noticeable.

They pulled their FarCry benchmarks declaring them void since nVidia cheated.

If someone here who speaks German could provide their translation I'd be very grateful, I really want to know more about this.

EDITED BITS: Changed the title a bit to make it less flamey since it seems this ain't a cheat and I jumped the gun a bit. :oops:
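
(For reference: under Direct3D 9, the way a game tells cards apart is by asking the runtime for the adapter's PCI vendor/device ID, which is exactly the call a tool like 3D-Analyze intercepts when it "camouflages" a card. A minimal C++ sketch of what device-ID-keyed path selection could look like; the vendor IDs are the well-known PCI ones, but the branching itself is a hypothetical illustration, not FarCry's actual code.)

#include <d3d9.h>   // link with d3d9.lib
#include <cstdio>

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DADAPTER_IDENTIFIER9 id;
    d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id);

    // Well-known PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = ATI.
    // 3D-Analyze works by faking these values, so the game picks the
    // "wrong" path for the card that is really installed.
    if (id.VendorId == 0x10DE)
        std::printf("NVIDIA adapter (device 0x%04lX): NV3x-style path?\n",
                    (unsigned long)id.DeviceId);
    else if (id.VendorId == 0x1002)
        std::printf("ATI adapter (device 0x%04lX): standard DX9 path?\n",
                    (unsigned long)id.DeviceId);

    d3d->Release();
    return 0;
}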
 
ATi and nVidia running different shader paths? It's nothing new, is it? It's the developer's decision, not nVidia cheating.
 
Re: nVidia cheating at FarCry benchmarks now too?

digitalwanderer said:
Moose over at Rage3D pointed out a German article (English translation via Google) that purports to use 3D-Analyze to change the device ID on a 9800 XT to trick FarCry into thinking it's an NV40... and it uses PS 1.1, and it's awfully noticeable.

Not really a cheat; it is the game that uses the NV30 code path on the NV40, and thus lower quality shaders. (How can nVidia cheat when no nVidia hardware or software is present?)

Maybe the NV40 code path requires DX9.0c.
 
Indeed, the dodgy precision on NV40 ATM has already been discussed extensively elsewhere on this forum.
 
Y'all forget that money talks. All NV had to do was say "do this or no money for you."
Both NV and Crytek are keeping their mouths shut on this. I would think a new patch would have come out by now.
 
The Google translation is actually not too bad (except that it can't make the distinction between 'card' and 'map'; same word in German).

Just needs a little adjustment:

Here we have listed the shaders that run on the NV3x and the R3xx in the level shown below. (For clarity, we have omitted the shaders below 2.0.)

On NVIDIA's NV3x, Shader 1.1 is used in some places where one gets Shader 2.0 on ATi's R3xx. This explains the partially lower image quality on NV3x cards.

Both screenshots come from the Radeon 9800 Pro, once "camouflaged" as an NVIDIA card by 3D-Analyze (beta version, not yet available!) and once not:

Not going to translate the table... :D

Thus the past benchmark results of NVIDIA and ATi hardware are void, since both cards were fed with different tasks.
 
Something very important to note... both images are produced by an ATI card, only once using the NVidia codepath, and once normal.

Now... AFAIK the ATI 9800 Pro does not understand a partial precision hint. Still, it gives the same degraded image that the NVidia cards produce.

I think that must mean that the drop in image quality is NOT caused by a partial precision hint.

The table listing the shader levels suggests that it is caused by a 1.x shader replacing a 2.0 shader.

It might also be something else of course... lower quality textures being used for NVidia cards, for example.
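
(As an aside on what a partial precision (_pp) hint actually permits: FP16 keeps only 10 mantissa bits against FP32's 23, so small input differences get quantized away. A rough C++ sketch that mimics the mantissa loss by truncation; it ignores FP16's narrower exponent range and truncates instead of rounding, so it only approximates real hardware behaviour.)

#include <cmath>
#include <cstdio>
#include <cstring>
#include <cstdint>

// Crudely mimic FP16 by keeping only the top 10 of FP32's 23 mantissa
// bits. (Illustration, not an emulation: truncation instead of rounding,
// and FP16's smaller exponent range is ignored.)
float to_fp16ish(float x) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits &= 0xFFFFE000u;   // clear the low 13 mantissa bits
    std::memcpy(&x, &bits, sizeof bits);
    return x;
}

int main() {
    // A specular-style term, pow(N.H, 64): tiny input differences that
    // FP16 can't represent become visible shading differences.
    float n_dot_h = 0.9993f;
    std::printf("FP32:     %.6f\n", std::pow(n_dot_h, 64.0f));
    std::printf("FP16-ish: %.6f\n", std::pow(to_fp16ish(n_dot_h), 64.0f));
    return 0;
}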
 
Not exactly; I think the 6800 is using FP32 when using the standard DX9 path, so that should explain part of that result.
 
PatrickL said:
Not exactly; I think the 6800 is using FP32 when using the standard DX9 path, so that should explain part of that result.

Very possible, and if so... why not do this by default? Surely they must have had a copy of the 1.1 patch before it was released, and they could have just asked Crytek to fix the NV40 to use the Shader 2.0 path...
 
Veridian3 said:
PatrickL said:
Not exactly; I think the 6800 is using FP32 when using the standard DX9 path, so that should explain part of that result.

Very possible, and if so... why not do this by default? Surely they must have had a copy of the 1.1 patch before it was released, and they could have just asked Crytek to fix the NV40 to use the Shader 2.0 path...

Because then, at (p)review time, the performance delta to their previous and their competitor's products wouldn't have been so dramatic. First impressions make or break a product launch.

Edit: At least to Joe Average consumer.
 
gkar1 said:
Veridian3 said:
PatrickL said:
Not exactly; I think the 6800 is using FP32 when using the standard DX9 path, so that should explain part of that result.

Very possible, and if so... why not do this by default? Surely they must have had a copy of the 1.1 patch before it was released, and they could have just asked Crytek to fix the NV40 to use the Shader 2.0 path...

Because then, at (p)review time, the performance delta to their previous and their competitor's products wouldn't have been so dramatic. First impressions make or break a product launch.

Edit: At least to Joe Average consumer.

So you're saying they chose to use the lesser-IQ path in order to make their product look better over a 6-month-old card????
 
I guess the question now is how to use FarCry as a benchmark and get "fair" results on ATI and Nvidia cards.

It's too bad that one of the only PS2.0 games currently out (and the only current game claiming future PS3.0 support) can't be reliably used to benchmark.

Coincidence???? :rolleyes:
 
From this, it seems FarCry mainly relies on PS1.1 for most stuff,
with PS2.0 for lighting or something, on all DX9 cards:
http://www.firingsquad.com/hardware/far_cry_nvidia/

I also have this question, which I posted in the wrong thread it seems, hehe...

Wouldn't any card run worse if it was made to run as something it's not?
Or is the "R3xx path" a strict standard DX9 path with nothing specific to the R3xx at all?
If not, does this sort of comparison "work"?
 
ANSWER: The "R3xx path" is a strict standard DX9 path with nothing specific to the R3xx at all!
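
(Which is also why a capability check, rather than a device ID check, is the "fair" way to pick that path. A minimal C++/Direct3D 9 sketch of caps-based selection; this is an assumed alternative for illustration, not what Crytek's engine actually does.)

#include <d3d9.h>   // link with d3d9.lib
#include <cstdio>

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // Select the shader path by what the chip can do, not who made it.
    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        std::printf("PS 2.0 capable: use the standard DX9 (\"R3xx\") path\n");
    else
        std::printf("PS 1.x only: use the fallback path\n");

    d3d->Release();
    return 0;
}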
 
And now you see why nVidia supports SM3. Since their implementation of SM2 is still slightly slower per clock than ATI's (blah blah more complex or whatever, but slower per pipe), they are going to get TWIMTBP partners to buy into SM3 and use SM3 effects where it can improve performance. Sure, there may still be register issues or whatnot with the NV40, but SM3 gives a developer new ways to improve performance on NV cards (with NV's help again... "use _pp here" will become "use some SM3 here"). Then the SM2 version will undoubtedly be more complicated, with more instructions to compensate for the lack of dynamic branching, and thus run slower. Voilà, NV4x competes with R4xx.
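
(To make the dynamic branching point concrete: SM2 pixel shaders have no real flow control, so an if/else gets flattened and both sides are paid for, while SM3 can actually skip work. A C++ cost-model sketch of the difference; these are illustrative stand-ins, not shader code.)

#include <cstdio>

// Stand-in for many ALU instructions' worth of lighting math.
static int evals = 0;
float expensive_lighting() { ++evals; return 1.0f; }
float cheap_ambient()      { return 0.1f; }

// SM2-style: no dynamic branching, so the compiler flattens the if --
// both sides are evaluated and one result is selected (cmp/lrp).
float shade_sm2_style(bool in_shadow) {
    float lit = expensive_lighting();
    float amb = cheap_ambient();
    return in_shadow ? amb : lit;   // selection only; both were computed
}

// SM3-style: a dynamic branch skips the lighting math for shadowed pixels.
float shade_sm3_style(bool in_shadow) {
    return in_shadow ? cheap_ambient() : expensive_lighting();
}

int main() {
    evals = 0;
    shade_sm2_style(true);
    std::printf("SM2-style: expensive path ran %d time(s) even in shadow\n", evals);
    evals = 0;
    shade_sm3_style(true);
    std::printf("SM3-style: expensive path ran %d time(s) in shadow\n", evals);
    return 0;
}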
 
The DriverHeaven test is invalid!

From the DriverHeaven forum:

I think changing the device ID is not a correct way to test whether anything is cheated or tuned for performance.

It proves nothing more than that the NV40 doesn't perform well on the R3x0 path, or the R3x0 doesn't perform well on the NV40 path.

I think what DriverHeaven did is an invalid test... one should not judge one card by tweaking it to look like another card. The cards have different specs, and depending on how the game is designed, exceptions could happen and performance could be altered, beyond any code specific to the desired hardware.

Now if the game had true SM 2.0 and SM 3.0 options as defined by Microsoft for DirectX, testing those would only compare the difference between code paths on the standard API, without any card-specific changes by the game developer.

Also, the 6800 Ultra will not be in stores until Memorial Day... so these tests were done on early versions of both hardware and drivers, so they're really not valid anyway.
 