FarCry Performance Revisited: ATI Strikes Back with Sh 2b

If it hasn't already been done, someone needs to reevaluate Far Cry performance on nVidia cards using the 65.73 Detonators.

I'd be interested to know if the slight softening of the specular, etc., is still there.

The 65.73s drastically (obscenely, even) improved lighting quality over 65.62 and earlier in 3DMark03 - specifically Proxycon & Trolls' Lair.

That isn't the only improvement, but it's certainly the one that stood out to me the most when testing.
 
Just for kicks here are the older tests with 61.45:

http://www.mitrax.de/?cont=artikel&aid=24&page=10

1280*1024*32/4xAA, 4xAF

(control panel "high quality", in game 4x trilinear AF, all opt. off)

Code:
Driver       61.45 vs. 65.73

Research     32.71     37.74    +15.37%
Regulator    17.84     32.14    +80.15%
Training     29.57     36.12    +22.15%
Volcano      23.79     36.87    +54.98%

--------------------------------------------------------------

Same settings as above...

(control panel "quality", CP 4x optimised AF, in game 1x trilinear AF)

Code:
Driver       61.45 vs. 65.73

Research     38.17     43.09    +12.88%
Regulator    23.23     37.71    +62.33%
Training     36.09     40.65    +12.63%
Volcano      32.62     42.65    +30.74%

***edit: to complete the picture for the older results...

PCGH Pier demo:

61.45 = 36.17fps
65.73 = 40.31fps (+11.44%)

Instancing:

61.45 = 32.19fps
65.73 = 33.47fps (+3.97%)
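For anyone who wants to double-check the percentage columns above, here is a quick script that reproduces them. One assumption on my part: the posted gains appear to be truncated (not rounded) to two decimals, so the script truncates as well.

```python
# Reproduce the driver speedup percentages from the tables above:
# gain = (new - old) / old * 100, truncated to two decimal places.
import math

# First table: 61.45 vs. 65.73, 4xAA / 4x trilinear AF, all opt. off
results = {
    "Research":  (32.71, 37.74),
    "Regulator": (17.84, 32.14),
    "Training":  (29.57, 36.12),
    "Volcano":   (23.79, 36.87),
}

def gain_pct(old, new):
    """Percentage gain of 65.73 over 61.45, truncated to 2 decimals."""
    return math.floor((new - old) / old * 100 * 100) / 100

for level, (old, new) in results.items():
    print(f"{level:10s} {old:6.2f} -> {new:6.2f}  +{gain_pct(old, new):.2f}%")
```

The same function reproduces the PCGH Pier (+11.44%) and instancing (+3.97%) numbers as well.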
 
Thanks for those results Ailuros.

Would you say the IQ is better or worse, especially in the areas people identified in this thread and elsewhere on the 'net?

I have to say nVidia has really surprised me on the driver front with the DX9 generation cards. If there have been any failures at all, they have been in the drivers' ability to properly harness the hardware, IMO.

Would people have been as critical of NV30 and its derivatives as they were if 60.72 or 65.73 were the launch drivers? I think not. Certainly I stayed away from the GF-FX line until 60.72.

You really have to wonder how nVidia allowed such a scenario to happen in the first place, given that nVidia knows its drivers are/were its primary strength vs the competition.
 
radar1200gs said:
You really have to wonder how nVidia allowed such a scenario to happen in the first place, given that nVidia knows its drivers are/were its primary strength vs the competition.

I would guess that it's difficult to do anything if the hardware is a nightmare to optimize for. And have the new drivers really added that much performance for the NV30 series?

Based on the CS: Source stress test, the 5950 is slower than the 9600 when running in DX9 mode.

http://www.firingsquad.com/hardware/geforce_fx_half-life2/page7.asp

Granted, forcing DX9 on the NV30 series makes it run code that is far from NV30-optimized, so it might not be totally fair. But I would say it's a clear indication that not much has happened with NV3x DX9 performance.
 
Would you say the IQ is better or worse, especially in the areas people identified in this thread and elsewhere on the 'net?

In some spots it remained the same, and in some spots there have been quite a few welcome improvements. Follow the link in my former post and look at the screenshot on the last page. No, the moiré pattern is still not gone in cases like that one (from 61.17 up to 65.73 today). I am actually increasing LOD in D3D to +0.5 while gaming to keep my eyes from popping out with all the texture wobbling I get otherwise.

OK, I used a similar solution on the R300 too (MIPmap LOD to "quality", i.e. +0.5), but those kinds of things really shouldn't be necessary.

The nice part is that some texture filtering optimisations I had so far deemed entirely useless, since I could occasionally see some MIPmap banding, have improved quite a bit, and I really can't detect any banding anymore with the 65.73-class drivers. In other words, the optimisations are in my good book for the time being, as long as I don't find a case where they start annoying me again.

Would people have been as critical of NV30 and its derivatives as they were if 60.72 or 65.73 were the launch drivers? I think not. Certainly I stayed away from the GF-FX line until 60.72.

There are no magical drivers that can fix lackluster arithmetic efficiency. Unless you'd want to convince me that the NV40 isn't light-years apart in that very department compared to the NV30. OK, the "up to 8x the shader performance" is just marketing drivel, but shader performance with the NV4x really is at the level where it should have been all along.

Performance in Far Cry has increased dramatically with the 65.73 not only on the SM3.0 path, but also on the SM2.0 path (just don't ask for more numbers; I can't stand those demos anymore ;) ). I had asked Demirug for a quick comment on the game for that write-up above, and I was surprised at how badly unoptimized the SM2.0 path really is and how much CPU the game actually wastes. Needless to say, NV3x-class accelerators could obviously make use of a pass-reduction path too. That might sound like an oxymoron, but I'd think the majority of the blame for this particular game should go to the ISV for the highly unoptimized code.

You really have to wonder how nVidia allowed such a scenario to happen in the first place, given that nVidia knows its drivers are/were its primary strength vs the competition.

I disagree here too. The competition's drivers have risen to a highly competitive level. Neither is perfect or entirely bug-free (they really couldn't be either), yet yes, NV does keep a noticeable lead, especially when it comes to OpenGL drivers.

Since I mentioned OGL drivers, and since I know there must be NV employees lurking in these forums from time to time: kindly enable Supersampling for OGL as well. Apart from Doom3, obviously, with the majority of other OGL games I've got fill-rate to waste ;)
 
I wonder if this has to do with shader replacement. I looked at some of the long Far Cry shaders provided by tb in another thread, and there were some insane inefficiencies there.

I saw a series of instructions that used a cmp to select between two equal values. I also saw them calculating the reflection vector for each light, as opposed to just reflecting the eye vector. If I could get hold of Demirug's wrapper, I could probably increase performance a lot.
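A small sketch of the algebra behind that reflection-vector observation (the vectors here are arbitrary illustration values, not from the actual shaders): the reflection operator R(v) = v - 2*dot(v, n)*n is symmetric, so dot(reflect(L, N), E) equals dot(L, reflect(E, N)). A shader can therefore reflect the eye vector E once and reuse it for every light, instead of computing reflect(L, N) per light.

```python
# Demonstrate that per-light reflect() is redundant for specular terms:
# dot(reflect(L, N), E) == dot(L, reflect(E, N)), so reflecting the eye
# vector once gives the same result as reflecting each light vector.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def reflect(v, n):  # n must be unit length
    d = 2.0 * dot(v, n)
    return [vi - d * ni for vi, ni in zip(v, n)]

N = normalize([0.2, 1.0, 0.1])            # surface normal
E = normalize([0.3, 0.5, 1.0])            # eye vector
lights = [normalize([1.0, 0.8, 0.2]),     # arbitrary light directions
          normalize([-0.4, 1.0, 0.6])]

R_eye = reflect(E, N)                      # computed once, shared
for L in lights:
    per_light = dot(reflect(L, N), E)      # what the shader was doing
    shared    = dot(L, R_eye)              # one reflect for all lights
    assert abs(per_light - shared) < 1e-12
```

So the per-light reflect instructions could be folded into a single reflect of the eye vector, saving several ALU instructions per light.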

Then again, I could be wrong. :)
 
Heh, I love the conclusion:

In fact, NVIDIA claims that Source Engine-based games with geometry instancing and shader model 3.0 pixel and vertex shaders will be faster on NVIDIA’s current generation, GeForce 6800 hardware than on ATI X800. We also know that NVIDIA will be releasing its next ForceWare driver early next month, although doubling performance in one driver release is highly unlikely (especially when you’re dealing with hardware as old as GeForce FX).

So what, the Source engine won't take advantage of PS2.0b or ATI's instancing? Nor 3Dc (which I understand isn't available in the stress test), nor will ATI ever release newer drivers than the 4.8s????

Sometimes people make me laugh
 
I find it très amusant that they started comparing the 9800XT to the 6800 instead of the 5950. :D
 
I really like it when people pretend to be entertained rather than upset over perceived "unfair" comparisons... It's so amusing. :)

Come on, it's just A VIDEOCARD.

Anyway, the reason for "comparing" these two cards is likely that the feature set of the 9800XT is closer to the 6800's than to the 5950's. It is a feature-driven article after all, not strictly a performance comparison.
 
Guden Oden said:
I really like it when people pretend to be entertained rather than upset over perceived "unfair" comparisons... It's so amusing. :)

Come on, it's just A VIDEOCARD.

Anyway, the reason for "comparing" these two cards is likely that the feature set of the 9800XT is closer to the 6800's than to the 5950's. It is a feature-driven article after all, not strictly a performance comparison.

:LOL: The 9800XT can't come close to the featureset of NV3x, let alone NV40. It's performance, not features, that leads to the comparison.

Anyway, I posted about 65.73 because I was curious about image quality and the effect of drivers on SM3 - was it the drivers causing the softer highlights seen in SM3 Far Cry, or SM3 itself? I wasn't at all concerned about performance per se (though it's nice if it increases also).
 
Guden Oden said:
How do I get instancing and/or parallax mapping and all that to work on my 6800?

KKTHX for any help. ;)

Parallax mapping?

Anyway, for pure instancing just open the Far Cry configuration application and, under "Video Options (advanced)", click on "customize". Under "Environment Quality" change the default value from "1" to "100".

e_vegetation_sprites_distance_ratio = 100

I just wanted to give that thing a shot; frankly, I prefer how the point sprites look overall.
 
Ailuros said:
Parallax mapping?

The technique that gives bumpmaps additional depth. This was supposed to be included in the 1.2 patch, but I haven't dared install it yet as it's said to crash on ATi systems - or at least some of them? :)
 
radar1200gs said:
:LOL: The 9800XT can't come close to the featureset of NV3x, let alone NV40. It's performance, not features, that leads to the comparison.

Really, features need to be useful to be considered 'features'. Common sense tip of the day.
The NV3x feature set offers nothing over an R3xx. Longer shader lengths are about all it has, and on such a poor shader performer that counts for little, while superior FSAA and support for HDR (in DX) and MRT are all there on the R3xx but not on the NV3x.
 
I was expecting a reply like this... :rolleyes:

I suggest you compare the DX9 specifications with the capabilities of NV3x and R3xx. NV3x supports far more of the spec than R300.

Mind you, your way of thinking hardly surprises me (did I mention I was expecting a reply like yours?) - SM3 is deemed by many to be irrelevant simply because nVidia has it and ATi doesn't, never mind that it is part of the DX9 spec. Takes me back to 3dfx vs. NVidia, where features didn't matter, just performance (according to the 3dfx supporters at least...).
 
Guden Oden said:
Ailuros said:
Parallax mapping?

The technique that gives bumpmaps additional depth. This was supposed to be included in the 1.2 patch, but I haven't dared install it yet as it's said to crash on ATi systems - or at least some of them? :)

I know what you meant. 1.2 doesn't contain anything but performance-increasing paths. 1.3 will include IQ-improving functionality like HDR, AFAIK, but they haven't even officially re-released 1.2, have they?
 
radar1200gs said:
I was expecting a reply like this... :rolleyes:

I suggest you compare the DX9 specifications with the capabilities of NV3x and R3xx. NV3x supports far more of the spec than R300.

Radar, just admit it... the NV3x sucked. It was not competitive without massive cheating. In fact, in newer games, it can't even run DX9 without being painfully slow.
 