nVidia releases new all-singing, all-dancing dets.

Here are some UT shots from a map Chalnoth gave me (thanks).

0X:
http://www.stud.ntnu.no/~vidaralm/UT/0X.jpg

1X (application controlled):
http://www.stud.ntnu.no/~vidaralm/UT/1appcontrol.jpg

1X:
http://www.stud.ntnu.no/~vidaralm/UT/1X.jpg

2X:
http://www.stud.ntnu.no/~vidaralm/UT/2X.jpg

4X:
http://www.stud.ntnu.no/~vidaralm/UT/4X.jpg

8X:
http://www.stud.ntnu.no/~vidaralm/UT/8X.jpg

Rivatuner 8X:
http://www.stud.ntnu.no/~vidaralm/UT/rivatuner8X.jpg

It's obvious that 8X is the same as 4X in these shots. Rivatuner does it right. Trilinear in UT is set to True, but the 1X and Application Controlled shots are the same. (The folder is open for easy viewing: http://www.stud.ntnu.no/~vidaralm/UT/ )
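
For anyone who'd rather not trust their eyes, here's a minimal sketch (assuming Python with the Pillow/PIL imaging library installed, and the 4X and 8X JPEGs from the folder above saved locally) that checks how much the two shots actually differ:

```python
# Minimal check of whether the 4X and 8X shots actually differ.
# Assumes Pillow (PIL) is installed and 4X.jpg / 8X.jpg are in the
# current directory (downloaded from the folder linked above).
from PIL import Image, ImageChops

a = Image.open("4X.jpg").convert("RGB")
b = Image.open("8X.jpg").convert("RGB")

diff = ImageChops.difference(a, b)
print("differing region (None = identical):", diff.getbbox())
print("max per-channel delta (R, G, B):", [hi for lo, hi in diff.getextrema()])
diff.save("4x_vs_8x_diff.png")  # a nearly black image means nearly identical shots
```

Since these are JPEGs, expect small non-zero deltas from compression noise alone; the point is whether the distant floor textures show any real filtering difference between the two settings.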
 
Matt Burris said:
Okay, finally... takes forever for me to upload on this blasted 56k modem. :-?

I've taken two nearly identical screenshots in Nature, with Detonator 30.82 and 40.41. I made sure to click Restore Defaults on the D3D properties page so everything was at its default, and I didn't touch anything in 3DMark 2001SE except to tell it to only run Nature. I used F12 to capture, which creates a .bmp file, and I've converted each one to .jpg with the compression ratio at the lowest (1%). Each screenshot is quite large, at about 700 KB apiece.

Detonator 30.82: http://www.3dgpu.com/misc_images/det3082.jpg

Detonator 40.41: http://www.3dgpu.com/misc_images/det4041.jpg

Looking at them, and taking into account that the shots are about 0.05 seconds apart, I really can't notice any difference at all. Can you?

I don't see any difference.
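
For what it's worth, a quick sketch like this (again assuming Python with Pillow installed, and the two JPEGs linked above saved locally) puts a number on it instead of relying on eyeballs:

```python
# Rough numeric comparison of the 30.82 and 40.41 Nature shots: mean
# absolute per-pixel error plus an amplified difference image, so any
# systematic change (fog, LOD, filtering) stands out. Assumes Pillow
# is installed and the two JPEGs have been downloaded.
from PIL import Image, ImageChops

old = Image.open("det3082.jpg").convert("RGB")
new = Image.open("det4041.jpg").convert("RGB")

diff = ImageChops.difference(old, new)
pixels = list(diff.getdata())
mae = sum(sum(px) for px in pixels) / (3.0 * len(pixels))
print("mean absolute error per channel: %.2f on a 0-255 scale" % mae)

# Brighten the difference by 8x so subtle changes become visible.
Image.eval(diff, lambda v: min(255, v * 8)).save("nature_diff_x8.png")
```

Keep in mind the two frames are about 0.05 seconds apart and the water animates, so part of whatever shows up is just the animation rather than a driver change.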
 
I've been talking to Brian Burke back and forth today, and here's a bit of what he said that confirms what we've seen:

A lot of the improvements in performance that are seen are due to efficiencies in the driver code that controls the vertex and pixel shaders. That is why the jump in Nature and Aquanox. Those apps also happen to be the ones that are used for benchmarking.
 
Matt Burris said:
I've been talking to Brian Burke back and forth today, and here's a bit of what he said that confirms what we've seen:

A lot of the improvements in performance that are seen are due to efficiencies in the driver code that controls the vertex and pixel shaders. That is why the jump in Nature and Aquanox. Those apps also happen to be the ones that are used for benchmarking.

I like the last quote there ;)
 
Just got another very interesting tidbit from Brian. I posted this on my site, but thought I'd post it here too since some of you never visit 3DGPU. :p

The default setting on the new driver and the old driver are both the same. The problem that is being identified is a bug that occurs when the AF settings are changed. It seems that once the AF slider is moved away from 0 to another setting, then back to 0 again, it triggers a bug that causes the control panel to misread the registry setting. That setting is not point sampling, but it is something less than 0. The only way to "reset" back to true 0 is by reinstalling the driver or deleting the registry setting.

We are going to fix this bug and others and submit the new BETA driver to WHQL before the driver moves from BETA to "official".
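
If you'd rather not reinstall the driver, something along these lines (Python on Windows, using the standard winreg module) will dump the values under a key so you can find and delete the stale AF setting by hand. Note that the key path and value name below are just placeholders, since Brian didn't say where the setting actually lives:

```python
# List the values under a registry key so the stale AF setting can be
# found and deleted by hand instead of reinstalling the driver.
# KEY_PATH and VALUE_NAME are PLACEHOLDERS - Brian didn't name the real
# key, so locate it with regedit before deleting anything.
import winreg

KEY_PATH = r"SOFTWARE\NVIDIA Corporation\Global\Placeholder"  # placeholder path
VALUE_NAME = "AnisoLevel"                                     # placeholder name

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    i = 0
    while True:
        try:
            name, data, kind = winreg.EnumValue(key, i)
        except OSError:
            break
        print(name, "=", data)
        i += 1

# Once the right value is confirmed, deleting it needs write access:
# with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
#                     winreg.KEY_SET_VALUE) as key:
#     winreg.DeleteValue(key, VALUE_NAME)
```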
 
Morrowind may be a title to look at for improvements; playing around with the water rendering settings in the .ini usually showed some modest changes in fps in quite a few of the outdoor environments.

It would be nice to play with the water texture set to 512 vs. 256 (the default, if I remember). I'd check for myself, but I don't own the game anymore - once I hit level 30 the game became ridiculously easy, and the fun of just wandering around had long worn off by then...
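
If anyone with the game installed wants to try it, a rough sketch like this flips the setting for a quick before/after fps run (Python; the install path is an assumption, and the [Water] section / SurfaceTextureSize key name are from memory, so check your own Morrowind.ini first):

```python
# Bump the water texture size in Morrowind.ini for a quick A/B fps test.
# The install path is an assumption and the "[Water]" section /
# "SurfaceTextureSize" key name are from memory - verify against your
# own ini before running.
import shutil

INI = r"C:\Program Files\Bethesda Softworks\Morrowind\Morrowind.ini"  # assumed path
NEW_SIZE = "512"

shutil.copyfile(INI, INI + ".bak")  # keep a backup to restore for the comparison run

lines, in_water = [], False
with open(INI) as f:
    for line in f:
        stripped = line.strip()
        if stripped.startswith("["):
            in_water = (stripped.lower() == "[water]")
        if in_water and stripped.lower().startswith("surfacetexturesize="):
            line = "SurfaceTextureSize=%s\n" % NEW_SIZE
        lines.append(line)

with open(INI, "w") as f:
    f.writelines(lines)

print("Water texture size set to %s; restore Morrowind.ini.bak to compare." % NEW_SIZE)
```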
 
Galilee said:
It's obvious that 8X is the same as 4X in these shots. Rivatuner does it right. Trilinear in UT is set to True, but the 1X and Application Controlled shots are the same. (The folder is open for easy viewing: http://www.stud.ntnu.no/~vidaralm/UT/ )

Actually, NVIDIA's display panel is "doing it right" for the desired effect as well.

While on the surface it may appear that 4x->8x isn't making a change, if you benchmark the two settings, there is a measurable change in performance between the two. This was the answer to the "severe hit with 8x" in past drivers: just slam the 8x IQ down to damn-near 4x at a much lesser hit and watch the smiles at the benchmark graphs. Rivatuner's 8x obviously looks more proper (like a GF3 did with pre-29.xx drivers or newer) and should also benchmark accordingly, with the pre-29.xx hit at the max anisotropic setting.

The question about defaults (be it for AF or LOD) is also a recurring theme... one where the same sources provide screenshots of slightly different default IQ compared to someone else's. Unfortunately, with all the tweakers, toys, and whatnot, this makes a true depiction, and the arguments caused by it, a bit muddled and difficult to pursue.
 
Sharkfood said:
While on the surface it may appear that 4x->8x isn't making a change, if you benchmark the two settings, there is a measurable change in performance between the two. This was the answer to the "severe hit with 8x" in past drivers: just slam the 8x IQ down to damn-near 4x at a much lesser hit and watch the smiles at the benchmark graphs.

If you ask me, what we're seeing is the driver improperly turning down aniso on the wrong texture stage. Future drivers should fix the problems without too much performance hit (if any).

Regardless, I truly hope that nVidia's anisotropic optimizations are going the route of detecting how the textures are being rendered to decide how to apply anisotropic filtering to them, not game-specific optimizations.
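
Just to make the distinction concrete, here's a toy sketch (plain Python with made-up stage labels, obviously nothing like real driver code) of what I mean by deciding the aniso level from what each texture stage is doing, rather than per-game hacks or a blanket cut:

```python
# Toy illustration only (not driver code): choose an anisotropy level per
# texture stage from what the stage is doing, instead of hard-coding it
# per application or cutting it across the board. Stage labels are made up.
MAX_ANISO = 8

def aniso_for_stage(stage_role, requested=MAX_ANISO):
    """Return the anisotropy level to use for a texture stage."""
    if stage_role in ("base", "detail"):   # high-frequency maps seen at oblique angles
        return requested                   # full anisotropy is worth the cost here
    if stage_role == "lightmap":           # low-frequency, already-blurry content
        return min(requested, 2)           # little visible gain past 2x
    return 1                               # env maps and the like: plain trilinear

for role in ("base", "detail", "lightmap", "envmap"):
    print("%-8s -> %dx" % (role, aniso_for_stage(role)))
```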
 
Chalnoth said:
If you ask me, what we're seeing is the driver improperly turning down aniso on the wrong texture stage. Future drivers should fix the problems without too much performance hit (if any).

Well if you ask me, it's simply a way to show an "improvement" in performance with max AF from driver revision to driver revision since this was the biggest controversy at the unveiling of the GF4. It was also where the biggest stink was made thereafter with claims of "they fixed AF performance" when nothing of the sort had (has) occurred.

I personally prefer the "big" hit when the IQ justifies it. Unfortunately, all the "8x" benchmarks that show changes don't tell the story at all, especially when a setting is superfluous and yields no change from the previous level of AF... yet yields a measurable performance hit to give the illusion of an increased level.
 
Sharkfood said:
Well if you ask me, it's simply a way to show an "improvement" in performance with max AF from driver revision to driver revision since this was the biggest controversy at the unveiling of the GF4. It was also where the biggest stink was made thereafter with claims of "they fixed AF performance" when nothing of the sort had (has) occurred.

Why, pray tell?

Flaws always come out. Why put them in on purpose?

Weren't you one of those people who were saying that the poor quality with the Quack issue was due to a driver bug, not an optimization?
 
Chalnoth said:
Why, pray tell?

Flaws always come out. Why put them in on purpose?

Weren't you one of those people who were saying that the poor quality with the Quack issue was due to a driver bug, not an optimization?

We at NVIDIA don’t make it a practice to optimize our pipeline for specific benchmarks - we want to provide high quality and high performance on a wide variety of useful and entertaining applications, rather than just getting a good score. Ask yourself (or, better yet, ask ATI) why Radeon 8500 performs well on this one test, and poorly on many other 3DMark2001 tests.
David Kirk

http://www.nvnews.net/articles/david_kirk_interview.shtml

Hmmm someone is preaching FUD here ;)
 