When enough is enough (AF quality on g70)

Ailuros said:
It'll rise the transistor budgets and bandwidth requirements even more and thus "free" is correct only in terms of performance (compared to two-cycle trilinear), since it makes obviously quite a few current optimisations redundant.

Unless the filter hardware can do bilinear output using the same transistors, though?
 
Ailuros said:
It'll rise the transistor budgets and bandwidth requirements even more and thus "free" is correct only in terms of performance (compared to two-cycle trilinear), since it makes obviously quite a few current optimisations redundant.
Of course I was speaking of performance! What else does the end-user care about aside from quality? NVIDIA sacrifices trilinear quality by ensuring that only a single mipmap is sampled much of the time. Obviously, this boosts performance, but at the expense of image quality, which is what started this discussion!

-FUDie
 
One interesting point I find unusual for its absence: what would an educated guess be for the performance 'gain' involved in this situation?

I know my 6800GT had this issue in High Quality mode when I first got it, and it was resolved (mostly) in drivers by the next revision... and the fix really had zippo impact on performance, even at 1600x1200 with 8xAF.

It seems the 'default' train of thought is that this issue is some sort of method to squeak extra FPS out of a new videocard entry, but I'm really having a hard time trying to follow or quantify what kind of impact such a 'savings' would actually have, given the architecture of the 7800 GTX.

What would be a realistic estimate of the kind of 'savings' this would incur? I mean, would it change, say, the HL2 benchmarks from 233 fps at 1600x1200 down to 230 fps? That's still a comfortable lead over, say, an X850 XT PE at 198 fps....

I'm just not buying that this was anything other than a driver oversight, as I truly don't see ANY degree of trilinear/AF savings really denting the 7800 much at all, given its relatively low performance impact when scaling up anisotropic filtering.
 
FUDie said:
Oh come on, don't waste our time. I can't believe that you didn't understand what I meant. Several companies (including NVIDIA) have offered single-cycle trilinear in the past. This can be implemented in a manner that makes it nearly "free" and certainly has a much lower impact on performance than two-cycle trilinear.
1. Single-cycle != free.
2. There's always a cost in transistors, memory bandwidth, and/or performance.
3. It's rather silly to shoot for free trilinear when you're typically taking much more than just one sample per pixel, through anisotropic filtering.

...so while some IHVs may choose to increase the texture filtering power relative to the ALU power of upcoming processors (exceedingly unlikely), you just won't get free trilinear filtering in conjunction with anisotropic.
 
Chalnoth said:
1. Single-cycle != free.
Oh boy, you're going to nitpick, I love it.
2. There's always a cost in transistors, memory bandwidth, and/or performance.
It's bound to be much faster than 2 cycle trilinear, yes? And what does the end-user care about the cost in transistors? The point is to get fast hardware.
3. It's rather silly to shoot for free trilinear when you're typically taking much more than just one sample per pixel, through anisotropic filtering.
Except that if you could do free (i.e. single cycle) trilinear with anisotropic filtering, then you'd halve the cost of aniso.
...so while some IHVs may choose to increase the texture filtering power relative to the ALU power of upcoming processors (exceedingly unlikely), you just won't get free trilinear filtering in conjunction with anisotropic.
Which brings us back to why NVIDIA is being aggressive with brilinear: Nearest thing to free trilinear is bilinear ;)
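
To put the arithmetic behind "halving the cost of aniso" into code form, here's a rough sketch (an illustrative counting model only, nothing to do with how any real hardware actually schedules its filter units), treating trilinear as two bilinear samples and N-degree aniso as N taps of the base filter:

Code:
#include <stdio.h>

/* Rough bilinear-lookup counts per pixel: trilinear is modelled as two
 * bilinear samples (one per adjacent mip level), and N-degree aniso as
 * N taps of whatever the base filter is. Illustrative model only. */
static int bilinear_lookups(int aniso_degree, int trilinear)
{
    int per_tap = trilinear ? 2 : 1;
    return aniso_degree * per_tap;
}

int main(void)
{
    printf("bilinear,  no aniso:  %d\n", bilinear_lookups(1, 0)); /* 1  */
    printf("trilinear, no aniso:  %d\n", bilinear_lookups(1, 1)); /* 2  */
    printf("bilinear  + 8x aniso: %d\n", bilinear_lookups(8, 0)); /* 8  */
    printf("trilinear + 8x aniso: %d\n", bilinear_lookups(8, 1)); /* 16 */
    return 0;
}

In that model, hardware that could fold the second bilinear lookup into the same cycle would make trilinear aniso cost roughly what bilinear aniso costs today, which is all I mean by "halving the cost of aniso".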

-FUDie
 
Rys said:
Unless the filter hardware can do bilinear output using the same transistors, though?

What for? Bilinear wouldn't, theoretically at least, be any faster.
 
Finally read all this, and the nvnews front page description and thread. Goodoh for folks raising the issue. And goodoh for nvnews helping to lead the effort, and in public on this -- even tho I also can't help but notice the sizable pucker-factor radiating from the staff guys' discussion of doing so. Still, they sucked it up and did it, so good on them.

It remains to be seen how long it takes NV to address it, tho obviously it is a step in the right direction that they acknowledge it. Needs to stay on the front burner in the community tho until actually addressed, not just promised to be addressed.
 
I don't think anyone annoyed by the side-effects is going to rest about it in the future.
 
Well, hopefully this will be somewhat instructive for ATI as well, as they put the finishing touches on R520 drivers. There *are* people technically good enuf to pull the covers off these things, and you *will* get outed when it happens. Both IHVs appear to need that reminder on a regular basis.

A big part of the problem, in my view, is that over 1/2 of all the reviews of a new gpu appear in the first couple days, and practically all of the ones from the big sites. Add that to the short NDA pressures, and you have a system that incentivizes cutting IQ corners in favor of performance at launch, then "finding and fixing" them later. I'm not saying that every instance is from that factor. . .but I will say that my experience in life is that anything that incentivizes a behavior tends to produce more of that behavior.

It is a pity this didn't flare up prior to the 7800GT reviews, as that would have provided another high-profile platform to point at it. I suppose the good news (and, I suspect, something on NV's mind) is that mid-range G7-class is on the horizon, which means a new round of benching across the G7 range of cards and another chance to point at the issue if it isn't resolved by then. And, of course, comparison to R520 reviews as well.
 
FUDie said:
It's bound to be much faster than 2 cycle trilinear, yes? And what does the end-user care about the cost in transistors? The point is to get fast hardware.
Not with anisotropic filtering. With anisotropic filtering enabled, you'll be averaging more than one texture sample per pixel anyway, so you'll be making use of that second bilinear filter typically required for trilinear. So you'll still get a similar performance hit for trilinear in this case.

But more texture filtering power really isn't where these devices need to focus. It's more ALU power that's important.
 
geo said:
Well, hopefully this will be somewhat instructive for ATI as well, as they put the finishing touches on R520 drivers. There *are* people technically good enuf to pull the covers off these things, and you *will* get outed when it happens. Both IHVs appear to need that reminder on a regular basis.

A big part of the problem, in my view, is that over 1/2 of all the reviews of a new gpu appear in the first couple days, and practically all of the ones from the big sites. Add that to the short NDA pressures, and you have a system that incentivizes cutting IQ corners in favor of performance at launch, then "finding and fixing" them later. I'm not saying that every instance is from that factor. . .but I will say that my experience in life is that anything that incentivizes a behavior tends to produce more of that behavior.

It is a pity this didn't flare up prior to the 7800GT reviews, as that would have provided another high-profile platform to point at it. I suppose the good news (and, I suspect, something on NV's mind) is that mid-range G7-class is on the horizon, which means a new round of benching across the G7 range of cards and another chance to point at the issue if it isn't resolved by then. And, of course, comparison to R520 reviews as well.

There has been plenty of time to catch this bug/optimization; it has been around since the launch of the 6800. When the 6800 was first launched, the High Quality mode shimmered even worse than the 7800's HQ mode does now. Also, the "Quality" mode has NEVER been fixed on the 6800, and this is the default benching mode.

I started a thread here about the issue last year. I shut up about it after the LOD clamp was released and HQ improved, but I never felt the issue was completely resolved, since the LOD clamp is a workaround, not a fix, and when running HQ there was a huge performance hit in some games. There has been plenty of time for someone to bring the issue up; it has just been ignored for whatever reason. nVidia needs to fix the Quality mode on both the 6800 and the 7800. I'm glad the issue is finally getting some exposure now, as it has spurred nVidia to finally try to fix the issue once and for all.
 
And they won't fix it in that manner either. Potential can of worms coming up unless they change their minds about how they apply it, IMHO.
 
Do you think the fix will be per game, using the profile - so that each game will have to be user-profiled to get "no shimmering"?

Jawed
 
Chalnoth said:
Not with anisotropic filtering. With anisotropic filtering enabled, you'll be averaging more than one texture sample per pixel anyway, so you'll be making use of that second bilinear filter typically required for trilinear. So you'll still get a similar performance hit for trilinear in this case.
If this were true, then NVIDIA wouldn't feel the need to crank up brilinear so much.
But more texture filtering power really isn't where these devices need to focus. It's more ALU power that's important.
This may be true for current games, but old games still can benefit from faster trilinear. Also, if ALU power increases but texture power stays the same, then you still can be texture limited.

-FUDie
 
Rys said:
And they won't fix it in that manner either. Potential can of worms coming up unless they change their minds about how they apply it, IMHO.

Do you think they'll at least return it to G6800 levels/settings? So I gather you're of the "new optimization" school of thought rather than the "oops" school of thought on this one?
 
Just fixing the 7800 to be like the 6800 won't cut it; 3D Center brought up the issue of the "Quality" mode being the default benchmarking mode. They have to fix the Quality mode's shimmering on both the 7800 and the 6800. I think they can do this without too much of a performance hit, since running Quality mode with all the optimizations disabled (the ones you can control, anyway) produces almost as large a performance hit as High Quality, yet Quality mode with the optimizations disabled still produces shimmering.
 
The fix, as it stands, will only be to the High Quality mode, making it (hopefully) return to old 6800 levels and, if we're lucky, even better. Quality mode will get no fix (as it stands), which is where I think it should be considered the most, for two reasons: a) it's the out-of-the-box mode, and the out-of-the-box experience is a strong one; and b) it's in the reviewer's guide (freshly downloaded just now to make sure I'm not misquoting them) as thus (my bold):

Based upon their preference, users should choose one of the previous image settings before selecting an antialiasing and anisotropic filtering setting. For the GeForce 7800 GTX testing versus competitive GPUs, NVIDIA recommends setting the Image settings to Quality mode and leaving Trilinear optimization and Anisotropic sample optimization On. This will provide an apples-to-apples comparison versus the competition.

Image quality settings should contain similar levels of optimizations. When doing comparisons on GeForce 6 Series versus competitors’ products, make sure the competitors’ settings are all set to their highest quality settings that contain optimizations. This setting should be compared to NVIDIA’s Quality setting, which contains similar optimizations. NVIDIA’s High quality setting contains no optimizations, and this mode is not offered by our competitors.


The bits in bold are false as things stand, especially the second one. Even in High Quality mode it's undersampling. So that's where they want to fix it, but that doesn't help their first assertion at all.

Leaving Quality mode as it stands, with undersampling and earlier use of MIPs further down the MIP chain than before, helps nobody. At the cost of some extra texel sampling, on GPUs this powerful, default texel filtering with aniso should be excellent out of the box. They aren't changing that currently; HQ is where the fix will be applied.

The texture stage optimisations need kicking to the curb at some point, since the driver can't tell where the app will leave the primary texture. Trilinear only on the first stage and bilinear on stages 1-7? That all has to go away. Brilinear on all stages? That should pretty much just go away on high-end hardware, if possible (although I concede that it can look good enough).
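
For anyone wondering what "brilinear" actually does underneath, here's a rough sketch of the commonly described idea (my illustration, not NVIDIA's actual driver code): the blend weight between adjacent MIPs gets squeezed into a narrow band around the transition, so most texels end up as a plain bilinear lookup from a single level.

Code:
/* Sketch of the "brilinear" idea: compress the trilinear blend into a
 * narrow band around the mip transition so most LOD fractions fall back
 * to a single bilinear lookup. band = 1.0f gives full trilinear; smaller
 * values approach plain bilinear. Illustrative only. */
float brilinear_weight(float lod_frac, float band)
{
    float lo = 0.5f - band * 0.5f;    /* start of the blend band */
    float hi = 0.5f + band * 0.5f;    /* end of the blend band   */
    if (lod_frac <= lo) return 0.0f;  /* finer mip only          */
    if (lod_frac >= hi) return 1.0f;  /* coarser mip only        */
    return (lod_frac - lo) / (hi - lo);
}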

We'll see what they spit out though. The clock ticks, thanks to Damien and others :devilish:
 
FUDie said:
If this were true, then NVIDIA wouldn't feel the need to crank up brilinear so much.
Then you're not understanding what I'm saying. Trilinear filtering is nothing more than two bilinear samples averaged together. This is true whether or not anisotropic filtering is enabled. An architecture that could do single-cycle trilinear could also do single-cycle 2-degree anisotropic filtering. But enable trilinear filtering and 2-degree anisotropic, and you'll still get a performance hit from trilinear with this architecture.

So what you're really asking for isn't single-cycle trilinear filtering, but rather hardware that is capable of more bilinear texture samples per cycle. But that won't solve the issues with trilinear filtering.
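
To restate that in code form, a minimal sketch (bilinear_sample() here is just a placeholder standing in for one bilinear filter unit's work, not any real API):

Code:
typedef struct { float r, g, b, a; } vec4;

/* Placeholder for one bilinear filter unit's work at a given mip level. */
vec4 bilinear_sample(int mip_level, float u, float v);

/* Trilinear = a LOD-fraction blend of two bilinear samples, one from
 * each of the two adjacent mip levels. Illustrative sketch only. */
vec4 trilinear_sample(float u, float v, float lod)
{
    int   mip = (int)lod;          /* finer of the two adjacent levels */
    float t   = lod - (float)mip;  /* blend factor between the levels  */
    vec4  a   = bilinear_sample(mip,     u, v);
    vec4  b   = bilinear_sample(mip + 1, u, v);
    vec4  out = {
        a.r + (b.r - a.r) * t,
        a.g + (b.g - a.g) * t,
        a.b + (b.b - a.b) * t,
        a.a + (b.a - a.a) * t
    };
    return out;
}

The second bilinear_sample() slot there is exactly what another aniso tap would consume, which is why building "free" trilinear into the unit doesn't make trilinear-plus-aniso any cheaper.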

Side comment: don't forget that it was ATI that started doing brilinear (albeit less aggressively), and their first implementation of anisotropic filtering didn't even support trilinear.

This may be true for current games, but old games still can benefit from faster trilinear. Also, if ALU power increases but texture power stays the same, then you still can be texture limited.
Old games are high-performing anyway, so you can just turn the optimizations off and have no problems.
 
Side comment: don't forget that it was ATI that started doing brilinear (albeit less aggressively), and their first implementation of anisotropic filtering didn't even support trilinear.

Do you think a consumer who buys a >$400 GPU really cares who started what or not?

Old games are high-performing anyway, so you can just turn the optimizations off and have no problems.

Not the one this whole thread is actually about, and that's the real issue here: I cannot switch the shimmering off on the 6800 while using High Quality, and according to reports so far it's no different on the 7800. I can only use the LOD clamp to keep some applications from using any LOD value below "0", which is not a real fix either.
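
For those who haven't used it, the LOD clamp amounts to something like this (a rough sketch of the idea, not the actual driver code): any negative, i.e. sharpening, LOD bias requested by the application is simply ignored.

Code:
/* Idea behind the "negative LOD bias clamp" workaround: refuse any
 * sharpening bias the application asks for, so the sampled LOD never
 * drops below the driver-computed value. Illustrative sketch only. */
float biased_lod(float computed_lod, float app_bias, int clamp_enabled)
{
    if (clamp_enabled && app_bias < 0.0f)
        app_bias = 0.0f;            /* drop negative (sharpening) biases */
    return computed_lod + app_bias;
}

It keeps applications from pushing textures sharper than the computed LOD, which reduces the shimmer they add, but it does nothing about the undersampling in the filtering itself.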
 