When enough is enough (AF quality on G70)

WaltC said:
Heh...;) It seems to me that I simply answered your "how do we know..." question very directly. I explained precisely "how we could know..." Look, if you are going to ask a question please don't complain when people answer it...;)

Walt, your unmitigated bias against Nvidia is not evidence that the differences between drivers that do and do not support the G70 chip are solely related to image degrading optimizations. Get over yourself.
 
trinibwoy said:
Walt, your unmitigated bias against Nvidia is not evidence that the differences between drivers that do and do not support the G70 chip are solely related to image degrading optimizations. Get over yourself.
And your rip on Walt isn't evidence to the contrary either. :p
 
digitalwanderer said:
And your rip on Walt isn't evidence to the contrary either. :p

Ummm ok? :???: So my saying that his comments do not cast any light on what went into Nvidia's drivers for G70 support is not evidence that those additions were more than IQ reducing optimizations?

Duh? The only people with "evidence" are Nvidia's driver team. But that doesn't mean I can't point out the fact that he's talking out his ass. :LOL:
 
Another thing I just noticed about NV’s AF is that the higher levels of AF have lower levels of filtering for the mipmaps. Brilinear is moved closer to Bilinear as one switches from 4x --> 8x --> 16x AF.

If you load up the shots here at ixbt, and flip through them sequentially, you’ll see what I mean.

Try the colored mipmap shots labeled …

6800, ANIS 4 APP, opt.
6800, ANIS 8 APP, opt.
6800, ANIS 16 APP, opt.

The mipmap transitions get narrower the more AF you apply. This doesn’t happen on the X800 (or 9800, or 6800 with opts off).
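
For anyone unsure what "Brilinear moved closer to Bilinear" means in practice: instead of blending between two mip levels across the whole fractional LOD range (full trilinear), the hardware blends only inside a window around the transition and uses plain bilinear outside it. The narrower that window, the thinner the blended band you see in colored-mipmap shots. A minimal sketch of the idea, purely my own illustration with a made-up window parameter, not NV's actual implementation:

```c
#include <stdio.h>

/* Trilinear blend weight toward the next-smaller mip level, given the
 * fractional part of the computed LOD (0..1).
 *
 * window = 1.0 -> full trilinear: blend across the whole LOD range.
 * window -> 0  -> pure bilinear: hard switch between mip levels.
 * "Brilinear" sits in between; the narrower the window, the thinner
 * the visible transition band in colored-mipmap shots. */
static float brilinear_weight(float lod_frac, float window)
{
    float lo = 0.5f - window * 0.5f;    /* start of the blend band  */
    float hi = 0.5f + window * 0.5f;    /* end of the blend band    */

    if (lod_frac <= lo) return 0.0f;    /* larger mip level only    */
    if (lod_frac >= hi) return 1.0f;    /* smaller mip level only   */
    return (lod_frac - lo) / (hi - lo); /* linear blend inside band */
}

int main(void)
{
    /* Hypothetical windows: if the driver shrank the blend window as the
     * AF level rises (values below are made up for illustration), the
     * blended region between colored mip levels would narrow, which is
     * what the ixbt shots for 4x/8x/16x appear to show. */
    float windows[] = { 1.0f, 0.6f, 0.4f, 0.25f };
    for (int i = 0; i < 4; i++)
        printf("window %.2f -> blend weight at lod_frac 0.40 = %.2f\n",
               windows[i], brilinear_weight(0.40f, windows[i]));
    return 0;
}
```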
 
trinibwoy said:
Ummm ok? :???: So my saying that his comments do not cast any light on what went into Nvidia's drivers for G70 support is not evidence that those additions were more than IQ reducing optimizations?

Duh? The only people with "evidence" are Nvidia's driver team. But that doesn't mean I can't point out the fact that he's talking out his ass. :LOL:

Considering the way you've completely avoided my response to your question, I rather think that accolade belongs to you...;)

It isn't my fault at all that nV recommends to "reviewers" that they please not do frame-rate benchmarks using "High Quality" on the justification that High Quality produces IQ that's just too good for people to actually use. I mean, I'd be happy to quote the appropriate remarks for you again--as they've already been quoted at least once in this article. You asked a question--I answered it--simple, and not my fault if you don't understand what I'm talking about.
 
trinibwoy said:
Walt, your unmitigated bias against Nvidia is not evidence that the differences between drivers that do and do not support the G70 chip are solely related to image degrading optimizations. Get over yourself.

Ah, I see...you don't understand what I'm talking about, after all. I wasn't talking about either "my bias" or a particular set of nV drivers--I was talking about what nV said about how its drivers should be used in the course of a G7x review. What might be unclear about that I cannot imagine. I didn't say it--nV said it--and it'd be nice if you stopped avoiding it. Mindlessly attacking my comments won't help you answer your own question, trust me...;)
 
Ailuros said:
I frankly don't know whether this was intentional or not, but it doesn't make that much sense either, since the differences in performance - especially on G70 - seem way too small to justify even the thought.

So...that explains why nVidia makes such a big deal about asking reviewers not to use High Quality when benchmarking--because there's such a small performance difference between Q and HQ?

Now, that's what makes no sense at all to me, unless we assume that what is a negligible performance difference to you is a very big performance difference to nV. I'll be willing to bet that nVidia's published recommendation as to how their products should be reviewed (with Quality, not High Quality) was written very deliberately by its authors at nV. I think that's a reasonable assumption, don't you? Otherwise, why make such a statement at all?

The point here is not so much what you or I think about it--the point is how obvious it is what nV thinks about it, isn't it? If nVidia's own public comments don't provide insight then I cannot imagine what would.
 
Ailuros said:
...

You've never sounded to me like you're willing to cut NV a fair chance whatsoever, but it actually might just be my wrong impression. If the amount of criticism towards ATI is at the same level and I've missed it, then of course an apology from me ;)

Just to set the record straight...if I hadn't wanted to "cut NV a chance" I wouldn't have bought several nV-based products and used them regularly a few years ago. I bought and used them precisely to give nV that very chance. Prior to R300 I criticized ATi whenever the subject came up because after trying a few ATi products pre-R300 I had reached the conclusion that ATi was completely out of the 3d chip game, outclassed by both nV and 3dfx, and so I found it difficult to even take them seriously in that market.

I did a 180 with R300--as it impressed more than practically any other 3d product I'd ever used--and I used almost all of them at one time or another. At the same time, nV not only had no comparable products--and thus wound up being the dog the ATi tail eventually wagged--but the company compounded its technical deficiency in the market with a couple of years' worth of the worst PR I've ever had the misfortune to see come out of a single tech company. It set back the general state of 3d by years...and even today we are still seeing it in the way so many people equate "optimization" with "cheating." Prior to nV's self-defensive PR blitz about nV30--which in the end proved utterly fruitless and not worth the time it took to execute--nobody *ever* equated optimization with cheating on benchmarks, because they are two entirely different subjects and always have been--and what nV says about it to the contrary makes no difference whatsoever.

As long as the irradiated fallout from nV's PR implosion of '02-'03 remains in the atmosphere of sites like B3d, I guess I'll still be negative about nV--because nV was the *sole cause* of all of it, imo.
 
WaltC said:
So...that explains why nVidia makes such a big deal about asking reviewers not to use High Quality when benchmarking--because there's such a small performance difference between Q and HQ?

No, the explanation is that optimisations are also enabled by default on ATI's accelerators, last time I checked. What exactly is so irrational about the fact that they don't want no-optimisations compared against optimisations in the end?

Granted, in this particular case "high quality" wasn't "optimisation free", but they'll just turn around and call it a "bug" like any other IHV would too. That's no justification for it at all, but that's what some of us are here for: to detect those kinds of things and start ringing bells. My bell rang as early as June 2004 about that very same issue, when it was still in its infancy.

At the very least a good reviewer knows exactly what he wants to test and how; if he can't see a difference between no optimisations and optimisations, he should rather hand in his pen and admit he's too dumb to write reviews. Someone who knows what he's doing doesn't need guidelines in the first place.

Now, that's what makes no sense at all to me, unless we assume that what is a negligible performance difference to you is a very big performance difference to nV. I'll be willing to bet that nVidia's published recommendation as to how their products should be reviewed (with Quality, not High Quality) was written very deliberately by its authors at nV. I think that's a reasonable assumption, don't you? Otherwise, why make such a statement at all?

See above.

The point here is not so much what you or I think about it--the point is how obvious it is what nV thinks about it, isn't it? If nVidia's own public comments don't provide insight then I cannot imagine what would.

The point is that you've found another tidbit to make an elephant out of a mouse. Texture filtering optimisations have been common ground for years, and no, I don't agree with them at all times either; that's why I personally want to have the ability to switch them off.

I did a 180 with R300--as it impressed more than practically any other 3d product I'd ever used--and I used almost all of them at one time or another.

And you weren't the only one back then owning an R300. However, I closed one eye to the angle-dependency in full conscience, had to resort to 3rd-party tools like rTool to disable texturing-stage optimisations, and more often than not had the MIPmap bias stuck at +0.5 in order to save myself from eye cancer from the occasional aliasing I ran into. That still didn't change my opinion of the R300, which I still consider a milestone of the past few years for the market and of course for ATI too.
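
(For reference, that +0.5 setting is just a positive texture LOD bias: it pushes sampling toward the smaller, blurrier mip levels a bit earlier, trading a touch of sharpness for less aliasing. rTool flips the driver-side switch; in an application's own code the rough OpenGL 1.4 equivalent would look like the sketch below, which is only an illustration and not what rTool actually does.)

```c
#include <GL/gl.h>

/* Core in OpenGL 1.4; define the enums if the platform header is older. */
#ifndef GL_TEXTURE_FILTER_CONTROL
#define GL_TEXTURE_FILTER_CONTROL 0x8500
#define GL_TEXTURE_LOD_BIAS       0x8501
#endif

/* Nudge mipmap selection for the active texture unit: a positive bias
 * (e.g. +0.5) picks smaller mip levels a little earlier and tames
 * texture aliasing; a negative bias sharpens but shimmers more. */
static void set_lod_bias(float bias)
{
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, bias);
}
```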

At the same time, nV not only had no comparable products--and thus wound up being the dog the ATi tail eventually wagged--but the company compounded its technical deficiency in the market with a couple of years' worth of the worst PR I've ever had the misfortune to see come out of a single tech company. It set back the general state of 3d by years...and even today we are still seeing it in the way so many people equate "optimization" with "cheating." Prior to nV's self-defensive PR blitz about nV30--which in the end proved utterly fruitless and not worth the time it took to execute--nobody *ever* equated optimization with cheating on benchmarks, because they are two entirely different subjects and always have been--and what nV says about it to the contrary makes no difference whatsoever.

Yeah, of course, let's mix in everything we can remember about either company, and hell, why not add past horror stories from the 3dfx era to the mix...

You seem to forget that I've known you for years and am not just starting to read your input today. You wouldn't need any specifically sizeable "scandals" to come to any conclusions anyway.

Back to reality though: the G70 is fine with the soon-to-be-released drivers, and albeit highly annoying, the issue seems to be gone now.

As long as the irradiated fallout from nV's PR implosion of '02-'03 remains in the atmosphere of sites like B3d, I guess I'll still be negative about nV--because nV was the *sole cause* of all of it, imo.

B3D stands fine towards NVIDIA, last time I checked; in fact I'd prefer the B3D reviews to analyze filtering-related optimisations in detail. I wonder with exactly which driver settings those were actually written.
 
Ailuros said:
At the very least a good reviewer knows exactly what he wants to test and how; if he can't see a difference between no optimisations and optimisations, he should rather hand in his pen and admit he's too dumb to write reviews. Someone who knows what he's doing doesn't need guidelines in the first place.

Yes, and a good reviewer should also know right off the bat to ignore official nVidia instructions as to how to "best benchmark" an nVidia product, right? Of course, since nVidia goes to so much trouble to spell out a particular method reviewers should use expressly for benchmarking, we have to assume that nVidia doesn't have as high an opinion of reviewers in general as you do...;) (If nVidia didn't believe they could be manipulated they wouldn't even try, right?)

More to the point, you keep talking about optimization whereas nVidia talks only about why its G7x products should only be benchmarked on Quality as opposed to High Quality. The exact role of optimization there is simply inferred by you as opposed to being mentioned by nV.

The point is that you've found another tidbit to make an elephant out of a mouse. Texture filtering optimisations have been common ground for years, and no, I don't agree with them at all times either; that's why I personally want to have the ability to switch them off.

Excuse me? *I* found the tidbit? Heh...;) Did I start this thread? Did I do the original IQ article? Did I write the nV "review benchmarking instructions" nV published for its G7x products?

Since the answer is "no," to all three questions (obviously), then I'm not involved at all, am I?...;)

And you weren't the only one back then owning an R300. However, I closed one eye to the angle-dependency in full conscience, had to resort to 3rd-party tools like rTool to disable texturing-stage optimisations, and more often than not had the MIPmap bias stuck at +0.5 in order to save myself from eye cancer from the occasional aliasing I ran into. That still didn't change my opinion of the R300, which I still consider a milestone of the past few years for the market and of course for ATI too.

I always run AF and FSAA together, now as well as then, and despite the proclaimed advantage to non-angle dependent AF when considered independently, the AF/FSAA IQ of the R300 completely castrated nV25...;) I could see that with both eyes open, and without Rtool.

Yeah, of course, let's mix in everything we can remember about either company, and hell, why not add past horror stories from the 3dfx era to the mix...

Yes, it's so boring not to be a revisionist, isn't it? I guess I'll have to be boring, then...;)

Back to reality though: the G70 is fine with the soon-to-be-released drivers, and albeit highly annoying, the issue seems to be gone now.

Wonderful news--and that's yet another reason I don't use nV today--the fact is that everything nVidia has ever made "will be fine with an upcoming driver release"...;) I know that I must be peculiar, but I'm a present-tense kind of guy.

B3D stands fine towards NVIDIA, last time I checked; in fact I'd prefer the B3D reviews to analyze filtering-related optimisations in detail. I wonder with exactly which driver settings those were actually written.

I didn't say anything about B3d reviews and I agree they have always been fine. I was talking about the atmosphere in the B3d *forums* where some people are still confused on the difference between optimizations and cheating on benchmarks...;)
 
WaltC said:
It isn't my fault at all that nV recommends to "reviewers" that they please not do frame-rate benchmarks using "High Quality" on the justification that High Quality produces IQ that's just too good for people to actually use. I mean, I'd be happy to quote the appropriate remarks for you again--as they've already been quoted at least once in this article. You asked a question--I answered it--simple, and not my fault if you don't understand what I'm talking about.

And how does Nvidia's recommendation of using Quality for reviews give any indication of what changes went into the newer drivers? This was my question, which you have utterly failed to address.

Nvidia recommends Quality => 71.84 + IQ optimizations => 77.62 ?? :???:
 
Actually, it'll be interesting to see if the image quality of the Quality setting is at all different, or if the latest tweaks are solely for High Quality.

Still, the question remains: why aren't reviewers benchmarking at the same (as practically possible) IQ? In the case of AF it appears this can only be confirmed subjectively in motion - screenshots are no good.

Similarly, shouldn't AA also be evaluated subjectively in motion? The "shimmer" and "crawl" from AA look as bad, to my eyes, as texture shimmer - so I don't see why AA should escape investigation.
Jawed
 
Rys said:
78.03 was sent out to a bunch of people yesterday. Definitely makes things much better in HQ mode, at first glance, with pretty much bugger all performance difference.

Were you just beta testing or should we expect a mini-review/evaluation soon now that it's been released to the public?
 
trinibwoy said:
Were you just beta testing or should we expect a mini-review/evaluation soon now that it's been released to the public?
I've tapped something out that I'm just waiting for my managing editor to push the green button on. Hopefully today, but given timing considerations and all the other things I have no idea about when it comes to web publishing, likely tomorrow now.

Cool to see it on nZone. Well, when this page starts working anyway. The 64-bit version is up here for those on the bleeding edge.

Edit: My article is up here, it seems. Says much the same as 3Dcenter did with some performance numbers from 78.03 to boot.

Since publishing, 3Dcenter have updated their article (I think) with some thoughts that Demirug has come up with (they are sampling all the right texels, but not using them properly), which is worth a second read, I reckon. Their thing is here (English version, yay tEd!)
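
For anyone wondering what "sampling all the right texels, but not using them properly" could mean in practice: anisotropic filtering spreads a number of bilinear/trilinear taps along the long axis of the pixel's footprint and averages them with weights that sum to one. If the taps land in the right places but get weighted or accumulated badly (or some are simply dropped), the result is under-filtered and shimmers in motion even though the right texels were all touched. A generic sketch of the accumulation step, with a procedural stand-in for the texture fetch; this illustrates the general technique only, not Demirug's analysis or NV's hardware:

```c
#include <stdio.h>
#include <math.h>

/* Stand-in for a bilinear texture fetch: a procedural checkerboard so
 * the sketch is self-contained. A real renderer reads the mip chain. */
static float sample_bilinear(float u, float v, float lod)
{
    (void)lod;
    int cx = (int)floorf(u * 8.0f);
    int cy = (int)floorf(v * 8.0f);
    return ((cx + cy) & 1) ? 1.0f : 0.0f;
}

/* Generic anisotropic accumulation: 'num_taps' samples placed along the
 * major axis of the footprint (total extent du,dv), averaged with weights
 * normalised to one. Correct tap placement with bad weighting -- or
 * skipped taps -- gives under-filtering that shows up as shimmering. */
static float filter_aniso(float u, float v, float lod,
                          float du, float dv, int num_taps)
{
    float sum = 0.0f, weight_sum = 0.0f;
    for (int i = 0; i < num_taps; i++) {
        float t = (i + 0.5f) / (float)num_taps - 0.5f; /* -0.5..+0.5 */
        float w = 1.0f;            /* equal weights; could be Gaussian */
        sum        += w * sample_bilinear(u + t * du, v + t * dv, lod);
        weight_sum += w;
    }
    return sum / weight_sum;
}

int main(void)
{
    /* e.g. a surface at a grazing angle: footprint ~4 texels long in u. */
    printf("filtered = %.3f\n",
           filter_aniso(0.31f, 0.47f, 0.0f, 4.0f / 8.0f, 0.0f, 8));
    return 0;
}
```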
 
You state there is still some shimmering in HQ mode. Do you think there is still some optimization going on, or is it just a hardware limitation?
 
Well, they're free to adjust where they're sampling the texels some more, so there's still some scope for even further tweaks to the output quality. I don't think it'll stop at 78.03 in terms of the adjustment. But the driver is pretty great for 7800 GTX owners just now.

The performance drops might even be smaller than what I've shown, too. There's a small chance I've not benched the driver setup in the same way (I might have left gamma-correct AA and TAA on from some other testing with those settings). Double-checking now.
 
Blastman said:
So am I to understand that the new 78.03 drivers in the Q mode still exhibit bad texture aliasing?

Yeah, the fix is for HQ mode only.
 