When enough is enough (AF quality on G70)

Rys said:
Yeah, the fix is for HQ mode only.

No.
It impacts Q and HQ the same way.
Nvidia said the fix is only for HQ because in Q there is less shimmering than before, but still a lot of it, so it's difficult to state that anything has been fixed.

By the way, the filtering algorithm looks to be the same as the NV40 one. There are some small differences, but I don't think they point to a different algorithm; probably just some minor hardware tweaks.
 
The quality mode still shimmers an unacceptable amount IMO. All these drivers did was make the 7800's IQ on par with the 6800's IQ. However, the 6800's quality mode shimmers an unacceptable amount as well. This is why I am disappointed in these drivers: still no shimmering fix for the quality mode. We are forced to go up to high quality, but we can't enable any of the other optimizations in HQ, so we end up with large performance hits in some situations.
 
Particleman said:
The quality mode still shimmers an unacceptable amount IMO. All these drivers did was make the 7800's IQ on par with the 6800's IQ. However, the 6800's quality mode shimmers an unacceptable amount as well. This is why I am disappointed in these drivers: still no shimmering fix for the quality mode. We are forced to go up to high quality, but we can't enable any of the other optimizations in HQ, so we end up with large performance hits in some situations.

Nvidia understood the message. 78.03 is a quick fix. I think we'll see some quality improvements in the coming months (can't be done in 2 days ;) ).
 
Tridam said:
Nvidia understood the message. 78.03 is a quick fix. I think we'll see some quality improvements in the coming months (can't be done in 2 days ;) ).

Oh, I think nV has "understood the message" for years now, and I really don't think nV being hard of hearing has anything to do with it. As I mentioned earlier, with nV, from one product to the next and one driver to the next, the "fix" is perpetually in the future. I think what nVidia has trouble understanding is the message, from those of us who prefer other products, that we'd like to see nVidia place far less emphasis on driver development for benchmarks and far more on supporting the kind of IQ we'd like to see when running our non-benchmark games and applications. That's certainly a simple enough message to comprehend, isn't it? But since I don't presently own nV hardware, it's indeed refreshing that I don't have to worry about it. It's been refreshing for years, actually.
 
WaltC said:
I find your definition of cheating very narrow...;)

We'll see about that.

Here's the way I define the two terms:

Optimization: Structuring the code to run most efficiently on your hardware. Generally, all pre-nV30 optimization was understood as simply getting the most out of the hardware while still delivering the best IQ possible.

So far so good.

Cheating: Structuring the code to detect when specific benchmarks are run, and then altering the code's behavior in that case so that the benchmark gives a *false impression* of the hardware's general performance, and therefore a false impression of the hardware's value.

Fine; then GPUs have been cheating with filtering since the Voodoos. If an application requests trilinear, then it should receive trilinear and not something close to it. When you set an application's textures to receive up to 16xAF samples, then they should receive up to 16xAF samples at all angles and not just at two out of four. Shall I go on?

No, it's not that my definition of optimisation and cheating is extremely "narrow"; it's that you're trying to fine-tune reality to match your own logic, that's really all. If, however, you don't notice angle-dependency or all the other aforementioned optimisations (or whatever anyone wants to call them), then I'm afraid your eyes are pretty selective as to what they want to notice and what not.

Oh, I think nV has "understood the message" for years now, and I really don't think nV being hard of hearing has anything to do with it. As I mentioned earlier, with nV, from one product to the next and one driver to the next, the "fix" is perpetually in the future. I think what nVidia has trouble understanding is the message, from those of us who prefer other products, that we'd like to see nVidia place far less emphasis on driver development for benchmarks and far more on supporting the kind of IQ we'd like to see when running our non-benchmark games and applications. That's certainly a simple enough message to comprehend, isn't it? But since I don't presently own nV hardware, it's indeed refreshing that I don't have to worry about it. It's been refreshing for years, actually.

They most certainly got the message, and hopefully they will get another reminder soon; I can assure you of that.

Pardon my persistence, but that broken record of yours has become tiresome lately, and although I won't say that you don't have a point at all, you have a tendency to exaggerate far more than is actually needed.

No hardware ever comes with perfect or issue-free drivers out of the box; it never did and never will. NVIDIA definitely needs to "cheat" right now for a couple of pathetic percentages when there's no real answer from the competition yet :rolleyes:
 
Rys said:
I have to say cheers to Chris and Ailuros who gave me a hand with some stuff in the beginning and validated some testing I was doing. Thanks chaps. ChrisAndAilurosDo3D.com would rock ;)

I don't personally recall doing all that much to be honest ;)

Great article by the way; I know I needed another reference link for my humble little write-up :)
 
I'm pretty happy with these drivers. They take care of most of the shimmering around the mipmap boundaries; there's just a little texture aliasing here and there, but it's acceptable.
I play with HQ and LOD clamp off; the LOD clamp has nothing to do with the mipmap shimmering (of course you get texture aliasing when the LOD bias is set to negative).
A good way to test is the UT2004 CBP2 Tylan map. A tiny moiré effect is still visible if you use a microscope, but that's where AA comes in. And we don't want our textures too blurry, of course :).
 
Ailuros said:
No, it's not that my definition of optimisation and cheating is extremely "narrow"; it's that you're trying to fine-tune reality to match your own logic, that's really all. If, however, you don't notice angle-dependency or all the other aforementioned optimisations (or whatever anyone wants to call them), then I'm afraid your eyes are pretty selective as to what they want to notice and what not.

I tried to point out that with R300 versus nV25, circa 2002, for instance, regardless of its angle-dependent AF, the IQ level produced by R300 when running with both AF and FSAA enabled (the only way I run those IQ features) was far superior to that produced by nV25. So my point is that your selective argument is actually a non-argument, if what we are talking about is indeed the IQ produced--as opposed to somebody's concept of IQ.

Pardon my persistence, but that broken record of yours has become tiresome lately, and although I won't say that you don't have a point at all, you have a tendency to exaggerate far more than is actually needed.

It's OK...;) But you must know that to me *you* sound rather like a broken record, yourself...:D

No hardware ever comes with perfect or issue-free drivers out of the box; it never did and never will. NVIDIA definitely needs to "cheat" right now for a couple of pathetic percentages when there's no real answer from the competition yet :rolleyes:

What you overlook, of course, is that when it comes to cheating, nVidia did so even when cheating did it absolutely no good at all except to get it caught in the act. Nothing nVidia said about nV3x or did with nV3x made it either the equal of or superior to R3x0. That was because there was nothing that could be done in that case. So why did nV bother to do it then...? It most certainly did not help them any, did it? But they did it anyway, and with gusto--even though there was no practical reason for them to have done so.

When an IHV recommends that reviewers use a shimmering IQ mode to benchmark (Quality), on the grounds that their mostly non-shimmering IQ mode (High Quality) is too good an IQ mode to be attractive to anyone (which is code for "it runs too slowly to produce the benchmark numbers we like")--well, if you can't see what's wrong with that then there's nothing further I can add.

As I hope I have made clear--my objection is not that Quality mode shimmers--I have no objection at all to that--my objection is to the fact that nVidia has specifically directed reviewers to use Quality for benchmarking--and not high quality. Why else would they do this if not from a desire to project a performance level which is *better* than their products can realize when running without shimmering? Why would nV ever assume that users would be satisfied with a shimmering display? The way it appears to me is that nV wants G7x IQ to be thought of as "better than industry norms" (ie, without shimmering) but that nV also wants people to think of G7x performance in terms of frame rates that can only occur with IQ driver settings low enough to produce shimmering in the first place. It's not what the nV drivers do that bothers me--it's how nVidia wants its products promoted that concerns me, and has for a long time.

IE, here's the kind of nV statement I'd not have argued with:

"Reviewers wishing to benchmark G7x with the best IQ possible from the product should select High Quality mode for their benchmarking. Reviewers wishing to stress frame-rate performance above IQ are best served by selecting Quality mode for the G7x, as these driver settings produce better frame rates at the expense of some IQ. In some cases it may be noted that Quality mode produces IQ equal to High Quality but with better performance; in other cases the reviewer may notice that only High Quality can produce the desired level of IQ, and in such cases only High Quality should be used for benchmarking. As always, IQ regardless of driver setting will vary among reviewers and among applications and games used in the reviewing of our G7x products."

To this I'd have had no complaint at all. Had nVidia written its recommendation like this then there'd be no need to talk about "fixes" and the like at all, would there? And of course that is my point.
 
WaltC said:
prior to nV's self-defensive PR blitz about nV30, which in the end proved utterly fruitless and not worth the time to execute it, nobody *ever* equated optimization with cheating on benchmarks, because they are two entirely different subjects altogether and always have been--and what nV says about it to the contrary makes no difference whatsoever.
Now we're into the semantic dregs. :p IIRC, ATI cheated fairly early on with WinBench or whatever ZD's early 3D benchmark was called, and I believe ATI also "optimized" for 3DM01's Nature scene. I don't believe nV was the first or will be the last to cheat on a popular test.

Another quibble, minor though it may be: nV's PR blitz may have proved utterly worthless to you personally, but I don't think you can say the same about everyone (be that a good or a bad thing).
 
Pete said:
Now we're into the semantic dregs. :p IIRC, ATI cheated fairly early on with WinBench or whatever ZD's early 3D benchmark was called, and I believe ATI also "optimized" for 3DM01's Nature scene. I don't believe nV was the first or will be the last to cheat on a popular test.

Let's please not go all the way back to who did what with what were primarily *2d* benchmarks, as it isn't really relevant here at all. Can we try and restrain commentary about both companies to 2002 and later? I can't see how talking about either company prior to 2002 is of any value here as both companies have changed dramatically since 2002 from what they were before, as has the entire 3d-chip landscape.

Another quibble, minor though it may be: nV's PR blitz may have proved utterly worthless to you personally, but I don't think you can say the same about everyone (be that a good or a bad thing).

Well, nV's anti-Microsoft, anti-FAB, anti-DX9, anti-DX9 benchmarks (remember TR:AoD?), anti-3dMark, anti-ATi, pro-nV3x PR blitz of 2002-03 resulted in nV losing market preeminence to ATi, losing the xBox contract, losing a *lot* of market good will, and ultimately ended with the statements of JHH that nV3x was both a "failure" and a "mistake."

Indeed, nVidia's real answer to ATi did not come until nVidia shipped nV40, which was sometime later. I would argue that without nV40 most likely nV would not now exist as a competitive 3d-chip manufacturer, and so it was only by shipping a product competitive with ATi's (instead of trying unsuccessfully to browbeat the markets into accepting what nV found convenient to sell--nV3x) that nV has recaptured some of the ground it lost to both nV3x and, even more importantly, to the way nV handled nV3x in the competitive landscape. IMO, nV40 and beyond owes much more to R3x0 and beyond than it owes to nV3x. In a recent interview with JHH that I believe I read here at B3d, JHH termed nV30 a "mistake" and you don't do that for a product line you consider a success, do you?

I'd also add that nV40 was the ultimate repudiation of all of the anti-3d-progress negatives nVidia PR spun in those days, much to the chagrin of all of us. Now that nVidia has been able to field a product line competitive with ATi's, the PR negativism has stopped, hasn't it? Thankfully, the message that nVidia's PR blitz of those days wasn't working was at long last understood by nV, and so we don't have to endure it anymore.

As to what you imagine nVidia "won" from all of that, it's very hard to say. What helped nV in the end was changing its track, its attitude, and its product line. I applaud that, and only want to see nVidia continue to get better at providing the products its markets want as opposed to trying to pitch Snake Oil, if you know what I mean...;)
 
WaltC said:
I tried to point out that with R300 versus nV25, circa 2002, for instance, regardless of its angle-dependent AF, the IQ level produced by R300 when running with both AF and FSAA enabled (the only way I run those IQ features) was far superior to that produced by nV25. So my point is that your selective argument is actually a non-argument, if what we are talking about is indeed the IQ produced--as opposed to somebody's concept of IQ.

Performance would be a lot lower on R300 if it had had non-angle-dependent AF. I chose it back then, though, because I turned a blind eye to the angle-dependency in favour of the high-sample, sparsely sampled MSAA it offered. You can't have it all at once, that's true, but that doesn't change in the slightest any point concerning optimisations/cheats or whatever you personally choose to call them.

Sadly enough, no guidelines for implementing AF are possible, which simply means that almost anything seems to be allowed these days, and it's only getting worse. Feel free to convince yourself that there's not a single sign of shimmering, moiré patterns, texture aliasing, MIPmap banding and what not on anything R3xx/4xx. These days it's merely a matter of being "slightly" better, depending on the occasion and the application, between different GPUs and nothing else.

It's OK...;) But you must know that to me *you* sound rather like a broken record, yourself...:D

I mean to, otherwise I wouldn't even bother replying to your tireless novel-sized drivels with equally sized drivels.


What you overlook, of course, is that when it comes to cheating, nVidia did so even when cheating did it absolutely no good at all except to get it caught in the act. Nothing nVidia said about nV3x or did with nV3x made it either the equal of or superior to R3x0. That was because there was nothing that could be done in that case. So why did nV bother to do it then...? It most certainly did not help them any, did it? But they did it anyway, and with gusto--even though there was no practical reason for them to have done so.

Texture filtering optimisations have nothing to do with shader replacements/cheats in popular benchmarks.

When an IHV recommends that reviewers use a shimmering IQ mode to benchmark (Quality), on the grounds that their mostly non-shimmering IQ mode (High Quality) is too good an IQ mode to be attractive to anyone (which is code for "it runs too slowly to produce the benchmark numbers we like")--well, if you can't see what's wrong with that then there's nothing further I can add.

There we go again. They had a high quality mode in their drivers before the competition did, in case you didn't notice. The real purpose of high quality is to give the user the chance to switch all optimisations off, something ATI followed up on only after intense protest from its user base.

Do you think I'd have a problem if both IHVs switched all their filtering optimisations off in their drivers? The reason neither will is that they need every extra half frame to win benchmarks, and that's a reality that goes for both of them, and for ATI's default driver settings, with which (i.e. with optimisations enabled) the majority of benchmarks on Radeons get conducted. I'd bet you'd love to see GFs being tested without optimisations and Radeons with optimisations, but it's not going to happen.

As I hope I have made clear--my objection is not that Quality mode shimmers--I have no objection at all to that--my objection is to the fact that nVidia has specifically directed reviewers to use Quality for benchmarking--and not high quality.

You're off topic in many regards in this thread. While I'm trying to find out what exactly is wrong in the drivers for my hardware and how it can be improved, I have to read through all the side-pedalling noise, where practically anything negative you can remember from the past year gets thrown into the mix. I said it and I'll say it again: if I were called on to write a reviewer's guide today, keeping in mind the situation and trends in this market, I too would recommend that they test optimisations vs. optimisations and not Quality vs. High Quality.

Have you actually ever read the entire benchmarking guide from NVIDIA?

To this I'd have had no complaint at all. Had nVidia written its recommendation like this then there'd be no need to talk about "fixes" and the like at all, would there? And of course that is my point.

An alternative text in a benchmarking guide doesn't automatically make side-effects go away. However, there are a lot of things I'd like to see fixed or added, irrespective of what kind of GPU I own; sadly, that's not always possible.
 
Ailuros said:
Sadly enough, no guidelines for implementing AF are possible
Objectively speaking, no, but benchmarking is a great way to spur improvements ... ideally we would use methodical subjective testing on video cards, but that just isn't an option.

So even though there is no absolutely correct way of doing things, picking a very good way and measuring deviations from it, while ultimately a flawed way of testing, might in the short term actually be a good idea ... if it were to be adopted by review sites. Putting a number on quality is the best way to make it a bigger factor in the development of video cards.

For straightforward anisotropic texture mapping, footprint mapping without mipmaps and with a Gaussian PSF is close enough to correct to serve as a reference.
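
To make the idea concrete, here's a minimal sketch of what such a reference filter could look like: direct footprint mapping in texel space with a Gaussian PSF and no mipmaps. This is only an illustration under my own assumptions (the function name, the numpy backend, the 3-sigma cutoff and the wrap addressing are all mine), not anyone's actual driver or MesaGL code.

Code:
import numpy as np

def reference_aniso_sample(texture, u, v, dudx, dvdx, dudy, dvdy, sigma=0.7):
    """One reference-filtered sample at texel-space position (u, v).

    texture : float array of shape (H, W) or (H, W, C)
    u, v    : sample centre in texel coordinates
    du*/dv* : texel-space derivatives of (u, v) w.r.t. screen x and y
              (the Jacobian of the screen-to-texture mapping at this pixel)
    sigma   : width of the Gaussian PSF, in screen pixels
    """
    h, w = texture.shape[:2]
    # Jacobian maps screen-space offsets to texel-space offsets; its inverse
    # maps a texel offset back into screen space, where the PSF is defined.
    J = np.array([[dudx, dudy],
                  [dvdx, dvdy]], dtype=np.float64)
    Jinv = np.linalg.inv(J)

    # Conservative texel-space bounding box for a 3-sigma screen-space footprint.
    extent = 3.0 * sigma * np.abs(J).sum(axis=1)
    u0, u1 = int(np.floor(u - extent[0])), int(np.ceil(u + extent[0]))
    v0, v1 = int(np.floor(v - extent[1])), int(np.ceil(v + extent[1]))

    acc, wsum = 0.0, 0.0
    for tv in range(v0, v1 + 1):
        for tu in range(u0, u1 + 1):
            # Offset of this texel centre from the sample point, mapped back
            # into screen space so the Gaussian is evaluated there.
            d_tex = np.array([tu + 0.5 - u, tv + 0.5 - v])
            d_scr = Jinv @ d_tex
            weight = np.exp(-0.5 * float(d_scr @ d_scr) / sigma ** 2)
            acc = acc + weight * texture[tv % h, tu % w]  # wrap addressing
            wsum += weight
    return acc / max(wsum, 1e-12)

It's hopelessly slow compared to any hardware approximation, but that's the point of a reference: you run it offline over a test scene and then measure how far the hardware's output deviates from it.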
 
Do you know how long I've been begging here and there for somebody to find the time to write a sophisticated texel-analyzing application that could eventually shed some more light on issues like that? Granted, most folks don't have the time, and sadly, as a layman, I don't qualify for such tasks.

In short: any proposal for applications/games and a specific methodology would be highly welcome from anyone, as would, even more so, an advanced filtering-testing application.
 
Sorry, won't be me either ... ideally someone would implement the anisotropic filtering I suggested in MesaGL; swshader might work too, but I don't know how many games it can run. Then you could compare software output to hardware with an objective quality metric (MSSIM is a decent one IMO). When comparing hardware you'd have to make sure everything was using the same features; this is probably easier with OpenGL.
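
For what it's worth, here's a rough sketch of the comparison step being described: take a software-rendered reference frame and an uncompressed hardware capture of the same frame, and report mean SSIM between them. The K1=0.01 / K2=0.03 constants are the usual ones from the SSIM paper; the file names, the greyscale conversion and the numpy/scipy/imageio stack are my own assumptions, not anything specified in the thread.

Code:
import numpy as np
import imageio.v2 as imageio
from scipy.ndimage import gaussian_filter

def mean_ssim(ref, test, data_range=255.0, sigma=1.5):
    """Mean structural similarity (SSIM) between two greyscale images."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2

    # Local statistics from a Gaussian window.
    mu_r = gaussian_filter(ref, sigma)
    mu_t = gaussian_filter(test, sigma)
    var_r = gaussian_filter(ref * ref, sigma) - mu_r ** 2
    var_t = gaussian_filter(test * test, sigma) - mu_t ** 2
    cov = gaussian_filter(ref * test, sigma) - mu_r * mu_t

    ssim_map = ((2 * mu_r * mu_t + c1) * (2 * cov + c2)) / \
               ((mu_r ** 2 + mu_t ** 2 + c1) * (var_r + var_t + c2))
    return float(ssim_map.mean())

def load_grey(path):
    """Load an image and collapse RGB(A) to a simple luminance channel."""
    img = np.asarray(imageio.imread(path), dtype=np.float64)
    return img[..., :3].mean(axis=-1) if img.ndim == 3 else img

if __name__ == "__main__":
    # Hypothetical file names: a software-reference render and a hardware grab.
    reference = load_grey("reference_software.png")
    capture = load_grey("capture_hardware.png")
    print("mean SSIM:", mean_ssim(reference, capture))

A score of 1.0 means the two captures match exactly; the further the hardware's filtering deviates from the reference, the lower the number drops, which is exactly the kind of "number on quality" being asked for above.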
 
Let me be clear, first of all: I still harbor more goodwill toward ATI than NVIDIA, but nowhere near the gulf that opened thanks to NV's "handling" of NV30. And I have to give NV credit for innovating far more since R300 than ATI has, at least in terms of products in the marketplace (Xenos and R520 aren't out yet). That being said, Walt, I think you're carrying on too strongly about the whole NV30 mess, mainly because ATI hasn't acted the saint since, either.

WaltC said:
Can we try and restrain commentary about both companies to 2002 and later?
Fine, so ATI lied about "trylinear," and in writing, not in loosely-worded and generously-interpreted interviews. But I'm not sure what we gain from ignoring what happened before 2002. Because ATI has new leadership, and NV has the same? Again, it seems to me ATI tried to slip trylinear past us--post-NV30, no less. Both companies will do what they can to win more money when they hold the upper hand, and wave their hands when they've got nothing to hold. They've proven it repeatedly. Yes, I also feel that nVidia sank lower with NV30 than ATI ever has, but not low enough for me to lose sight of ATI's ethical missteps.

Still, I didn't realize 3DM01 was mostly 2D.

Well, nV's anti-Microsoft, anti-FAB, anti-DX9, anti-DX9 benchmarks (remember TR:AoD?), anti-3dMark, anti-ATi, pro-nV3x PR blitz of 2002-03 resulted in nV losing market preeminence to ATi, losing the xBox contract, losing a *lot* of market good will, and ultimately ended with the statements of JHH that nV3x was both a "failure" and a "mistake."
Here's where we differ, and it's possible we do so because I'm not arguing with all the facts. I can agree that nV took all those stances. I can also agree with your looking down on them. I can't agree, however, that those stances resulted in all those problems. Some of those stances looked to me to be reactive, rather than proactive. They probably lost the Xbox 2 bid because they played a game of chicken with MS, a company that typically wins such games. Then, knowing that they'd lose Xb2 anyway and that they were holding a weak hand with NV30, they decided to milk their profit margins on the remaining Xbox sales simply because things really couldn't get any worse. They lost market preeminence and good will due to inferior product, not because of inferior actions or posturing. Finally, I'm not sure what JHH's statements mean other than they reflect the truth--and that's probably not something you'd hold against him.

Indeed, nVidia's real answer to ATi did not come until nVidia shipped nV40, which was sometime later.
Agreed. But couldn't one say the same of R520? Except currently nV doesn't really outperform ATI in next-gen features so much as they're the only ones who offer them, useful or not: SM3, HDR, and SLI (as useless as I find the latter, from a practical perspective).

I would argue that without nV40 most likely nV would not now exist as a competitive 3d-chip manufacturer
I'm not sure they'd have gone under, but NV40 did seem to save a significant portion of their bacon.

IMO, nV40 and beyond owes much more to R3x0 and beyond than it owes to nV3x.
Hmmm. I'm going to answer without knowing much about GPU architecture or engineering at all. Yes, I suppose NV40 owes much to R300 in that NV temporarily took a step back in terms of fragment shader complexity and went for more, simpler pipes on a more established (and thus potentially costlier) manufacturing process. Then again, they leaped beyond R300 and all subsequently released ATI GPUs by separating the ROPs from the fragment pipes, by offering superior stencil performance (double Z and DST/PCF), and by still offering (at least theoretically, though less so than with NV3x) more capable shaders, both vertex and fragment (SM3 and HDR), and now TSAA (minor improvement though it is). And yet G70 goes back to more powerful pipes with its dual ALUs per fragment pipe.

Yes, if R520 is expected to compete with G70 with only 16 pipes, it's probably even more powerful per pipe--but it ain't here yet.

In a recent interview with JHH that I believe I read here at B3d, JHH termed nV30 a "mistake" and you don't do that for a product line you consider a success, do you?
No argument here. Everyone agrees NV30 was a misstep in view of R300, and JHH can certainly say it was a mistake in that it cost him profit and marketshare.

Now that nVidia has been able to field a product line competitive with ATi's, the PR negativism has stopped, hasn't it? Thankfully, the message that nVidia's PR blitz of those days wasn't working was at long last understood by nV, and so we don't have to endure it anymore.
I'd argue (perhaps optimistically :p) that NV's PR department isn't staffed by idiots, but by calculating realists that realize that soft and fluffy PR is more effective (and far easier) when you have a hardware advantage, however slight. If they'd kept being sneaky hard-asses, surely more grudges would've been held.

As to what you imagine nVidia "won" from all of that, it's very hard to say. What helped nV in the end was changing its track, its attitude, and its product line. I applaud that, and only want to see nVidia continue to get better at providing the products its markets want as opposed to trying to pitch Snake Oil, if you know what I mean...;)
Not won, but managed to lose less. They kept a stiff, if biting, upper lip. IMO, the change came solely from their engineers in the form of their NV4x product line. Everything else follows from that, and it's obviously not something you can change at the drop of a hat. I'm against snake oil, too, but perhaps I have a more jaded--dare I say, realistic--view of the market after following the events of the past few years (yes, just post-2002). I still can't forgive NV for being schmucks, but I can at least understand it as it relates to their employees and shareholders, and can expect more of the consumer as well as the salesman.

I think we agree more than not, but you're holding onto your NV30-era grudge longer than I have. To put it in perspective, ultimately we're talking about a purchase that tops out at around $500 for the vast majority of people. I'll save more of my indignation for, say, car makers who continue to pump out one SUV after another, and one higher-horsepower engine after another. Wait, what's that--another SUV on TV shown filling up on post-Katrina, $3++ unleaded?

Sweet deity, I can't believe I wrote the whole thing. I guess I keep hoping to solve this debate once and for all. From now on, I take the road of the resigned realist: no more replies that are longer than my screen.
 
This has nothing to do with the debate here, but if Microsoft actually negotiated with NVIDIA for the XBox2 at all (and it wouldn't surprise me if they didn't), then it's a loss only in a relative sense with Microsoft, and a design win with SONY. Both ATI and NVIDIA sold IP for the coming consoles.

***edit: second OT...

I'd argue (perhaps optimistically ) that NV's PR department isn't staffed by idiots, but by calculating realists that realize that soft and fluffy PR is more effective (and far easier) when you have a hardware advantage, however slight. If they'd kept being sneaky hard-asses, surely more grudges would've been held.

I can feel a pleasant change for the better since the last year or so; that still doesn't mean that any PR from any company is what I'd call "innocent".
 

Having read only part of this thread, I think it's safe to say that optimisations in many forms will always be part of these companies' strategies to improve performance and try to hold the upper hand (pause for breath). I do think it's our place as consumers to hold up the magnifying glass, scrutinise what we get, and make our voices heard when we're not satisfied.

As for all the NV3x rants resurrected: why?
I can see people having issues with Nvidia over the past, but is that really what it's about now? Everyone makes mistakes; some take the noble stance of admitting them. For big companies traded on Wall Street I don't think that's a real choice; you tough it out and smile till you have some leverage.
Obviously that sucks for consumers and fans waiting to buy into whatever PR we're fed (who here bought an NV3x card?), but hey, that's life.

As for today, Nvidia have products uncontested by ATi (which is a shame and pretty lame for us as consumers). Any issues with Nvidia drivers should be shouted at them by their own customers; I don't see what ATi can currently say about Nvidia without looking sheepish.

Anyways, as with most things, I'm guessing Nvidia will fix this problem; I imagine the latest beta drivers satisfied some. I guess some things will also never be fixed (like the strange video cutscene playback on NV45), but it's up to us to complain, bitch, and mail/spam the companies until they supply us with a reasonable solution, or yell mutiny otherwise.

Lastly (as if there is any sense or order to this post), I do hope ATi whips the balls off Nvidia with R520; that way we get more great competition and PRICE DROPS (for @!@!@*!@!&@@&!&@ sake). I want to see prices in last-gen range and MASS availability...
 
HaLDoL said:
Well look what we've got here:

Texture shimmer... ATI Edition
http://www.rage3d.com/board/showthread.php?t=33827482
I was definitely seeing something when I was doing my X800GT roundup the other day, but the videos I recorded didn't show what I was seeing inside the game - I must've spent 6 hours running through section after section of HL2 and CS:S (the two games that I really noticed the problems with) and I couldn't come to any kind of conclusion as my findings weren't repeated in uncompressed video captures from FRAPS. Thus, I wasn't happy enough to run with it as a part of that review. There's definitely something not quite right, and it wasn't there in Catalyst 5.7.
 
Is someone from 3dcenter / computerbase / hardware.fr looking at the ATi shimmering, or is it not interesting since it's ATI and not Nvidia?
 