NVIDIA are NOT going to explain about 2xAA on the FX...

McElvis

Regular
Taken from [H]OCP -

NVIDIA has informed us this afternoon that their GeForceFX does in fact handle 2XAA and QuincunxAA in a totally different fashion than the remainder of their Antialiasing solutions found on their GFFX card. As verified last week, our 2XAA screenshots showing IQ were not representative of actual in-game images. The rest of our images showing 4XAA and better are correct in their representation. NVIDIA did tell us that no more in-depth explanations of their AA would be forthcoming as they intend to protect the specifics as a proprietary technology that they do not want their competitors to have access to.

NVIDIA appear to be keeping quiet on a lot of specifics of the GF FX :?
 
Hmm...this is like building a car engine that is only half as good as anything else on the market, and then trying to "protect" your proprietary technology. :rolleyes:

It might be more accurate if they renamed 2X AA and Quincunx to 1X AA.
 
Seems totally dopey -- I am sure that 2x AA is doing something on the GFFX that's not being picked up in the screenshots, and I hardly see how explaining the problem would hurt nVidia, but there you go... Most things about the GFFX launch (note: launch, not necessarily the card itself) seem to have been so stuffed up, why should this be any different?

Lucien.
 
Damage control at its best. Who wants their crappy Quincunx lookalike or their 2x AA? Surely ATI doesn't, as they blew past them with 8xAA/16xAF. Nvidia is putting spin on the disaster that the GFFX is. I hug my GF2, but this GFFX is just ridiculous, IMO.
 
Nagorak said:
It might be more accurate if they renamed 2X AA and Quincunx to 1X AA.
It'd be best if they gave 2x and quincunx one of those fancy marketing names. I suggest BlurVision XtremeFX.
 
nVidia doesn't want it known that their "2xFSAA and Quincunx" numbers are frauds that don't involve FSAA at all--but involve only post-filter stuff--which is why the screenshot software doesn't pick it up. There will be some screenshot software coming out (you can bet) that will pick it up, just as we saw happen with 3dfx's V3 when it launched. But the fact is that it isn't FSAA at all--not as anyone, including 3dfx, has ever done it (3dfx never used post-filter manipulation to fake FSAA; it used the post filter for other things, unrelated to FSAA).

Basically, nVidia is telling you that you either let them shove the product down your throat, or you walk away from it, but they aren't going to admit to trying to pull a fast one. They'll call it a "secret" instead....oh, how pathetic...*chuckle*...this just gets more pathetic as time goes on. I guess nVidia's just been spoiled by having a near monopoly in the performance 3D chip market for so long they don't even know what the truth is anymore, and they especially don't care about it. To be quite honest, I am more amazed each day at how low nVidia is willing to sink over this thing.

Edit:

nVidia to gaming public:

Suck It Down!

:LOL:
 
Now this is getting a little ridiculous. Maybe I'm missing something, but is there any reason to believe that doing the color blend in a post-filter will necessarily result in an image any different than if the blend was done in the framebuffer? It seems to me this is a simple trade-off of latency vs. framebuffer size, with no IQ impact either way.
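
To spell out the trivial case, here's a toy sketch (Python, purely illustrative, obviously nothing to do with Nvidia's actual silicon) showing that a 50/50 average of two stored samples gives the same pixel whether you resolve it into the framebuffer at write time or blend it in a post filter at scan-out:

Code:
# Toy comparison, not Nvidia's hardware: two stored subsamples per pixel,
# resolved either into the framebuffer at write time or in a post filter
# at scan-out time. The arithmetic is identical, so the image is identical.

def blend(a, b):
    # simple 50/50 average of two RGB samples
    return tuple((ca + cb) // 2 for ca, cb in zip(a, b))

# hypothetical per-pixel subsample colors (RGB tuples)
samples = [
    ((255, 0, 0), (0, 0, 255)),      # edge pixel: subsamples hit different triangles
    ((40, 200, 40), (40, 200, 40)),  # interior pixel: both subsamples see the same texel
]

# Option A: resolve into the framebuffer as the pixels are written
framebuffer = [blend(s0, s1) for s0, s1 in samples]

# Option B: store both samples and blend them only at scan-out ("post filter")
scanout = [blend(s0, s1) for s0, s1 in samples]

assert framebuffer == scanout   # same picture either way; only memory use differs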

Yes, we have the subjective judgement of some reviewers that even in motion GFfx's 2xMSAA does not look as good as R300's, but for all we know that could be just the lack of gamma correction. Or maybe there is some further reason why GFfx's 2xMSAA is inferior to R300's; maybe Nvidia is somehow sacrificing IQ for performance. Point is, we don't know yet. Why not wait a couple measly days for Nvidia or someone else to release a program that can take proper screenshots of GFfx's 2xAA and talk about it then?

Jeez, it's not like there aren't enough legitimate issues with GFfx to complain about that we need to go around looking for problems in places where we don't have all the facts yet... :rolleyes:
 
Dave H said:
Now this is getting a little ridiculous. Maybe I'm missing something, but is there any reason to believe that doing the color blend in a post-filter will necessarily result in an image any different than if the blend was done in the framebuffer? It seems to me this is a simple trade-off of latency vs. framebuffer size, with no IQ impact either way.

Yes, we have the subjective judgement of some reviewers that even in motion GFfx's 2xMSAA does not look as good as R300's, but for all we know that could be just the lack of gamma correction. Or maybe there is some further reason why GFfx's 2xMSAA is inferior to R300's; maybe Nvidia is somehow sacrificing IQ for performance. Point is, we don't know yet. Why not wait a couple measly days for Nvidia or someone else to release a program that can take proper screenshots of GFfx's 2xAA and talk about it then?

Jeez, it's not like there aren't enough legitimate issues with GFfx to complain about that we need to go around looking for problems in places where we don't have all the facts yet... :rolleyes:

Apparently, if what [H] reports is accurate, we aren't going to know, because nVidia is now claiming it as a sort of "trade secret." Attack my hypothesis all you like but clearly whatever nVidia is doing at < 4x FSAA is
(a) very poor on the IQ scale
(b) very different from what they're doing at 4x and up

Add this to the fact that nVidia's been pushing FSAA x2 for the past couple of months, at the expense of every other FSAA mode their card does, and you get something that doesn't smell very good (aside from the way it looks.) Fine, you think it's minor--that's your right. Don't ding me because I have a different opinion.
 
Ya, I can see it now: ATI, S3, and Matrox are all gnashing their collective teeth over Nvidia's incredible, new, and super-secret FSAA solution. :rolleyes:
 
indio said:
Ya, I can see it now: ATI, S3, and Matrox are all gnashing their collective teeth over Nvidia's incredible, new, and super-secret FSAA solution. :rolleyes:

What I see instead are web site reviewers mindlessly running "2xFSAA" benchmark scores on their sites without paying attention to image quality (present company of [H] and AnandTech excluded), and saying:

"Oh, wow! Look how the GF FX demolishes the 9700P at 2xFSAA!"

I'm quite confident this is *precisely* why nVidia's been pushing the 2x FSAA thing for months. *chuckle* You're acting as though it's *me* pushing it--which just isn't so. Had nVidia not been pushing it so hard, I doubt it would have interested me at all. It's just one more nail in the coffin of the GF FX, of course, and compared to other things it isn't that important, I have no trouble conceding. But it is an attempt to mislead anyone with enough air between his/her/its ears to buy into it, and that kind of pisses me off. Sorry if you still can't see it from my point of view, but that's life I guess... 8)
 
What I see instead are web site reviewers mindlessly running "2xFSAA" benchmark scores on their sites without paying attention to image quality (present company of [H] and AnandTech excluded), and saying:

"Oh, wow! Look how the GF FX demolishes the 9700P at 2xFSAA!"

If that's what you see, you need new glasses. Every single one (of course there's only 5 of them) of the GFfx web reviews has concentrated almost exclusively on 4xMSAA when doing AA benchmarking. Yes, Nvidia has pushed 2xMSAA in their own comparisons and in that MaxPC preview of a beta card, but none of the independent reviews has fallen for that "trick". In fact, the only two full-blown reviews to do any 2xMSAA benchmarks at all were Anand and [H], both of which published screenshots of 2xMSAA which make it look worse than it actually is.

To suggest that the meagre "promotion" of 2xMSAA that Nvidia has engaged in will influence anyone's buying decision or make anyone more likely to play at 2xAA for that matter is really silly. The only people who would even be exposed to those pre-release 2xAA benchmarks--i.e. the sort of people who follow the latest news on pre-release hardware--are exactly the sort of people who will have read all the reviews on release, and know all about the 2xAA issue. (Well, they might not be aware of the fact that the screenshots are lower quality than the actual output.) No one is in danger of being fooled here. The worst thing that can be said about it (IMO) is that it may have misled people into waiting for GFfx instead of buying a 9700 Pro a month ago.

Apparently, if what [H] reports is accurate, we aren't going to know, because nVidia is now claiming it as a sort of "trade secret."

Nvidia is claiming the method being used is a sort of trade secret. Yes, this is incomprehensibly lame. But it certainly in no way precludes Nvidia from releasing a utility to allow 2xMSAA screenshots to be taken which will match what actually shows up on-screen--which, after all, is the important thing here. I would be very surprised if Nvidia does not do this, because it's obviously possible (as with V3), and ostensibly in Nvidia's interests. If it's confirmed that Nvidia refuses to help with proper screenshots, then I will take your paranoid view on the matter. As the much more likely outcome is that Nvidia releases such a utility in the near future and certainly in time for retail card reviews at the end of the month, I think it's much smarter to wait until then so we can see what 2xMSAA on GFfx actually looks like before bashing it as some Communist plot. (Crazy idea, I know.)

Attack my hypothesis all you like but clearly whatever nVidia is doing at < 4x FSAA is
(a) very poor on the IQ scale
(b) very different from what they're doing at 4x and up

(a) No, it's not clear. Despite what you've implied [H]ocp says about on-screen 2xMSAA quality, what they actually say is that actual in-game IQ is "certainly not as lacking" as they claimed in the initial review. Albeit "not up to par" with R300 2xMSAA. Again, such a characterization is completely consistent with the only difference between the two being that R300's is gamma corrected and GFfx's is not. Or maybe it's more than that. Point is, absolutely nowhere does [H] imply that the IQ is "very poor" or anything close to it.

(b) Actually, there are some indications that it is at least somewhat similar to what they're doing at 4xMSAA. In particular, check out the following bench in the [H] review:
[benchmark graph from the [H] review showing 2x and 4x AA scores]

Notice how both 2x and 4xMSAA on the GFfx are getting extremely low scores, presumably meaning they are forced to do AGP texturing because they've run out of memory. The likely explanation for this is that both 2x and 4xAA have a larger framebuffer footprint because at least some sample blending is not done until the post-filter; since R300 blends its samples in the framebuffer, it has a smaller memory footprint and thus everything fits in the on-card DRAM.
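
If you want a rough feel for why that could matter, here's my own back-of-the-envelope arithmetic (nothing measured from the card; resolution, bit depths, and buffer layout are all my guesses): keeping every sample around until scan-out means the buffers grow with the sample count, and the memory left over for textures shrinks accordingly.

Code:
# Back-of-the-envelope buffer sizes, all numbers my own guesses (1600x1200,
# 32-bit color, 32-bit Z), just to show how fast unresolved samples add up.

def buffer_mb(width, height, bytes_per_pixel, samples=1):
    return width * height * bytes_per_pixel * samples / (1024 ** 2)

w, h = 1600, 1200
for samples in (1, 2, 4):
    color = buffer_mb(w, h, 4, samples)   # one color value kept per sample
    depth = buffer_mb(w, h, 4, samples)   # one Z value kept per sample
    front = buffer_mb(w, h, 4)            # displayed front buffer
    total = color + depth + front
    print(f"{samples}x samples kept unresolved: ~{total:.0f} MB before any textures")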

BTW - when looking for the above bench, I noticed that [H] has already updated their review with proper 2xMSAA screens. And the image quality is certainly not "very poor", although it is indeed noticeably worse than R300's 2xMSAA. To my non-expert eyes, it is abundantly clear that the GFfx is doing something very close to normal 2xAA, and it appears that the only difference is, indeed, the lack of gamma correction.
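
For what gamma correction alone can do to an edge, here are toy numbers (assuming a 2.2 display gamma; I'm not claiming this is exactly what R300 does): averaging a white and a black sample in linear light gives a noticeably lighter pixel than naively averaging the stored gamma-space values, which is plenty to make two otherwise identical 2xAA modes look different.

Code:
# Toy gamma-corrected vs. plain 50/50 blend of two edge samples.
# Assumes a display gamma of 2.2; purely illustrative numbers.

GAMMA = 2.2

def to_linear(c):                 # 0..255 gamma-space -> 0..1 linear light
    return (c / 255.0) ** GAMMA

def to_gamma(l):                  # 0..1 linear light -> 0..255 gamma-space
    return round((l ** (1.0 / GAMMA)) * 255.0)

white, black = 255, 0             # an edge pixel half covered by a white polygon

naive = (white + black) // 2                                        # blend in gamma space
corrected = to_gamma((to_linear(white) + to_linear(black)) / 2.0)   # blend in linear light

print(naive, corrected)           # 127 vs. ~186: the corrected edge reads much brighter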

http://hardocp.com/article.html?art=NDIxLDQ=
 
The only people who would even be exposed to those pre-release 2xAA benchmarks--i.e. the sort of people who follow the latest news on pre-release hardware--are exactly the sort of people who will have read all the reviews on release, and know all about the 2xAA issue.

I think you are giving Maximum PC's readers too much credit.
 
I think you are giving Maximum PC's readers too much credit.

Good point. I forgot about the poor saps who would actually read the dead tree version. :oops:

But I bet most of those who rushed to see the benches when they were posted online have since rushed to read the full GFfx reviews.
 
WaltC,

Let me explain my understanding and why your assumptions make no sense to me:

2x multisampling is done by using 2x the sampling-position resolution and 1x the texture resolution. For illustration, think of it as rendering exactly the same scene "twice" (the same pixel in each scene is done at the same time), but with 2 different subsample positions determining the color output of each "scene". Since the texture coordinate resolution is the same as the output resolution, different colors will only occur at edges, where each subpixel "hits" a different triangle sampling a different texture for that coordinate. In this way, there is data for extra positions at edges, but no data for extra positions within textures.

This is 2x multisampling. No blur.
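
A toy sketch of that idea (Python, purely illustrative; the "interior" and "edge" pixels below are hand-picked, not computed from real geometry): both subsamples reuse the single texture lookup, so the resolved color only changes where the subsamples land on different triangles.

Code:
# Toy 2x multisample resolve: one texture lookup per pixel, two coverage samples.
# Pixel classification below is hand-picked for illustration only.

def resolve_2x(sample0, sample1):
    return tuple((a + b) // 2 for a, b in zip(sample0, sample1))

tex = (200, 60, 60)        # the single texture color sampled for this pixel
background = (0, 0, 0)

# interior pixel: both subsamples fall inside the same triangle, so both get
# the same texture color -> output unchanged, no blur inside textures
print(resolve_2x(tex, tex))            # (200, 60, 60)

# edge pixel: one subsample falls off the triangle onto whatever is behind it,
# so the output is blended -> smoothing happens only at the edge
print(resolve_2x(tex, background))     # (100, 30, 30)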

If I understand correctly, Quincunx takes this further by blending, with arithmetic weighting, the colors of additional sample outputs into the final output of a given pixel. At an edge, this produces more gradients... and my understanding is that, with the subpixel-position model Quincunx uses, this increases the accuracy of the edge blend by weighting the samples to more accurately represent the color the "center" of the current pixel should be, based on the extra stored position data. The problem is that when this is done within a texture (not at an edge), there is no extra position data, so it just ends up bleeding the colors of 3 extra samples into the one texture color it should have been using in the first place.

In both cases, color elements are taken, math is performed, and a new color element is generated. The problem with Quincunx is not that it is doing its "blur" step in a "post filter", but that when doing it in a texture (i.e., where there is no extra sample position data) it is just spreading color information out based on the assumption it is reconstructing positional information..."blurring".
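
Here's the same point as a toy blend. The weights are from my memory of the published Quincunx description (1/2 for the pixel's own sample, 1/8 for each of the four corner taps shared with neighbours), so treat them as an assumption rather than gospel:

Code:
# Toy quincunx-style 5-tap blend: centre sample at 1/2, four corner taps at 1/8.
# Weights assumed from memory of the published description.

def quincunx(center, corners):
    return tuple(round(0.5 * c + sum(0.125 * k[i] for k in corners))
                 for i, c in enumerate(center))

bright, dark = (255, 255, 255), (0, 0, 0)

# inside a crisp checkerboard texture there is no extra positional data: the
# corner taps just pull in neighbouring texels and the pattern collapses to grey
print(quincunx(bright, [dark, dark, dark, dark]))   # (128, 128, 128) -- the "blur"

# at a polygon edge the corner taps really do carry new coverage information,
# so the identical math produces a legitimately smoother gradient instead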

It does not matter if you do the blend in a post filter or not, as long as the data used and the math performed do not introduce error. Since 2x mode for the GF FX is showing a performance hit, it makes sense that more data is being written (with color compression, this need only take extra bandwidth at edges). Since screenshots fit the idea of the sub pixel position being sampled differently than non AA (take a look at the Anandtech shots again), it makes sense that what we are seeing is one set of sub pixel position data before blend. Since the math for doing simple 2x MS AA is less involved than the math for the Quincunx blur step, there is no reason that I see to assume that 2x is not being done and the Quincunx blur is.

Post filter on Voodoo->increase image quality.
Post filter for Quincunx->decrease image quality.
Why assume the worst here right now?

We don't know that it isn't blurring or finding some other way to look bad at 2x mode, but we also have no reason (AFAIK) to disbelieve that it is simply doing 2x multisampling when settings say to do so (it is still just math based on data, either way).

EDIT: Boy, I'm slow...that's what I get for watching TV after starting the post. :-?
 
Hehe, this is silly. If you actually bothered to look at the screenshots, it would be obvious that it is doing 2x AA, and not just blurring the picture...

The reason the original pictures looked like no AA was that the frame-grabbing software only read one of the 2 samples for each pixel. Wasn't there a similar problem with grabbing AA shots from Voodoo5 cards?
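
Pure guesswork about what the capture path actually does, but the failure mode would look something like this: a grabber that reads the stored samples and keeps only the first one of each pair produces an image with no visible AA, even though the blended post-filter output on the monitor is fine.

Code:
# Guesswork illustration of the screenshot problem: the capture tool reads the
# raw sample buffer and keeps only sample 0, while the monitor gets the blend.

def blend(a, b):
    return tuple((x + y) // 2 for x, y in zip(a, b))

edge_pixel = ((255, 0, 0), (0, 0, 0))    # the two stored samples for one edge pixel

screenshot = edge_pixel[0]               # what a naive buffer grab might capture
on_screen = blend(*edge_pixel)           # what the post filter actually displays

print(screenshot, on_screen)             # (255, 0, 0) vs. (127, 0, 0)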
 
I wasn't aware that the FX's FSAA was just not showing up in the shots (not that it matters, since I'm not in the market for it either way, but that is useful info).
 
I'm not quite sure why so many people started to say that it wasn't doing 2xAA at all and that it wasn't a screen-capturing error. I had pointed out a few times that the Need for Speed pictures showed 2xAA and QC fine :|. Presumably, if NFSHP2 has a screenshot function, it's taking the screen from later on.
 