Has Nvidia Found a Way to Get a Free Pass in AA Comparisons?

Simon F said:
I never said it was exactly the same as an Accumulation buffer or else I would have said so.


Ok, let's look at it this way: what in your original post did you say to me that would have caused me to think you didn't think it was identical to a software accumulation buffer scheme?

I can't read your mind (from this distance), you know...

It was a way of re-ordering 'accumulation buffer style' AA passes so that they could be done in parallel instead of serially.

It was also a way to do it directly on chip in dedicated hardware. It just seems to me that you are making the error of confusing the principles on which the T-buffer was based with its specific implementation in hardware as found in the VSA-100.

There may have been some automatic offsets (for AA), but they could also do DOF/motion blur effects (not brilliantly well, though). That therefore indicates that the N rendering engines could operate independently, or at least that the data could be disabled for particular engines.

I don't really care about other, peripheral uses for the T-buffer that 3dfx postulated enterprising developers might create. The main, and best, use of the T-buffer was FSAA in the V5; everything else was a sideshow.
 
Xmas said:
Of course it's important for speed how you store the samples, but not for the result, which is what I was trying to express. We can be quite sure NV3x uses a tiled framebuffer because of color compression.

So, the display buffer is tiled, and something like GetPixel() in a poor man's screen grab would only return the left/topmost sample of each pixel? Given that the result looks like a regular image of the same dimensions, just not antialiased, that would indicate planar storage with only one plane being returned; otherwise there would be rather blatant noise, especially if you were using something better than GetPixel(). There really is no need for adjacent storage even given a tiled setup; you just want to keep your RAM pages open properly. The left/topmost sample of tile one could be at 0,0 in plane one, the next sample of tile one at 0,0 in plane two; how much of a page you read at a time depends on your chip's storage.
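The planar-vs-interleaved distinction above can be sketched in a few lines. This is a toy model in Python/NumPy; the sizes and layout are hypothetical illustrations, not VSA-100 specifics:

```python
import numpy as np

H, W, N = 2, 2, 4  # tiny 2x2 image with 4 AA samples per pixel (toy sizes)
rng = np.random.default_rng(0)

# Planar storage: one complete image per sample plane.
planes = rng.random((N, H, W, 3))

# A naive screen grab that reads only plane 0 gets a coherent image --
# same dimensions, just not antialiased.
grab_planar = planes[0]

# Interleaved storage keeps a pixel's N samples adjacent in memory;
# reading that layout with a plain one-value-per-pixel stride mixes one
# pixel's samples across several "pixels" -- blatant noise, not a clean
# aliased image.
interleaved = planes.transpose(1, 2, 0, 3).reshape(H, W * N, 3)
grab_wrong_stride = interleaved[:, :W, :]

assert grab_planar.shape == grab_wrong_stride.shape == (H, W, 3)
```

The point is only about what a dumb per-pixel read returns, not about which layout the chip actually uses.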

I half wonder if the so-called post filter isn't just bad 2D-quality filters and reviewer incompetence; it's hard to take sites seriously lately. :)
 
Xmas said:
....
WaltC,
do you agree that V5 does blending of the AA samples at scanout? If yes, do you further agree that NV25 (only 2xMS) and later chips do blending of the AA samples at scanout?
So would you agree that they do "the same" in this regard?

Well, take a look at the diagram Joe furnished earlier in the thread. The "special video circuitry" employed for the combine operation clearly occurs outside of the RAMDAC, before the RAMDAC--at least that's the way it appears to me. So it's difficult for me to assume that the "special circuitry" in this diagram equates to post-filter blending.

However, we *know* that 3dfx's 16/22-bit mode, in both the V3 and V5, did in fact occur at the post filter stage. In both the V3 and the V5 this post filtering was relevant only to 16-bit display modes and could be forced on and off in the drivers.

I appreciate Joe furnishing the diagram, but I think it's a bit lacking in its descriptive nomenclature as it doesn't even use the word "T-buffer" to describe the on-chip hardware jittering that's taking place, although it certainly depicts it.

More importantly, 3dfx was never shy about talking about the role of "post filter blending" in its 16/22-bit mode for both products, and not shy about revealing that when the V5 was set to 32-bit display "post-filter blending" was turned off by default and could not be forced on. It raises the question of why, if the "special video circuitry" in the diagram was in fact at the RAMDAC level (though this would be contrary to the diagram) and was in fact "post filter blending", 3dfx did not refer to it as "post filter blending," as it did with the 16/22-bit display mode in both products.

Perhaps they did not because it was not?....;) Just a thought...

That's why I can't say "they are the same", etc.

Also, why is it that nVidia employs the post filter in its MSAA modes but not its SSAA modes on nv35 products, as reviewed so far? This would seem to me to be a further difference between what 3dfx did in the V5 and what nVidia is doing now with nv3x, even if you stipulate that 3dfx's "special video circuitry" is post-filter blending (which I am not ready to stipulate)...

At the heart of all of this is simply the need to hear some clear, definitive explanation from nVidia on how it is employing the post filter relative to its MSAA FSAA modes, right?
 
WaltC said:
Simon F said:
I never said it was exactly the same as an Accumulation buffer or else I would have said so.

Ok, let's look at it this way: what in your original post did you say to me that would have caused me to think you didn't think it was identical to a software accumulation buffer scheme?
/me goes off to find that old post....
I said:
The T-Buffer technology maintained N buffers, each of which could be rendered independently and then recombined on-the-fly (AFAIAA) in the DAC feed. The idea behind it was to re-order what could be done in an accumulation buffer (see SIGGRAPH in, err, about the early 90s) so that instead of doing N passes serially, the N passes could be done in parallel. Unfortunately, N was typically rather small, which meant that some effects, e.g. depth of field and motion blur, weren't done brilliantly well.

That looks clear enough to me. I don't understand why you are confused.

I can't read your mind (from this distance), you know...

It was a way of re-ordering 'accumulation buffer style' AA passes so that they could be done in parallel instead of serially.

It was also a way to do it directly on chip in dedicated hardware.
You can have dedicated Accumulation buffer hardware as well, you know. All you need are more bits of accuracy and a means of performing a weighted sum of one framebuffer into the Accumulation buffer.

It just seems to me that you are making the error of confusing the principles on which the T-buffer was based with its specific implementation in hardware as found in the VSA-100.
It's you who's talking about specific implementations. I was originally describing the generic functionality; however, AFAIAA, all physical implementations did use a DAC feed to combine the N buffers, which was the original point of the discussion.
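The re-ordering being described can be sketched as follows, a Python/NumPy toy where render() and the jitter offsets are stand-ins I've made up, not anyone's actual pipeline:

```python
import numpy as np

H, W = 4, 4
OFFSETS = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
N = len(OFFSETS)

def render(offset):
    # Stand-in for rendering the scene with a sub-pixel jitter offset;
    # seeded so the same offset always yields the same image.
    seed = int(offset[0] * 100) * 1000 + int(offset[1] * 100)
    return np.random.default_rng(seed).random((H, W, 3))

# Accumulation-buffer style: N serial passes, each one weighted and
# summed into a single higher-precision buffer.
accum = np.zeros((H, W, 3))
for off in OFFSETS:
    accum += render(off) / N

# T-buffer style: N buffers exist at once (rendered in parallel by the
# hardware) and are only averaged on the way to the display.
tbuffers = [render(off) for off in OFFSETS]
combined = sum(tbuffers) / N

# Same pixels either way -- only the ordering of the work differs.
assert np.allclose(accum, combined)
```

Since averaging is commutative, serial accumulation and a parallel combine give identical results; the difference is purely in scheduling and where the intermediate data lives.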
 
Himself said:
I half wonder if the so called post filter is not just bad 2D quality filters and reviewer incompetence, hard to take sites seriously lately. :)
I really trust some people to recognize 2x or 4x AA when they see it, even without having to zoom into screenshots. :D
 
WaltC said:
Well, take a look at the diagram Joe furnished earlier in the thread. The "special video circuitry" employed for the combine operation clearly occurs outside of the RAMDAC, before the RAMDAC--at least that's the way it appears to me. So it's difficult for me to assume that the "special circuitry" in this diagram equates to post-filter blending.
[...]
It begs the question why if the "special video circuitry" in the diagram was in fact at the RAMDAC level (however this would be contrary to the diagram), and was in fact "post filter blending", 3dfx did not refer to it as "post filter blending," as it did with the 16/22-bit display mode in both products.
Of course it can't be in the RAMDAC - because the on-chip RAMDAC is not necessarily the output device. The "22 bit post filtering" also isn't part of the RAMDAC for the same reason. The "SSAA Analyzed" paper from Kristof and DaveB clearly states that the downfiltering happens just before the RAMDAC and the downfiltered image is never written to memory. To me this is a clear example of what I'd call a post filter.
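A toy model of what that paper describes, blending per scanline on the way to the DAC with the filtered image never written back to memory (Python/NumPy; the names and sizes are illustrative):

```python
import numpy as np

N, H, W = 4, 2, 4  # 4 sample buffers for a tiny 2x4 frame (toy sizes)
sample_buffers = np.random.default_rng(7).random((N, H, W, 3))

def scanline_to_dac(y):
    # Blend the AA samples for one scanline on the fly; the filtered
    # line is fed straight to the display and never stored in memory.
    return sample_buffers[:, y].mean(axis=0)

on_screen = np.stack([scanline_to_dac(y) for y in range(H)])

# A framebuffer grab sees raw (unfiltered) samples, not the blend --
# which is why screenshots of such hardware can look unantialiased.
framebuffer_grab = sample_buffers[0]
assert not np.allclose(on_screen, framebuffer_grab)
```

This is the sense in which "post filter" is used here: the filtering happens after rendering is finished, as a side effect of scanout.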

WaltC said:
Also, why is it nVidia employs the post filter in its MSAA modes but not its SSAA modes with nv35 products as reviewed so far?
Maybe it's a hardware limitation, but most likely it's a performance consideration. The lower the framerate is, the less sense it makes to filter at scanout.
 
Xmas said:
I really trust some people to recognize 2x or 4x AA when they see it, even without having to zoom into screenshots. :D

I wouldn't. My GF3->GF4 example illustrates this fact.

For the longest time, title after title, I saw the effect in motion and was completely baffled... then I went around the internet and saw an almost unanimous opinion of this massive improvement, all illustrated with screenshots of QC/xS modes (admittedly, the screenshots were pretty stellar). Then, to the opposite effect, websites did side-by-side comparisons of 2xMS screenshots and condemned it as ineffective or low quality, when the result was actually quite the opposite.

It's as if these people are totally blind to the actual image quality and instead rely on taking a slew of screenshots, assuming they are representative, then panning around zoomed regions to make a decision. As it turns out, their methods were wrong; the best representation would have been simply admiring the output, in motion, in games, the way it should have been... but wasn't.

----------
On a different note- I don't see NVIDIA ever disclosing the answers we seek concerning post-filter/post-processing and NVIDIA hardware. It just goes against their style and ethic to divulge such things.

I halfway can't blame them. Some amount of "magic" is lost once the true processes are unveiled. It also quickly exposes any flaws or shortcomings, and opens up much discussion and debate.

The way I look at it, it's a lot like when a Vegas magic act takes a camera backstage and shows how they actually appeared to cut a woman in half, or quickly changed a group of 8 people in a cage into a pair of white tigers with a flick of a white sheet... from that point forward, you start to notice the flaws and the effect becomes "cheap" or "cheesy".

But in related news- I think there is more to post-filter/post-processing on NVIDIA hardware than most are aware of. Especially when it comes to things like Digital Vibrance and the like.

A quick experiment for those interested in such things: Digital Vibrance is a post-filter effect for certain, as DV does NOT show up in screenshots or in the framebuffer. I have noticed a great deal of artifacting and visual problems when DV and anisotropic filtering are used in tandem... which leads me to believe there *may* be some post-filter blending used with anisotropic filtering as well, although I haven't quite figured out how.

Tests I've used are textured regions with high-contrast fonts/text for the best portrayal. With DV alone, there is no change. With AF alone, there is no change. With the two together, suddenly there are extra pixels, and the coloration shifts (whites become purples/blues, reds shift brighter towards cyan/pink, etc.). It's the extra pixels that interest me; they might someday yield insight into some of the effects used on this hardware. It has an effect on text similar to supersampling, but it doesn't show up in screenshots, only on-screen.

All interesting stuff, and an interesting discussion. I think with enough findings and research, we might get a better feel for how the hardware works and which processes affect which pieces of the output.
 
Sharkfood said:
Xmas said:
I really trust some people to recognize 2x or 4x AA when they see it, even without having to zoom into screenshots. :D

I wouldn't. My GF3->GF4 example illustrates this fact.
Should have stressed that "some" more ;)

Regarding the post filtering... NVidia does state that they downsample on scanout. I don't think they really have to explain how it works, maybe just specify a little more precisely when it works.

And Digital Vibrance is done in conjunction with other color "correction" features just before the RAMDAC, and just like those, another post processing step. Gamma, brightness and contrast are done through the gamma/color LUT. DV enhances the saturation of colors and can therefore not simply be done with a per channel LUT.
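The point that saturation needs the other channels while a LUT sees only one can be shown directly. This is a hypothetical DV-style operation in Python/NumPy; the 1.5 gain and the Rec. 601 luma weights are my illustrative choices, not NVIDIA's actual math:

```python
import numpy as np

def boost_saturation(rgb, k=1.5):
    # Hypothetical DV-style step: push each channel away from the
    # pixel's luma. Each output channel depends on ALL input channels
    # (via the luma term), not just its own value.
    rgb = rgb.astype(np.float64)
    luma = (rgb @ np.array([0.299, 0.587, 0.114]))[..., None]
    return np.clip(luma + k * (rgb - luma), 0, 255).astype(np.uint8)

# Two pixels with identical red but different green/blue:
px = np.array([[200, 50, 50], [200, 180, 180]], dtype=np.uint8)
out = boost_saturation(px)

# Their output reds differ -- impossible for a per-channel LUT, which
# would have to map an input red of 200 to one single output value.
assert out[0, 0] != out[1, 0]
```

A per-channel gamma/brightness/contrast LUT computes out_r = LUT[in_r] with no knowledge of green or blue, so no table of that shape can reproduce a saturation boost.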
 
DV enhances the saturation of colors and can therefore not simply be done with a per channel LUT.

Precisely. :)

But isn't it interesting that anisotropic filtering has an adverse effect on output when used in conjunction with DV? This behavior, which can be seen on GF4s (I don't know if it's still the case with the FX line), sparked quite some curiosity. There might be more "special circuitry"... or just some unrelated oddness involved.
 
Sharkfood said:
But isn't it interesting that anisotropic filtering has an adverse effect on output when used in conjunction with DV? This behavior, which can be seen on GF4s (I don't know if it's still the case with the FX line), sparked quite some curiosity. There might be more "special circuitry"... or just some unrelated oddness involved.
You mean, it's visible in 3D, but when you take a screenshot and view it on the same PC with the same DV and color settings, it's not visible? That's really odd.
I haven't seen any artifacts like that on my GF3, and I usually have AF and DV on (DV low setting).
 
Just for the record, the last review in the German c't comparing the AA modes did note the lack of quality in NVIDIA's offerings... and they didn't need no freaking screenshots to back up their opinion; for some things, reviewers should just be willing to depend on the trust of their readers.
 
Part of the problem is that people are using different language in this thread.

3dfx did NOT pioneer the postfilter. That term has been used in signal theory for years and years and years. What 3dfx used when dithering is a very specific implementation and circumstance of the general term "postfilter", and is not the same thing you will read about in a text like Foley and van Dam.

Other than that, I agree with Russ and Joe: the T-buffer did the averaging near the DAC stage, and therefore would suffer from poor screengrabs. That was one of its peculiarities compared to, say, supersampling à la CGI, where the filtering was done into a final framebuffer.

Regardless, take two buffers A and B; one element of A has color a, one element of B has color b.
(a + b)/2 = c. c is an element of C and could be said to be postfiltered; notice I make no mention of what C is.

It's a very general term, kind of misleading, and it's just semantics when it comes down to it.
 
You mean, it's visible in 3D, but when you take a screenshot and view it on the same PC with the same DV and color settings, it's not visible? That's really odd.

Exactly! Totally odd. A good example is Everquest: set DV one or two notches from the left, enable any degree of anisotropic filtering... and the colors and text are a jumble of mismatched pixels; whites are this odd bright purple color, grays are bluish, etc. But screenshots are perfect. (Reproduced on multiple GeForce4 Ti-series cards.)

Made me scratch my head, to say the least... why would AF affect the post filter? It just seemed very odd, since either one alone causes no visual anomalies.
 
WaltC said:
It begs the question why if the "special video circuitry" in the diagram was in fact at the RAMDAC level (however this would be contrary to the diagram), and was in fact "post filter blending", 3dfx did not refer to it as "post filter blending," as it did with the 16/22-bit display mode in both products.

What's really going to blow Walt's mind (if mine is serving me correctly) is that when the number of active N buffers was greater than 4 and the color precision was 16-bit, the traditional post filter was disabled and the card used the blending/averaging of the N buffers instead, as its inherent precision was more accurate. So differentiating between them the way he's doing, while praising the pseudo-"22-bit" filter (when what he's seeing [N samples > 4] is due to what Russ/Joe are arguing), is just insane.

It's been awhile, but AFAIK this is what was done.

PS. Made sense to me Simon.
 