RussSchultz said:
Walt, I think you're off on your own tangent here, confused by semantics.
3dfx used some magical post filter thing on the V3 on a single frame buffer to undither the framebuffer and get "22bit" quality. I'm not talking about this, and neither is anybody but you.
Both the NVIDIA and the 3dfx methods of AA center around combining samples from separate frame buffers, mixing them, and sending them directly to the RAMDAC. 3dfx called it "t-buffer", NVIDIA is just calling it "super secret"
Russ, I think you've totally misunderstood the issue, at least as I see it. When nVidia first began emulating 3dfx's T-Buffer FSAA in its GFx products, it used supersampling--it rendered at a higher resolution and scaled the result down. It was very slow and didn't look a tenth as good as what 3dfx did with the T-buffer. Indeed, 3dfx did *hardware jittering* in the V5: it took more than one frame *at the same resolution* and jittered them in the VSA-100 hardware, then blended them in the T-Buffer, which was also part of the V5 hardware. nVidia never, ever had anything similar (and the results showed that plainly, IMO.)
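To make the distinction concrete, here's a toy sketch of the jittered multi-buffer blend described above. Everything here is illustrative--the `render` stand-in, the jitter offsets, and the plain per-pixel average are my own assumptions, not 3dfx's actual VSA-100 sample pattern or blend path.

```python
# Toy sketch of T-buffer-style FSAA: render several frames at the SAME
# resolution with subpixel jitter offsets, then blend them per pixel.
# Illustrative only -- not 3dfx's (or NVIDIA's) actual hardware path.
import numpy as np

def render(ox, oy, size=4):
    # Stand-in renderer: a hard diagonal edge whose pixel coverage
    # shifts with the subpixel offset, so the jittered frames differ.
    y, x = np.mgrid[0:size, 0:size].astype(float)
    return ((x + ox) > (y + oy)).astype(float)

def jittered_fsaa(render_fn, offsets):
    """Blend same-resolution frames rendered at subpixel jitter offsets."""
    frames = [render_fn(ox, oy) for ox, oy in offsets]
    return np.mean(frames, axis=0)  # per-pixel average of the buffers

# A 4-sample jitter pattern (made-up offsets, just for the demo)
offsets = [(0.125, 0.375), (0.375, 0.875), (0.625, 0.125), (0.875, 0.625)]
blended = jittered_fsaa(render, offsets)
# Pixels along the edge now take intermediate values instead of hard 0/1.
```

Each individual frame is pure black-and-white; only the blend produces the in-between edge shades, which is the whole point of jittering at one resolution instead of rendering one oversized frame.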
The post filter blending is something entirely different. 3dfx used it to essentially upsample the V3's display output to a pseudo-22 bits of accuracy *in the post filter*--it wasn't real 22-bit accuracy, but it was close, and it allowed 3dfx's 16-bit (non-FSAA) display to look much better while minimizing the performance hit. It was far faster than TNT2 24-bit output and looked 90% as good. The only place this was carried through in the V5 was the V5's 16-bit output mode, where it could be selected as a display output option. In the V5 you could have 16 bits, 16/22 bits, or 32 bits, your choice. It's possible the post filter operated on the 16-bit FSAA result of the T-buffer when the 16/22-bit mode was selected as the display output, but in 32 bits the post filter was not employed at all, for any reason (since it wasn't needed, obviously.) Hence the post filter in 3dfx products was never employed primarily for FSAA--and in 32-bit V5 output the post filter wasn't used at all, for either FSAA or normal display.
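Here's a rough sketch of that "16/22-bit" idea: dither a color down to a coarse channel depth in the framebuffer, then blur neighboring pixels at scanout time so shades the buffer can't store reappear in the output. The 2x2 Bayer dither, the 5-bit channel, and the box-blur "undither" are my stand-ins--3dfx's actual filter kernel was their own.

```python
# Sketch of post-filter "undithering": a 5-bit channel (think the R of
# RGB565) is ordered-dithered, and a small scanout-time blur recovers
# intermediate shades. The kernel and dither here are assumptions, not
# 3dfx's real filter.
import numpy as np

BAYER2 = np.array([[0.00, 0.50],
                   [0.75, 0.25]])  # 2x2 ordered-dither thresholds

def dither_to_5bit(img):
    """Quantize a float image (0..1) to 5 bits with ordered dithering."""
    h, w = img.shape
    thresh = np.tile(BAYER2, (h // 2, w // 2))
    return np.floor(img * 31 + thresh).clip(0, 31) / 31

def post_filter(fb):
    """Scanout-time 2x2 box blur standing in for the 'undither' filter."""
    return (fb + np.roll(fb, -1, 1) + np.roll(fb, -1, 0)
            + np.roll(np.roll(fb, -1, 0), -1, 1)) / 4

flat = np.full((8, 8), 0.51)    # a shade between two 5-bit levels
stored = dither_to_5bit(flat)   # buffer holds only two coarse levels
displayed = post_filter(stored) # blend lands much closer to 0.51
```

Screenshot software that reads `stored` straight out of the framebuffer never sees `displayed`, which is exactly why early grabbers made the V3's 16-bit output look worse than it did on screen.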
The difference here--and this is just a theory--is that nv30 seems to employ the technique as the primary technology for its 2x and QC FSAA modes, while 3dfx never used it for FSAA at all. The theory is based on the fact that normal screen shot software won't catch nv30's 2x and QC FSAA modes, just as screen shot software at the time absolutely butchered the V3's normal non-FSAA output until it was rewritten to grab the post filter blending as well. Since nVidia has decided to call it a "trade secret", *chuckle*, I guess we'll never know.
The strange part about it is that if nVidia had said, "We found we could use post-filter blending and get FSAA results at 2x and QC as good as we'd get with standard FSAA, so we decided to use it instead," I not only wouldn't have cared--I'd have thought it was pretty clever (as long as the results panned out.) But the recurring theme from them seems to be to hide and obscure what they are doing--which just doesn't "trip my trigger", so to speak...