RussSchultz said:
The t-buffer method is exactly that: using the ramdac to blend the samples on the output stage.
Sigh, wrong--I see I am speaking to someone who knew nothing about it when it shipped. The T-buffer employed hardware jittering of pixels--the only design of its type ever to do that--just ask Carmack, who wrote up a nice little ditty on it when the V5 shipped. Unlike nVidia, which later attempted to copy 3dfx's introduction of FSAA into the popular 3D marketplace, 3dfx rendered multiple copies of the frame *at the same resolution* and blended them for output. And the V5 was a true 32-bit product--unlike the V3--so it needed no post-filter blending to approximate higher color accuracy. The only time the V5 used the post filter for blending was in its 16/22-bit output mode, a display option carried forward from the V3. In 32-bit mode the V5 had no need to use the post filter for blending, as was done for the V3's 16/22-bit display mode (no FSAA).
FYI, it was originally impossible to grab screen shots from the V3--which, again, had no FSAA at all--that accurately represented what you saw on the screen, until screen-shot software appeared that captured the post-filter blending. The difference was far more dramatic than what you see with nVidia's 2x and Quincunx FSAA modes, and it affected the entire image--which, again, was not AA'd at all. No, 3dfx did not use the post filter for FSAA, and the post filter was *not* the T-buffer.
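For contrast, here's roughly what that kind of output-stage blending amounts to--a Python sketch of post filtering on a dithered 16-bit (RGB565) scanline, my illustration only. The 4-tap window and equal weights are assumptions; 3dfx's actual tap pattern and coefficients aren't reproduced here:

```python
def rgb565_to_rgb888(p):
    r = (p >> 11) & 0x1F
    g = (p >> 5) & 0x3F
    b = p & 0x1F
    # Expand each channel to 8 bits by bit replication.
    return (r << 3 | r >> 2, g << 2 | g >> 4, b << 3 | b >> 2)

def post_filter_scanline(pixels565):
    # Blend each pixel with nearby horizontal neighbors on the way to
    # the screen, smoothing the dither pattern into intermediate colors.
    rgb = [rgb565_to_rgb888(p) for p in pixels565]
    out = []
    for x in range(len(rgb)):
        window = rgb[max(0, x - 1):x + 3]  # up to 4 taps, equal weights (assumed)
        out.append(tuple(sum(c[i] for c in window) // len(window)
                         for i in range(3)))
    return out

# A crudely dithered red/green run blends toward a mixed color on output.
print(post_filter_scanline([0xF800, 0x07E0, 0xF800, 0x07E0]))
```

The point of the sketch: the filter runs on the finished frame at scan-out, which is why a naive framebuffer grab missed it--it never existed in memory.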
Essentially, what the V5 did was take a 640x480 frame (for instance) in local memory, render a second copy at the same 640x480 resolution, jitter the copies in hardware, and then blend the pixels together in the T-buffer (*not* the post filter at all)--and it took two VSA-100s to do that. Working at a full 32 bits internally and capable of a 32-bit display, it again had no need to use the post filter for blending the way the V3 did, since the V3 had no 32-bit display capability.
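In code terms, the idea boils down to something like this--a toy Python sketch, not the VSA-100's actual pipeline. The jitter offsets, the sample count, and the trivial one-edge "scene" are all illustrative assumptions:

```python
def render(width, height, dx, dy):
    # Toy renderer: one sample per pixel of a hard diagonal edge, taken
    # at a sub-pixel offset (dx, dy). A single sample aliases the edge
    # into a staircase; the offset shifts where the staircase lands.
    frame = []
    for y in range(height):
        row = []
        for x in range(width):
            sx, sy = x + 0.5 + dx, y + 0.5 + dy
            row.append(255 if sx > sy else 0)
        frame.append(row)
    return frame

def tbuffer_fsaa(width, height, jitters):
    # Render the SAME frame once per jitter offset, all at the same
    # resolution, then average the copies pixel by pixel.
    frames = [render(width, height, dx, dy) for dx, dy in jitters]
    n = len(frames)
    return [[sum(f[y][x] for f in frames) // n for x in range(width)]
            for y in range(height)]

# Hypothetical 4-sample jitter pattern; the real VSA-100 sample
# positions are not reproduced here.
offsets = [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]
aa = tbuffer_fsaa(8, 8, offsets)
```

Averaging the jittered copies is what turns a hard staircase edge into graduated steps--and that blend happens in the framebuffer, before the RAMDAC ever sees the frame.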