Chalnoth said: Anyway, if it is true that this is a series of 1-bit errors, then what we are seeing is a total lack of error-correction hardware in the blenders. Apparently ATI's engineers didn't feel there was any need for more than 1-2 stages of blending. Simply making the last bit pseudo-random would keep the 1-bit errors from accumulating. Making the last bit random could be as simple as a flip-flop circuit that toggles every clock a blended pixel is written, though it would obviously look better with a more chaotic function.
No - making the last bit 'pseudo-random' as you suggest would introduce noise. It would also make blending non-deterministic: two identical input states could produce different outputs, because the result would depend on the starting state of the hardware. No API anticipates such a dependency, and it might also break GL conformance, which generally requires exact repeatability. That is not a desirable situation, and it would not necessarily look better at all.
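A minimal sketch (in Python, not hardware) of the flip-flop idea quoted above, under illustrative assumptions: a simple 50/50 fixed-point blend repeated toward white, which is not ATI's actual blend pipeline. Plain truncation gets stuck one LSB short of the true value, while an alternating carry-in bit lets it converge. Note that the dithered result depends on the toggle's starting phase, which is exactly the state-dependence objection raised in the reply.

```python
def blend_trunc(dst: int, src: int) -> int:
    # 50/50 blend with truncation: the fractional bit is dropped,
    # so repeated blends can stall one LSB below the true value.
    return (dst + src) >> 1

def blend_dither(dst: int, src: int, toggle: int) -> int:
    # Flip-flop dither: an alternating carry-in bit (0, 1, 0, 1, ...)
    # makes truncation round down half the time and up the other half.
    return (dst + src + toggle) >> 1

# Repeatedly blend a black pixel (0) toward white (255).
trunc, dither = 0, 0
for i in range(16):
    trunc = blend_trunc(trunc, 255)
    dither = blend_dither(dither, 255, i & 1)

print(trunc)   # 254: stuck with a persistent 1-bit error
print(dither)  # 255: the alternating LSB averages the error away
```

Running the toggle from the opposite phase (`(i + 1) & 1`) reaches 255 on a different iteration, which is the repeatability problem: the output depends on state the API never exposes.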