This looks somewhat similar in principle to the 22-bit color technique, doesn't it? The idea of more "reduced resolution" data samples representing "higher resolution" color data, and offering performance and storage benefits over fewer "higher resolution" samples.
I thought about it, and came up with all sorts of whacky stuff. For your enjoyment and target practice:
Whacky Theory 1:
Would something like a 32-bit primary buffer with 16-bit multisample buffers work, blended on the premise that any stored 16-bit value matching the primary buffer's color value with the LSBs dropped off could be treated as the full 32-bit primary buffer value for blending? Of course, the 32-bit/16-bit (8-bit/4-bit per channel) split is a bit arbitrary...there is no reason (?) this couldn't be used to reduce the demands of a frame buffer format aimed a bit higher than current ones...a 10-bit-per-channel primary buffer seems pretty reasonable. Color storage solutions should be able to address the challenges that would arise, right?
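To make that premise concrete, here's a rough C sketch of the matching rule I have in mind, using 8-bit vs. 4-bit channels (RGBA8888 vs. RGBA4444) to stand in for the 32-bit/16-bit formats. All the names here are my own inventions for illustration, not anything from an actual implementation:

#include <stdint.h>

/* Truncate an RGBA8888 color (0xRRGGBBAA) to RGBA4444 by keeping the
   4 MSBs of each channel. */
static uint16_t truncate_8888_to_4444(uint32_t c)
{
    return (uint16_t)((((c >> 28) & 0xF) << 12) |  /* R */
                      (((c >> 20) & 0xF) <<  8) |  /* G */
                      (((c >> 12) & 0xF) <<  4) |  /* B */
                       ((c >>  4) & 0xF));         /* A */
}

/* Expand RGBA4444 back to RGBA8888 by replicating each nibble into the
   low bits (the usual 4-to-8-bit expansion). */
static uint32_t expand_4444_to_8888(uint16_t c)
{
    uint32_t r = (c >> 12) & 0xF, g = (c >> 8) & 0xF;
    uint32_t b = (c >>  4) & 0xF, a =  c       & 0xF;
    return (r * 0x11u) << 24 | (g * 0x11u) << 16 |
           (b * 0x11u) <<  8 | (a * 0x11u);
}

/* The blending premise: if a reduced-precision sample matches the primary
   color with its LSBs dropped, "promote" it to the exact 32-bit primary
   value; otherwise expand the 16-bit sample as-is. */
static uint32_t resolve_sample(uint32_t primary8888, uint16_t sample4444)
{
    if (sample4444 == truncate_8888_to_4444(primary8888))
        return primary8888;                  /* treated as full precision */
    return expand_4444_to_8888(sample4444);  /* edge sample, reduced precision */
}

The nice part, if this is workable, is that the "enhancement" back to 32 bits costs nothing to store; it falls out of the match against the primary buffer.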
It would be adding more color information, though less accurate color information. Wouldn't this still successfully add position and color information to a pixel as samples came from multiple discrete surfaces (i.e., samples at edges), even though the added color information is less accurate? I'd think that in such a situation, discrete surface sampling would still pay off even at reduced precision, because some representation of the second surface is better than no representation at all.
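As a sketch of why I think even reduced-precision edge samples help, here's a hypothetical per-pixel resolve that just averages four samples after they've been promoted/expanded to 32-bit as above (again, my own toy code, assuming 4 samples per pixel and the 0xRRGGBBAA layout):

#include <stdint.h>

/* Average four 32-bit RGBA samples channel by channel. Even if three of
   them originally carried only 4 bits per channel, a half-covered edge
   pixel is still pulled toward the second surface's color instead of
   ignoring it entirely. */
static uint32_t resolve_pixel(const uint32_t s[4])
{
    uint32_t out = 0;
    for (int shift = 0; shift < 32; shift += 8) {
        uint32_t sum = 0;
        for (int i = 0; i < 4; i++)
            sum += (s[i] >> shift) & 0xFF;  /* one channel from each sample */
        out |= ((sum + 2) / 4) << shift;    /* rounded 4-sample average */
    }
    return out;
}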
Could the primary buffer sample selection be effectively picked to represent an "interior region", and the other reduced-precision samples be picked to represent the "exterior"? Trying to presume some connection with the NV40, would a solution that tried to be "stochastic" lend itself to this type of implementation by virtue of flexibility in "primary sample" designation? Perhaps distance-based criteria for selecting it?
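Purely guessing at what distance-based criteria might look like (none of this comes from NVIDIA, and the names are mine): the primary sample could simply be whichever sample is nearest the viewer, on the theory that the closest surface is the one most likely to "own" the pixel interior:

#include <stddef.h>

/* Hypothetical primary-sample designation: pick the sample with the
   smallest depth (nearest surface) to get full 32-bit storage; the rest
   would be stored at reduced precision. */
static size_t pick_primary_sample(const float depth[], size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (depth[i] < depth[best])  /* smaller depth = closer to viewer */
            best = i;
    return best;
}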
At the end of the text:
"The enhanced resolution signal is not a significant aspect of one emodiment because in some emodiments the low resolution signal and the high resolution signal are used directly".
It fits my thoughts (assuming they are workable), I think, except that it seems to imply there is an intermediate between my "16-bit" and "32-bit" color storage formats. My outlook depends on that intermediate "resolution reducer/enhancer" being expressed only mathematically by the blending criteria, yet this wording seems to imply some process that alters values before the blending for output.