Supersampling not only costs a lot of fillrate, it also means more data per pixel that needs to be written to the framebuffer. Multisampling generates one color value per pixel; supersampling gives you one per sample. But I wonder whether it's really necessary to keep those sample colors separate in the framebuffer.
Framebuffer compression for multisampling exploits the fact that all interior pixels have only a single color; only for edge pixels do we have to store multiple colors from multiple polygons. Multisampling does nothing for alpha-test edges, pixel kill, or shader/texture aliasing. But is that really a consequence of how the color values are stored, or just of how they are calculated?
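Roughly, I picture the multisampling case working something like the sketch below. This is just a software illustration of the idea; real hardware compresses per tile, the details are vendor-specific, and all struct and function names here are made up:

```cpp
#include <array>
#include <cstdint>

// Hypothetical per-pixel record for a 4x multisampled framebuffer.
// (Depth test omitted; real hardware works on tiles, not single pixels.)
struct MsaaPixel {
    bool compressed;                  // true: all samples share color[0]
    std::array<uint32_t, 4> color;    // per-sample colors (packed RGBA8)
};

// Write the single shaded color of one triangle into a pixel.
// coverageMask has one bit per sample covered by the triangle.
void writeMultisampled(MsaaPixel& px, uint32_t shadedColor, unsigned coverageMask) {
    if (coverageMask == 0xFu) {
        // Interior pixel: the triangle covers every sample, so one color
        // is enough -- this is what framebuffer compression exploits.
        px.compressed = true;
        px.color[0] = shadedColor;
    } else {
        // Edge pixel: expand to per-sample storage and overwrite only the
        // covered samples, keeping the other polygon's colors.
        if (px.compressed)
            px.color.fill(px.color[0]);
        px.compressed = false;
        for (unsigned s = 0; s < 4; ++s)
            if (coverageMask & (1u << s))
                px.color[s] = shadedColor;
    }
}
```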
Supersampling fixes those issues, not by accessing stored, differing color values in the framebuffer, but by calculating several color values per pixel. Usually you would write those color values to the framebuffer separately and without compression. But what's the point of storing separate color values if multisampling does fine without them? We don't see artifacts where the edge of a frontmost, later-drawn polygon half-covers pixels already in the framebuffer.
Therefore, I think lossy compression for supersampling makes a lot of sense. Instead of writing several incompressible color values to the framebuffer, you can average them beforehand and write just one value per pixel. That works, as in the multisampling case, as long as there is no edge in the pixel. And for supersampling, an edge could be a polygon edge, an alpha-test edge, or a pixel-kill edge.
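The scheme I have in mind would look something like this. Again purely hypothetical, all names invented for illustration; the important point is that the samples are still shaded individually (so texture/shader aliasing is still reduced), only the stored result gets collapsed when there is no edge:

```cpp
#include <array>
#include <optional>

struct RGBA { float r, g, b, a; };

struct SsaaPixel {
    bool compressed;               // true: one averaged color for the pixel
    std::array<RGBA, 4> color;     // per-sample colors when not compressed
};

// samples[i] is the individually shaded color of sample i, or nullopt if
// that sample was discarded (alpha test / pixel kill) or not covered by
// the polygon.
void writeSupersampled(SsaaPixel& px, const std::array<std::optional<RGBA>, 4>& samples) {
    bool allCovered = true;
    for (const auto& s : samples)
        if (!s) { allCovered = false; break; }

    if (allCovered) {
        // No polygon, alpha-test or pixel-kill edge inside this pixel:
        // average the shaded samples and store a single (lossy) value.
        RGBA avg{0, 0, 0, 0};
        for (const auto& s : samples) {
            avg.r += s->r * 0.25f; avg.g += s->g * 0.25f;
            avg.b += s->b * 0.25f; avg.a += s->a * 0.25f;
        }
        px.compressed = true;
        px.color[0] = avg;
    } else {
        // Edge pixel: keep (or expand to) separate per-sample colors so a
        // later polygon can still update only the samples it covers.
        if (px.compressed)
            px.color.fill(px.color[0]);
        px.compressed = false;
        for (unsigned s = 0; s < 4; ++s)
            if (samples[s])
                px.color[s] = *samples[s];
    }
}
```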
I wonder whether NVidia is using this concept in G70.