I'm not convinced that's the case. In terms of framebuffer compression, an edge is likely an edge, even when the samples on both sides have identical values. I guess it's much easier to implement compression based on whether the coverage mask for a tile is all ones, instead of actually checking whether all samples for each pixel in the tile are identical.
Perhaps there's been a mixup: I was talking about the final-pass lighting shader that tries to detect edges in the 16xMSAA G-buffers, in order to avoid "supersampling" execution (running the shader per-sample) across all 16 samples.
An example: with normals in one G-buffer and material IDs in another, you could end up with different sets of edges depending on which G-buffer you use to identify edges. An edge pixel in the normal G-buffer could easily be a continuous surface in the material ID G-buffer.
It seems to me the only solution is to OR the edge detection across all G-buffers.
(But, hey, I don't program graphics!)
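To make the OR-ing concrete, here's a toy sketch (my own illustration, not anyone's actual shader): `detect_edges` and `combined_edge_mask` are hypothetical stand-ins for whatever per-buffer edge test the lighting shader would run, with each "pixel" modelled as a tuple of samples.

```python
def detect_edges(gbuffer):
    """Per-pixel edge mask: True where any two samples in a pixel differ."""
    return [len(set(samples)) > 1 for samples in gbuffer]

def combined_edge_mask(gbuffers):
    """OR the edge masks of all G-buffers, so a pixel counts as an edge
    if *any* attribute (normal, material ID, ...) shows an edge there."""
    masks = [detect_edges(g) for g in gbuffers]
    return [any(bits) for bits in zip(*masks)]

# Toy data: two pixels, four samples each (pretend 4xMSAA for brevity).
normals      = [("n0", "n0", "n0", "n0"), ("n0", "n1", "n1", "n1")]  # edge in pixel 1
material_ids = [(3, 3, 3, 7),             (5, 5, 5, 5)]              # edge in pixel 0

print(combined_edge_mask([normals, material_ids]))  # [True, True]
```

The point being: test either G-buffer alone and you miss one of the two edges; OR them together and both pixels get per-sample lighting.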
---
As to MSAA compression, this is how I think it works (with a major caveat to come). If you have 4xAA and a pixel consists of:
- 2 samples: A, B are both colour=red, Z=54
- 2 samples: C, D are both colour=blue, Z=77
and then along comes a fragment whose coverage mask says samples A and B are now blue, Z=77. Before applying this new coverage mask, which implies an edge, the ROP has to read in all of the pixel's previous samples; it should know to do this because the pixel is already flagged as an edge. When the ROP merges the new samples with the existing ones, all four samples end up as blue, Z=77.
Because this checking is performed in the ROP's buffer cache, the final result written back to the buffer (in memory) can be fully compressed, based not on the incoming coverage mask, but on the resulting coverage mask.
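As a sanity check of my own reasoning, here's the merge as a toy model (`rop_merge` is my invention, and I'm ignoring the depth test to keep it short): a pixel holds per-sample (colour, Z) pairs, and an incoming fragment carries one (colour, Z) plus a coverage mask saying which samples it hits.

```python
def rop_merge(existing, incoming, coverage):
    """Write the incoming fragment into the covered samples (depth test
    skipped for brevity) and report whether the pixel is now compressible,
    i.e. all samples identical again."""
    merged = [incoming if covered else sample
              for sample, covered in zip(existing, coverage)]
    compressible = len(set(merged)) == 1
    return merged, compressible

# The example from the text: A,B red/Z=54 and C,D blue/Z=77.
pixel = [("red", 54), ("red", 54), ("blue", 77), ("blue", 77)]

# Incoming fragment covers A and B with blue, Z=77.
merged, compressible = rop_merge(pixel, ("blue", 77), [True, True, False, False])
print(compressible)  # True - all four samples are now blue, Z=77
```

So the write-back decision hinges on the *resulting* sample set, which is only knowable after the merge in the ROP's cache.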
---
Having said all that, I've got to wonder if it's worthwhile for the ROPs to even try to write compressed pixels to the buffer if the pixel was previously an edge. As frame rendering progresses (normal rendering, not deferred), the fragmentation of interior versus edge pixels surely means that once you've taken buffer tiling and DRAM burst length into account, there's no value in doing a compressed write for just a few pixels (because if there was an edge nearby, there prolly still is)...
In other words, is AA compression a win only when the framebuffer is fresh and the first triangles hit each pixel? When overdraw is still zero?
---
So, Xmas, I would tend to agree with you: the hassle of converting an edge pixel back into an interior pixel is prolly not worth it.
But I think G-buffers, being independent of each other in terms of their raw 16xMSAA samples, need to be edge-tested as a unified whole within the final lighting shader. If you're even going to bother with edge-testing, that is...
Jawed