Supersampling with lossy framebuffer compression?

Xmas

Supersampling usually not only costs a lot of fillrate, but it also means more data per pixel that needs to be written to the framebuffer. Multisampling generates one color value per pixel, supersampling gets you one per sample. But I wonder if it's necessary to keep those sample colors separate in the framebuffer.

Framebuffer compression for multisampling uses the fact that all interior pixels have only a single color. Only for edge pixels do we have to store multiple colors from multiple polygons. Multisampling does nothing for alpha-test edges, pixel kill, or shader/texture aliasing. But is this really related to how the color values are stored, or just to how they are calculated?
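
To make that concrete, here is a toy model of the kind of color compression multisampling allows (purely illustrative; the tags and layout are my own, not any particular GPU's scheme). A fully covered "interior" pixel has identical sample colors and collapses to a single stored value; only edge pixels need per-sample storage:

```python
SAMPLES = 4

def compress_pixel(sample_colors):
    """Pack one pixel's multisample colors."""
    assert len(sample_colors) == SAMPLES
    if all(c == sample_colors[0] for c in sample_colors):
        return ("interior", sample_colors[0])     # one color stored
    return ("edge", list(sample_colors))          # all sample colors stored

def decompress_pixel(packed):
    tag, data = packed
    return [data] * SAMPLES if tag == "interior" else list(data)

# Interior pixel: one polygon covers every sample, one shaded color.
print(compress_pixel([0.7, 0.7, 0.7, 0.7]))   # ('interior', 0.7)
# Polygon-edge pixel: samples come from two different polygons.
print(compress_pixel([0.7, 0.7, 0.2, 0.2]))   # ('edge', [0.7, 0.7, 0.2, 0.2])
```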

Supersampling fixes those issues, not by storing differing color values in the framebuffer, but by calculating several color values per pixel. Usually you would write those color values to the framebuffer separately and without compression. But what's the point of storing separate color values in the framebuffer if multisampling does fine without them? We don't see artifacts where the edges of frontmost, later-drawn polygons half-cover pixels already in the framebuffer.

Therefore, I think lossy compression for supersampling makes a lot of sense. Instead of writing several incompressible color values to the framebuffer, you can average them beforehand and write just one value per pixel. That is, like in the multisampling case, as long as there is no edge. And for supersampling, an edge could be a polygon, alpha test or pixel kill edge.
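
A rough sketch of that lossy scheme (my own illustration; the edge test and the storage tags are assumptions, not a known hardware design). Every sample is still shaded, but a pixel with no polygon, alpha-test or pixel-kill edge stores only the average:

```python
def store_supersampled_pixel(sample_colors, sample_covered):
    """sample_covered[i] is True if sample i survived coverage, alpha test
    and pixel kill for the polygon being drawn."""
    if all(sample_covered):
        # No edge in this pixel: average the shaded samples and
        # store a single value, as with multisampling compression.
        return ("averaged", sum(sample_colors) / len(sample_colors))
    # Edge pixel (polygon, alpha test or pixel kill): keep the samples.
    return ("samples", list(sample_colors))

# Interior pixel with shader/texture detail: collapsed to one value.
print(store_supersampled_pixel([0.0, 1.0, 0.0, 1.0], [True] * 4))
# Alpha-test edge: half the samples were killed, so all samples are kept.
print(store_supersampled_pixel([0.3, 0.3, 0.3, 0.3], [True, True, False, False]))
```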

I wonder whether NVidia is using this concept in G70.
 
I think the limitation with supersampling is the fact that you are calculating multiple colour values, not the memory bandwidth.
 
I wonder if super sampling then averaging could result in some artifacts. Consider the case where a texture is high contrast and 4x antialiasing is being performed. All 4 fragments are visible with two fragments being black and the others white.

Fragment 1
[0.0][1.0]
[0.0][1.0]

They are averaged together and the value written to memory is 0.5. Now the same object is drawn again with blending enabled. New fragment results are...

Fragment 2
[0.0][1.0]
[0.0][1.0]

Actual result since fragments were not kept
[0.0][0.5]
[0.0][0.5]

Result had fragments been kept
[0.0][1.0]
[0.0][1.0]

This is an extreme case, but weird cases will show up in games. As you (hopefully) can see, the result with a downsampled framebuffer is much darker than the ideal result.

A more extreme case would be if the 0.0 fragments are occluded by an object drawn later. Despite being occluded, they would still contribute to the visible image.
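
For reference, the numbers above can be reproduced like this (the result pattern implies a multiplicative blend, destination times source; that blend mode is my inference from the example, not something stated explicitly):

```python
samples_pass1 = [0.0, 1.0, 0.0, 1.0]   # two black, two white samples
samples_pass2 = [0.0, 1.0, 0.0, 1.0]   # same object drawn again, blended

def downsample(samples):
    return sum(samples) / len(samples)

# Ideal: keep all samples, blend per sample, resolve at the end.
ideal = downsample([a * b for a, b in zip(samples_pass1, samples_pass2)])

# Lossy: average pass 1 before storing, then blend each pass-2 sample
# against the single stored value.
stored = downsample(samples_pass1)                      # 0.5
lossy = downsample([stored * b for b in samples_pass2])

print(ideal)   # 0.5
print(lossy)   # 0.25 -- noticeably darker than the ideal result
```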
 
3dcgi: But in NVIDIA's case they enable the "TAA: SS" mode only when blending is OFF, so it's harder to hit such corner cases.
 
DudeMiester said:
I think the limitation with supersampling is the fact that you are calculating multiple colour values, not the memory bandwidth.
Limitations aren't fixed. Just as rendering without AA can be bandwidth bound (e.g. when sampling uncompressed textures), rendering with supersampling can be bandwidth bound as well. Framebuffer compression isn't going to help with the fillrate hit, but it certainly can help with the bandwidth.

3dcgi said:
I wonder if super sampling then averaging could result in some artifacts. Consider the case where a texture is high contrast and 4x antialiasing is being performed. All 4 fragments are visible with two fragments being black and the others white.
Sure it can lead to artifacts, as any lossy compression can if you can't limit the possible cases.

Compare your example to what would happen with multisampling. You wouldn't even get those high contrast values because each sample would have the same color. So instead of 0.0 on the left and 1.0 on the right, you get maybe 0.5 for all samples. And after the blending you get 0.25, which is exactly what you get with lossy supersampling compression. Do you see blending artifacts with multisampling?

Perhaps it also depends on how much you increase the LOD for supersampling.
There's also the option of having a contrast threshold up to which the samples get averaged, but that could be too expensive to implement.
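
One way such a contrast threshold could look (the threshold value and the max-minus-min rule are made up for illustration; a real design would have to pick something cheap to evaluate in hardware):

```python
THRESHOLD = 0.1   # arbitrary illustrative value

def store_with_threshold(sample_colors):
    if max(sample_colors) - min(sample_colors) <= THRESHOLD:
        # Low contrast: averaging loses little, store one value.
        return ("averaged", sum(sample_colors) / len(sample_colors))
    # High contrast: keep the individual samples.
    return ("samples", list(sample_colors))

print(store_with_threshold([0.50, 0.52, 0.51, 0.49]))   # averaged
print(store_with_threshold([0.0, 1.0, 0.0, 1.0]))       # samples kept
```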
 
Xmas said:
Sure it can lead to artifacts, as any lossy compression can if you can't limit the possible cases.
I'm not saying this is a bad thing. I'm only trying to point out a few possible hiccups. One thing I've learned is that programmers do some weird things that break almost any algorithm at some point. Heck, I suggested this same technique when I worked at Matrox a few years back. Obviously it never made it into a shipping product.
 