Nvidia: Subpixel Reconstruction Antialiasing

http://research.nvidia.com/publication/subpixel-reconstruction-antialiasing

Subpixel Reconstruction Antialiasing (SRAA) combines single-pixel (1x) shading with subpixel visibility to create antialiased images without increasing the shading cost. SRAA targets deferred-shading renderers, which cannot use multisample antialiasing. SRAA operates as a post-process on a rendered image with superresolution depth and normal buffers, so it can be incorporated into an existing renderer without modifying the shaders. In this way SRAA resembles Morphological Antialiasing (MLAA), but the new algorithm can better respect geometric boundaries and has fixed runtime independent of scene and image complexity. SRAA benefits shading-bound applications. For example, our implementation evaluates SRAA in 1.8 ms (1280x720) to yield antialiasing quality comparable to 4-16x shading. Thus SRAA would produce a net speedup over supersampling for applications that spend 1 ms or more on shading; for comparison, most modern games spend 5-10 ms shading. We also describe simplifications that increase performance by reducing quality.
 
Sounds interesting in theory, but we still need this implemented in actual drivers before the MLAA vs SRAA wars can begin, right?
 
This is very different from MLAA, and should not be compared to it.

First, it is probably impossible to do without some support from the hardware, namely access to superresolution depth and normal buffers. It likely can't be done on current consoles, for example.

Second, it uses superresolution buffers, meaning that it works on subpixel detail before it is lost; it should perform much better on sizzling detail in the distance. On the other hand, it will likely do little for the long almost-horizontal or almost-vertical lines MLAA excels at.

Third, it has the desirable quality of being fixed-cost.
 
Question is, will NV patent this and prevent other vendors from using it?

to yield antialiasing quality comparable to 4-16x shading
Bit like saying "yields antialiasing quality comparable to chicken", isn't it?
 
Eh, it's basically saying that, like MLAA, it's better in some cases than others. We're just not sure what those cases are.
 
Now that I think of it, it might be _impossible_ to force from the driver due to the requirement for superres depth/normals.

Anyway, it's "NVIDIA" in the sense that it was developed by some researchers at NVIDIA; there's no commitment on their part to put it in the drivers.
 
This is very different from MLAA, and should not be compared to it.
This technique has most of the same issues with subpixel geometry, like power lines for example. The system knows that there's something in there (extra depth samples show it), but it doesn't have any calculated color value to perform the blending. So it cannot calculate the correct color any better than MLAA in this case.

First, it is probably impossible to do without some support from the hardware, namely access to superresolution depth and normal buffers. It likely can't be done on current consoles, for example.
No hardware support needed for a basic implementation. Just render your depth + normal pass first using MSAA, then resolve just one sample of it without MSAA blending (this is straightforward on consoles, but might require DX10.1 on PC). Keep the multisampled depth + normal buffers in GPU memory for later use.

Now continue the rendering normally using the resolved (non-multisampled) data: render lights, then render colors in a second pass (light pre-pass deferred seems to be a good choice for this kind of rendering).

After you have rendered and lit the whole scene, for each pixel you read the multisampled depths and normals to calculate blending weights (this is much cheaper than MLAA's blending-weight calculation, which requires extensive shape analysis). Once the blending weights are calculated, you do the blending much like you would in MLAA.
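Something like this minimal CPU sketch is one way to read that final blending step (my own rough interpretation, not the paper's actual filter; the buffer layout, the 4-neighbourhood, and the depth-weight constant are assumptions, and normals are skipped entirely):

```cpp
// Minimal CPU sketch of the blending pass described above -- an illustration of the
// idea, not the paper's actual filter. Assumes a 1x shaded color buffer plus 4 depth
// samples per pixel kept from the MSAA depth pre-pass (depth-only, like the console
// variant mentioned in the next post). Neighbourhood and weight constant are placeholders.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Color { float r, g, b; };

// Reconstruct one output pixel: each of the 4 depth subsamples "votes" for the
// neighbouring shaded pixels whose 1x depth best matches it, and the votes are averaged.
Color sraaPixel(int x, int y, int w, int h,
                const std::vector<float>& depth1x,   // 1 depth per pixel (resolved)
                const std::vector<float>& depth4x,   // 4 depths per pixel (multisampled)
                const std::vector<Color>& color1x)   // 1 shaded color per pixel
{
    const int offs[5][2] = {{0,0},{1,0},{-1,0},{0,1},{0,-1}};   // centre + 4-neighbourhood
    Color out{0, 0, 0};

    for (int s = 0; s < 4; ++s) {                               // one subpixel depth sample
        float ds = depth4x[(y * w + x) * 4 + s];
        Color accum{0, 0, 0};
        float wsum = 0.0f;

        for (const auto& o : offs) {                            // candidate shaded neighbours
            int nx = std::min(std::max(x + o[0], 0), w - 1);
            int ny = std::min(std::max(y + o[1], 0), h - 1);
            float diff = ds - depth1x[ny * w + nx];
            float wgt  = std::exp(-diff * diff * 1000.0f);      // depth-similarity weight (constant is arbitrary)
            const Color& c = color1x[ny * w + nx];
            accum.r += wgt * c.r; accum.g += wgt * c.g; accum.b += wgt * c.b;
            wsum += wgt;
        }
        if (wsum > 0.0f) {                                      // average this subsample's vote into the pixel
            out.r += accum.r / (4.0f * wsum);
            out.g += accum.g / (4.0f * wsum);
            out.b += accum.b / (4.0f * wsum);
        }
    }
    return out;
}

int main() {
    // Toy 2x1 image: a near red surface on the left, a far blue one on the right.
    // Pixel 0 is half covered by the far surface according to its depth subsamples.
    const int w = 2, h = 1;
    std::vector<float> depth1x = {0.5f, 0.9f};
    std::vector<float> depth4x = {0.5f, 0.5f, 0.9f, 0.9f,
                                  0.9f, 0.9f, 0.9f, 0.9f};
    std::vector<Color> color1x = {{1, 0, 0}, {0, 0, 1}};
    Color c = sraaPixel(0, 0, w, h, depth1x, depth4x, color1x);
    std::printf("blended: %.2f %.2f %.2f\n", c.r, c.g, c.b);    // roughly half red, half blue
}
```

The real thing would of course run as a full-screen GPU pass and would presumably add a normal-similarity term, but the weighting idea is the same.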

A performance-optimized (console) implementation would likely not multisample the normals, and just do a multisampled depth pre-pass. Aliasing mostly shows up at depth discontinuities.

Question is, will NV patent this and prevent other vendors from using it?
This is not a hardware technique. You can implement a basic version of this idea on any current graphics hardware. And this technique isn't exactly a straight "plug-in" post-process technique like MLAA: the developer needs to implement it themselves, since it must be implemented slightly differently depending on engine architecture. I doubt they are going to use this kind of technique in their drivers, but it's a good technique for game developers to implement in their games, if their rendering methods work well with it.
 
Second, it uses superresolution buffers, meaning that it works on subpixel detail before it is lost; it should perform much better on sizzling detail in the distance. On the other hand, it will likely do little for the long almost-horizontal or almost-vertical lines MLAA excels at.
Also, it will likely work better than MLAA on vertical and horizontal lines. ;)
 
All we have is that abstract to work with... that's not a lot.

Did any of you research/college folks have a chance to read the thing yet?

Pictures are required to judge the quality of that solution, or any other AA solution, for that matter. Pictures speak louder than words, really.

I just hope we're not in for another blur filter effect for the rendered surfaces.
 
This technique has most of the same issues with subpixel geometry, like power lines for example. The system knows that there's something in there (extra depth samples show it), but it doesn't have any calculated color value to perform the blending. So it cannot calculate the correct color any better than MLAA in this case.
We know that it only has something like every 5th pixel with shading to work with, but if it can tell from the subpixel information that the pixel isn't meant to be that strong, we still get a positive impact on image quality.
Pretty much the same thing CSAA does with a very slim edge, but without the need to render the scene from back to front.
 
I once tried implementing something similar to this. I didn't really spend much time on it, and consequently the edge detection/filtering step is about as basic as you can get. I also could have done a much better job if I'd made use of DX10.1/DX11 features (at the time I wasn't familiar with them). But overall the results weren't terrible:

comparison.png
 
Yeah, don't compare it to MLAA, they have nothing to do with each other! Because whereas one tries to smooth the artifacts of subpixel detail in an image, the other of course..... well.... look, just don't compare them, alright!! They clearly just shouldn't be compared....

:rolleyes:
 
Not perfect, but considering the cost and alternatives it seems very good.

Dimming of subpixel edges is nice, even though the roping artifact is clearly visible due to missing samples.
With enough samples, edges might get dimmed enough to be barely visible; I would expect objects to kind of blend in as they get close enough (similar to dissolve LoD, which might look incredible with SRAA ;)).

I wonder how feasible this would be on consoles...
4x SRAA would have an additional memory cost of ~22 MB over no AA, but it should be a rather cheap alternative to MSAA when considering actual rendering and shading.
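For what it's worth, that figure roughly works out if you assume a 4-byte depth plus a 4-byte packed normal per extra sample at 720p (my assumption about the breakdown): 1280 × 720 pixels × 3 extra samples × 8 bytes ≈ 22.1 MB.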
 
Would have liked to see some pics of what happens when they vary both the shading samples and the geometry samples, and also some of using a more advanced filter, but other than that it looks interesting.
 
Comparing wildly different anti-aliasing algorithms from still pictures is pointless (can't spot temporal artifacts).
 