SMAA = magic bullet for next gen AA?

They are indeed using SMAA in Watchdogs. They confirmed it in an interview.

They are using it with a temporal AA component (like the patched AC4). You could say they are using a customized SMAA 2TX (maybe 1TX, not really sure here...).

For the AC4 patch, I would say 75% of the improvement came from replacing FXAA with SMAA + temporal AA, and 25% from the 900p -> 1080p bump.

Just for clarification: the SMAA variant that Ryse uses is a customized version of SMAA 1x, correct? And not vanilla SMAA 1x?
 
http://www.eurogamer.net/articles/digitalfoundry-crytek-the-next-generation

Cevat Yerli: MSAA is quickly getting bandwidth-bound and thus expensive. With a deferred shading based renderer the bandwidth consumption is getting prohibitively high. For Ryse we developed our custom SMAA 1TX, in essence it is a combo of morphological AA with smarter temporal anti-aliasing. It's a new robust and fairly efficient technique which we shared some details about at Siggraph this year. It's a solution which deals with any signal input changes in order to smooth out potential shimmering during movement, while masking out any potential ghosting, and together with shading aliasing solutions it provides a more filmic image quality overall.
 
In a couple of years GPU bandwidth will go through the roof. Combined with Mantle and DX12, this should breathe new life into real anti-aliasing. This is a good thing, seeing as how scene complexity and the efficacy of post-process blur filters have an inverse relationship.
 
I wonder, does the efficiency of SMAA differ between a forward renderer and a deferred one? A lot of devs have been dropping deferred because the AA budget is too high, but I wonder if SMAA might solve a few of those issues.
 
What's the use of the MSAA hardware then? Is it now something of a redundant hardware legacy, or does it still have uses for rendering some buffers, like particles or shadows?
 
The tone mapping non-linearity issue is pretty easily solved with some smart filtering during the resolve step. A lot of tone mapping curves are invertible (or the inverse is easily approximated), and if you make use of that you can achieve high-quality results. I've even experimented with more generalized filtering kernels that produce high-quality results regardless of the tone mapping being used. Edge issues in depth of field and motion blur are a little trickier to handle, since you're relying on depth data, but they're hardly unsolvable. In most cases I would say the artifacts aren't even very noticeable to begin with, but that's more a matter of opinion and content.
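A minimal sketch of the idea, assuming a simple Reinhard curve (x/(1+x)) as the invertible tone map; the exact curve and filter are my assumption, not necessarily what's used in practice. Averaging MSAA samples in display (tone-mapped) space stops a single very bright sample from blowing out an edge pixel:

```python
# Sketch: tone-map-aware MSAA resolve (Reinhard assumed for illustration).
# Averaging in display space avoids the halos you get when one very bright
# HDR sample dominates a naive linear-space average.

def tonemap(x):
    # Reinhard operator: maps [0, inf) HDR values into [0, 1)
    return x / (1.0 + x)

def inverse_tonemap(y):
    # Exact inverse of Reinhard: valid for y in [0, 1)
    return y / (1.0 - y)

def resolve_display_space(samples):
    # Tone map each MSAA sample first, then average -> display-space result
    mapped = [tonemap(s) for s in samples]
    return sum(mapped) / len(mapped)
```

For an edge pixel with samples [0.1, 0.1, 0.1, 100.0], a naive resolve (average first, tone map after) lands at roughly 0.96 — nearly white, so the edge gradient is lost — while the filtered version gives about 0.32, a properly antialiased edge.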

Did you play around with different functions instead of the inverse function? Is it possible to find functions that improve IQ even more compared to the inverse function?
E.g. find a function that helps to reduce high-frequency aliasing in detailed textures?
And a final technical question: is it possible to change this function over time? Say frame by frame... and adapt it to the current scene?
 
There are also downsides to doing tone mapping in a custom resolve step right after rendering. If you write the tone mapped colors to the render target, they're no longer linear, so you can't use hardware alpha blending anymore for transparencies or particles. You can sidestep this by rendering the transparencies to off-screen buffers and combining the results, but that costs extra. Also, in post-process shaders you need to first convert the color value back to linear space and at the end of the shader convert it back to tone-mapped space. The extra ALU cost can be quite big (especially if you do your tone mapping on luminance and need to separate/rebuild the chroma every time).
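The per-shader conversion cost mentioned above can be sketched like this (Reinhard is assumed purely for illustration; the `gain` effect is a hypothetical stand-in for any linear-space post-process math):

```python
# Sketch: extra work a post-process shader pays when the render target
# stores tone-mapped colors. Every such shader brackets its real work with
# an inverse-tonemap on input and a re-tonemap on output.

def tonemap(x):
    return x / (1.0 + x)

def inverse_tonemap(y):
    return y / (1.0 - y)

def post_process_pixel(stored, gain=2.0):
    linear = inverse_tonemap(stored)   # extra ALU: undo tone mapping
    result = linear * gain             # the actual effect, in linear space
    return tonemap(result)             # extra ALU: redo tone mapping
```

Two extra transcendental-ish operations per pixel per pass is exactly the ALU overhead being described; doing it on luminance with chroma separation would add more still.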

Writing linear values to the render target is of course also possible (and I expect this is the way you handle it). Custom resolve: Tone map all MSAA samples, average the tone mapped values, inverse tone map the average value and write it to render target. This is less correct, but doesn't need repeated tone mapping / inverse tone mapping steps later. I suspect this looks good in most cases (especially if the post processing pipeline is simple), but when you start adding lots of atmospheric effects (fog, volumetrics, etc) the error will get worse (as the linear assumption doesn't hold for the MSAA edge pixels). But it shouldn't be that bad, since the absolute worst case is that the object edge gets one pixel narrower or wider (and you lose antialiasing for that edge). Not a biggie, unless it happens often. I would be more worried about losing the antialiased edges because of the transparencies (and volume fog rendering). If you have lots of big flying soft fog particles around, the image will pretty much lose all antialiasing.
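The resolve variant described above can be sketched as follows, again assuming Reinhard (x/(1+x)) as the invertible curve for illustration: tone map all MSAA samples, average in display space, then inverse tone map so the render target stays linear and later passes need no per-shader conversion.

```python
# Sketch: custom resolve that writes linear values back to the render
# target. Less correct than staying in display space, but avoids repeated
# tonemap / inverse-tonemap work in every later pass.

def tonemap(x):
    return x / (1.0 + x)

def inverse_tonemap(y):
    return y / (1.0 - y)

def resolve_to_linear(samples):
    avg_display = sum(tonemap(s) for s in samples) / len(samples)
    return inverse_tonemap(avg_display)  # back to linear space
```

Note that interior pixels (all samples equal) round-trip exactly; the approximation error only shows up on edge pixels, which matches the "edge gets one pixel narrower or wider" worst case described above.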

I don't personally like modern forward rendering techniques (Forward+ descendants) in general, because you need to do a depth pre-pass. This doubles your geometry cost. Our new (64 bpp, full fill rate) g-buffer rendering pipeline is able to render the whole g-buffer approximately as fast as a depth pre-pass would take (it's primitive setup bound). This kind of deferred rendering is very difficult to beat by forward techniques that need to render their geometry twice.

2xMSAA is not enough by itself. If you resolve the MSAA at the beginning of the pipeline, you don't have separated (sample precision) values anymore for the edges, and thus you cannot use this data to improve the PPAA quality. Deferred pipelines can combine MSAA and PPAA better. Pure 2xMSAA isn't enough anymore (it has too many problem cases).

Thank you Sebbbi for sharing with us your expertise on the matter. It's very valuable for all of us amateurs!

Also, it must be very hard to write such long posts (not specifically this post, but others too) when you have to be very careful not to divulge any NDA stuff... You probably have to read and re-read your posts multiple times on some threads... So the effort you put in here, in this forum, is even more commendable. :yep2:
 
Did you play around with different functions instead of the inverse function? Is it possible to find functions that improve IQ even more compared to the inverse function?
E.g. find a function that helps to reduce high-frequency aliasing in detailed textures?
And a final technical question: is it possible to change this function over time? Say frame by frame... and adapt it to the current scene?

You can certainly design different filters that make various trade-offs with regard to IQ and aliasing. I think there's still work to be done on finding out what works best for different games. You could certainly change your filtering function from frame to frame, but off the top of my head I can't think of a scenario where that would make sense. Unless perhaps you wanted to expose it to artists to let them subjectively tweak scenes based on the art style.
 
A lot of people (me included) think that SMAA 1x is mostly a better AA solution than 2xMSAA.

Would SMAA 1x need more buffer memory (ESRAM on XB1) than 2xMSAA? I had the impression that SMAA 1x needs some extra render targets, but are its memory requirements higher than 2xMSAA's?

Finally, if SMAA 1x is less memory-hungry, and we know it's less GPU-expensive than 2xMSAA, couldn't they easily replace 2xMSAA with SMAA 1x in Titanfall (XB1)? Better AA + better performance = a better-balanced solution overall.

They could even increase Titanfall resolution if SMAA 1x needs less memory.

Have I missed something?
 
Would SMAA 1x need more buffer memory (esram memory for XB1) than 2xMSAA?

It should just be one extra target, so less memory than 2xMSAA, which has double-sized depth + RT (forward renderer).

They could even increase Titanfall resolution if SMAA 1x needs less memory.

That would probably fall in line with their alternatives for investigation

i.e. 1600*900+ post-AA ~16.48MB vs 1408*792 2xAA ~17.02MB vs 1080p ~15.82MB

Meaning, more performance profiling.
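Those figures check out under a plausible set of per-pixel layouts (my assumption as to what's behind them): 32-bit color + 32-bit depth + one extra 32-bit post-AA target = 12 B/px, doubled color+depth for 2xMSAA = 16 B/px, and plain color+depth = 8 B/px at 1080p.

```python
# Rough check of the buffer-size figures quoted above, in MiB.
# Assumed layouts: post-AA = 12 B/px, 2xMSAA = 16 B/px, no AA = 8 B/px.

MIB = 1024 * 1024

def target_mib(width, height, bytes_per_pixel):
    # Total render-target footprint for one frame at the given layout
    return width * height * bytes_per_pixel / MIB

post_aa    = target_mib(1600, 900, 12)   # ~16.48 MiB
msaa_2x    = target_mib(1408, 792, 16)   # ~17.02 MiB
no_aa_1080 = target_mib(1920, 1080, 8)   # ~15.82 MiB
```

All three land within the 32 MB of ESRAM with room to spare, so the choice really does come down to performance profiling rather than memory.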
 
Seriously, with these options there should technically be no issues with finding the right AA solution. SMAA 1x is just as viable as FXAA, and a much better solution to boot
 