Alternative AA methods and their comparison with traditional MSAA

What are you basing that on? It needs to be seen in motion to judge subpixel quality and from the demo they posted, it's not even in the same league as 4x MSAA yet.
They haven't even shown how it compares with 4x MSAA, only with 16x SSAA and 16xQ CSAA, and you ask me what I'm basing that on? :>
But they wrote about it and showed a couple of examples that fail even with high SSAA and CSAA settings but still look good with their technique.
And 4x MSAA is far from great.

On AMD hardware, yes. There's clearly a slow path being hit there. On NVIDIA the hit is around 25% or less IIRC - i.e. similar to forward rendering and quite acceptable.

I have a GTX 560, and in multiplayer it cuts my fps by 25 to 35%.
 
Is there anyone who could make a test scene with an engine, e.g. Humus's framework? Maybe in WebGL format, so people can throw in some shaders and try their AA method right in the browser. I mean, there are standard tests for texture filtering... why not for AA, which IMHO is a much worse problem?
That's a good point. I was just thinking in terms of analysis, but having a test scene for development would be very beneficial too. Currently those developing AA techniques either use existing games or their own tests I guess, but a properly considered, universal test scenario would offer a very convenient platform for testing, especially if we have a high quality reference output to compare AA results to. Oooo, we could produce pretty graphs of AA quality versus processing time. :mrgreen:
 
They haven't even shown how it compares with 4x MSAA, only with 16x SSAA and 16xQ CSAA
Here's the thing - if you're honestly looking at the quality of the edge gradients on single surfaces you're totally missing the point these days. There have been perfectly acceptable solutions to that for ages. That is what I call the "easy" aliasing.

The only really relevant aliasing to compare these days is flickering and swimming in motion, and these cannot be compared adequately without an uncompressed video/demo. The issue is that this sort of aliasing simply cannot be addressed without either prefiltering the geometry somehow or (adaptively) taking more samples. Thus that's the only really interesting part of these algorithms and most of them simply don't do it, or don't do anything more interesting than MSAA.

So if they have a better technique than what they've shown in the demo, great, but why wasn't it in the demo then? What is in the demo produces inferior overall quality when compared even to basic 4xMSAA (not CSAA).

And 4x MSAA is far from great.
No argument, which casts the post-processing stuff in an even worse relative light.

I have a GTX 560, and in multiplayer it cuts my fps by 25 to 35%.
That seems like a perfectly acceptable hit for the quality it produces. Like I said, that's in the same range as forward MSAA. Also note that, as discussed in the BF3 thread, some cases don't get resolved nicely due to high dynamic range differences between the subsamples, so MSAA could look even better than it already does. This is just a decision made in BF3 (not an issue with deferred MSAA in general), and I imagine it could be vastly improved with no performance hit as well using the suggestion that I posted in that thread.

Let me reiterate... I'm all for continued research in this area (better reconstruction); I'm just asking for some realism in making claims about how it compares to reference solutions.
 
As Andrew points out, ultimately extra samples (MSAA- or SSAA-like techniques) will be superior to post-process (AKA fancy blurs). For a single frame you can get away with these types of techniques, but they are by design unstable over time due to sample quantization.


Post-process AA is a nice extra technique, but suggesting it's a replacement for having extra samples is fundamentally missing how rendering works.
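The sample-quantization point can be made concrete with a small numerical sketch (a hypothetical illustration, not from this thread): as an edge drifts by sub-pixel amounts between frames, a single sample per pixel snaps its coverage estimate straight from 0 to 1, while four samples step through intermediate values. That smoother stepping is the temporal stability extra samples buy.

```python
# Hypothetical illustration: why one sample per pixel flickers in
# motion while extra samples quantize coverage more finely.

def coverage(edge_x, pixel_center, samples_per_pixel):
    """Fraction of evenly spaced samples in a 1-wide pixel that fall
    to the left of a vertical edge at edge_x."""
    n = samples_per_pixel
    positions = [pixel_center - 0.5 + (i + 0.5) / n for i in range(n)]
    return sum(1 for p in positions if p < edge_x) / n

# The edge sweeps across a pixel centered at 0.0 in 0.1-pixel steps,
# as it might over a few frames of slow motion.
offsets = [i / 10 - 0.5 for i in range(11)]
one_sample = [coverage(x, 0.0, 1) for x in offsets]
four_sample = [coverage(x, 0.0, 4) for x in offsets]

# One sample snaps straight from 0.0 to 1.0 between two adjacent
# frames; four samples step through 0.25 / 0.5 / 0.75 on the way.
print(max(abs(a - b) for a, b in zip(one_sample, one_sample[1:])))
print(max(abs(a - b) for a, b in zip(four_sample, four_sample[1:])))
```

With more samples the per-frame jump shrinks further, which is why flickering in motion is exactly the aliasing a purely post-process filter cannot fix.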
 
BF3 also doesn't do MSAA in HDR for performance reasons, which severely limits its effectiveness (the most noticeable aliasing on high contrast edges is not affected by MSAA). I believe a method was proposed by Mintmaster or AndyLauritzen to do HDR MSAA at very little cost, but I dunno much beyond that.
 
As Andrew points out, ultimately extra samples (MSAA- or SSAA-like techniques) will be superior to post-process (AKA fancy blurs). For a single frame you can get away with these types of techniques, but they are by design unstable over time due to sample quantization.


Post-process AA is a nice extra technique, but suggesting it's a replacement for having extra samples is fundamentally missing how rendering works.

The thing is that SMAA is not only a smart blur.

It's strange that SMAA isn't implemented in BF3, because the first screens in the paper were from the Battlefield engine.
Repi, can you explain why? And is there any chance of a patch? :>

I'm pretty sure that Crysis 2, or CryEngine 3 in general, will get an SMAA update.
 
BF3 also doesn't do MSAA in HDR for performance reasons, which severely limits its effectiveness (the most noticeable aliasing on high contrast edges is not affected by MSAA). I believe a method was proposed by Mintmaster or AndyLauritzen to do HDR MSAA at very little cost, but I dunno much beyond that.
Tim Sweeney (of all people) described how to do it right in his "next gen" paper ... of course the method has been possible since DX10.1 ...
 
The thing is that SMAA is not only a smart blur.

It's strange that SMAA isn't implemented in BF3, because the first screens in the paper were from the Battlefield engine.
Repi, can you explain why? And is there any chance of a patch? :>

I'm pretty sure that Crysis 2, or CryEngine 3 in general, will get an SMAA update.
From the paper itself
"Given that the actual information behind the pixel is already lost, this blending is done effectively with the neighbors, since our visual system assumes that, on edges, they will have a similar color to the actual background."
That's why it's a 'smart' blur, a very good and clever one, BUT it cannot recover samples that were never recorded. More samples therefore means more information, and more information == better AA.
SMAA 2x and 4x *ARE* MSAA with a smart resolve, which proves the point.
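The 'smart blur' idea can be made concrete with a minimal sketch (hypothetical code, not from the SMAA paper): MLAA-family filters estimate how much of a pixel lies on the far side of the silhouette and blend with the neighbour accordingly, so the output is only ever a mix of colors already in the framebuffer.

```python
# Hypothetical MLAA-style blend: 'coverage' is the estimated fraction
# of the pixel on the neighbour's side of the silhouette. Note that no
# new scene information appears -- only stored colors are mixed.

def blend_edge_pixel(pixel_color, neighbor_color, coverage):
    return tuple((1.0 - coverage) * p + coverage * n
                 for p, n in zip(pixel_color, neighbor_color))

fg = (1.0, 0.0, 0.0)  # red foreground pixel on the edge
bg = (0.0, 0.0, 1.0)  # blue background neighbour
print(blend_edge_pixel(fg, bg, 0.25))  # (0.75, 0.0, 0.25)
```

An MSAA subsample at the same pixel, by contrast, can record a color from a surface the 1x framebuffer never stored at all, which is the "more information" being argued for above.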
 
From the paper itself
"Given that the actual information behind the pixel is already lost, this blending is done effectively with the neighbors, since our visual system assumes that, on edges, they will have a similar color to the actual background.".
You know that this quote is from the MLAA description, right? It's from section '2. Related Work'; below it you have section '3. SMAA: Features and Algorithm'.

I can agree that part of the SMAA module is just a smart blur [far more advanced than plain MLAA or FXAA], but you can't judge the technique by one element when there are more of them; it's a modular solution, and that's not a bad thing.
 
I can agree that part of the SMAA module is just a smart blur [far more advanced than plain MLAA or FXAA], but you can't judge the technique by one element when there are more of them; it's a modular solution, and that's not a bad thing.
So where's the demo of the actually interesting part of the technique? Why on earth would they only release the boring bit? Like I said, I'm willing to be convinced - particularly by a smarter MSAA resolve than averaging with a box filter - but so far only AMD's initial work on their edge detect stuff really follows that line of research.

MfA, have you got a link to that "paper" you're referring to?
 
You know that this quote is from the MLAA description, right? It's from section '2. Related Work'; below it you have section '3. SMAA: Features and Algorithm'.

I can agree that part of the SMAA module is just a smart blur [far more advanced than plain MLAA or FXAA], but you can't judge the technique by one element when there are more of them; it's a modular solution, and that's not a bad thing.

Yes, they are describing MLAA and their advancement of it; they are quite reasonably saying their technique is a (good) advancement of the idea that was first proposed in MLAA.

And it's cool; anything that improves IQ pretty cheaply is good. The only statement I'm disputing is that it's better to do this than more traditional MSAA/SSAA.

More samples, we need more samples!
 
Right actually that's the "slow way". BF3 does that in DX10. In DX11 it does the fast compute shader rescheduling way already. The only thing is that it resolves in the compute shader (to avoid writing out sample frequency data), and it does it in linear space, not resolved color. My suggestion based on Mintmaster's idea long ago was to instead just do something like:

Resolved = InverseToneMap( (1/N) * sum_i( ToneMap( sample(i) ) ) )

Where the tone mapping operations use the exposure from the previous frame if necessary. This should be a pretty good approximation to resolving in post-tone-mapped space in most cases and shouldn't cost basically anything.
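As a hedged sketch of that resolve, assuming a simple Reinhard operator x/(1+x) (whose inverse is y/(1-y)) and a box-average over the subsamples; the real operator and exposure handling would be engine-specific:

```python
# Sketch of the suggested resolve (assumed Reinhard operator; the real
# operator and per-frame exposure handling are engine-specific).

def tonemap(x):          # Reinhard: maps [0, inf) onto [0, 1)
    return x / (1.0 + x)

def inverse_tonemap(y):  # exact inverse of the Reinhard curve
    return y / (1.0 - y)

def resolve_linear(samples):
    """Plain box resolve in linear HDR space."""
    return sum(samples) / len(samples)

def resolve_tonemapped(samples):
    """Average in tone-mapped space, then map back to linear."""
    avg = sum(tonemap(s) for s in samples) / len(samples)
    return inverse_tonemap(avg)

# One very bright subsample next to three dim ones -- an HDR edge:
samples = [100.0, 0.1, 0.1, 0.1]
print(tonemap(resolve_linear(samples)))      # ~0.96: edge blows out
print(tonemap(resolve_tonemapped(samples)))  # ~0.32: gradient survives
```

The bright subsample dominates the linear average, so the resolved pixel displays as near-white and the edge gradient collapses; averaging in tone-mapped space keeps it.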
 
My suggestion based on Mintmaster's idea long ago was to instead just do something like:

Resolved = InverseToneMap( (1/N) * sum_i( ToneMap( sample(i) ) ) )

Where the tone mapping operations use the exposure from the previous frame if necessary. This should be a pretty good approximation to resolving in post-tone-mapped space in most cases and shouldn't cost basically anything.

I've tried this and it works, but the problem is that the inverse tone mapping function can get really ugly for the more complex operators.
 
I've tried this and it works, but the problem is that the inverse tone mapping function can get really ugly for the more complex operators.
Ah cool. That makes me wonder what tone mapping operator BF3 uses... most of the games I've seen use something pretty simple (exponential, linear, etc), but I'm not sure about BF3. Guess I could rip out some shader ASM and try to decipher it... or repi could maybe fill us in :D
 
Ah cool. That makes me wonder what tone mapping operator BF3 uses... most of the games I've seen use something pretty simple (exponential, linear, etc), but I'm not sure about BF3. Guess I could rip out some shader ASM and try to decipher it... or repi could maybe fill us in :D

They mentioned in one of their presentations that they were using "filmic" tone mapping, so I would assume they're doing something based on John Hable's work from Uncharted 2. We use a variant of that as well at my company. It gives nice results since it's highly tuneable by the artists...unfortunately the inverse of it is not so pretty.
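To illustrate why that inverse gets ugly, here is a sketch using the commonly published Uncharted 2 curve constants (exposure bias and white-point scaling omitted); rather than writing out the messy closed form, which amounts to solving a quadratic, this just inverts numerically by bisection:

```python
# Hable "Uncharted 2" filmic curve (commonly published constants,
# exposure bias and white-point scaling omitted). The analytic inverse
# is the root of a quadratic and is messy, so this inverts by bisection.
A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30

def hable(x):
    return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

def inverse_hable(y, lo=0.0, hi=1e4, iters=60):
    """hable() is monotonically increasing on [0, inf), so bisection
    converges to the preimage of y."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if hable(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 2.5
print(abs(inverse_hable(hable(x)) - x) < 1e-6)  # True: round-trips
```

A numeric inverse like this is fine for a one-off lookup table, but per-sample in a resolve shader you would want the closed form, and that is where it stops being pretty.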
 
MJP you just linked to a page that is written in Pig Latin or something.

It's a link to wolfram alpha (attempting) to solve the inverse of John Hable's filmic tone mapping operator. Obviously it's having some trouble. :p

Anyway I had another idea, which was to use a much simpler operator (basic Reinhard) when doing the MSAA resolve and a more complex operator (filmic) when doing the tone mapping for real. It produces results that are somewhere in between a normal resolve and a resolve done with the same tone mapping operator used during the actual tone mapping step. Here's a picture showing normal 4x MSAA with filmic tone mapping, MSAA with Reinhard + inverse Reinhard used during the resolve and filmic for the final tone mapping step, and MSAA with Reinhard used during both the resolve and the final tone mapping step:

[Image: msaatonemapping.png, comparing the three resolve approaches]
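The hybrid idea can be sketched as follows (assumed operators: a Reinhard round-trip for the resolve, and a stand-in curve for the final tone map, since the exact filmic operator is engine-specific):

```python
# Sketch of the hybrid resolve described above (assumed operators):
# average the subsamples through a cheap, easily inverted Reinhard
# round-trip, then hand the result to the "real" final tone mapper.

def reinhard(x):
    return x / (1.0 + x)

def inverse_reinhard(y):
    return y / (1.0 - y)

def hybrid_resolve(samples):
    avg = sum(reinhard(s) for s in samples) / len(samples)
    return inverse_reinhard(avg)

def final_tonemap(x):
    # Stand-in for the real (e.g. filmic) operator, which is
    # engine-specific; any monotonic display curve works here.
    return (x / (1.0 + x)) ** (1.0 / 2.2)

samples = [50.0, 0.2, 0.2, 0.2]          # bright HDR edge subsamples
plain = final_tonemap(sum(samples) / len(samples))
hybrid = final_tonemap(hybrid_resolve(samples))
print(plain > hybrid)  # True: the hybrid keeps the edge less blown out
```

The appeal of the mismatched pair is that the Reinhard round-trip is nearly free and always invertible, while the final operator stays whatever the artists tuned.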
 