Nvidia's TXAA for Kepler

TXAA Overview: http://www.geforce.com/landing-page/txaa
Next-Gen TXAA Anti-Aliasing Secret World article: http://www.geforce.com/whats-new/articles/the-secret-world-txaa/ (video inside!)
TXAA Visual Q&A About Stills: http://timothylottes.blogspot.de/2012/08/txaa-visual-q-and-about-stills.html
More TXAA info: http://timothylottes.blogspot.de/2012/03/unofficial-txaa-info.html

Screenshots from here:
http://www.abload.de/img/thesecretworlddx11201hauwr.png
http://www.abload.de/img/thesecretworlddx11txa74uh8.png

FPS from here:
@1920x1200
FXAA HQ: Still: 59-61 fps (vsync, 60 Hz), Moving: 46-54 fps
TXAA 2x: Still: 45-47 fps, Moving: 30-37 fps
TXAA 4x: Still: 35-35 fps, Moving: 22-27 fps
 
Interesting that he compares it to 4x MSAA shots when TSW doesn't support MSAA at all.

I don't have an Nvidia card so I can't see it in motion, but judging from the stills the blur is too much, imo.
 
I must admit that I am getting pretty sick and tired of seeing screenshots that purport to show the quality of new techniques by comparing them against what appear to be clearly incorrect implementations of MSAA.

Tone mapping must be handled correctly per-sample for MSAA resolves to work correctly with HDR - an easy enough thing to do in DirectX 11 (or 10.1), but I keep seeing comparison screenshots where this isn't being done. It seems extremely intellectually dishonest to me to show comparisons in this way. If a technique has merits then it should stand on its own without deliberately stacking the deck.
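For anyone wanting to see the difference concretely: a minimal C++ sketch, with a simple Reinhard curve standing in for whatever tone mapper an engine actually uses (the operator and the sample values are assumptions for illustration only). In a real DX10.1/11 renderer the per-sample version would be a custom resolve shader reading the individual MSAA samples:

```cpp
#include <cstdio>
#include <vector>

// Simple Reinhard curve as a stand-in for a real engine's tone mapper.
static float tonemap(float hdr) { return hdr / (1.0f + hdr); }

// Naive resolve: average the HDR samples first, then tone map.
// This is what a fixed-function box-filter resolve of an HDR target does.
static float resolveNaive(const std::vector<float>& samples) {
    float sum = 0.0f;
    for (float s : samples) sum += s;
    return tonemap(sum / samples.size());
}

// Per-sample resolve: tone map each sample, then average. Requires reading
// individual samples (e.g. a Texture2DMS fetch in DX10.1/11).
static float resolvePerSample(const std::vector<float>& samples) {
    float sum = 0.0f;
    for (float s : samples) sum += tonemap(s);
    return sum / samples.size();
}

int main() {
    // A hypothetical edge pixel: one sample hits a bright HDR source
    // (luminance 50), three hit a dark background (0.05).
    std::vector<float> edge = {50.0f, 0.05f, 0.05f, 0.05f};
    std::printf("naive:      %.3f\n", resolveNaive(edge));      // ~0.926: nearly white
    std::printf("per-sample: %.3f\n", resolvePerSample(edge));  // ~0.281: a real gradient step
}
```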
 
Wow, just wow - this must be the blurriest AA mode since the dreaded Quincunx :oops:

Theory doesn't meet practice
 
I must admit that I am getting pretty sick and tired of seeing screenshots that purport to show the quality of new techniques by comparing them against what appear to be clearly incorrect implementations of MSAA.

Tone mapping must be handled correctly per-sample for MSAA resolves to work correctly with HDR - an easy enough thing to do in DirectX 11 (or 10.1), but I keep seeing comparison screenshots where this isn't being done. It seems extremely intellectually dishonest to me to show comparisons in this way. If a technique has merits then it should stand on its own without deliberately stacking the deck.

I've seen very few correct implementations since HDR and deferred rendering became the norm. Even the MSAA in BF3 is broken and largely worthless.
 
I must admit that I am getting pretty sick and tired of seeing screenshots that purport to show the quality of new techniques by comparing them against what appear to be clearly incorrect implementations of MSAA.

Tone mapping must be handled correctly per-sample for MSAA resolves to work correctly with HDR - an easy enough thing to do in DirectX 11 (or 10.1), but I keep seeing comparison screenshots where this isn't being done. It seems extremely intellectually dishonest to me to show comparisons in this way. If a technique has merits then it should stand on its own without deliberately stacking the deck.

You should read through Timothy Lottes' blog; he actually makes a case for why post-tone-mapping resolve is The Wrong Way To Do It. It's also not necessarily such an easy or cheap thing to do with a complex post-processing pipeline.
 
You should read through Timothy Lottes' blog; he actually makes a case for why post-tone-mapping resolve is The Wrong Way To Do It. It's also not necessarily such an easy or cheap thing to do with a complex post-processing pipeline.

I have read his entries on this - regardless of the points he makes there, and whether post-tone-mapping resolve is "correct" or not, what seems clearly incorrect is to compare the results of some new technique against MSAA done with apparently no consideration at all of any kind of "correct" method of resolving the samples, and then to use that to claim fundamental superiority.

While there may well be complexities in handling sub-sample information correctly, it seems pretty obvious that MSAA should look better than the images on the linked page. Fundamentally, if the samples are handled (reasonably) correctly, you should be able to produce a roughly linear gradient on the long edges in the final image, with a number of steps equal to the number of samples. Instead, in the linked example image there is basically no visible blending at all, which is a classic symptom of resolving the sub-sample information prior to tone mapping.

MSAA's advantage is that you have multiple samples per edge pixel, which fundamentally should never be a bad thing: all else being equal, having more data for each final pixel should let you generate a better final image than having less. If you artificially throw away the advantage MSAA gains from its additional samples by handling their processing badly, that hardly seems like a fair comparison.
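To put rough numbers on that gradient argument (same assumed Reinhard stand-in and HDR edge values as in the earlier sketch, so purely illustrative), here is the full 4x coverage ramp under each resolve order:

```cpp
#include <cstdio>

static float tonemap(float hdr) { return hdr / (1.0f + hdr); }

int main() {
    const float fg = 50.0f, bg = 0.05f;  // hypothetical HDR edge values
    // k = number of the 4 samples covered by the bright foreground.
    for (int k = 0; k <= 4; ++k) {
        float hdrAvg = (k * fg + (4 - k) * bg) / 4.0f;
        float naive = tonemap(hdrAvg);  // resolve first, then tone map
        float perSample = (k * tonemap(fg) + (4 - k) * tonemap(bg)) / 4.0f;
        std::printf("coverage %d/4: naive %.3f  per-sample %.3f\n",
                    k, naive, perSample);
    }
    // per-sample yields ~0.048, 0.281, 0.514, 0.747, 0.980: a near-linear
    // ramp with one step per sample. naive yields ~0.048, 0.926, 0.962,
    // 0.974, 0.980: almost no visible blending on the edge, matching the
    // symptom described above.
}
```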
 
The blur is too much, and then again, at least they don't try to make us believe they used MSAA in the video comparison, lol. Plus the video (even in "HD") is bad enough in full screen that you notice the blurry textures less (in this case it's the video that removes the definition). It looks like the algorithm treats texture detail and drawn lines as if they were edges.

I'm not saying they won't improve TXAA over time. But this reminds me of AMD's tent-filter AA, which had the same effect when used... (and which has since disappeared from the CCC.)

It could still be good if you have a 30" monitor or 3x monitors in Surround and can't use MSAA at all due to low performance. (I don't know which is worse: no MSAA; 2x MSAA with graphics quality lowered a bit to save performance (reduced shadow quality, HBAO - AF isn't what costs you anything); or TXAA with blurry textures.)
 
MSAA with a custom TXAA downsampling kernel shouldn't affect textures; the blur is clearly a fault of the temporal AA component, which effectively doubles the sample count per pixel but also adds blur to textures. Would it be possible to use MLAA-style search patterns, or the primitive ID, to apply the temporal AA selectively, only on pixels belonging to edges?
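As a thought experiment on that suggestion, here is a minimal sketch assuming a crude luma-discontinuity test as the edge mask (real MLAA pattern search or primitive-ID tagging would be considerably more involved, and nothing here reflects how TXAA actually works internally): temporal blending is applied only on detected edges, so texture interiors stay crisp.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical single-channel (luma) frame buffer, row-major, w*h floats.
struct Frame { int w, h; std::vector<float> luma; };

// Clamped fetch so the edge test works at the borders.
static float at(const Frame& f, int x, int y) {
    x = std::min(std::max(x, 0), f.w - 1);
    y = std::min(std::max(y, 0), f.h - 1);
    return f.luma[y * f.w + x];
}

// Temporal AA applied only where a luma discontinuity suggests a geometric
// edge; flat regions keep the crisp current-frame value. In a real renderer
// the history frame would be reprojected with motion vectors first.
static void selectiveTemporalAA(Frame& current, const Frame& history,
                                float edgeThreshold = 0.1f,
                                float historyWeight = 0.5f) {
    Frame out = current;
    for (int y = 0; y < current.h; ++y) {
        for (int x = 0; x < current.w; ++x) {
            float c = at(current, x, y);
            // Crude edge metric: max luma difference to the 4 neighbours.
            float grad = std::max(
                std::max(std::fabs(c - at(current, x - 1, y)),
                         std::fabs(c - at(current, x + 1, y))),
                std::max(std::fabs(c - at(current, x, y - 1)),
                         std::fabs(c - at(current, x, y + 1))));
            if (grad > edgeThreshold) {
                // Edge pixel: blend with the previous frame's value.
                out.luma[y * current.w + x] =
                    (1.0f - historyWeight) * c + historyWeight * at(history, x, y);
            }
        }
    }
    current = out;
}
```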
 
I'm not saying they won't improve TXAA over time. But this reminds me of AMD's tent-filter AA, which had the same effect when used... (and which has since disappeared from the CCC.)
The tent filter was far less blurry. Especially 8× MSAA + wide tent was a great combination - the real problem was performance. This reminds me of the infamous XGI Volari anti-aliasing.
 
The tent filter was far less blurry. Especially 8× MSAA + wide tent was a great combination - the real problem was performance. This reminds me of the infamous XGI Volari anti-aliasing.

Didn't the Volaris use supersampling, or is my memory weak on that one? Supersampling combined with the correct amount of LOD bias doesn't blur nearly as much; on the contrary, if you combine it with proper AF you get sharper textures than with just MSAA+AF. But yes, supersampling costs quite a bit of performance.
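For reference, the usual rule of thumb for that LOD offset (my reading of "correct amount", not anything Volari-specific) is a mip bias of -0.5 * log2(sampleCount) for ordered-grid supersampling:

```cpp
#include <cmath>
#include <cstdio>

// Rule-of-thumb mip LOD bias for ordered-grid supersampling: each axis is
// sampled sqrt(n) times more densely, so shift mip selection by
// -log2(sqrt(n)) = -0.5 * log2(n).
static float lodBiasForSSAA(int sampleCount) {
    return -0.5f * std::log2(static_cast<float>(sampleCount));
}

int main() {
    std::printf("%.2f %.2f %.2f\n",
                lodBiasForSSAA(2),   // -0.50
                lodBiasForSSAA(4),   // -1.00
                lodBiasForSSAA(8));  // -1.50
}
```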

I'd love to stand corrected, but I think that if, in the distant future, GPU architectures move in the micro-polygon direction, neither multi- nor supersampling will be sufficient for such cases. In the meantime it seems natural that IHVs will resort to various experiments based on multisampling principles, which might save performance, but also at some cost in terms of IQ.
 
Could the Kepler-and-up limitation be purely artificial, to sell some more boards? Of course Timothy isn't responding to those questions due to NDA; I'm just dubious about any kind of hardware in Kepler being required for TXAA, though maybe it helps performance.
 
Have people looked at some of his older posts showing the trade-off between sharpness and temporal aliasing (see below)? Anything with pixel-wide frequency content is going to shimmer when it moves. I realize that gamers are used to ultra-sharp graphics in games, but is it really so tough to take a slightly wider-than-one-pixel resolve pass in order to do away with shimmering?

I really don't understand this. I thought we'd eventually get to a point where someone could do a proper AA resolve for smooth, aliasing-free video without it being met with a screamfest of NO!!! TOO BLURRY!!!

Maybe in a few more years.

http://timothylottes.blogspot.co.uk/2011/10/sharpness-vs-temporal-aliasing.html
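The trade-off in that post is easy to reproduce numerically. In this toy 1-D sketch (the pattern, motion step, and filter width are all arbitrary assumptions for illustration), point-sampling content whose period is close to two pixels makes the value at a fixed pixel swing wildly from frame to frame, while a resolve filter slightly wider than one pixel nearly eliminates the swing at the cost of contrast:

```cpp
#include <cmath>
#include <cstdio>

const float kPi = 3.14159265f;

// 1-D "scene" whose detail sits near the pixel frequency (period ~2.3 px).
static float scene(float x) { return 0.5f + 0.5f * std::sin(2.0f * kPi * x / 2.3f); }

// Point sample at the pixel centre.
static float pointSample(float px, float offset) { return scene(px + offset); }

// Resolve through a Gaussian slightly wider than one pixel (sigma in px).
static float filteredSample(float px, float offset, float sigma = 0.7f) {
    float sum = 0.0f, wsum = 0.0f;
    for (float dx = -2.0f; dx <= 2.0f; dx += 0.125f) {  // dense footprint taps
        float w = std::exp(-dx * dx / (2.0f * sigma * sigma));
        sum += w * scene(px + dx + offset);
        wsum += w;
    }
    return sum / wsum;
}

int main() {
    // Move the scene by a quarter pixel per "frame" and watch pixel 10.
    for (int frame = 0; frame < 4; ++frame) {
        float offset = 0.25f * frame;
        std::printf("frame %d: point %.3f  filtered %.3f\n",
                    frame, pointSample(10.0f, offset), filteredSample(10.0f, offset));
    }
    // The point-sampled value swings over most of [0,1] between frames
    // (shimmer); the filtered value barely moves, but is "blurrier".
}
```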
 
If you're computing something at very high definition and then processing it back down to remove shimmery content (in CG, as opposed to video), wouldn't it save a lot of processing power and storage space to use lower-definition content that isn't as prone to shimmering in the first place?
 
If you're computing something at very high definition and then processing it back down to remove shimmery content (in CG, as opposed to video), wouldn't it save a lot of processing power and storage space to use lower-definition content that isn't as prone to shimmering in the first place?

I think there's a difference between definition and resolution in the way you're using them. A low-definition video source is a heavily filtered capture of a very high-resolution scene (reality). There's no aliasing in it because the optics have filtered out the high-frequency content ahead of the sampling. Here again, though, the circle of confusion of the optics is certainly larger than a single pixel of the recorded video.

A high definition source just pushes this concept to higher resolutions.

Gaming graphics, on the other hand (as opposed to CG content for movies), have mostly had to contend with one sample per pixel. We stepped up to MSAA and got a bit more, but kept the circle of confusion of the artificial camera system at one pixel, which would never happen with a real camera. Now, when people try to downsample supersampled content appropriately, gamers scream that it's too blurry!
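As a toy illustration of that "circle of confusion wider than one pixel" point (the kernel width here is my assumption; TXAA's actual resolve kernel is unpublished), compare a box resolve confined to a pixel's own subsamples with a Gaussian whose support spills into its neighbours:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Downsample a 4x horizontally supersampled row to screen pixels.
// boxResolve confines the filter to the pixel's own footprint (CoC = 1 px).
static std::vector<float> boxResolve(const std::vector<float>& ss) {
    std::vector<float> out(ss.size() / 4);
    for (size_t p = 0; p < out.size(); ++p)
        out[p] = (ss[4*p] + ss[4*p+1] + ss[4*p+2] + ss[4*p+3]) / 4.0f;
    return out;
}

// wideResolve uses a Gaussian reaching into neighbouring pixels, mimicking
// a camera whose circle of confusion exceeds one pixel.
static std::vector<float> wideResolve(const std::vector<float>& ss,
                                      float sigmaPx = 0.6f) {
    std::vector<float> out(ss.size() / 4, 0.0f);
    for (size_t p = 0; p < out.size(); ++p) {
        float centre = 4.0f * p + 1.5f;        // pixel centre, subsample units
        float sum = 0.0f, wsum = 0.0f;
        for (int s = 0; s < (int)ss.size(); ++s) {
            float d = (s - centre) / 4.0f;     // distance in pixels
            if (std::fabs(d) > 2.0f) continue; // truncate the kernel
            float w = std::exp(-d * d / (2.0f * sigmaPx * sigmaPx));
            sum += w * ss[s]; wsum += w;
        }
        out[p] = sum / wsum;
    }
    return out;
}

int main() {
    // A hard vertical edge, supersampled 4x: 8 screen pixels, 32 subsamples.
    std::vector<float> ss(32, 0.0f);
    for (int s = 16; s < 32; ++s) ss[s] = 1.0f;
    std::vector<float> box = boxResolve(ss), wide = wideResolve(ss);
    for (size_t p = 0; p < box.size(); ++p)
        std::printf("pixel %zu: box %.3f  wide %.3f\n", p, box[p], wide[p]);
    // box: an abrupt 0 -> 1 step; wide: the step eased over ~2 pixels,
    // i.e. softer, but far more stable under motion.
}
```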
 
Have people looked at some of his older posts showing the trade-off between sharpness and temporal aliasing (see below)? Anything with pixel-wide frequency content is going to shimmer when it moves. I realize that gamers are used to ultra-sharp graphics in games, but is it really so tough to take a slightly wider-than-one-pixel resolve pass in order to do away with shimmering?

While you're correct in this, I presume the issue is that we gamers are just used to the oversharp look. This means it will take some serious getting used to not to squint at the monitor when things aren't as sharp as they "usually" are.

I think the other issue, compared to movies (e.g. Blu-ray), might be sitting further away from the screen: the fact that the movie is soft isn't a problem when the detail is still finer than what the eye can resolve at that distance. So with higher-DPI displays, methods like this are probably much more interesting.
 