Nvidia's TXAA for Kepler

Because whether you resolve pre-tonemap or post-tonemap is completely independent of MSAA itself as a method. Yes, initially MSAA resolve was done purely in fixed-function hardware and happened before tone mapping, so in the DX9 era you had fundamental problems combining MSAA with HDR, but that hasn't been a limitation for a large number of hardware generations now, so the quality constraint is largely artificial.
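To illustrate why the ordering matters at all, here's a toy sketch - my own numbers and a plain Reinhard operator, not anything from NVIDIA or any particular engine - of resolving the samples of a single HDR edge pixel in both orders:

```python
# Toy illustration (made-up sample values, simple Reinhard operator) of resolving
# 4 MSAA samples of an HDR edge pixel before vs. after tone mapping.
def reinhard(x):
    return x / (1.0 + x)

samples = [16.0, 0.1, 0.1, 0.1]  # one very bright HDR sample on a geometric edge

# Resolve-then-tonemap (the old fixed-function / DX9-era order):
averaged = sum(samples) / len(samples)
wrong = reinhard(averaged)                                  # ~0.80: the edge stays blown out

# Tonemap-then-resolve (tone map per sample, then average):
right = sum(reinhard(s) for s in samples) / len(samples)    # ~0.30: the edge actually blends

print(wrong, right)
```

The bright sample dominates the linear average, so resolving first throws away most of the edge anti-aliasing that the extra samples were supposed to buy you.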

If I went away and created, say, a new texture compression technique, and then I wrote a paper for publication comparing the results of my new technique with a set of results that I obtained by running the worst available compressors for existing techniques, and then claimed victory, then during the review of my paper someone would (hopefully) point out that my methodology was fundamentally flawed, and that perhaps a fairer comparison would be in order if I wanted to be taken seriously.

If I then pointed out as justification that a lot of games had used the useless compressors, even though better ones had been available for 5 years, then I would also sincerely hope that such an explanation would not hold any water with the reviewers.

I don't see that this is any different, and I don't see why poor comparisons of this nature deserve to get a free ride. In some ways it seems worse if such a comparison is made in a more public, less technical arena like a blog, where people may be less likely to understand what is really going on, and are therefore more likely to take the presented information purely at face value.

I would perhaps not be feeling so critical if it was pointed out in the context of the comparison that it was completely unnecessary for the MSAA shot to look anything like as bad as it does, but there is nothing of this nature.

Your texture compression analogy is flawed because it only considers this issue from one angle, which is quality. In this scenario different techniques have different tradeoffs in terms of performance and memory consumption, so the situation isn't really as simple as you make it out to be.

Not that I want to be too defensive of some Nvidia marketing piece...I'm just looking at this from the point of view of a developer who might consider using TXAA (or a technique very similar to it), and who is already well aware of the problems associated with MSAA and tone mapping.
 
Why tone-mapping after resolve is wrong

[Image: b3d21.jpg]


From this thread:

http://forum.beyond3d.com/showthread.php?t=22472

What's depressing is that that thread is 7 years old and developers are still mostly fucking clueless on this subject.
 
Yes, that's why I used definition and resolution - you simply cannot apply movie-style techniques to interactive game content, except maybe in narrative cut scenes. Movies and games are two entirely different beasts, like books and a debate - one being static, the other interactive. If you used writing-style language throughout a discussion, chances are people would consider you quite odd.

I disagree. What we're talking about here is simply optics and sampling. There's never any situation where an image is taken without a lens. All that's been happening with game sampling is that games have been modelling a bad camera system - it wasn't done intentionally, but it can be done better if people can get over it. Whether that camera is used for interactive or scripted production doesn't have much to do with the basic sampling theory.

Take for example the defocus/depth of field that directs your attention to or highlights a particular part of a scene. In movies that works well, since it's literally a rigidly scripted sequence of events that the writer and director laid out that way deliberately. In a game trying to create a convincing environment, however, all this focusing business has not yet worked nearly as well (of course I can only speak from my personal impression), since with a few exceptions the game cannot know where you're looking.


I agree on this part. The game doesn't know where the person is looking so simulating depth of field is a stylistic choice.

In a game, you use tricks to convey a high definition of detail, but your source material is limited in resolution - even the multi-million-polygon models in 3D modelling programs are. Normally you derive a fairly convincing overall representation of your object from that plus normal and whatnot maps, which you apply later on at maximum detail with little or no texture aliasing. When you upsample this, Nyquist shifts in his grave, as does his theorem, and you have more detail available right up to the point where shimmering starts - detail you already paid for through the upsampling, remember. And now you downsample again without first adjusting for that possibly higher level of detail, that is, you're using inferior source material.

And that's what the "too blurry" moaners - at least I - do not like about this.
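To make the Nyquist point above concrete, here is a toy 1D sketch (my own illustration, not from the thread): decimating a near-Nyquist signal without low-pass filtering it first leaves the detail in at nearly full amplitude but at a false frequency (shimmer), whereas prefiltering removes it (blur).

```python
import math

n = 64
# A signal with detail just below the Nyquist limit (0.45 cycles per sample).
signal = [math.sin(2 * math.pi * 0.45 * i) for i in range(n)]

naive = signal[::2]                                                 # drop every other sample
boxed = [(signal[i] + signal[i + 1]) / 2 for i in range(0, n, 2)]   # box prefilter, then decimate

print(max(abs(v) for v in naive))   # ~0.95: detail survives at near-full contrast, aliased to a false frequency
print(max(abs(v) for v in boxed))   # ~0.16: detail is attenuated (blurred away) instead of aliased
```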


All of these are just hacked approaches to the same problem. TXAA will add some blur to an already blurry texture and should be used with a higher LOD bias. But trying to get all of the assets at all distances to match the frequency content they should have in screen space, by selecting the right asset in the right place, is like balancing on a pin. There are probably better ways to do it, and doing the sampling correctly in the AA resolve is a good one. It's what's done in CG movie production for this reason.

With regard to TXAA and the screenshots posted in the opening posts: assuming they are legit, it seems to me there's neither super- nor (A2C) multisample AA going on, otherwise those fences would not look as broken as they do. As it stands, for me those shots look like there's only an FX/MLAA-like filter and no higher resolution, aka higher quality source material from which the downsampling took place.

I compiled two different parts of the scene from the shots in the opening post to show what I mean - they are enlarged by a factor of 2 with no resampling.
http://imgur.com/TgFL2

There may not have been any supersampling happening on the transparent surface. Not really TXAA's fault.
 
I agree with Timothy that a larger resolve filter is worth doing. If you want more sharpness beyond that, render at a higher base resolution. The box filters we've been using for resolves really are totally trash and there's no reason a modern GPU can't do better.

LOD bias wouldn't totally solve the issue, and you could reintroduce aliasing/noise. The issue is that, unlike layering it on top of super-sampling, a wider resolve filter is not actually sampling the texture function at a higher rate (the way rendering at a higher resolution would). Thus with an LOD bias you could easily skip over texels that can be arbitrarily important to the final frame, especially when considering HDR, bloom, etc. - i.e. flicker is back!
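To make the "wider resolve filter" idea concrete, here's a minimal CPU-side sketch. This is a toy illustration, not TXAA's actual filter - the sample layout, the Gaussian weighting and the sigma are all made-up assumptions:

```python
import math

def gaussian(dx, dy, sigma=0.6):
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def wide_resolve(px, py, samples, radius=1):
    """samples[(x, y)] is a list of (ox, oy, value) subsamples for that pixel,
    with (ox, oy) in [0, 1) relative to the pixel's top-left corner."""
    total = weight = 0.0
    for ny in range(py - radius, py + radius + 1):
        for nx in range(px - radius, px + radius + 1):
            for ox, oy, value in samples.get((nx, ny), []):
                # Weight every subsample by its distance to this pixel's centre,
                # instead of box-averaging only the subsamples inside the pixel.
                dx = (nx + ox) - (px + 0.5)
                dy = (ny + oy) - (py + 0.5)
                w = gaussian(dx, dy)
                total += w * value
                weight += w
    return total / weight if weight else 0.0
```

Note that this only reweights samples the rasterizer already took; it does not sample the scene or the textures at any higher rate, which is exactly the caveat above about combining it with an LOD bias.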

„I would perhaps not be feeling so critical if it was pointed out in the context of the comparison that it was completely unnecessary for the MSAA shot to look anything like as bad as it does, but there is nothing of this nature."
Regardless of intentions, he really should have shown both cases (tone map pre/post resolve). If he still wants to make a correctness argument on the latter, he can do that by showing cases in which it fails and TXAA looks better, but comparing just to the tone-map-post-resolve case and then hand-waving an argument about how pre-resolve is wrong is a bit weak IMHO.

I'd still love to see more detailed information about exactly what this is doing and why it can't be implemented on any DX10.1+ card. What's the hardware-specific feature that is being used exactly?
 
This is no MLAA.

Please remember where we came from:
„Does he not also say that he did the basic implementation for TXAA and has no control over what each developer makes of it?“
That's what he's saying in the blog comment I sort-of linked to, and that's all that I'm referring to: he's doing the basic implementation and everything else is up to the developer - including, and that is my speculation, as I've said, based on what the lesser texture detail in the OP implies, some sort of post-processing like MLAA or FXAA. Not more, not less - and Tim's statements do not hint that this is out of the question. It is just not part of the basic TXAA design.
 
The lesser texture detail obviously comes from a different filtering method - a Gaussian filter. There is an example on his blog. He said it produces results similar to TXAA, and once again, there is no post-process AA involved in TXAA, including the Secret World TXAA. I'm not sure how many confirmations you need.
 
Please re-read what I've said:
„…As it stands, for me those shots look like there's only an FX/MLAA-like filter and no higher resolution, aka higher quality source material from which the downsampling took place.“ [my bold afterwards]

I don't need any confirmations to catch the loss of detail that you already acknowledged - and I wouldn't really describe its visual effect as obviously that of a Gaussian filter, thankyouverymuch.

I'd be happy to see further links though, to assess what's really going on there - preferably something a bit more substantial than a vague pointer to Timothy's blog in general.
 