How to do Shader AA?

Jawed

NVidia appears to be "promoting" the idea of shader based anti-aliasing, rather than doing so as a function of the ROPs - David Kirk made some comments along these lines and the Luna demo seemingly uses a shader AA technique (though I'm not absolutely sure).

So how would shader AA techniques work? What would differentiate them from super-sampling?

Also presumably there'd be a need to perform different AA techniques depending on the source of the problem, e.g. differentiating between:
  • polygon edges
  • aliasing due to lighting (e.g. specular)
  • aliasing due to normal-mapping
  • blur (e.g. depth of field) - AA not needed
Overall it seems way too early for these techniques to be promoted too heavily - there doesn't seem to be enough spare "shader power" in games like FEAR, D3 or Chronicles of Riddick.

Jawed
 
Jawed said:
So how would shader AA techniques work? What would differentiate them from super-sampling?
Slower, but doesn't cost more video memory, and the bandwidth cost is lower too. And obviously it won't do much of anything against actual "geometry" jaggies. That is, unless there's another technique I'm not aware of (like, using magic, for example.)

Uttar
 
Uttar said:
That is, unless there's another technique I'm not aware of (like, using magic, for example.)
I have to think more about it but I believe that via MRT one could store some additional coverage per pixel info to be used later in a full screen post processing pass to achieve some kind of AA...
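Something along these lines, maybe - purely a sketch of the idea, with all the names, register assignments and magic numbers made up for illustration:

// Main pass: shade as usual into RT0 and write a crude per-pixel
// "edge likelihood" into a second render target (RT1) via MRT.
sampler2D diffuseMap : register(s0);

struct PSOut
{
    float4 color : COLOR0;   // shaded result (e.g. an FP16 target)
    float4 edge  : COLOR1;   // coverage/edge hint for the post pass
};

PSOut MainPS(float2 uv : TEXCOORD0, float3 normal : TEXCOORD1)
{
    PSOut o;
    float3 n = normalize(normal);
    o.color  = tex2D(diffuseMap, uv) * saturate(dot(n, float3(0.577, 0.577, 0.577)));

    // cheap metric: how fast the normal changes across this pixel
    float e = saturate(length(fwidth(n)) * 4.0);
    o.edge  = float4(e, e, e, e);
    return o;
}

// Fullscreen post pass: soften only the pixels the edge buffer flagged.
sampler2D sceneTex : register(s0);
sampler2D edgeTex  : register(s1);
float2    texelSize;              // 1 / render target resolution

float4 ResolvePS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 centre = tex2D(sceneTex, uv);
    float4 blur   = centre;
    blur += tex2D(sceneTex, uv + float2( texelSize.x, 0));
    blur += tex2D(sceneTex, uv + float2(-texelSize.x, 0));
    blur += tex2D(sceneTex, uv + float2(0,  texelSize.y));
    blur += tex2D(sceneTex, uv + float2(0, -texelSize.y));
    blur *= 0.2;

    float edge = tex2D(edgeTex, uv).r;
    return lerp(centre, blur, edge);   // untouched where edge == 0
}

The obvious catch is the extra render target, and that the blend can only smear what's already there - it can't recover detail the way true multisampling does.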
 
Am I right in thinking (as I said in another thread) that, as we narrow down to a few engine makers, this problem should be solvable by the engine guys, and that the solution would then apply broadly across the games built on those engines?

Kirk's answer still infuriated me, as it seemed very fatalistic. I was reminded of Douglas Adams's "Somebody Else's Problem" field. What can the IHVs do to help lead this effort and hardware-accelerate the solution?
 
Ahem, the whole shader AA thingy from that interview was about some (distant?) future, definitely not about doing it now.
 
_xxx_ said:
Ahem, the whole shader AA thingy from that interview was about some (distant?) future, definitely not about doing it now.
I agree, but it also appears to be NVidia's current "excuse" for not providing MSAA when an FP16 framebuffer format is being used. While Xenos appears to allow both to work together, and certain people here seem to think that R520 will allow both together, I'm kinda unconvinced (because it seems like an awfully large overhead)...

But anyway, the topic of aliasing generated by shaders has come up a few times over the past few months, so ignoring the specific MSAA problem, what techniques are there for ameliorating shader-generated aliasing?

The cost needn't be massive if only selected surfaces suffer this kind of aliasing.

Jawed
 
Read about texture space lighting in "Advanced Lighting Techniques - Dan Baker (Meltdown 2005).ppt" from Meltdown 2k5 slides
 
Jawed,

Found this for you on emulator forums (though it's not 3d, it's in regard to post processing)

http://www.ngemu.com/forums/archive/index.php/t-57046

Luigi's "Blur AA" shader (http://www.pbernert.com/pete_ogl2_shader_luigi_aa.zip)(2 KByte Zip-File)

- vertex/fragment program from Luigi for a fullscreen smoothing effect

The last shader sounds like FSAA... has anyone tried it?

ViperXtreme
Is it just me, or is there no visible difference using the "Blur AA Shader"? The others look noticeable, though...


SimoneT
You must edit gpuPeteOGL2.fp (with Notepad) and change:
TEX color0, fragment.texcoord[ 1 ], texture[ 0 ], 2D;
to
TEX color0, fragment.texcoord[ 0 ], texture[ 0 ], 2D;

I like to run my GBA games with "Hdrish" :)
 
Uttar said:
Slower, but doesn't cost more video memory, and the bandwidth cost is lower too. And obviously it won't do much of anything against actual "geometry" jaggies. That is, unless there's another technique I'm not aware of (like, using magic, for example.)
Like nAo said, it would basically have to be some kind of post-process... you'd have to take a rendertexture and multisample it yourself, essentially. It can be a whole-image process or you can get by with an edge-detection kernel to scale the effect according to where edges are found.
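For the brute-force version, that means rendering the scene into a texture at, say, twice the resolution and box-filtering it down yourself - a rough sketch, with the resolution factor and names just made up:

// Resolve pass: hiResScene holds the scene rendered at 2x width and height;
// average the 2x2 block of hi-res texels covering each output pixel.
sampler2D hiResScene : register(s0);
float2    hiResTexel;             // 1 / (2x render target resolution)

float4 DownsamplePS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 c  = tex2D(hiResScene, uv + hiResTexel * float2(-0.5, -0.5));
           c += tex2D(hiResScene, uv + hiResTexel * float2( 0.5, -0.5));
           c += tex2D(hiResScene, uv + hiResTexel * float2(-0.5,  0.5));
           c += tex2D(hiResScene, uv + hiResTexel * float2( 0.5,  0.5));
    return c * 0.25;
}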

There were some papers way back when about using higher-order interpolative predictors to make a reasonable guess as to what the supersampled image looks like and then downsampling. You can use something like sinc^2 and some isophote smoothing, and get some pretty decent results. I just don't know how suitable that is for doing within a shader when something like a sinc kernel can have a huge number of sample points per pixel. Watch the framerate -- we get 60 fps!!! Well, except for that decimal point in the way...
 
Is he really talking about shaders for edge-AA? I thought the biggest point of Shader-AA is to fix the interior of polygons, which MSAA, HDR or not, does nothing for. As shaders become more and more complex, you're going to see more and more aliasing attributed to them. At that point you either use supersampling to fix it, or you use AA inside the shader.
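By "AA inside the shader" I mean something like evaluating the aliasing-prone term more than once across the pixel's footprint and averaging - a toy sketch, with the tap count, exponent and names picked arbitrarily:

// Supersample just the troublesome specular term: reconstruct the pixel
// footprint with ddx/ddy of the interpolated normal and take four taps.
float3 lightDir;   // towards the light
float3 eyeDir;     // towards the eye

float SpecTerm(float3 n, float3 h)
{
    return pow(saturate(dot(normalize(n), h)), 64.0);
}

float4 SupersampledSpecPS(float3 normal : TEXCOORD0) : COLOR0
{
    float3 h  = normalize(lightDir + eyeDir);
    float3 nx = ddx(normal) * 0.5;
    float3 ny = ddy(normal) * 0.5;

    float spec = SpecTerm(normal - nx - ny, h)
               + SpecTerm(normal + nx - ny, h)
               + SpecTerm(normal - nx + ny, h)
               + SpecTerm(normal + nx + ny, h);
    spec *= 0.25;

    return float4(spec.xxx, 1.0);
}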

I thought the gist of the shader-AA idea was that longer shaders + more ALU instructions for AA = less framebuffer bandwidth pressure, which implies that simplistic assertions like "You don't have enough bandwidth for HDR" are flawed, since games with these shaders will be ALU bound.
 
Fdooch said:
Read about texture space lighting in "Advanced Lighting Techniques - Dan Baker (Meltdown 2005).ppt" from Meltdown 2k5 slides

This is it:

http://www.cmpevents.com/sessions/GD/AdvancedRealTime2.ppt

(section starts at slide 131, TSL starts at 137)

So as I understand it, render the lighting onto textures, and then apply those textures to the object, so that standard filtering comes into play.

The result looks great.

Then it quickly gets hideously complicated, because generating the mipmaps is non-trivial :devilish: - it's still being researched. Ouch.
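Roughly what I think the two passes look like - just my guess at a minimal version, with the matrices, light model and so on all made up:

// Pass 1: rasterise the mesh "unwrapped" over its UV chart and evaluate the
// lighting there, so the result lands in a texture in the object's UV space.
float4x4 world;
float3   lightDir;   // towards the light
float3   eyePos;

struct VSIn  { float3 pos : POSITION; float3 normal : NORMAL; float2 uv : TEXCOORD0; };
struct VSOut { float4 pos : POSITION; float3 normal : TEXCOORD0; float3 worldPos : TEXCOORD1; };

VSOut BakeLightingVS(VSIn i)
{
    VSOut o;
    // use the UV as the output position: map [0,1] to clip space [-1,1]
    o.pos      = float4(i.uv.x * 2 - 1, 1 - i.uv.y * 2, 0, 1);
    o.normal   = mul(float4(i.normal, 0), world).xyz;
    o.worldPos = mul(float4(i.pos, 1), world).xyz;
    return o;
}

float4 BakeLightingPS(float3 normal : TEXCOORD0, float3 worldPos : TEXCOORD1) : COLOR0
{
    float3 n = normalize(normal);
    float3 v = normalize(eyePos - worldPos);
    float3 h = normalize(lightDir + v);
    float  d = saturate(dot(n, lightDir));
    float  s = pow(saturate(dot(n, h)), 64.0);   // the aliasing-prone bit
    return float4((d + s).xxx, 1.0);
}

// Pass 2: draw the mesh normally and just fetch the baked lighting, letting
// the texture units' mipmapping/filtering band-limit it. (Building correct
// mipmaps of that lighting texture is where it gets hairy.)
sampler2D bakedLighting : register(s0);

float4 ApplyLightingPS(float2 uv : TEXCOORD0) : COLOR0
{
    return tex2D(bakedLighting, uv);
}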

Jawed
 
Fdooch said:
Read about texture space lighting in "Advanced Lighting Techniques - Dan Baker (Meltdown 2005).ppt" from Meltdown 2k5 slides
Basically this is how the first Quake title worked, nice :)
 
For reference: in the bit-tech article (David's comments in quotes):

http://www.bit-tech.net/bits/2005/07/11/nvidia_rsx_interview/3.html

For those of you with super-duper graphics cards, you will have come across a problem: you can't use Anti-Aliasing when using HDR lighting, for example in Far Cry. In these cases, it's a situation where you have to choose one or the other. Why is this, and when is the problem going to get solved?


"OK, so the problem is this. With a conventional rendering pipeline, you render straight into the final buffer - so the whole scene is rendered straight into the frame buffer and you can apply the AA to the scene right there."

"But with HDR, you render individual components from a scene and then composite them into a final buffer. It's more like the way films work, where objects on the screen are rendered separately and then composited together. Because they're rendered separately, it's hard to apply FSAA (note the full-screen prefix, not composited-image AA! -Ed) So traditional AA doesn't make sense here."

So if it can't be done in existing hardware, why not create a new hardware feature of the graphics card that will do both?

"It would be expensive for us to try and do it in hardware, and it wouldn't really make sense - it doesn't make sense, going into the future, for us to keep applying AA at the hardware level. What will happen is that as games are created for HDR, AA will be done in-engine according to the specification of the developer.

"Maybe at some point, that process will be accelerated in hardware, but that's not in the immediate future."

But if the problem is the size of the frame buffer, wouldn't the new range of 512MB cards help this?

"With more frame buffer size, yes, you could possibly get closer. But you're talking more like 2GB than 512MB."

Jawed
 
overclocked said:
Does anybody know how demanding it is?
AA in a shader doesn't have a fixed cost; it depends on what your shader is doing and how you want to address the aliasing (supersampling, signal prefiltering, etc.).
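For instance, one cheap form of prefiltering is to widen (and fade) an aliasing-prone specular highlight when the normal varies a lot within the pixel, so the signal stays representable at one sample per pixel. A made-up sketch, all constants arbitrary:

// Prefiltering sketch: lower the specular exponent (i.e. widen the highlight)
// as the normal's variation across the pixel grows.
float3 lightDir;
float3 eyeDir;

float4 PrefilteredSpecPS(float3 normal : TEXCOORD0) : COLOR0
{
    float3 n = normalize(normal);
    float3 h = normalize(lightDir + eyeDir);

    // how much the normal changes over this pixel's footprint
    float variation = length(fwidth(n));

    // sharp highlight on flat areas, broad one where it would otherwise alias
    float power = lerp(64.0, 8.0, saturate(variation * 8.0));
    float spec  = pow(saturate(dot(n, h)), power);

    return float4(spec.xxx, 1.0);
}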
 
Jawed said:
"OK, so the problem is this. With a conventional rendering pipeline, you render straight into the final buffer - so the whole scene is rendered straight into the frame buffer and you can apply the AA to the scene right there."

"But with HDR, you render individual components from a scene and then composite them into a final buffer. It's more like the way films work, where objects on the screen are rendered separately and then composited together. Because they're rendered separately, it's hard to apply FSAA (note the full-screen prefix, not composited-image AA! -Ed) So traditional AA doesn't make sense here."

So if it can't be done in existing hardware, why not create a new hardware feature of the graphics card that will do both?
And if it is so different from traditional rendering and hard to do in hardware, how on earth is Xenos (supposed to be) doing it? :?:
 
Gollum said:
And if it is so different from traditional rendering and hard to do in hardware, how on earth is Xenos (supposed to be) doing it? :?:
Xenos can't address shader aliasing at all, no more than any other GPU out there.
Xenos 'just' supports MSAA on floating-point buffers.
 
Gollum said:
And if it is so different from traditional rendering and hard to do in hardware, how on earth is Xenos (supposed to be) doing it? :?:
Dunno. I'm still trying to understand why HDR isn't rendered directly into the framebuffer - what is all this compositing stuff?

By compositing does he mean texture space lighting? That doesn't seem to gel with being unable to AA composited "objects". Sigh...

Jawed
 