MSAA + alpha to mask

The problem with routing alpha to coverage and using it for cheap order-independent transparency is that you only get as many levels of transparency as you have samples. It can still work well in specific cases like thick strands of hair or string (it was used in NVIDIA's original fairy demo, IIRC) and stuff with sharp edges like grass.

I remember having moaned about this not being supported by anyone many times here in the past; it's neat to finally see it implemented in the driver (although it would be sad if it's only enabled for the 7800 series).
 
With some dithering you can effectively get more levels of transparency. It works pretty darn well unless you zoom a lot.
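
To make the dithering idea concrete: with 4 samples, plain alpha-to-coverage gives only 5 levels (0 to 4 samples lit), and adding a per-pixel ordered-dither offset before quantization trades that banding for fine noise. A minimal CPU-side sketch (purely illustrative, not how any actual driver does it; the function name and Bayer layout are assumptions):

#include <cstdint>

// Map alpha to a 4-sample coverage mask, dithered with a 2x2 Bayer pattern.
uint32_t AlphaToCoverage4x(float alpha, int px, int py)
{
    static const int bayer2x2[2][2] = { { 0, 2 }, { 3, 1 } };
    float step   = 1.0f / 4.0f;  // one coverage level
    float dither = ((bayer2x2[py & 1][px & 1] + 0.5f) / 4.0f - 0.5f) * step;

    // Quantize the dithered alpha to a number of covered samples, 0..4.
    int covered = static_cast<int>((alpha + dither) * 4.0f + 0.5f);
    if (covered < 0) covered = 0;
    if (covered > 4) covered = 4;

    return (1u << covered) - 1u;  // set the low 'covered' mask bits
}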
 
I'm wondering why this functionality wasn't added to the D3D API. It shouldn't have been too hard to do, or am I wrong here?
 
The next step is to apply TSAA to selectively antialias shaders.

My first thought too, reading about it.


DemoCoder said:
I see many ways it can be done.

1) Heuristically. Some operations are known to cause aliasing: pow(), step functions, branching. You can afford to be conservative in this, since if you get it wrong (a false positive), you just apply a little more supersampling than necessary, but the image still looks good. (See the sketch after this list.)

2) Dynamic runtime profiling. Have the driver, on a random basis, render some quads of varying sizes for each shader, perhaps making use of gradient instructions, and analyze the resulting data for aliasing. Over time, as you play the game, the driver learns more and more which shader/pipeline state needs TSAA, and keeps a cache. This is a technique to automatically "learn" game profiles. It's way too complicated, though, when a simpler technique exists that will work for most consumers:


3) Driver downloads profiles from a repository at NV. NV labs run internal profiling on games and determine which shaders/pipeline states need TSAA. The driver uses these profiles to detect when to switch on TSAA.

4) In my dreams, the driver compiler would contain a symbolic math library. The driver performs an analytic FFT on the shader. :) J/K
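
To make option (1) concrete, here's a minimal sketch of such a conservative heuristic; the function name and the opcode list are invented for illustration:

#include <string>
#include <vector>

// Conservatively flag a shader for supersampling if it contains operations
// known to introduce high frequencies. A false positive only costs a bit of
// extra supersampling, never image quality.
bool ShaderLikelyAliases(const std::vector<std::string>& opcodes)
{
    for (const std::string& op : opcodes) {
        if (op == "pow" ||    // exponentiation sharpens highlights
            op == "step" ||   // step functions are discontinuous
            op == "if" ||     // branching can create hard edges
            op == "texkill")  // alpha-test-style discontinuity
            return true;
    }
    return false;
}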

In my dreams that would be a task developers could cater for. (Honest question:) assuming applications enable antialiasing, how hard would it be, when AA is enabled, for the application to determine which parts of a scene will be multisampled and which selectively supersampled? If that's possible, then just add another option like "high quality antialiasing" underneath the usual 2x/4x/6x sample options (in order to not force it down everyone's throat).
 
Well, if the API were modified, it'd be pretty trivial to allow choosing whether or not to supersample on a per-surface level.

But what I'd really like to see is finer control over supersampling. That is, imagine a shader instruction modifier specifying that, if multisampling is enabled, the value of this register is to be calculated separately for each subsample. When such a register is read by an instruction without the hint, implicit averaging is done. This would be a generalization of hardware PCF (percentage closer filtering, used to improve the edges of shadows when using shadow maps/buffers).

Now, this would allow the developers of shader libraries to supply versions of shaders that would be antialiased nicely while not dropping performance too much.
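
As a thought experiment, here's what that per-subsample semantics would buy you, sketched on the CPU (entirely hypothetical; no such modifier exists). A hard step() evaluated once per pixel returns 0 or 1, but evaluated at 4 subsample positions and then implicitly averaged it returns fractional coverage, i.e. an antialiased edge, just as PCF does for shadow-map comparisons:

// Evaluate step(threshold, x) at 4 subsample offsets along the gradient
// and average the results, as the proposed "read without the hint" would.
float StepPerSample(float x, float dx, float threshold)
{
    static const float offsets[4] = { -0.375f, -0.125f, 0.125f, 0.375f };

    float sum = 0.0f;
    for (float o : offsets)
        sum += (x + o * dx >= threshold) ? 1.0f : 0.0f;  // per-sample step()

    return sum * 0.25f;  // implicit averaging on read
}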
 
Chalnoth said:
Well, if the API was modified, it'd be pretty trivial to allow whether to supersample or not on a per-surface level.
It's possible to do this in DirectX on G70 (RGSS as well as alpha to coverage).
 
Hrm, they exposed it in the API, rather than just allowing it to be forced in the driver? I haven't been reading up on it.
 
NVidia Developer Newsletter said:
Tip of the Month: Enabling Transparency Multisampling and Supersampling on GeForce 7 Series GPUs

To enable Transparency Multisampling in D3D, first check whether it is supported:
pd3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8, 0, D3DRTYPE_SURFACE, (D3DFORMAT)MAKEFOURCC('A', 'T', 'O', 'C')) == S_OK;

When rendering alpha-tested fragments, multisampling can be turned on by setting:
pd3dDevice->SetRenderState(D3DRS_ADAPTIVETESS_Y, (D3DFORMAT)MAKEFOURCC('A', 'T', 'O', 'C'));

To enable Transparency Supersampling, first check whether it is supported:
pd3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8, 0, D3DRTYPE_SURFACE, (D3DFORMAT)MAKEFOURCC('S', 'S', 'A', 'A')) == S_OK;

When rendering alpha-tested fragments, supersampling can be turned on by setting:
pd3dDevice->SetRenderState(D3DRS_ADAPTIVETESS_Y, (D3DFORMAT)MAKEFOURCC('S', 'S', 'A', 'A'));

Both modes can be turned off by setting:
pd3dDevice->SetRenderState(D3DRS_ADAPTIVETESS_Y, D3DFMT_UNKNOWN);
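
In practice, usage would look something like this (a minimal sketch; pd3d, pd3dDevice and DrawFoliage() are assumptions for illustration):

const D3DFORMAT kATOC = (D3DFORMAT)MAKEFOURCC('A', 'T', 'O', 'C');

bool atocSupported =
    pd3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                            D3DFMT_X8R8G8B8, 0, D3DRTYPE_SURFACE,
                            kATOC) == S_OK;

if (atocSupported) {
    // Route alpha-test coverage through the multisample mask for this batch.
    pd3dDevice->SetRenderState(D3DRS_ADAPTIVETESS_Y, kATOC);
    DrawFoliage();  // alpha-tested geometry (hypothetical helper)
    pd3dDevice->SetRenderState(D3DRS_ADAPTIVETESS_Y, D3DFMT_UNKNOWN);
} else {
    DrawFoliage();  // plain alpha test fallback
}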
 
Heh. . . Kinda makes me wonder if DirectX should have a mechanism for allowing extensions, so that IHVs don't have to resort to such hackery.
 
Xmas said:
When you model sharp-edged objects with alpha, a scaled alpha->coverage mapping is a good thing. Just as it would be a good thing to do such scaling for alpha blending.

But as you say, it's not good all the time. It should be done dynamically. When minifying, the mapping should be [0, 1] -> [0, 1]. But when magnifying, the input interval should be narrow enough for the fuzzy edge to be ~1 pixel wide. (Interval proportional to texture coordinate derivatives.)

That could give you something rather close to supersampled alpha test.
 
True. Unfortunately, alpha test/alpha to coverage has no direct relation to texture sampling, so it's kinda hard to give these stages a concept of min/mag.
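
The shader itself does see derivatives, though, so the scaled mapping could be done there. Here's a sketch of Xmas's idea, written as plain C++ but mirroring per-fragment shader math; alphaDeriv stands in for something like fwidth(alpha), and all names are hypothetical:

#include <algorithm>

float ScaleAlphaForCoverage(float alpha, float alphaDeriv)
{
    // Minifying (large derivative): keep the identity [0,1] -> [0,1] ramp.
    // Magnifying (small derivative): steepen the ramp so the fuzzy edge
    // stays roughly one pixel wide. The upper bound is an arbitrary clamp.
    float scale = std::clamp(1.0f / std::max(alphaDeriv, 1e-4f), 1.0f, 16.0f);

    // Re-center the ramp around the usual alpha-test threshold of 0.5.
    return std::clamp((alpha - 0.5f) * scale + 0.5f, 0.0f, 1.0f);
}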
 