Putting color in destination alpha (idea by pcchen)

ector

I'm trying to implement DoomIII graphics (well, at least per pixel lighting and stencil shadows) on GF2 class hardware, and I found this idea by pcchen in an old DoomIII thread:

If it is not possible to compute diffuse in one pass, it will be much more complicated. A method is to use the destination alpha as a temporary storage for intensity of the light, and later modulate it with the color of the light into the color buffer (use "dest_color + dest_alpha X light_color"). This will be required if you want to implement an accurate per-pixel lighting with per-pixel attenuation on a GeForce 1/2 class hardware.
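The blend pcchen describes maps directly onto fixed-function alpha blending: render the light's color and let the blend unit weight it by whatever intensity is sitting in destination alpha. A minimal CPU sketch of that arithmetic (the names `Color` and `blendLight` are mine, not from the thread):

```cpp
#include <cassert>
#include <cmath>

// CPU sketch of the blend pcchen describes:
//   framebuffer = dest_color + dest_alpha * light_color
// On hardware this is SRCBLEND = DESTALPHA, DESTBLEND = ONE,
// with the light color as the incoming fragment.
struct Color { float r, g, b; };

Color blendLight(const Color& dest, float destAlpha, const Color& light) {
    return Color{ dest.r + destAlpha * light.r,
                  dest.g + destAlpha * light.g,
                  dest.b + destAlpha * light.b };
}
```

So a half-strength light intensity in dest alpha adds half the light color on top of whatever is already in the color buffer.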

I'm just wondering: how the h*ll would you do it in D3D? As I see it, I'd have to, for each light:
pass 1: do a DOTPRODUCT3 and put the result in destination alpha
pass 2: modulate the material color texture by some attenuation factor (maybe another texture) and blend it in additively, multiplying by destination alpha

But you can't put the result of a dotprod in the alpha channel! D3D docs about dotprod3:

Also, note that as a color operation this does not update the alpha; it just updates the RGB components.

However, in DirectX 8.1 shaders you can specify that the output be routed to the .rgb or the .a components or both (the default). You can also specify a separate scalar operation on the alpha channel.

So, I'm stumped. Any ideas?
 
Why would it not be possible? If you can't do it directly (have you tried setting the write mask to the alpha channel?) you'll just have to use a mov instruction after putting the result in a temp register (which the driver's compiler might optimise out if the hardware supports it).

Or am I misunderstanding what you are trying to do ?

I found this in the doc :

dp3 can be co-issued as long as dp3 is writing the color channels and the other instruction is writing the alpha channel.

This sounds to me like dp3 can also write to the alpha channel, but obviously you can't use co-issue anymore, since the result writes would clash.

K-
 
I think that in OpenGL 1.4 DOT3/DOT4 operations (in the BLEND stage) are only allowed for color channels, not for Alpha.
 
It's on GF2 hardware, so I have no pixel shaders to put that mov instruction in! Also, I'm in D3D, not OpenGL; that shouldn't matter very much, though.
The documented D3D texture stage states give no way of putting the result of a texture stage dot product in the alpha channel.

Anyway, I think I have "solved" the problem by reversing the order of computations and doing it like this:

For each light:
draw the stencil shadows and enable stencil testing, then:
pass #1:
* layer 1: Attenuation (texture) -> Dest Alpha
* layer 2: - (maybe some attenuation map)

pass #2: (blended: src*dest alpha, dst*one)
* layer 1: Diffuse * Texture1
* layer 2: Modulate by TFACTOR to colorize

And finally, after drawing all lights, modulate by the material texture.
What do you think? I'm in the process of building up a framework to test this right now.
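A rough D3D8 state sketch of those two passes (untested; `dev` is an `IDirect3DDevice8*`, the attenuation value is assumed to live in the texture's alpha channel, and whether GF2 exposes `D3DRS_COLORWRITEENABLE` needs a `D3DPMISCCAPS_COLORWRITEENABLE` caps check):

```cpp
// Pass 1: write attenuation into destination alpha only.
dev->SetRenderState(D3DRS_COLORWRITEENABLE, D3DCOLORWRITEENABLE_ALPHA);
dev->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
dev->SetTextureStageState(0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
dev->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
// ... draw the geometry ...

// Pass 2: add diffuse * texture * TFACTOR, weighted by dest alpha.
dev->SetRenderState(D3DRS_COLORWRITEENABLE,
                    D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
                    D3DCOLORWRITEENABLE_BLUE);
dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_DESTALPHA);
dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
dev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_CURRENT);
dev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_TFACTOR);
// ... draw the geometry again ...
```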
 
IIRC, setting ALPHAOP to DOTPRODUCT3 (along with COLOROP) makes the dot result replicate to the alpha channel. The documentation is not very clear about this, but it works on GeForce 2/3/4 and also on the reference rasterizer. I used this to simulate a self-multiplication specular effect in D3D. The situation is similar: both diffuse and specular need more than one pass. I use dest alpha to store N dot H (H is specified per-vertex but normalized per-pixel).
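In D3D8 state terms, the trick amounts to something like this (a sketch of my understanding, not verified on every driver; stage 0 is assumed to deliver the normalized H as CURRENT):

```cpp
// Replicate the DOT3 result into the alpha channel by setting
// ALPHAOP to DOTPRODUCT3 alongside COLOROP. Works on GeForce 2/3/4
// and the reference rasterizer in my experience.
dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_DOTPRODUCT3);
dev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);  // normal map
dev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);  // normalized H from stage 0
dev->SetTextureStageState(1, D3DTSS_ALPHAOP,   D3DTOP_DOTPRODUCT3);
dev->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
dev->SetTextureStageState(1, D3DTSS_ALPHAARG2, D3DTA_CURRENT);
```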

I am not sure whether it also works on other cards. However, IMHO such techniques are only a fallback for older cards (GeForce 2 or Radeon). On newer cards D3D pixel shaders are easier to use and more powerful.

I am looking forward to your creation :)
 
nVidia has an old document on their site describing a D3D hack that can be used to access the full register combiner functionality on the TNT (using 8 stages).

Does anyone know if this still works with the current Detonators, and for the GeForce 1/2 family?
(IIRC the GeForce combiners are almost the same except for the dot3.)
 
Hey pcchen, thanks a lot for the information :)
I really wish this stuff was better documented...

In your notation, N is the light "direction" (vector to the light source), right?
And you normalize it per-pixel by putting it in a texcoord, then in stage 1 you index a cubemap, and in stage 2 you dot the result with the bump map and put that in dest alpha.
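The cubemap in question would be a normalization cubemap: each texel stores the normalized version of the direction that addresses it, packed into RGB. A CPU sketch of the packing (my own helper names; real code would fill all six faces and upload them):

```cpp
#include <cassert>
#include <cmath>

// Pack a (not necessarily unit-length) direction into an RGB texel of a
// normalization cubemap: normalize, then map [-1, 1] to [0, 255].
struct Texel { unsigned char r, g, b; };

Texel packNormalized(float x, float y, float z) {
    float len = std::sqrt(x * x + y * y + z * z);
    x /= len; y /= len; z /= len;
    Texel t;
    t.r = (unsigned char)((x * 0.5f + 0.5f) * 255.0f + 0.5f);
    t.g = (unsigned char)((y * 0.5f + 0.5f) * 255.0f + 0.5f);
    t.b = (unsigned char)((z * 0.5f + 0.5f) * 255.0f + 0.5f);
    return t;
}
```

The DOT3 stage then unpacks both arguments from [0, 255] back to [-1, 1] before taking the dot product, which is why both the bump map and the cubemap use this biased encoding.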

That actually seems to make sense :)
And then you do the attenuation and multiply by the base texture in the second pass..

I haven't thought the specular through yet, but I assume it's mostly the same: the direction will be computed as a reflection vector, and then somewhere you do self-multiplication to get the exponent up..

This will be interesting to implement and get working.. I'm looking forward to all the weird, funny bugs I'm sure I'm going to get before it works correctly :)
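My reading of the self-multiplication trick (an assumption about how pcchen's passes work, not confirmed in the thread): dest alpha starts out holding s = N dot H, and each extra pass re-renders s with SRCBLEND = DESTALPHA and DESTBLEND = ZERO, multiplying another factor of s in, so n extra passes give s^(n+1). A CPU sketch of that arithmetic:

```cpp
#include <cassert>
#include <cmath>

// Sketch of "self-multiplication" specular: dest alpha starts as s = N.H,
// and each extra pass that renders s with SRCBLEND = DESTALPHA,
// DESTBLEND = ZERO computes src.a * dest.a, i.e. multiplies in another s.
float selfMultiply(float s, int extraPasses) {
    float destAlpha = s;               // pass 0 writes N.H into dest alpha
    for (int i = 0; i < extraPasses; ++i)
        destAlpha = s * destAlpha;     // src.a * DESTALPHA + dest.a * ZERO
    return destAlpha;
}
```

So three extra passes over an s of 0.5 leave 0.5^4 in dest alpha, i.e. a specular exponent of 4.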
 