I'm just learning to write shaders and I have some questions:
I've been trying to write a volumetric fog shader, and I was having a hell of a time trying to get rid of these artifacts:
Then I tried installing the DirectX SDK and switching to the reference rasterizer, and it worked fine:
I'm using a buffer of type R32F to store the depth values, but support for anything beyond the standard A8R8G8B8 seems quite buggy. Switching the depth buffer to any floating-point or 16-bit color format causes problems under the HAL device. Is there any hope for this?
Currently, to get this effect, I am (rough HLSL sketch after the list):
1. rendering the back faces to an R32F back depth buffer with D3DBLENDOP_ADD on
2. rendering the front faces to an R32F front depth buffer with D3DBLENDOP_ADD on
3. using front.r - back.r to get the 'fog value'
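Roughly, the shaders look like this (a minimal sketch; matWorldViewProjection and matWorldView are RenderMonkey-style predefined constants, and the sampler names are just placeholders):

float4x4 matWorldViewProjection;
float4x4 matWorldView;

struct VS_OUT
{
    float4 pos   : POSITION;
    float  depth : TEXCOORD0;
};

// Depth pass, used for both the front-face and back-face draws.
VS_OUT vsDepth(float4 pos : POSITION)
{
    VS_OUT o;
    o.pos   = mul(pos, matWorldViewProjection);
    o.depth = mul(pos, matWorldView).z;   // eye-space depth
    return o;
}

float4 psDepth(float depth : TEXCOORD0) : COLOR
{
    // Only .r survives in the R32F target, but all four must be written.
    return float4(depth, 0.0f, 0.0f, 1.0f);
}

// Final pass: compute the 'fog value' from the two depth textures (step 3).
sampler frontDepth;
sampler backDepth;

float4 psFog(float2 uv : TEXCOORD0) : COLOR
{
    float fog = tex2D(frontDepth, uv).r - tex2D(backDepth, uv).r;
    return float4(fog.xxx, 1.0f);
}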
I am using RenderMonkey to write the shader and have a GeForce 7800 GT.
Playing around with the render states rarely seems to work either. I would expect something like this to work on a single buffer (see the .fx-style sketch after the list):
1. set D3DRS_BLENDOP to D3DBLENDOP_ADD
2. render the front faces using the depth position as the color
3. set D3DRS_BLENDOP to D3DBLENDOP_SUBTRACT
4. render the back faces to the same texture using the depth position as the color
5. now render your depth buffer to the screen, multiplied by some scale factor to make its values visible.
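As an .fx-style sketch of the intent (in RenderMonkey the states are actually set through the render-state editor; vsDepth and psDepth are the depth shaders sketched above):

technique SingleBufferFog
{
    // Front faces add their eye-space depth into the target...
    pass Front
    {
        AlphaBlendEnable = true;
        BlendOp          = Add;        // D3DRS_BLENDOP = D3DBLENDOP_ADD
        SrcBlend         = One;
        DestBlend        = One;
        CullMode         = CCW;        // front faces only, assuming default winding
        VertexShader     = compile vs_2_0 vsDepth();
        PixelShader      = compile ps_2_0 psDepth();
    }
    // ...then back faces are subtracted from it.
    pass Back
    {
        AlphaBlendEnable = true;
        BlendOp          = Subtract;   // D3DRS_BLENDOP = D3DBLENDOP_SUBTRACT
        SrcBlend         = One;
        DestBlend        = One;
        CullMode         = CW;         // back faces only
        VertexShader     = compile vs_2_0 vsDepth();
        PixelShader      = compile ps_2_0 psDepth();
    }
    // Step 5 would then draw this texture to the screen, scaled by a constant.
}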
But that has weird artifacts even in the reference renderer, and my card seems to ignore the 'D3DBLENDOP_SUBTRACT' entirely. Any idea why this might be?
I also have some general questions:
RM refuses to compile a pixel shader unless all 4 color components are set. When using a target texture of a format like R32F, why do I still need to set them all? Are the remaining components just unused?
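For illustration, with hypothetical minimal shaders:

// RM rejects this, because color.gba are never written:
float4 psBad(float depth : TEXCOORD0) : COLOR
{
    float4 color;
    color.r = depth;
    return color;
}

// This compiles, even though the target only keeps the red channel:
float4 psGood(float depth : TEXCOORD0) : COLOR
{
    return float4(depth, 0.0f, 0.0f, 0.0f);
}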