Issues with a volumetric shader & general pixel shader questions.

clevijoki

Newcomer
I'm just learning to write shaders and I have some questions:

I've been trying to write a volumetric fog shader, and I was having a hell of a time trying to get rid of these artifacts:

[screenshot: the artifacts]

Then I tried installing the dx sdk and switching to the reference and it worked fine:

[screenshot: the correct result on the reference rasterizer]

I'm using a buffer of type R32f to store the depth values, but support for anything other than the standard A8R8G8B8 format seems to be quite buggy. Changing the depth buffer to any floating-point or 16-bit color format causes problems when using the HAL. Is there any hope for this?

Currently to get this effect I am:
1. rendering the back faces to an R32f back depth buffer with D3DBLENDOP_ADD on
2. rendering the front faces to an R32f front depth buffer with D3DBLENDOP_ADD on
3. using front.r - back.r to get the 'fog value'

I am using rendermonkey to write the shader and have a gf7800gt.
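
In case it matters, the render-state side of it looks roughly like this. DrawBackFaces/DrawFrontFaces and AccumulateFogDepth are just placeholder names for my actual draw calls, I'm assuming the default clockwise front-face winding, and I'm using ONE/ONE blend factors:

#include <d3d9.h>

// Placeholder draw calls for the fog volume's faces (implemented elsewhere);
// the pixel shader they use writes depth into the red channel.
void DrawBackFaces(IDirect3DDevice9* device);
void DrawFrontFaces(IDirect3DDevice9* device);

void AccumulateFogDepth(IDirect3DDevice9* device,
                        IDirect3DSurface9* backDepthRT,
                        IDirect3DSurface9* frontDepthRT)
{
    // Additive blending so overlapping fog layers sum their depths.
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_BLENDOP,   D3DBLENDOP_ADD);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);

    // 1. Back faces into the back depth buffer (cull the front faces).
    device->SetRenderTarget(0, backDepthRT);
    device->Clear(0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0);
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
    DrawBackFaces(device);

    // 2. Front faces into the front depth buffer (cull the back faces).
    device->SetRenderTarget(0, frontDepthRT);
    device->Clear(0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0);
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
    DrawFrontFaces(device);

    // 3. A full-screen pass then samples both targets and uses the
    //    difference of the accumulated depths as the fog value.
}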

Playing around with the render states rarely seems to work either. I would expect something like this to work on a single buffer:

1. set D3DRS_BLENDOP to D3DBLENDOP_ADD
2. render the front faces using the depth position as the color
3. set D3DRS_BLENDOP to D3DBLENDOP_SUBTRACT
4. render the back faces to the same texture using the depth position as the color
5. now render your depth buffer to the screen, multiplied by some number to bring its values into a visible range.

But that has weird artifacts even in the reference renderer, and my card seems to ignore the 'D3DBLENDOP_SUBTRACT' entirely. Any idea why this might be?
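
Roughly what I'm doing for that test (same placeholder names as before; as I understand it, D3DBLENDOP_SUBTRACT computes source minus destination, so the back-face pass should leave the difference of the two depths in the buffer):

void AccumulateFogDepthSingleBuffer(IDirect3DDevice9* device,
                                    IDirect3DSurface9* depthRT)
{
    device->SetRenderTarget(0, depthRT);
    device->Clear(0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0);

    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);

    // Front faces: add their depth into the buffer.
    device->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADD);
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
    DrawFrontFaces(device);

    // Back faces: SUBTRACT gives (src - dest), i.e. back-face depth minus
    // whatever front-face depth is already in the target.
    device->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_SUBTRACT);
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
    DrawBackFaces(device);

    // Final pass: draw depthRT to the screen, scaled into a visible range.
}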

I also have some general questions:

RM refuses to compile a pixel shader unless all 4 color components are set. When using a target texture of a format like R32f, why do I still need to set them all? Are the remainder just unused?
 
... Some more questions

Is there a way to render _just_ a depth buffer using the built-in z-buffer logic without also drawing color values?
 
I'm pretty sure that no cards out there have any support for blending on 32-bit float surfaces. Try 16-bit float.
 
I'm using a buffer of type R32f to store the depth values, but support for anything other than the standard A8R8G8B8 format seems to be quite buggy.
It's not buggy, it's limited. Your card (and every other card currently available) does not support blending on FP32 surfaces. Check for D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING with CheckDeviceFormat before you try to use blending on anything other than the valid backbuffer formats.
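
Something along these lines (the X8R8G8B8 display-mode format and the function name are just examples):

#include <d3d9.h>

// Returns true if the HAL device can do post-pixel-shader blending into a
// render target of the given format (e.g. D3DFMT_R16F or D3DFMT_R32F),
// assuming an X8R8G8B8 display mode.
bool SupportsBlending(IDirect3D9* d3d, D3DFORMAT rtFormat)
{
    HRESULT hr = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT,
        D3DDEVTYPE_HAL,
        D3DFMT_X8R8G8B8,
        D3DUSAGE_RENDERTARGET | D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING,
        D3DRTYPE_TEXTURE,
        rtFormat);
    return SUCCEEDED(hr);
}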

RM refuses to compile a pixel shader unless all 4 color components are set. When using a target texture of a format like R32f, why do I still need to set them all? Are the remainder just unused?
Because the compiler doesn't know anything about the render target that will be bound; you don't have to recompile the shader when you switch render targets. When the target is a single-channel format like R32f, the extra components are simply discarded.

Is there a way to render _just_ a depth buffer using the built-in z-buffer logic without also drawing color values?
Yes, but on NVidia cards you can use that Z-buffer only for shadow maps, because sampling from it will automatically apply a comparison. You cannot read the actual depth value from it.
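
For the "without drawing color values" part specifically, you can also just mask out color writes during the Z-only pass; roughly (with device being your IDirect3DDevice9*):

// Z-only pass: leave depth test/write on, mask out every color channel.
device->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
// ...draw the scene...
// Restore color writes for the normal passes.
device->SetRenderState(D3DRS_COLORWRITEENABLE,
    D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
    D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);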

Anyway, I don't think you really need FP32 precision for this.
 
Yes, but on NVidia cards you can use that Z-buffer only for shadow maps, because sampling from it will automatically apply a comparison. You cannot read the actual depth value from it.
True, but this is more an API limitation than a hardware one.
 
And not even a real API limitation, just map it as a color texture with the appropriate depth and it'll do what you want.
Can you do this on D3D9? PS3 is deleting my D3D memories.. :)
 
And not even a real API limitation, just map it as a color texture with the appropriate depth and it'll do what you want.
I presume that filtering is not possible with that approach?
 
Well, you could filter it... but I doubt the outcome would make sense :)
BTW... how would you filter multiple Z samples? min(), max()? :)
As the OP is doing fog, I'd imagine that you'd need a smooth function and so bilinear would seem to be a likely choice. <shrug>
 
I presume that filtering is not possible with that approach?

Sure it is (assuming the hardware supports filtering on the format); it just behaves as a standard texture, though with worse cache performance, because it's likely tiled rather than swizzled.
 
Can you do this on D3D9? PS3 is deleting my D3D memories.. :)

Actually I'm not positive; I'm assuming so.
It's been a while since I did serious work in D3D on a PC.
I'm working on a PC project currently, but not graphics.
 
And not even a real API limitation, just map it as a color texture with the appropriate depth and it'll do what you want.
How (in D3D9)?

As the OP is doing fog, I'd imagine that you'd need a smooth function and so bilinear would seem to be a likely choice. <shrug>
Interesting idea, using a low-res fog thickness map with bilinear interpolation. Adding some cloudy noise texture on top of that might give quite convincing results.
 
How (in D3D9)?

As I said above I'm not positive this is possible in D3D on a PC.

I would try calling GetSurfaceLevel on an appropriately formatted texture, and passing that into SetDepthStencilSurface. But that's probably trapped in D3D, so you might be SOL.

in which case you're down to rendering depth to the color buffer, which sucks.
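
Roughly what I mean (all names, the sizes and the D24X8 format are just examples; as said, plain D3D9 may simply reject this):

#include <d3d9.h>

IDirect3DTexture9* CreateDepthTexture(IDirect3DDevice9* device,
                                      UINT width, UINT height)
{
    // Ask for a texture that can also be used as a depth-stencil surface.
    IDirect3DTexture9* depthTex = NULL;
    if (FAILED(device->CreateTexture(width, height, 1,
                                     D3DUSAGE_DEPTHSTENCIL, D3DFMT_D24X8,
                                     D3DPOOL_DEFAULT, &depthTex, NULL)))
        return NULL;  // driver refused the depth format as a texture

    // Bind its top-level surface as the depth-stencil target.
    IDirect3DSurface9* depthSurf = NULL;
    depthTex->GetSurfaceLevel(0, &depthSurf);
    device->SetDepthStencilSurface(depthSurf);
    depthSurf->Release();  // the device keeps its own reference

    // After the depth-only pass: SetTexture(0, depthTex) and sample it
    // (on NVidia that sample comes back as a shadow-map comparison result).
    return depthTex;
}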
 
As I said above I'm not positive this is possible in D3D on a PC.

I would try calling GetSurfaceLevel on an appropriately formatted texture, and passing that into SetDepthStencilSurface. But that's probably trapped in D3D, so you might be SOL.
What is "an appropriately formatted texture"? R32F? That would only work if the hardware used FP32 Z values (which it doesn't).

in which case you're down to rendering depth to the color buffer, which sucks.
Indeed, but the OP needs blending (and Z in view space). And he can probably do with lower precision.
 
What is "an appropriately formatted texture"? R32F? That would only work if the hardware used FP32 Z values (which it doesn't).


Indeed, but the OP needs blending (and Z in view space). And he can probably do with lower precision.


NVidia used to support both 32-bit fp Z and 16-bit fp Z. Did they remove those formats?
 
NVidia used to support both 32-bit fp Z and 16-bit fp Z. Did they remove those formats?
As far as I know NVidia never supported 32-bit Z (only 16 and 24), and the last ATI chip that did was the R2xx. Though that will change with D3D10.
However, with IEEE 754 floats at least two bits go to waste, since screen-space Z is always between 0 and 1 (the sign bit is always zero, and roughly half the exponent range is never reached).
 
As far as I know NVidia never supported 32-bit Z (only 16 and 24), and the last ATI chip that did was the R2xx. Though that will change with D3D10.
However, with IEEE 754 floats at least two bits go to waste, since screen-space Z is always between 0 and 1.
It wasn't on Dreamcast - IIRC it could use any positive float.
 