read depthstencil in pixel shader
Basically that... on DX11 how can I read the depth buffer in the pixel shader?
1. Create the depth buffer's Texture2D using the appropriate typeless format for the desired depth/stencil format (DXGI_FORMAT_R24G8_TYPELESS for DXGI_FORMAT_D24_UNORM_S8_UINT, DXGI_FORMAT_R32G8X24_TYPELESS for DXGI_FORMAT_D32_FLOAT_S8X24_UINT, DXGI_FORMAT_R32_TYPELESS for DXGI_FORMAT_D32_FLOAT)
2. Create a depth stencil view for the Texture2D using the desired depth/stencil format
3. Create a shader resource view for the Texture2D that uses an appropriate texture format based on whether you want to read the depth values or the stencil values. So for instance if you want to read depth from a DXGI_FORMAT_D24_UNORM_S8_UINT depth buffer, you would use DXGI_FORMAT_R24_UNORM_X8_TYPELESS.
4. Bind the shader resource view to the pixel shader stage, and sample it like any other texture.
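Steps 1–3 might look roughly like this for a DXGI_FORMAT_D24_UNORM_S8_UINT depth buffer (a sketch only; `device`, `width`, and `height` are assumed to already exist in your code):

// Step 1: typeless texture so both a DSV and an SRV can be created from it.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* depthTex = nullptr;
device->CreateTexture2D(&texDesc, nullptr, &depthTex);

// Step 2: the depth-stencil view uses the "D" format...
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
ID3D11DepthStencilView* dsv = nullptr;
device->CreateDepthStencilView(depthTex, &dsvDesc, &dsv);

// Step 3: ...while the shader resource view exposes only the depth bits.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView* srv = nullptr;
device->CreateShaderResourceView(depthTex, &srvDesc, &srv);

One caveat: the runtime won't let you have the resource bound as a writable DSV and as an SRV at the same time, so either unbind the DSV before sampling, or create the DSV with the read-only flags (D3D11_DSV_READ_ONLY_DEPTH / D3D11_DSV_READ_ONLY_STENCIL).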
thank you!.. that worked like a charm!
uhmm... If I create the DepthStencil texture only with the SRV bind flag (no DSV), do a ClearDepthStencilView() to some value (say 1.0f), then read the DepthStencil's value in the pixel shader and write it into the RenderTarget using:
Texture2D depthStencil : register( t0 );
SamplerState ss : register( s0 );

float PS( float2 tex : TEXCOORD0 ) : SV_TARGET
{
    return depthStencil.Sample( ss, tex ).r; // Sample() returns float4; take the red channel, which holds depth
}
I always get 0.0f
for simplicity I have configured the DepthStencil to be DXGI_FORMAT_R32_TYPELESS, and the RenderTarget with a DXGI_FORMAT_R32_FLOAT format.
am I doing something wrong?
I have found the problem: all the values I was clearing the depth buffer with were beyond 1.0f, which DirectX resets back to 0.0f.