smooth particles + half-size render target

gjaegy

Hi,

I am experiencing an issue due to MSAA, and I am not sure how to properly resolve it.

What I am doing:

- render the normal scene objects to two full-size render targets:
- RT_col = color RT
- RT_camZ = camera depth (camera Z)
- resolve RT_camZ to a non-MSAA texture (RT_camZ_no_aa)
- set a half-screen-size render target (RT_col_half)
- bind "RT_camZ_no_aa" as a texture
- render the particle system into this render target; in the pixel shader, use Load() to sample "RT_camZ_no_aa" and compare it with each particle fragment's camera-space depth (computed by the VS and passed to the PS) to compute the fragment's opacity
- finally, blit "RT_col_half" onto the main full-size color RT (RT_col); a sketch of this composite pass follows below

This allows me to save some bandwidth/fill rate when rendering the particle system.
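For reference, the final composite is just a full-screen quad that samples RT_col_half with a bilinear sampler and alpha-blends it over RT_col. A minimal sketch of the composite pixel shader (the texture/sampler names here are placeholders, not my actual code):

Code:
// half-resolution particle buffer, bound as a shader resource for the composite pass
Texture2D<float4> ParticleBufferHalf;
SamplerState SamplerLinearClamp;

struct BlitVSOutput
{
	float4 vPosition : SV_Position;
	float2 vTexCoord : TEXCOORD0;
};

// full-screen pass; the output is alpha-blended over the full-size RT_col
float4 CompositeParticles(BlitVSOutput IN) : SV_Target
{
	// bilinear upsample of the half-size particle buffer
	return ParticleBufferHalf.Sample(SamplerLinearClamp, IN.vTexCoord);
}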

However, I am experiencing some issues at the edges of the normal objects; see the screenshot below (red sections):

[screenshot: fx.png]


I am not completely sure how to resolve this; any ideas? I guess it has to do with MSAA, but I have no idea how to properly solve it!

Thanks for your help !
Greg
 
- render the normal scene objects to two full-size render targets:
- RT_col = color RT
- RT_camZ = camera depth (camera Z)
Ewww ... using XNA? :)
- resolve RT_camZ to a non-MSAA texture (RT_camZ_no_aa)
Why exactly? The opacity calculation is highly non-linear, blended Z values won't give correct results AFAICS. Still doesn't explain why the edges get more smoke than the background though ... that just seems a bug in your code rather than the approach.
 
I am using my own engine, which is based on D3D10.

Why exactly? The opacity calculation is highly non-linear, blended Z values won't give correct results AFAICS. Still doesn't explain why the edges get more smoke than the background though ... that just seems a bug in your code rather than the approach.

Opacity is non-linear, I agree, but that is not what is causing this effect.
I guess this is due to two factors:
- first of all, when rendering the particle system, I get the scene camera depth by reading only the first sample of the RT_camZ render target. This means the opacity is computed per fragment, where it should probably be computed per MSAA sample. However, I don't know how, or whether, I can compute it for each individual sample (see the per-sample sketch after the code below).

- then, maybe the effect is amplified by the fact that I only render the particle system into a render target that is half the size of the back buffer, and then upsample this render target when blitting it over the back buffer. I am not sure whether this has an influence.

Below is the code I use to render the particles (linear opacity for simplicity):

Code:
float ComputeAlphaFromDepth(float2 vScreenPos, float fZCamSpace, float fCameraZFar, float fDistanceDivider, Texture2DMS<float,imNUM_SAMPLES> oDepthTexture)
{
	// scene depth behind the particle, taken from the first MSAA sample only
	float fSceneDepth = oDepthTexture.Load((int2)vScreenPos, 0) * fCameraZFar;
	// distance between the particle fragment and the scene geometry behind it
	float fDistObjectToScene = fSceneDepth - fZCamSpace;
	// linear fade: fully transparent where the particle intersects the geometry
	float fAlpha = saturate(fDistObjectToScene / fDistanceDivider);
	return fAlpha;
}


// diffuse texture
Texture2D<float4> basetexture0;

// scene camera depth render target
Texture2DMS<float,imNUM_SAMPLES> basetexture5;

FragmentOutput mainfrag( VertexOutput IN)
{
	FragmentOutput OUT;
	
	// sample diffuse texture
	float4 cColor = basetexture0.Sample(SamplerDiffuse, IN.vTexCoord0.xy);

	// particle camera Z distance for current fragment
	float fZCamSpace = IN.vCamSpacePos.z /  IN.vCamSpacePos.w;
	// screen coordinate for current fragment (= scene camera depth RT sampling coordinates)
	float2 vScreenPos = IN.vProjectedPos.xy / IN.vProjectedPos.w;
	// opacity for current fragment
	float fAlpha = ComputeAlphaFromDepth(vScreenPos, fZCamSpace, im_camerazfar, fDistanceDivider, basetexture5);

	OUT.cCOLOR = IN.cColor * cColor;
	OUT.cCOLOR.a *= fAlpha;	
		
	return OUT;
}
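
Regarding the first point above, a per-sample version of the opacity function might look something like this (just an untested sketch, and it obviously costs imNUM_SAMPLES loads per particle fragment):

Code:
// untested sketch: average the soft-particle alpha over all MSAA samples
// instead of using only sample 0
float ComputeAlphaFromDepthAllSamples(float2 vScreenPos, float fZCamSpace, float fCameraZFar, float fDistanceDivider, Texture2DMS<float,imNUM_SAMPLES> oDepthTexture)
{
	float fAlphaSum = 0.0f;
	for (int iSample = 0; iSample < imNUM_SAMPLES; iSample++)
	{
		// scene depth for this particular MSAA sample
		float fSceneDepth = oDepthTexture.Load((int2)vScreenPos, iSample) * fCameraZFar;
		float fDistObjectToScene = fSceneDepth - fZCamSpace;
		fAlphaSum += saturate(fDistObjectToScene / fDistanceDivider);
	}
	return fAlphaSum / imNUM_SAMPLES;
}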
 
What's basetexture0?

Anyway, the way you are doing this now, you will always be blending foreground edges with fog densities calculated for the background ... even if you took all the sub-samples into account, that would still be true.

The problem is described here:

http://http.developer.nvidia.com/GPUGems3/gpugems3_ch23.html

They have some solutions ... not the one I would personally try, but theirs has the advantage of being tried and tested. (I would try to segment blocks of subsamples into two regions with their own Z values and then create two separate particle buffers, one for each region per pel; during compositing, you check how much each region should contribute based on how many subsamples belonged to which region.)
 
Thanks for the great hint, I am currently implementing the suggested solution ;) Actually, I even have the book and didn't remember this chapter ;)

Have you tried it already? I am not sure which source render target they use to extract the edges with the Sobel filter: the main render target (color buffer), or the depth buffer (or a similar separate depth render target)?

Cheers,
Greg
 
Nope, it has my interest, but I don't do a lot of 3D programming. So take everything I say with a grain of salt; regardless, I do think you should run the edge detection on the depth buffer.
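
Something along these lines, i.e. a standard 3x3 Sobel over the resolved, non-MSAA depth target (your RT_camZ_no_aa), writing out an edge mask for the fix-up pass. Purely illustrative: the identifiers and the threshold below are made up.

Code:
// illustrative only: Sobel edge detection on the resolved camera-depth target
Texture2D<float> SceneDepthNoAA;

float4 DepthSobelEdge(float4 vPos : SV_Position) : SV_Target
{
	int2 vPixel = int2(vPos.xy);

	// 3x3 neighbourhood of depth values around the current pixel
	float d00 = SceneDepthNoAA.Load(int3(vPixel + int2(-1, -1), 0));
	float d10 = SceneDepthNoAA.Load(int3(vPixel + int2( 0, -1), 0));
	float d20 = SceneDepthNoAA.Load(int3(vPixel + int2( 1, -1), 0));
	float d01 = SceneDepthNoAA.Load(int3(vPixel + int2(-1,  0), 0));
	float d21 = SceneDepthNoAA.Load(int3(vPixel + int2( 1,  0), 0));
	float d02 = SceneDepthNoAA.Load(int3(vPixel + int2(-1,  1), 0));
	float d12 = SceneDepthNoAA.Load(int3(vPixel + int2( 0,  1), 0));
	float d22 = SceneDepthNoAA.Load(int3(vPixel + int2( 1,  1), 0));

	// horizontal and vertical Sobel gradients
	float fGx = (d20 + 2.0f * d21 + d22) - (d00 + 2.0f * d01 + d02);
	float fGy = (d02 + 2.0f * d12 + d22) - (d00 + 2.0f * d10 + d20);

	// arbitrary threshold on the gradient magnitude -> binary edge mask
	float fEdge = sqrt(fGx * fGx + fGy * fGy) > 0.05f ? 1.0f : 0.0f;
	return float4(fEdge, fEdge, fEdge, 1.0f);
}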
 