HLSL thickness shader question

J.I. Styles

Newcomer
Hi, first post in these parts. I'm a professional games artist, and I've been playing with hlsl for quite a few months now -- so far I've made a skin shader I'm happy enough with to release publicly which you can find on my site here.

The translucency/SSS effect in that skin shader is a cheapy kludge, so I want to go one better. To do that I'll need to author a thickness shader. The thing I'm stuck on is how to determine the depth on a per pixel or vertex level.

I'm thinking I'll need to cast a ray from all the back facing polys along the light vector, and test when it intersects a front facing poly, then record the distance. I've no idea how to implement ray casts like this in code however. If someone could point me in the right direction, I'd be very grateful :)

cheers!
-Joel
http://www.jistyles.com
 
Not at all sure if I understand what you're after, but I will guess that it is an estimate of the thickness of your model as seen from the light source. In that case, my first idea would be something like the following:
  • Set up an empty, high precision render-target, and render your object as seen from the light source.
  • For back-facing polygons, compute the distance to the light source and add the result to the render-target contents. For front-facing polygons, again compute the distance to the light source, but this time subtract the result from the render-target contents.
Once done with this process, you can bind the render-target as a texture that you then wrap around your model, supplying you with approximate thickness values. This process is, however, likely to fail if the model is self-shadowing as seen from the light source (which might be fixable with some variation of the depth peeling method, but I am not guaranteeing anything), and you may run into under/over-sampling issues similar to those seen with shadow mapping.
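
Something like the following HLSL could serve as a starting point. It is only a rough sketch: the names (LightPosition, FaceSign and so on) are made up, and it assumes you draw the mesh twice into a high precision, additively blended float target as seen from the light, once with only the back faces (FaceSign = +1) and once with only the front faces (FaceSign = -1).

// Rough sketch only. Assumes a float render target with additive blending,
// and two draws of the mesh: back faces with FaceSign = +1, front faces
// with FaceSign = -1, both rendered from the light's point of view.

float4x4 WorldViewProj;  // object space -> light clip space
float4x4 World;          // object space -> world space
float3   LightPosition;  // light position in world space
float    FaceSign;       // +1 for the back-face draw, -1 for the front-face draw

struct VS_OUTPUT
{
    float4 Position : POSITION;
    float3 WorldPos : TEXCOORD0;
};

VS_OUTPUT ThicknessVS(float4 Position : POSITION)
{
    VS_OUTPUT Out;
    Out.Position = mul(Position, WorldViewProj);
    Out.WorldPos = mul(Position, World).xyz;
    return Out;
}

float4 ThicknessPS(VS_OUTPUT In) : COLOR
{
    // Signed distance to the light; the additive blend accumulates
    // (back distance - front distance), i.e. thickness along the light ray.
    float dist = length(In.WorldPos - LightPosition);
    return float4(dist * FaceSign, 0.0, 0.0, 1.0);
}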
 
arghh, been trying to plug away at this, but keep hitting walls. Thanks for the help arjan, it helped me aim in a better direction... I've been able to mock it all up into a working methodology, just need to code the damn thing :D

What I'm stuck on at the moment, and what I'm having real difficulty with, is understanding how to set up and render to buffers that I could put each pixel's position into, then use that data and convert it to texture UV space.

Would I use a pass script with RenderDepthStencilTarget and multiple passes? If so, what other script options would I need to take into account? How do I feed this data back into the pixel shader so I can actually use it?

Any info, code samples, or URLs to info on this would be really appreciated. Hell, if I got some good hands-on help with this, I'd feel obligated to give some art assets in return as payment for services :)
 
oh yeah, just thought I'd share too - here's an update to my skin shader:
[attached image: hlslskinwip05.jpg]


That's still with the older SSS kludge... I really want to push this to a volume-thickness type of translucency, hence the need to get the thickness/depth.
 
Rendering the distance to a high precision target has many flaws, especially with concave objects.
The better solution would be to render the front faces into an FP16 target using additive blending, then render the back faces using subtractive blending (in OpenGL you need to change the glBlendEquation from the default GL_FUNC_ADD to GL_FUNC_SUBTRACT). Blur that target a few times to reduce the high frequencies and you get yourself a decent thickness map.
This technique was used in NVIDIA's Luna demo.
Hmmm come to think of it, I should write a demo like that sometime soon :)
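
Roughly, in D3D9 effect-state terms it could look like the pass setup below. This is only a sketch: DepthVS/DepthPS are placeholder shaders that just output the fragment's depth (or distance to the light) in the red channel, and the cull modes assume the default D3D winding. Note that Subtract in D3D computes source minus destination, so with the front faces accumulated first, the back-face pass leaves (back depth - front depth) in the target, at least where there is one front and one back layer per pixel.

// Sketch only: an FP16 render target is bound, DepthVS/DepthPS are
// hypothetical shaders that output linear depth in the red channel.
technique ThicknessBlend
{
    // Pass 1: front faces, added into the cleared target.
    pass FrontAdd
    {
        CullMode         = CCW;   // keep front faces (default winding assumed)
        ZEnable          = false;
        AlphaBlendEnable = true;
        BlendOp          = Add;
        SrcBlend         = One;
        DestBlend        = One;
        VertexShader     = compile vs_2_0 DepthVS();
        PixelShader      = compile ps_2_0 DepthPS();
    }
    // Pass 2: back faces, "subtractive" blending. Subtract = src - dest,
    // so the target ends up holding backDepth - frontDepth (the thickness)
    // as long as there is one front and one back layer per pixel.
    pass BackSubtract
    {
        CullMode         = CW;    // keep back faces
        ZEnable          = false;
        AlphaBlendEnable = true;
        BlendOp          = Subtract;
        SrcBlend         = One;
        DestBlend        = One;
        VertexShader     = compile vs_2_0 DepthVS();
        PixelShader      = compile ps_2_0 DepthPS();
    }
}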
 
Where's the best place to read up on setting up render targets and MRTs? Any good code chunk examples of setting up render targets? I'm just getting confused with the actual code side of rendering it, and then how to pass that texture into the next pass.
 
Maybe the SDKs from NVIDIA and ATI, and the DX SDK from MS? I haven't used D3D so I don't know if there is a better place to look. A Google search for render to texture + d3d gave this: http://www.riaz.de/tutorials/d3d16/d3d16.html

Also, I think Abba Zabba meant rendering the back faces with additive and the front faces with subtractive blending. But maybe it could also be computed by simply using additive blending and multiplying by -1 if the fragment is front facing.
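
That single-pass variant might look roughly like this in ps_3_0 (just a sketch; LightPosition and the interpolated world position are placeholder inputs), with additive blending and no culling:

float3 LightPosition;   // light position in world space (placeholder)

// ps_3_0 exposes the VFACE register: its sign is positive for front-facing
// fragments and negative for back-facing ones.
float4 SignedDepthPS(float3 WorldPos : TEXCOORD0,
                     float  Face     : VFACE) : COLOR
{
    float dist = length(WorldPos - LightPosition);
    // Back faces contribute +dist, front faces -dist; the additive blend
    // sums them into backDistance - frontDistance per pixel.
    return float4(dist * -sign(Face), 0.0, 0.0, 1.0);
}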
 
Hi, new around this place as well, but I couldn't help but post a reply.
-You could render a depth map with the back faces from the light's point of view (a la shadow mapping) - that's pass 1
-Then render a second depth map with the front faces - that's pass 2
-Then project these two textures onto your model just like you would with shadow mapping, but instead of doing shadows, compute the difference between the values - that's how thick your model is (rough sketch below)...
-Then by using a grayscale texture you could encode how much light you want to see through your model, and using a fresnel effect you can have this filter out the light from the center of your model and have it more translucent toward the edges...
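
A rough idea of what the projection pass could look like in HLSL (the sampler and input names are just placeholders, and the depth maps are assumed to store linear distance from the light):

sampler2D BackDepthMap;    // back-face depths from the light (pass 1)
sampler2D FrontDepthMap;   // front-face depths from the light (pass 2)

float4 ThicknessPS(float4 LightTexCoord : TEXCOORD0) : COLOR
{
    // Same projective lookup you would use for shadow mapping.
    float2 uv    = LightTexCoord.xy / LightTexCoord.w;
    float  back  = tex2D(BackDepthMap,  uv).r;
    float  front = tex2D(FrontDepthMap, uv).r;

    // How much material the light has to travel through at this point.
    float thickness = back - front;
    return float4(thickness.xxx, 1.0);
}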

that's how I'm doing it in my engine anyways....

Hope that helps!
 
Felipe Orellana, if you just want the extents of an object in a direction, i.e. just the min/max even for concave objects, you could get this in a single pass where you render the depths of all faces (depth testing disabled) using MIN blending for RGB and MAX blending for alpha, for example. You just need two components of the texture, red and alpha. You have to take care of clearing the colors to the right values and ignoring negative values... but it can be done and might be faster.
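
Something like this for the pass states, roughly (sketch only; ExtentsVS/ExtentsPS are placeholder shaders that write the fragment depth into both the red and alpha channels, and the target needs to be cleared to a large red value and zero alpha first):

// Sketch of the single-pass min/max idea in D3D9 effect states.
// Clear the target to (bigDepth, 0, 0, 0) so the first fragment wins both.
pass Extents
{
    ZEnable                  = false;   // depth testing disabled
    CullMode                 = None;    // all faces in one pass
    AlphaBlendEnable         = true;
    SrcBlend                 = One;     // factors are ignored by MIN/MAX anyway
    DestBlend                = One;
    BlendOp                  = Min;     // RGB keeps the nearest depth
    SeparateAlphaBlendEnable = true;
    SrcBlendAlpha            = One;
    DestBlendAlpha           = One;
    BlendOpAlpha             = Max;     // alpha keeps the farthest depth
    VertexShader             = compile vs_2_0 ExtentsVS();
    PixelShader              = compile ps_2_0 ExtentsPS(); // writes (depth, 0, 0, depth)
}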

I'm interested in knowing if there are any problems with this, such as only FP16 blending being supported currently, etc.?
 
Well, if you wind up with precision issues with FP16 blending, you could always make use of the other two channels as well to improve precision by a little bit. You should be able to get fairly close to an effectively 16-20 bit mantissa that way.
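
One way that could work, assuming the depth values are normalized to [0,1): write the coarse part of the value into one channel and a scaled-up remainder into another, then recombine after blending. Purely a sketch, not anything I've tested:

// Sketch: split a [0,1) depth into a coarse part and a scaled remainder so
// that two FP16 channels together hold more precision than one.
float4 EncodeDepthPS(float Depth : TEXCOORD0) : COLOR
{
    float coarse = floor(Depth * 256.0) / 256.0;   // top ~8 bits
    float fine   = (Depth - coarse) * 256.0;       // remaining bits, rescaled to [0,1)
    return float4(coarse, fine, 0.0, 1.0);
    // After blending, reconstruct with: depth = red + green / 256.0
    // (additive blending sums each channel independently, so the per-channel
    //  sums still recombine correctly).
}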
 
Hi krychek,
That's a pretty good idea, but if you want your light to be occluded by other objects behind the SSS object, doing 2 passes is the only way I see....

I have been thinking about using the face register in PS_3_0 to find the direction a pixel faces, then rendering to multiple textures (2 here), one for front and one for back pixels, all in one pass... I have not tested this, but it may work... not sure though.
 
Okay I am not really sure what you need to store in your two depth textures. Are these depth textures to be used only with the translucent object or are they used for shadowing the scene too?

Here is what I did to get the thickness buffer (I think it should work for getting the extents too):

1. Render the scene depths to a texture.
2. Enable additive blending and render the "scene-clamped" signed depths of the translucent object using the scene depths (single pass) with no culling of faces. By scene-clamped depths I mean min(fragmentDepth, sceneDepth_at_current_location). For front faces the depth is negated.

To get the extents of the translucent object rather than the thickness modify step 2 as:

2. Disable depth testing and depth writes. Render the "scene-clamped" depths of the translucent object to component R or component A depending on whether it is front facing or back facing (single pass), and set up the blending to MIN for RGB and MAX for alpha.

Doing it this way would give you an extents buffer where the scene can occlude the translucent object, and it would also handle the translucent object intersecting the scene. It also gives you the correct extents for a concave translucent object, though using just the extents would give an incorrect result. I am not certain this method will really improve performance, though!
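
For the thickness version of step 2, the pixel shader could look something like this (sketch only; SceneDepthMap, the screen-space UV and the interpolated linear depth are placeholder inputs, and it needs ps_3_0 for the face register):

sampler2D SceneDepthMap;   // scene depths from step 1

float4 ClampedDepthPS(float2 ScreenUV : TEXCOORD0,
                      float  Depth    : TEXCOORD1,   // this fragment's linear depth
                      float  Face     : VFACE) : COLOR
{
    float sceneDepth = tex2D(SceneDepthMap, ScreenUV).r;

    // "Scene-clamped" depth: never count material lying behind the opaque scene.
    float d = min(Depth, sceneDepth);

    // Front faces are negated, so the additive blend accumulates
    // sum(back depths) - sum(front depths) = the visible thickness.
    return float4(d * -sign(Face), 0.0, 0.0, 1.0);
}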

Chalnoth:
Yes, of course, we could always use the other components, but it would be nice not to have to resort to that!
 