Subsurface scattering and partial translucency...?

Recently, a well-known 3D game engine developer said in his annual speech that he'd put subsurface scattering and partial translucency into his next-generation engine.

Now, the question is: could a shader that does the two above-mentioned things be written for Doom 3, or would the one-interaction-program limitation prevent it?

Thanks!
 
Depending on how it is accomplished, the Doom 3 engine may not be capable of the effect. Effects such as sub-surface scattering may require plenty of additional data.

Also, using a single interaction shader would not be the best of ideas. Not all materials behave the same way, and most probably don't require sub-surface scattering at all. You could potentially do it in a single shader with dynamic branching, but there's really no point unless there's a variety of effects in different places on the same surface. Even then, that's a special-case scenario.
 
Well, if Carmack writes his own PRT engine for OpenGL (for DX it's already been made for you; it's part of the DX SDK), then yes, these effects can be done on pretty much any graphics card from, I'd say, a GeForce FX 5900 or ATI 9800 and up.

But writing a PRT engine is not a small task; it will probably take a good three months. Pretty much, you replace the per-vertex light calculations with the precomputed SH values.
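Just to illustrate what that replacement looks like, here's a minimal CPU-side sketch in Python. The coefficient count and data are made up for illustration; a real engine would do this per vertex, per colour channel, in the vertex pipeline.

```python
import numpy as np

def sh_diffuse(transfer_coeffs, light_coeffs):
    # The usual per-vertex N.L style light calculation is replaced by a dot
    # product of the vertex's precomputed transfer coefficients with the
    # light's SH projection (9 coefficients = order-2 SH, one colour channel).
    return float(np.dot(transfer_coeffs, light_coeffs))

vertex_transfer = np.random.rand(9)  # baked offline and stored with the vertex
light_sh = np.random.rand(9)         # the current light projected into SH
print(sh_diffuse(vertex_transfer, light_sh))
```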
 
Hey guys, I'm just kinda thinking aloud here, and this is from the perspective of a 3D artist rather than a coder, but:

I had an idea for a possible SSS shader hack: the amount of light bounced through a surface should be linked to its thickness relative to the light source. So, in theory, couldn't you shoot out a bunch of samples from each light in the scene and calculate the difference in Z depth between the entry and exit points of the geometry? Surely with this thickness info you could add some kind of "inner bounce colour" to the object's ambient colour levels.

Now, I know this would require a load of calculations per light source, but I was just wondering if the coders amongst you could tell me how unfeasible this is.
 
Well, you really can't do that in real time; cards aren't powerful enough to cast that many rays. This is why the SH values have to be calculated beforehand. Even precalculating SH at a per-pixel level is too much right now :)
 
Matt B said:
Now, I know this would require a load of calculations per light source, but I was just wondering if the coders amongst you could tell me how unfeasible this is.
It would basically double the number of shadow-mapping passes you have, at least in one implementation. Some effects already use this kind of pass structure, I think. Now, I've only seen the stuff on places like nVidia's developer site, but it may be possible: two depth passes (cull clockwise, cull counterclockwise), then subtract the two values in the final pixel shader.
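As a rough CPU-side stand-in for what the pixel shader would do (in a real renderer the two depth buffers come from the two culling passes rendered from the light's point of view, and the extinction constant here is completely made up):

```python
import numpy as np

def thickness_from_depth(front_depth, back_depth):
    # front_depth: depth of the nearest surface (back faces culled)
    # back_depth:  depth of the farthest surface (front faces culled)
    # The difference approximates how much material the light passes through.
    return np.clip(back_depth - front_depth, 0.0, None)

def translucency(thickness, extinction=4.0):
    # Fake "light bleeding through" factor: thin areas transmit more light.
    # extinction is an invented material constant, not a measured value.
    return np.exp(-extinction * thickness)

# hypothetical 4x4 depth buffers in [0, 1], as seen from the light
front = np.full((4, 4), 0.30)
back = np.full((4, 4), 0.55)
bleed = translucency(thickness_from_depth(front, back))
print(bleed)
```

That bleed factor is the kind of thing you'd tint with the "inner bounce colour" idea from the post above.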

Found it (skip the glow): http://developer.nvidia.com/docs/IO/8230/D3DTutorial_EffectsNV.pdf Is that like what you're talking about? They use it for volumetric fog and translucency.

And here's another (very similar) presentation where they apply it to subsurface scattering: http://developer.nvidia.com/object/real_time_translucency_gdc2004.html
 
You need lots of samples for SSS

I did a bit of research on SSS for a ShaderX3 proposal before I gave up (and got a job instead).

The maths pretty much boiled down to this (there's a rough sketch after the list):

-For each texel, grab all the texels within a radius r, where r is the distance over which SSS has a visible effect, measured on the order of millimetres.
-For each texel within radius r, calculate its SSS effect via some sub-surface scattering transfer function.
-Sum all the SSS values to find the output SSS value.
-And for added complexity, take into account whether the neighbouring texels are in shadow or not.
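Here's that sketch: a brute-force, completely unoptimised Python version, with a Gaussian falloff standing in for whatever the real transfer function would be, and ignoring shadows and per-texel normals.

```python
import numpy as np

def sss_gather(irradiance, radius_texels, sigma):
    # For every texel, sum the lighting of all texels within radius_texels,
    # weighted by a Gaussian falloff (a stand-in for a proper SSS transfer
    # function), then normalise by the total weight.
    h, w = irradiance.shape
    out = np.zeros_like(irradiance)
    r = radius_texels
    for y in range(h):
        for x in range(w):
            total, weight_sum = 0.0, 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and dx * dx + dy * dy <= r * r:
                        weight = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
                        total += weight * irradiance[ny, nx]
                        weight_sum += weight
            out[y, x] = total / weight_sum
    return out

# hypothetical 16x16 texture-space irradiance map, one colour channel
lit = np.random.rand(16, 16)
scattered = sss_gather(lit, radius_texels=3, sigma=1.5)
```

Even with a tiny radius of 3 texels you're already touching around 29 neighbours per output texel, which is where the "> 32 samples" problem below comes from.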

Of course, lots of optimisations could be made here and there, and this doesn't take into account light transmission from the other side of the material. However, I have yet to see (or think of) a way to do this in real time, even on SM3.0 hardware, as the big problems are:

-Multiple samples per pixel; it can easily get > 32 unless you start optimising.
-You need to know the normal of each texel you sample (in order to determine the percentage of light entering).
-You need to know where your neighbouring texels are, which makes things hard if you hit a texture boundary.

Now, granted, I haven't done a Google search on SSS for over a year now, and as with all things 3D, it's not how you implement an algorithm, it's how you hack something up that looks and acts like said algorithm, but in real time :)

Oh, this is about doing it in real time, instead of using PRT, so maybe this post is for naught.
 
The key word in PRT is "Precomputed"!

Yeah, the pre-computation procedure could be like what you describe, but really it could be any global illumination or lighting algorithm. All you need to know is how much light comes from each direction at a target pixel. Then, using that information, you calculate coefficients for a mathematical equation that approximates that lighting distribution. In the Reality Engine, and most other implementations, this is done with spherical harmonics; that is, the equation they use is the spherical harmonic expansion. So after you've done all this computation and generated your coefficients, you can use them in the real-time engine and quickly determine a lighting factor that tells you how much light from a given direction the pixel will receive, by evaluating the SH formula with the supplied coefficients and the direction of the incoming light. Then this factor is simply used to scale the lighting contribution from that light source. That's the basic idea anyway.
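In Python, the general shape of that maths looks roughly like the sketch below. This is not how the Reality Engine or the DX SDK actually implements it; the sample count, the band-0/1-only basis, and the upper-hemisphere visibility function are all just illustrative assumptions.

```python
import numpy as np

def sh_basis(d):
    # First four real spherical harmonic basis functions (bands 0 and 1),
    # evaluated at a unit direction d = (x, y, z).
    x, y, z = d
    return np.array([0.282095,          # Y_0^0
                     0.488603 * y,      # Y_1^-1
                     0.488603 * z,      # Y_1^0
                     0.488603 * x])     # Y_1^1

def project_to_sh(incoming_light_fn, n_samples=1024):
    # Offline step: Monte Carlo projection of "how much light arrives from
    # this direction" onto the SH basis, giving one coefficient per basis
    # function. incoming_light_fn(d) -> scalar is whatever the offline GI
    # pass computed for this pixel/vertex.
    rng = np.random.default_rng(0)
    coeffs = np.zeros(4)
    for _ in range(n_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                    # uniform direction on the sphere
        coeffs += incoming_light_fn(d) * sh_basis(d)
    return coeffs * (4.0 * np.pi / n_samples)     # sphere area / sample count

def lighting_factor(coeffs, light_dir):
    # Runtime step: evaluate the stored coefficients for the direction of an
    # incoming light; the result scales that light's contribution.
    return float(np.dot(coeffs, sh_basis(light_dir)))

# hypothetical pixel that sees the sky above and is blocked below
coeffs = project_to_sh(lambda d: 1.0 if d[2] > 0.0 else 0.0)
print(lighting_factor(coeffs, np.array([0.0, 0.0, 1.0])))
```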
 
Haha, walking home from work I realised I didn't quite get my point across.

SSS, self-shadowing, and all the other stuff can be done nicely using SH or even PRT; however, from what I understood, it all broke down if the mesh wasn't static. Well, at least it did for SSS.

Unless I missed a paper explaining how to use SH on dynamic meshes.
 
SH works fine in a dynamic world, but it can't be done in real time yet. PRT breaks down on animated objects. LPRT works on semi-animated objects (great for leaves, trees, etc.).

It is possible to do PRT or LPRT for animated objects: pretty much, you precompute the PRT data for each keyframe of animation and save it into a 3D texture. It's possible to do it with a 2D or 1D texture as well, but artifacts might start showing up. Anyway, it doesn't really matter, because this uses so much video RAM that it's useless at this point.
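To put some numbers on the memory problem (every figure here is an assumption, just to show how quickly it adds up):

```python
# Back-of-the-envelope video memory cost for per-keyframe PRT.
vertices = 3000       # hypothetical animated character mesh
keyframes = 60        # baked animation frames
coefficients = 9      # order-2 SH; per colour channel this triples
bytes_per_value = 4   # 32-bit float

total_bytes = vertices * keyframes * coefficients * bytes_per_value
print(total_bytes / (1024 * 1024), "MB per animation, per colour channel")
# ~6 MB for one animation and one channel; multiply by channels and by every
# animation in the game and a 128 MB card fills up fast.
```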


 
AFAIK, it should also work for real time if you limit the kind of information you store. For example, if you only store localised SSS information, then animation shouldn't be much of an issue. That is to say, if you store the SSS of, say, the skin of a model, then it doesn't matter too much how the model is animated, because the information is relative to the surface of the skin, not the whole model.

BTW, SH can be done in real time no problem. See the linkies above; they use it to store the PRT information. Or are you talking about another application of SH?
 
I'm talking about SH, not precomputed radiance transfer, and as you said, it works well for a fairly static scene. This is because you treat the precomputed SH values just like the vertex lighting calculations in today's games: it's just a final value that you calculate the real-time values from. The main problem is that SH values take the entire world's illumination into account, which is why they can't really be computed in real time on animated objects; since the vertex normals change, all the per-vertex SHs on the animated object have to be recalculated.

In PRT, you precalculate and save the SH values within the mesh vertex data, similar to a precompile phase for lightmaps, which are then used instead of the vertex light calculations.

http://www.cs.ucf.edu/graphics/RXU/CASA2004.pdf

This is similar to what I was talking about. It's doable, but it takes up a good deal of video memory.
 