What is subsurface scattering?

For anyone who is interested, here's a video that explains it:

http://graphics.ucsd.edu/~henrik/animations/BSSRDF-SIGGRAPH-ET2001.avi

What Wikipedia says:

Subsurface scattering (or SSS) is a mechanism of light transport in which light penetrates the surface of a translucent object, is scattered by interacting with the material, and exits the surface at a different point. The light will generally penetrate the surface and be reflected a number of times at irregular angles inside the material, before passing back out of the material at an angle other than the angle it would reflect at had it reflected directly off the surface. Subsurface scattering is important in 3D computer graphics, being necessary for the realistic rendering of materials such as marble, skin, and milk.

It's awesome that this is now possible in real time. I think this is the second big step after HDR lighting with tone mapping.
 
Subsurface scattering occurs in all non-metallic objects. In past computer graphics literature, subsurface scattering has been approximated with a Bidirectional Reflectance Distribution Function, or BRDF. The concept of a BRDF comes from physics (i.e. classical geometrical-optics radiometry). It relates the irradiance incident (light) from one given direction to its contribution to the reflected radiance (light) in another direction. For the BRDF it is assumed that irradiance incident on a surface location is reflected at that same surface location; the BRDF describes a local illumination model. If we know the incident radiance field at a surface location and have a well-defined BRDF, then we can compute the reflected radiance in all directions at that location. This is accomplished by integrating the product of the BRDF and the incident radiance over the hemisphere of incoming directions at that surface location.

You have been using a BRDF every time you created a shader, whether you knew it or not. When you first started writing shaders you might have written something like this:


Ci = surface_color * Oi * (Ka * ambient() + Kd * diffuse(Nf));


The diffuse statement here is actually making a call to:


color diffuse( normal N )
{
    color C = 0;
    illuminance( P, N, PI/2 )
        C += Cl * normalize(L).N;
    return C;
}


The illuminance statement controls the integration, over the hemisphere at point P with north pole N (Figure 1), of a procedural reflectance model (i.e. the product of the BRDF and the incident radiance) over the incoming light at point P.

[Figure 1: surface_shader_state.gif]
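For reference, the RSL diffuse() loop above can be mimicked in plain Python. This is an illustrative translation only; the list of (color, direction) pairs stands in for the renderer's light sources, which RSL supplies implicitly inside the illuminance block.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = dot(v, v) ** 0.5
    return tuple(x / n for x in v)

def diffuse(N, lights):
    """Sum Cl * (normalize(L) . N) over all lights in the upper hemisphere,
    mirroring the illuminance(P, N, PI/2) loop in the RSL code."""
    C = [0.0, 0.0, 0.0]
    for Cl, L in lights:
        cos_theta = dot(normalize(L), N)
        if cos_theta > 0.0:          # the PI/2 cone: ignore lights below the horizon
            for i in range(3):
                C[i] += Cl[i] * cos_theta
    return tuple(C)
```

With the normal pointing up and a single white light directly overhead, this returns full intensity; a light below the surface contributes nothing.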


Inside illuminance blocks two additional variables are available: Cl, the light color, and L, the light direction. So Cl is the incident radiance at point P from direction L, and normalize(L).N is the BRDF. This BRDF describes a very special case of reflectance called Lambertian, or ideal diffuse, reflectance. This type of reflectance is very uncommon in nature, but has been used extensively in computer graphics because it is relatively inexpensive to compute. Figure 2 graphically demonstrates a Lambertian reflectance model on its left and a more common general reflectance model on its right.

[Figure 2: brdf.gif]
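The Lambertian model above corresponds to a constant BRDF of albedo/pi. A quick numerical check (plain Python Monte Carlo, not part of any renderer) confirms why that 1/pi factor is there: integrating the BRDF times cos(theta) over the hemisphere gives back exactly the albedo, so the surface never reflects more energy than it receives.

```python
import math
import random

def uniform_hemisphere(rng):
    """Uniformly sample a direction on the hemisphere around +z."""
    u1, u2 = rng.random(), rng.random()
    z = u1                                   # cos(theta), uniform on [0, 1)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def lambert_reflectance(albedo, samples=200000, seed=1):
    """Monte Carlo estimate of the hemispherical reflectance:
    the integral of f_r * cos(theta) for a constant Lambertian BRDF."""
    rng = random.Random(seed)
    f_r = albedo / math.pi                   # Lambertian BRDF is a constant
    pdf = 1.0 / (2.0 * math.pi)              # uniform hemisphere pdf
    total = 0.0
    for _ in range(samples):
        d = uniform_hemisphere(rng)
        total += f_r * d[2] / pdf            # d[2] = cos(theta)
    return total / samples
```

For an albedo of 0.8 the estimate converges to 0.8, i.e. the material reflects 80% of the incident energy regardless of viewing direction.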


The BRDF is actually an approximation of the much more complicated Bidirectional Scattering-Surface Reflectance Distribution Function, or BSSRDF. When a beam of light strikes a surface it is either scattered or absorbed. When a beam of light is scattered by a surface it usually enters the surface at one point, scatters around inside, and then leaves the surface at a different point (Figure 3). Sometimes it leaves through the same point it entered, but this is extremely rare. Subsurface scattering happens for all non-metallic surfaces, but is visually more noticeable for translucent materials such as wax or skin. The BSSRDF relates the reflected radiance at an outgoing point and direction to the irradiance incident at an incoming point and direction.

[Figure 3: bssrdf.gif]
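The enter-at-one-point, exit-at-another behavior of Figure 3 can be illustrated with a toy Monte Carlo random walk (plain Python, made-up material constants, a 2-D slice of the medium): photons enter a half-space material at the origin, scatter around inside it, and re-emerge at exit points spread away from where they entered.

```python
import math
import random

def trace_photons(n=2000, mean_free_path=0.1, absorb_prob=0.05, seed=7):
    """Trace photons entering a half-space material (z < 0) at x = 0.
    Each photon travels an exponentially distributed distance between
    isotropic scattering events and exits wherever it crosses z = 0.
    Returns the exit x positions of the photons that made it back out."""
    rng = random.Random(seed)
    exits = []
    for _ in range(n):
        x, z = 0.0, 0.0
        dx, dz = 0.0, -1.0                  # first leg goes straight into the surface
        while True:
            step = rng.expovariate(1.0 / mean_free_path)
            x += dx * step
            z += dz * step
            if z >= 0.0:                    # crossed the surface: photon exits here
                exits.append(x)
                break
            if rng.random() < absorb_prob:
                break                       # absorbed inside the material
            ang = rng.random() * 2.0 * math.pi   # isotropic scatter direction
            dx, dz = math.cos(ang), math.sin(ang)
    return exits
```

Most photons survive the walk and come back out, and their average exit point is displaced from the entry point by roughly the mean free path, which is exactly the effect a BRDF (which re-emits light at the entry point) cannot capture.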




more at http://www.rendermanacademy.com/docs/SSS_depthmaps.htm
 
Obviously games like Brothers in Arms and other real-time applications are not using real subsurface scattering, because that requires raytracing. They may be using a cached dataset that does not update, or - what's far more likely - they are using a fake effect that looks a bit like SSS, but isn't as precise and realistic.

ATI's Radeon techdemos, for example, have been using the following trick: the lighting of the skin was rendered into a texture, then they blurred the shadow/light borders in image space and mapped the result back onto the object.
See this quite old presentation: http://www.ati.com/developer/gdc/Gosselin_skin.pdf
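That image-space blur trick can be sketched in a few lines of Python (a toy 1-D lightmap with made-up numbers, not ATI's actual code): light the "skin" into a lightmap with a hard shadow edge, then blur it before mapping it back onto the mesh, which softens the lit/shadowed boundary the way scattering would.

```python
import math

def gaussian_kernel(radius=3, sigma=1.5):
    """Normalized 1-D Gaussian weights."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_lightmap(lightmap, radius=3, sigma=1.5):
    """Convolve the lightmap with a Gaussian, clamping at the borders."""
    kernel = gaussian_kernel(radius, sigma)
    n = len(lightmap)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)   # clamp to texture edge
            acc += w * lightmap[idx]
        out.append(acc)
    return out

# a hard lit/shadow boundary: fully lit on the left, fully shadowed on the right
lightmap = [1.0] * 8 + [0.0] * 8
soft = blur_lightmap(lightmap)
```

After the blur, the texels right at the boundary hold intermediate values instead of a hard 1-to-0 step, while regions far from the edge are unchanged.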

Unreal Engine 3 has a similarly fake skin shader that we've seen in the various GoW and UT2K7 screenshots. Brothers in Arms is most likely using the same shader...
 
Laa-Yosh said:
Obviously games like Brothers in Arms and other real-time applications are not using real subsurface scattering, because that requires raytracing. They may be using a cached dataset that does not update, or - what's far more likely - they are using a fake effect that looks a bit like SSS, but isn't as precise and realistic.

ATI's Radeon techdemos, for example, have been using the following trick: the lighting of the skin was rendered into a texture, then they blurred the shadow/light borders in image space and mapped the result back onto the object.
See this quite old presentation: http://www.ati.com/developer/gdc/Gosselin_skin.pdf

Unreal Engine 3 has a similarly fake skin shader that we've seen in the various GoW and UT2K7 screenshots. Brothers in Arms is most likely using the same shader...

Laa-Yosh, if you're familiar with the Cell-based "Alfred Molina" demo Sony ran at their E3 presentation, any idea if they were using a similar technique to the ATI demo, or something a little different?
 
liverkick said:
Laa-Yosh, if you're familiar with the Cell-based "Alfred Molina" demo Sony ran at their E3 presentation, any idea if they were using a similar technique to the ATI demo, or something a little different?

To say the Molina demo is far more advanced than the ATI demo would be a gross understatement. The Molina demo is much closer to real subsurface scattering.
Btw, the ATI demo would not even be good enough for a cartoon game.
 
_phil_ said:
To say the Molina demo is far more advanced than the ATI demo would be a gross understatement. The Molina demo is much closer to real subsurface scattering.

Well, I wasn't implying one way or the other, more curious where it fell on the "approximation" scale (for lack of a better term). But if it was actual real-time raytracing like Laa-Yosh explained, I imagine the processor was pretty taxed in that demo.
 
Basically, what happens in SSS is that when light hits a model that should be translucent in real life, the light changes color depending on where it hits the model. This happens in real life with any non-metallic object.
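A minimal sketch of where that color shift comes from, in plain Python with made-up, skin-like absorption coefficients: by the Beer-Lambert law, each color channel is absorbed at a different rate as light travels through the material. Blue and green are absorbed faster than red here, so white light that has traveled beneath the surface re-emerges reddish.

```python
import math

# Illustrative (made-up) absorption coefficients per unit distance for a
# skin-like material: blue and green are absorbed faster than red.
SIGMA_A = {"r": 0.3, "g": 1.2, "b": 2.0}

def transmit(color, distance):
    """Attenuate an (r, g, b) color by Beer-Lambert absorption,
    exp(-sigma_a * distance), applied per channel."""
    r, g, b = color
    return (r * math.exp(-SIGMA_A["r"] * distance),
            g * math.exp(-SIGMA_A["g"] * distance),
            b * math.exp(-SIGMA_A["b"] * distance))

tinted = transmit((1.0, 1.0, 1.0), 1.0)   # white light after one unit of travel
```

The further the light travels inside the material before exiting, the stronger the red tint becomes.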
 
liverkick said:
Laa-Yosh, if you're familiar with the Cell-based "Alfred Molina" demo Sony ran at their E3 presentation, any idea if they were using a similar technique to the ATI demo, or something a little different?

It could not have been ray-tracing in realtime IMHO, there's far too much going on in there to spare performance for something so computationally intensive. Our renderfarm's dual Xeons take several minutes to render a character's head that's about 150-200 pixels high on the final image - and Molina's face was shown in full-screen HD-res shots.

Some guys here seem to have more insider knowledge about the demo than I do. Nevertheless, it's logical to assume that Sony's own VFX company, which did the original digital double for the Spiderman movie, supplied the techdemo's creators with the model and lots of textures, and probably a precalculated representation of subsurface scattering data. Something like a 3D irradiance cache, or spherical harmonics, or such - I'm not familiar with the programming, but there's been a lot of research on this. Then the demo simply used the static data, and no one was able to see the small glitches.

Now, I've already had some small debates with a few guys here about whether or not this solution would be good enough for actual in-game implementations. I don't think so, because even simple facial animation would distort the face too much and ruin the effect; and storing variances in the SSS data to compensate would consume a lot of memory.

ATI's approach is indeed a lot simpler, but I'd give it a little more credit than Phil does. After all, a similar method was used on the Matrix movies, and even though the results weren't convincing enough, it still looks better than a simple Phong-shaded human face. However, I'm sure that there are better fake solutions, and as I've said, we've seen something on the various UE3 game screenshots already. And BIA3 is using UE3 too, so it's logical to assume that they're simply using the shader that was supplied with the engine...
 
Yeah, that Nvidia techdemo is another possible fake approach.

Note that by 'fake' I don't necessarily mean 'bad' - we fake as much as we can in CGI as well, and only do the 'real' thing when we really have to.
 
The fact that Sony have been talking about raytracing is not necessarily relevant, IMHO. They're free to talk about anything, but I think we're fairly far away from actually doing realtime raytracing (if we're ever going to do that, as some alternative solutions for realistic rendering exist). 3D graphics are mostly about smoke and mirrors, creating an illusion; if you can fake something well, then by all means do it :)

I'm curious though as to how the UE3 shader looks. The effect that they achieve is quite good, IMO.
 
Laa-Yosh said:
Obviously games like Brothers in Arms and other real-time applications are not using real subsurface scattering, because that requires raytracing. They may be using a cached dataset that does not update, or - what's far more likely - they are using a fake effect that looks a bit like SSS, but isn't as precise and realistic.

ATI's Radeon techdemos, for example, have been using the following trick: the lighting of the skin was rendered into a texture, then they blurred the shadow/light borders in image space and mapped the result back onto the object.
See this quite old presentation: http://www.ati.com/developer/gdc/Gosselin_skin.pdf

Unreal Engine 3 has a similarly fake skin shader that we've seen in the various GoW and UT2K7 screenshots. Brothers in Arms is most likely using the same shader...


http://www.ati.com/developer/eg05-xenos-doggett-final.pdf

Page 5, Slide 10

Under the features of Memexport: Pixel and Vertex Shaders - Ray Tracing acceleration structure
 