Sub-surface scattering for graphical realism. Can it be pulled off in real time?

Hello Everyone,

For a long time I did not understand why some games had characters that looked "plastic." Now I realize that many game characters look that way because sub-surface scattering is not being used to give the skin a realistic look.

I have also read that true sub-surface scattering is very expensive because it is basically a form of ray tracing. As we all know, even the latest generation of consoles doesn't really have the power to do full-blown ray tracing.

But I have heard that you can use certain shader programs to simulate sub-surface scattering.

My question is this: do you think simulated sub-surface scattering (using shaders) can be pulled off in real time on this generation of consoles (PS3 and 360)?
 

ATI's Ruby demo is almost entirely about the SSS effect on the character's skin. You can check out their articles and presentations here: http://ati.amd.com/developer/techreports.html
and the demo here: http://ati.amd.com/developer/demos/rx800.html
In the Ruby demo, they render the facial lighting into a light map each frame and apply some filters to it to simulate the SSS effect. In another demo, they used PRT to handle SSS.

UE3 has included this kind of SSS algorithm and it has already been used in a lot of titles.
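If it helps to picture it, that light-map-and-filter trick boils down to blurring the per-texel diffuse lighting and multiplying it by the skin texture. A rough CPU-side sketch of the idea (NumPy standing in for the pixel shader; light_map, albedo and the 0.7/0.3 mix are placeholders, not anything taken from the actual demo):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_sss_shade(light_map, albedo, blur_sigma_px=8.0):
    """Crude texture-space 'SSS': blur the diffuse lighting that was rendered
    into UV space this frame, then modulate it by the skin's albedo texture.

    light_map: H x W x 3 float array of per-texel diffuse lighting
    albedo:    H x W x 3 float array, the skin colour texture
    """
    # Blur each colour channel; a wider blur fakes deeper light bleeding.
    blurred = np.stack(
        [gaussian_filter(light_map[..., c], sigma=blur_sigma_px) for c in range(3)],
        axis=-1,
    )
    # Mix some sharp lighting back in so fine shading detail survives.
    mixed = 0.7 * blurred + 0.3 * light_map
    return np.clip(mixed * albedo, 0.0, 1.0)
```

In the real thing all of this happens on the GPU, of course, and the shape of the blur kernel matters a lot more than the numbers above suggest.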
 
Simulated? Sure. Crackdown is simulating ray tracing right now through shaders.

The question is whether it will look any good. :p

That is not the ray tracer you've seen in CG software. It's a ray-heightmap intersection test in a relief mapping shader. Any game featuring relief-mapping-like techniques has already used it.
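In case it's not obvious what that test is, the core of it is just a linear march through the height field followed by a binary-search refinement. A rough sketch, not taken from any particular engine (the parameter names and step counts are made up):

```python
def relief_map_intersect(heightmap, uv_start, view_dir_ts, depth_scale=0.05,
                         linear_steps=32, binary_steps=6):
    """March a view ray through a tangent-space height field and return the UV
    where it first hits the surface (the core of relief/parallax occlusion mapping).

    heightmap:   function (u, v) -> height in [0, 1], 1 = surface, 0 = deepest
    uv_start:    (u, v) where the view ray enters the surface
    view_dir_ts: (x, y, z) view direction in tangent space, z < 0 going into the surface
    """
    # How far the UVs shift per unit of normalized depth travelled.
    du = -view_dir_ts[0] / view_dir_ts[2] * depth_scale
    dv = -view_dir_ts[1] / view_dir_ts[2] * depth_scale

    # Linear search: step down until the ray dips below the height field.
    depth, step = 0.0, 1.0 / linear_steps
    u, v = uv_start
    while depth < 1.0 and (1.0 - depth) > heightmap(u, v):
        depth += step
        u = uv_start[0] + du * depth
        v = uv_start[1] + dv * depth

    # Binary search: refine the hit point between the last two samples.
    for _ in range(binary_steps):
        step *= 0.5
        below = (1.0 - depth) < heightmap(u, v)
        depth += -step if below else step
        u = uv_start[0] + du * depth
        v = uv_start[1] + dv * depth

    return u, v
```

That's "ray tracing" only in the loosest sense -- the ray never leaves the surface's own texture space.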
 
That is not the ray tracer you've seen in CG software. It's a ray-heightmap intersection test in a relief mapping shader. Any game featuring relief-mapping-like techniques has already used it.

Plus, it's been debunked already, since it was only for precalculated scenery (CG).
 
Can anyone link me to an image that shows a character with sub-surface scattering in real time in an actual game?
 
Can anyone link me to an image that shows a character with sub-surface scattering in real time in an actual game?
You could look at the early Project Offset stuff. The dwarf model they showed off some time back exhibits a limited model. That limited model at least lifts a good part of the "plastic" look and gives you more of a solid-block-of-cloudy-gelatin look (see, for example, http://www.armchairempire.com/Previews/multi-platform/project-offset.htm). Bear in mind that what they're doing, as well as what most any game engine that simulates it is doing, is only part of the more generic "SSS" idea, but it's enough, or at least looks good enough.

The most common approach is just to render lighting intensities, including shadows, into a render target in UV space and blur the result so that it appears as if illumination is bleeding. Not really physically accurate, but it sort of works most of the time. The real pain in the neck with this is the number of extra passes made -- fine for something like the Ruby demos, where only a handful of characters and a handful of lights were active at any given moment, but unacceptable for a larger-scale game.

The more physically passable approach usually involves some sort of distance between the pixel being rendered and the nearest pixel in the shadow map for a given light, attenuating based on that distance. The problem with this is that more information needs to be generated in the shadow passes.

See these slides for the general ideas:
http://developer.nvidia.com/object/real_time_translucency_gdc2004.html

In any case, the real difficulty isn't simulating it. It's integrating it into an otherwise working render pipeline.
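To make the shadow-map-distance idea above a bit more concrete, here's a rough per-pixel sketch. It assumes you already have the pixel projected into the light's space and the light's depth map handy; the extinction constant is made up:

```python
import numpy as np

def translucency_from_shadow_map(pixel_pos_light_space, shadow_depth, sigma_t=8.0):
    """Estimate how much light bleeds through thin geometry (ears, nostrils)
    by comparing the pixel's depth to the depth stored in the light's shadow map.

    pixel_pos_light_space: (x, y, depth) of the pixel in the light's clip space,
                           with x and y already remapped to [0, 1]
    shadow_depth:          2D array of depths from the light's shadow pass
    sigma_t:               made-up extinction coefficient; higher = more opaque
    """
    h, w = shadow_depth.shape
    x = min(int(pixel_pos_light_space[0] * w), w - 1)
    y = min(int(pixel_pos_light_space[1] * h), h - 1)

    # Thickness = how far behind the first occluder (as seen from the light)
    # this pixel sits; for a backlit ear lobe that's roughly the ear's thickness.
    thickness = max(pixel_pos_light_space[2] - shadow_depth[y, x], 0.0)

    # Attenuate exponentially with the distance travelled inside the material.
    return float(np.exp(-sigma_t * thickness))
```

The extra cost is exactly what's described above: the shadow passes have to generate more information than a plain depth test needs.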
 

There are probably solutions that can be practically indistinguishable to the non-expert; it only depends on how creative and passionate these people are. I mean, theoretically I could see a game with 10,000 FATE (Chrono Cross, PSX) characters, and that'd probably look better than some of what's been passed off as next-gen models for massive wars. I mean individually rounded, animated fingers, 3D facial features, all exquisitely animated... all on a single PSOne model. Look at some of the Square RPG summons/bosses: many look bad, but some hold up graphically against, or even exceed, some of the models being used for next-gen games.

There is just no excuse for that; a PSOne model should be practically free, resource-wise, on these machines. And if you can do one, or hundreds of them, with ease and at virtually no cost, I don't see why you can't have in-game models with thousands of fully animated fingers, faces, details, dynamic clothing, etc. (assuming each one individually takes PSOne-level resources). Combined with the creative use of high-end shaders (which could be reserved for close-up surfaces and replaced with indistinguishable cheap single textures at a distance), voila, magic!
 
You could look at the early Project Offset stuff. The dwarf model they showed off some time back exhibits a limited model. That limited model at least lifts a good part of the "plastic" look and gives you more of a solid-block-of-cloudy-gelatin look (see, for example, http://www.armchairempire.com/Previews/multi-platform/project-offset.htm). Bear in mind that what they're doing, as well as what most any game engine that simulates it is doing, is only part of the more generic "SSS" idea, but it's enough, or at least looks good enough.

The most common approach is just to render lighting intensities, including shadows, into a render target in UV space and blur the result so that it appears as if illumination is bleeding. Not really physically accurate, but it sort of works most of the time. The real pain in the neck with this is the number of extra passes made -- fine for something like the Ruby demos, where only a handful of characters and a handful of lights were active at any given moment, but unacceptable for a larger-scale game.

The more physically passable approach usually involves some sort of distance between the pixel being rendered and the nearest pixel in the shadow map for a given light, attenuating based on that distance. The problem with this is that more information needs to be generated in the shadow passes.

So do you believe that utilizing shader approximations or even "tricks" to simulate some degree of sub-surface scattering in full-scale games will not be practical this generation?
 
So do you believe that utilizing shader approximations or even "tricks" to simulate some degree of sub-surface scattering in full-scale games will not be practical this generation?
Basically, I think it will happen, but not without some sacrifices. I mean, having to do more passes, or not being able to take advantage of the higher speed of "Z-Only" passes for shadows means you're going to lose performance in your renderer, and you're also going to have to do more work per frame, so you're going to have to cut something somewhere. I also think that this will not be a rule of thumb just for this generation but for every hardware generation. It'll just be a question of how acceptable that is.

I think the biggest problem is the public. Too often, they'll look at one game that does some cheap approximation and say "Game A did it. Why can't Game B, C, and D?" because they're all so hopelessly stupid they think that's all there is to it. Or they'll look at some demo online showing a single 100,000-poly marble statue with an SSS approximation running at 60 fps and say "that proves it's feasible for a game." Uuuuhh... noooo... If you can show me the same demo with 15 marble statues running at 250 fps, then I'll consider it feasible... and even then, that's assuming I'm willing to throw away certain optimizations I might have been using which the demonstrated algorithm may or may not break.
 
What's wrong with using a 'non-glossy normal map' to achieve these results?

BIA:HH uses that, along with Gears of War, and they seem to achieve ultra-realistic skin detail, IMO. I think most of the plasticky looks come from developers that don't have a good grasp of visual quality or realism. EA, for example.
 
Light transfer .PTMs should bring better skin without the need for SSS.
Not so much *without the need for* SSS, but without the need for simulating it in realtime. You still need to simulate it offline when capturing your PTMs, but at least the offline solution can be as exhaustive as you want. The only thing you can't do is simulate view-dependent lighting components (assuming that you're actually going to move the camera in your game, that is) since the view direction is not a parameter in the polynomial.
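For anyone who hasn't looked at PTMs before: the usual biquadratic form stores six coefficients per texel and reconstructs the lit value from the light direction projected onto the texture plane, which is exactly why view-dependent terms can't be represented. A rough sketch of the reconstruction step (array names are placeholders; the coefficients themselves come from the offline fit):

```python
import numpy as np

def eval_ptm(coeffs, light_dir):
    """Evaluate a biquadratic Polynomial Texture Map at every texel.

    coeffs:    H x W x 6 array of per-texel coefficients (a0..a5), fitted offline
               from renders or photos taken under many light directions
    light_dir: normalized 3D light direction; only its projection onto the
               texture plane (lu, lv) enters the polynomial
    """
    lu, lv = light_dir[0], light_dir[1]
    a0, a1, a2, a3, a4, a5 = (coeffs[..., i] for i in range(6))
    lum = a0 * lu * lu + a1 * lv * lv + a2 * lu * lv + a3 * lu + a4 * lv + a5
    return np.clip(lum, 0.0, 1.0)
```

Per pixel at runtime it's just a handful of multiply-adds, which is why it's attractive for skin: all the expensive scattering is paid for offline.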
 
IMO SSS for skin isn't really needed for non-close-ups. For close-ups during cutscenes it makes more sense and is also more practical in terms of resources.
 
Not so much *without the need for* SSS, but without the need for simulating it in realtime. You still need to simulate it offline when capturing your PTMs, but at least the offline solution can be as exhaustive as you want. The only thing you can't do is simulate view-dependent lighting components (assuming that you're actually going to move the camera in your game, that is) since the view direction is not a parameter in the polynomial.

Yes, that's right, specularity will have to follow another path. PTMs can still give a lot of life to light behavior, like SSS. It looks like it can be a good (and cheap) lead for real-time skin.
 
Yes, that's right, specularity will have to follow another path. PTMs can still give a lot of life to light behavior, like SSS. It looks like it can be a good (and cheap) lead for real-time skin.
About the only real pain with PTMs is the art pipeline. It's easy on the programming side of things, but the artists have to tweak materials for an offline renderer and then wait out exhaustive, slow renders from which the textures are extracted, and only one of those textures is really "artist-editable": essentially the diffuse albedo texture they had to create in the first place. It also doesn't integrate too well with complex texture blend operations or similar things.
 
PTMs look like they could be used in many ways... interesting. I wonder how much they cost computationally for dynamic objects.
 