Experiments with local object indirect lighting

Graham

Recently I've been doing a few experiments with indirect lighting.

Initially, this started as a per-vertex occlusion baker.
To do the occlusion I'm using some rather funky projection magic: getting the GPU to render a view from the surface of each triangle (with a near-180-degree FOV), where the background is 1 and all geometry is 0, then taking the properly weighted average to get the linear occlusion term for that triangle, etc.

This was fairly simple, but worked surprisingly well, so I started to experiment.

Starting with an occlusion value for linear light...
An easy extension was to generate the average direction of this incoming light. This is a worldspace vector, but the interesting thing is the length of the vector represents how 'diffuse' the incoming light is. If properly averaged, a length of 1 represents light coming from a single direction, while a length of 2/3 represents light coming evenly over the entire hemisphere.
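The 2/3 figure is easy to sanity-check numerically. A minimal Monte Carlo sketch (in Python rather than shader code), assuming the 'proper' averaging is a cosine (Lambertian) weighting over the hemisphere:

```python
import math
import random

def sample_hemisphere(rng):
    # uniform direction over the upper hemisphere (z >= 0)
    z = rng.random()                       # cos(theta), uniform in [0, 1]
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * rng.random()
    return (r * math.cos(phi), r * math.sin(phi), z)

def average_direction_length(n=200_000, seed=1):
    # cosine-weighted average of incoming directions when light arrives
    # evenly over the whole hemisphere; the length should tend to 2/3
    rng = random.Random(seed)
    sx = sy = sz = wsum = 0.0
    for _ in range(n):
        x, y, z = sample_hemisphere(rng)
        w = z                              # Lambertian cosine weight
        sx += w * x
        sy += w * y
        sz += w * z
        wsum += w
    mx, my, mz = sx / wsum, sy / wsum, sz / wsum
    return math.sqrt(mx * mx + my * my + mz * mz)
```

A single incoming direction trivially gives length 1, while the uniform-hemisphere case above converges to 2/3 (analytically, E[cos²θ] / E[cosθ] = (1/3) / (1/2)).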

This vector isn't especially useful during normal rendering (perhaps ambient specular modification?).
However, it got me thinking about reusing this data in a second pass;

So more experiments!

I set up a second pass to render the per-triangle step again - this time the projection wouldn't display a black model on a white background - it'd display the results of the first pass (average incoming light direction) and the albedo of the model. This is (once again) averaged up.

Before you know it, I had the average direction, colour, intensity and 'diffuseness' of 1st pass bounce lighting for that triangle.
So I did what you naturally do next; I reversed the projection. I did the exact same thing for 'subsurface' light. Applying an exponential distance falloff, I then had a nice approximation to subsurface scattering.
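A rough sketch of what that gather-and-average step might look like on the CPU, assuming each sample carries a direction, a colour and a travel distance; `scatter_depth` is a hypothetical tuning constant for the exponential falloff, not a value from the tool:

```python
import math

def average_scatter(samples, scatter_depth=0.05):
    # samples: list of (direction_xyz, colour_rgb, distance) gathered by
    # the reversed projection; nearby geometry contributes the most via
    # an exponential distance falloff
    sw = 0.0
    dir_acc = [0.0, 0.0, 0.0]
    col_acc = [0.0, 0.0, 0.0]
    for d, col, dist in samples:
        w = math.exp(-dist / scatter_depth)  # exponential distance falloff
        for i in range(3):
            dir_acc[i] += w * d[i]
            col_acc[i] += w * col[i]
        sw += w
    if sw == 0.0:
        return (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)
    return tuple(c / sw for c in dir_acc), tuple(c / sw for c in col_acc)
```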


So, basically; I've been messing around with a tool that generates per-vertex linear occlusion, indirect bounce lighting colour+direction and scatter colour+direction for a model. And it works surprisingly well, and is really cheap at runtime (but rather slow to compute :).

skin_test2.png

^ The ambient occlusion term + the indirect terms


It comes down to 15 bytes of data per vertex;

2x sbyte4 direction vectors (xyzw)
2x byte3 colour values
1x byte linear occlusion
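For illustration, a sketch of how that 15-byte layout might be packed on the CPU side (using Python's `struct`; the signed/unsigned quantisation here is an assumption, not necessarily what the tool does):

```python
import struct

def pack_vertex_indirect(indirect_dir, scatter_dir, indirect_col, scatter_col, occlusion):
    # 2x sbyte4 directions (xyzw in [-1, 1] -> signed bytes)
    # 2x byte3 colours ([0, 1] -> unsigned bytes)
    # 1x byte linear occlusion
    def sb(v):  # [-1, 1] -> signed byte
        return max(-127, min(127, int(round(v * 127.0))))
    def ub(v):  # [0, 1] -> unsigned byte
        return max(0, min(255, int(round(v * 255.0))))
    return struct.pack(
        "<4b4b3B3BB",
        *(sb(c) for c in indirect_dir),
        *(sb(c) for c in scatter_dir),
        *(ub(c) for c in indirect_col),
        *(ub(c) for c in scatter_col),
        ub(occlusion),
    )
```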

And the runtime cost is bugger all;

Something like this in a vertex program:

Code:
float3 gComputeIndirectLight(float4 indirectDirection, float3 indirectColour, float4 scatterDirection, float3 scatterColour, float3 lightDirection, float3 lightColour)
{
    float3 light = max(dot(indirectDirection, float4(lightDirection, 1), 0) * indirectColour;
    light += max(dot(scatterDirection, float4(lightDirection, 1), 0) * scatterColour;

    return light * lightColour;
}

Basically, I've found it's more costly getting the various directions into the same space than actually computing the contribution.

Pretty simple really. While it's certainly nowhere near accurate, it's 'good enough' because it's so subtle. Any errors (and there are many) are hidden by the general ambient / direct lighting intensity (provided you are gamma correcting / tone mapping - and actually have ambient light ;)).

It's good enough that even dramatic animation doesn't really matter much. It still looks plausible.



I'm currently working on sorting out a proper computation tool. (Instead of the hacked up mess I have now :mrgreen:)

Anywho. Thought I'd share :)
 
Looks like you forgot to close the dot() function.

I'm guessing it should be
Code:
float3 gComputeIndirectLight(float4 indirectDirection, float3 indirectColour, float4 scatterDirection, float3 scatterColour, float3 lightDirection, float3 lightColour)
{
    float3 light = max(dot(indirectDirection, float4(lightDirection, 1)), 0) * indirectColour;
    light += max(dot(scatterDirection, float4(lightDirection, 1)), 0) * scatterColour;

    return light * lightColour;
}
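For reference, a CPU-side Python sketch of the same maths as the corrected shader function - note that dot(d, float4(L, 1)) is just dot(d.xyz, L) + d.w, so the stored w component acts as a constant bias on each term:

```python
def compute_indirect_light(ind_dir4, ind_col, sct_dir4, sct_col, light_dir, light_col):
    # mirrors the corrected gComputeIndirectLight above
    def term(d4, col):
        # dot(d4, float4(L, 1)) == dot(d4.xyz, L) + d4.w, clamped to zero
        s = (d4[0] * light_dir[0] + d4[1] * light_dir[1]
             + d4[2] * light_dir[2] + d4[3])
        s = max(s, 0.0)
        return [s * c for c in col]
    a = term(ind_dir4, ind_col)
    b = term(sct_dir4, sct_col)
    return tuple((a[i] + b[i]) * light_col[i] for i in range(3))
```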
 
The tip of the nose, right ear and right-hand side of his neck are pretty darn good. I can't see the movie on this 'puter, so I don't know if it's shown - and you say dramatic animation isn't a problem - but could you comment on how bad/good it looks on your standard humanoid figure twisting its torso 90 degrees to the side?
 
:mrgreen:

Now that I'm back at work this is very much on the back burner. However, once I get my new laptop I'll get back into it (right now developing on an Atom netbook in VS2010 is an incredibly painful experience - even typing lags :oops:).

I have been doing a bunch of work making sure it was correctly averaging up the weighted hemisphere, etc., so it looks a bit different from what is in the video. Additionally, I've been toying with some other models, and (I'll be honest) the subsurface effect doesn't work very well if the geometry is especially complex. Internal geometry can be masked out, but there is only so much that can be done with a single pass.

The thing is, when you see the lighting on its own, it looks a bit rough - but when combined with far more intense direct/ambient light, the inconsistencies tend to blend away.

Rotation of the model doesn't really matter too much - as the vectors are all originally calculated in the world space of the model. Provided they are in the same space as the light direction, then they will be OK. You won't get any 'looking left but light is scattering right' effects.
However you do get issues where geometry was close during computation, but moves apart when animated. If the model isn't computed in the T-pose (like this one) then, for example, the sides of the torso bounce onto the arms. This obviously makes no sense if the arms are being held above the head of the character - but at the same time, the vertices are storing the rough direction the bounced light originally came from (in the original calculated space) so it shouldn't be blatantly wrong - just slightly wrong :mrgreen:
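As a concrete sketch of 'getting things into the same space': if the model has only yawed, the world-space light direction just needs the inverse rotation applied before the dot products (the function name and the yaw-only assumption are mine):

```python
import math

def light_to_baked_space(light_dir, model_yaw):
    # bring a world-space light direction into the space the per-vertex
    # vectors were baked in, by undoing the model's yaw (rotation about Y)
    c, s = math.cos(-model_yaw), math.sin(-model_yaw)
    x, y, z = light_dir
    return (c * x + s * z, y, -s * x + c * z)
```

In practice this would be the inverse of the full model (or bone) transform, applied once per draw rather than per vertex.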

In any case, it's a fun experiment - and it can look really good sometimes. : )

skin_test4.png


In this case, there is subtle bounced light on his chin, above his eyes and on his left collar - this is all added by the approximated bounce (although - to be fair, the ambient SH here is adding a lot of subtle detail too). The tonemapping is effectively hiding any errors due to the intensity of the general ambient/direct light.

For interest's sake, the preprocessor is currently taking about 2 minutes to process this model (yeah, I know - slow). But this is on an Atom netbook with ION in XNA. : P

I am intentionally making the subsurface leak more than the bounce, which does have some side effects; then again, it is a *massively* inaccurate approximation.

For example;

glitch01.png


Here it's pretty obvious there are some things going wrong. The neck is receiving too much bounced light, the complex geometry around the nostrils is confusing the system somewhat (this could be remedied by clipping out the internal nostril geometry - which can be done with alpha) and, really obviously, the ear is getting subsurface lighting from nothing.

Not brilliant, but with ambient it still looks pretty good.

However, rotate the light around a bit more and you start to see where those results came from:

glitch02.png
 