Baked Global Illumination in games *spin*

The whole system was essentially greyscale; casting colorful light on the scene was a pretty easy modification, since it's basically just doing the shading separately for each color channel and dumping the results into different channels in the image.
But handling chromaticity in the scene itself, so that the shading produces different results in each color channel from the same light stimulus, and/or responds differently to differently-colored light, would require more than one SH vector at each vertex.
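To make that distinction concrete, here is a minimal sketch (Python/NumPy, made-up data, 3-band SH assumed; none of these names come from the actual system): colored light only needs a light-SH vector per channel, while a chromatic scene response would need a baked transfer vector per channel.

```python
import numpy as np

N_COEFFS = 9  # 3-band (order-2) spherical harmonics

# Hypothetical baked data: one greyscale transfer vector per vertex.
transfer = np.random.rand(N_COEFFS)

# The incoming light projected onto SH, one vector per color channel.
light_rgb = np.random.rand(3, N_COEFFS)

# Colored light on a "greyscale" scene: the same transfer vector is dotted
# against each channel's light vector, giving an RGB response.
shaded_rgb = light_rgb @ transfer          # shape (3,)

# A chromatic scene response would instead need a transfer vector per channel:
transfer_rgb = np.random.rand(3, N_COEFFS)
shaded_rgb_chromatic = np.einsum('ci,ci->c', transfer_rgb, light_rgb)
```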

Also, as I mentioned, I never got around to baking light bounce into the SH vectors. The rendering setup is able to produce GI results with as many bounces as you want with no additional runtime cost, but the actual image I showed is purely direct-lit.

Well, you do have to pay for creating the SH in the first place (i.e. the setup cost). I would assume that if you ran N bounces to be stored in the SH, you wouldn't be updating it in real time.
 
Right, but that's the point. Calculating more bounces requires more work when you bake the vectors, but you don't have to update them in real time (as long as you don't change the scene geometry). In real time all you have to do to relight the scene is produce a single SH vector for the new infinite-distance light distribution, and take the dot product of that with the baked SH vector at each vertex.
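A rough sketch of that runtime step, under the same assumptions as before (Python/NumPy, 3-band SH, hypothetical data; not the actual code): project the new infinite-distance light onto the SH basis once, then relighting each vertex is a single 9-element dot product.

```python
import numpy as np

def sh_basis(d):
    """Real SH basis, 3 bands (9 coefficients), for a unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def project_light(light_fn, n_samples=4096, rng=np.random.default_rng(0)):
    """Monte Carlo projection of a spherical light function onto the SH basis."""
    dirs = rng.normal(size=(n_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform directions
    vals = np.array([light_fn(d) for d in dirs])
    basis = np.array([sh_basis(d) for d in dirs])
    return (4.0 * np.pi / n_samples) * basis.T @ vals     # weight for uniform sphere sampling

# Hypothetical baked data: one 9-float transfer vector per vertex.
baked = np.random.rand(1000, 9)

# Relight: project the new light once, then one dot product per vertex.
sky = project_light(lambda d: max(d[2], 0.0))   # e.g. simple "light from above"
per_vertex_response = baked @ sky               # -> (1000,) shaded values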

The SH at each vertex is encoding a spherical transfer function that takes incoming light direction as its parameter, and spits out how strongly the shading would react to light from that direction as its output. If you take the product of that transfer function with the spherical light distribution, you get a function that takes incoming light direction as its parameter, and spits out the actual strength of the shaded response due to the light from that direction as its output. If you then integrate this product over the sphere, you accumulate the total shaded response. Doing this integration-of-product is equivalent to taking the dot product of the SH representations of the two functions.
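In symbols, with an orthonormal real SH basis $Y_i$, coefficient vector $t$ for the baked transfer function and $\ell$ for the light distribution:

```latex
T(\omega)=\sum_i t_i Y_i(\omega), \qquad L(\omega)=\sum_i \ell_i Y_i(\omega)
\;\Longrightarrow\;
\int_{S^2} T(\omega)\,L(\omega)\,d\omega
  = \sum_{i,j} t_i\,\ell_j \int_{S^2} Y_i(\omega)\,Y_j(\omega)\,d\omega
  = \sum_i t_i\,\ell_i
```

since $\int_{S^2} Y_i Y_j\,d\omega = \delta_{ij}$. That last sum is exactly the dot product of the two coefficient vectors (approximated, of course, by truncating to however many SH bands you keep).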
 

And that's the big caveat. Of course you want to be able to change the geometry. This is equivalent to the radiosity technique in film, something no one uses anymore because of its strict requirements. I remember my old R&D manager bragging about it back in 2000 when working on the first Ice Age film.
 