Reverend said:
DeanoC said:
HL2's radiosity light maps are a cartesian formulation of the 1st-order diffuse irradiance maps that spherical harmonics were originally used for
Um, could you repeat that?
I can try ;-)
Note: I know nothing more about HL2 radiosity mapping than what I read in their paper and discussions with various people (all of whom are much better at this stuff than I am). Peter-Pike Sloan has some interesting observations on GDAlgorithms. Also I don't feel too comfortable discussing this, as I don't know that much about the subject, BUT anyway...
Diffuse irradiance maps are based on the idea that a cubemap surrounding an object can be seen as an approximation of the incoming light, treated as coming from infinitely far away. So you can put a box around an object, render the incoming light into it (by drawing the lights into it, etc.) and then just use the surface normal to look up into the cube map.
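To make that lookup concrete, here's a rough sketch in Python (the face layout and names are my own invention, and I'm ignoring the per-face sign conventions a real cubemap API uses):

```python
import numpy as np

def sample_cubemap(faces, n):
    """Look up an irradiance cubemap with a unit surface normal.

    `faces` is a hypothetical dict of six HxWx3 arrays keyed by
    '+x','-x','+y','-y','+z','-z'; the layout is an assumption,
    not how HL2 (or any particular API) actually stores it.
    """
    ax = np.argmax(np.abs(n))                  # dominant axis picks the face
    sign = '+' if n[ax] >= 0 else '-'
    face = faces[sign + 'xyz'[ax]]
    # Project the other two components onto the face plane -> [0,1] UVs.
    u_axis, v_axis = [a for a in (0, 1, 2) if a != ax]
    u = 0.5 * (n[u_axis] / abs(n[ax]) + 1.0)
    v = 0.5 * (n[v_axis] / abs(n[ax]) + 1.0)
    h, w, _ = face.shape
    return face[min(int(v * h), h - 1), min(int(u * w), w - 1)]
```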
The problem is that cubemaps are big and relatively expensive to update (6 renders currently), so Ramamoorthi et al. showed that a function approximation can produce very close results (3% error in the diffuse case). But there are lots of function approximations, and lots of basis systems to base your approximation on. The order determines how high-frequency the data you're capturing is: 0th order is ambient, 1st order is enough for low-frequency diffuse, and 2nd and higher is needed for specular and higher-frequency changes.
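As a sketch of what "function approximation" means here: for an orthonormal basis, each coefficient is just the light function projected onto one basis function, an integral over the sphere you can estimate by sampling. The helper names below are mine, not from any paper:

```python
import numpy as np

def project_onto_basis(light_fn, basis_fns, n_samples=4096, seed=0):
    """Monte Carlo projection of incoming light onto an orthonormal basis.

    light_fn(dir) -> RGB, basis_fns[k](dir) -> scalar; both hypothetical.
    Coefficient k is the integral of light * basis over the sphere.
    """
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_samples, 3))     # uniform directions on sphere
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    weight = 4.0 * np.pi / n_samples           # solid angle per sample
    return [weight * sum(light_fn(d) * b(d) for d in dirs)
            for b in basis_fns]
```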
HL2 uses a 1st-order linear cartesian basis (good ol' fashioned X, Y, Z). Geometrically, you can think of it as storing 3 light maps per area, each treated as infinitely far away along a particular vector.
These vectors are 'special' and are constructed in such a way that it all works(tm). The coefficients are in the paper they presented.
The runtime is just dotting the normal against each light-map vector, multiplying that by the light-map colour, and then accumulating the results.
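A minimal sketch of that decode step, using the tangent-space basis vectors commonly quoted from Valve's paper (the clamping of negative lobes is my addition; their exact shader may differ):

```python
import numpy as np

# Three orthonormal tangent-space vectors, each tilted the same amount
# towards the surface normal (the z axis here); values as commonly
# quoted from Valve's radiosity-normal-mapping paper.
HL2_BASIS = np.array([
    [-1/np.sqrt(6),  1/np.sqrt(2), 1/np.sqrt(3)],
    [-1/np.sqrt(6), -1/np.sqrt(2), 1/np.sqrt(3)],
    [ np.sqrt(2/3),  0.0,          1/np.sqrt(3)],
])

def shade(normal, lightmaps):
    """Per-pixel decode: dot the tangent-space normal with each basis
    vector, weight that map's colour by the result, and accumulate.
    `lightmaps` is three RGB samples, one per basis direction."""
    weights = np.clip(HL2_BASIS @ normal, 0.0, None)  # clamp negative lobes
    return sum(w * c for w, c in zip(weights, lightmaps))
```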
The other commonly used basis is spherical harmonics, a basis that is mapped onto a sphere. It's harder to think about, as the geometric 'map' itself is curved, but it's basically the same thing.
At the level used in HL2 and most games, they are just alternative representations of the same thing (similar to storing things in cartesian or polar coordinates), but SH (and some other bases) have certain mathematical properties that are required as you move to higher frequencies (precomputed radiance transfer, etc.)
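You can see the equivalence directly: the band-1 real SH basis functions are just y, z, and x scaled by a constant, so a band-0 + band-1 SH expansion is exactly a constant plus a linear cartesian term. A quick sketch (the SH constants are the standard ones; the function name is mine):

```python
def eval_sh1(coeffs, n):
    """Evaluate a band-0 + band-1 real SH expansion at unit direction n.

    coeffs = (c00, c1m1, c10, c11). Since the band-1 basis functions
    are just y, z, x times a constant, this carries the same information
    as HL2's linear cartesian basis, written in different coordinates.
    """
    x, y, z = n
    return (coeffs[0] * 0.282095 +          # Y_0^0  (ambient term)
            coeffs[1] * 0.488603 * y +      # Y_1^-1
            coeffs[2] * 0.488603 * z +      # Y_1^0
            coeffs[3] * 0.488603 * x)       # Y_1^1
```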
Thinking about HL2 system a bit...
Effectively, you're doing PCA (Principal Component Analysis) compression on the cube map and then decompressing per pixel. PCA is a method of compressing correlated values, and the 3 light maps are obviously correlated, as they are all generated from the same lighting data (a light will in most cases affect at least 2 faces). Now, PCA wasn't actually used; the basis was picked as a good average. But I wonder: if you PCA'ed the data for every area, would you get better lighting...
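As a toy sketch of what that per-area PCA might look like (everything here, including how the lighting data is encoded as samples, is my own assumption, not anything Valve did):

```python
import numpy as np

def pca_basis(samples, k=3):
    """Hypothetical per-area PCA: find the k directions capturing the
    most variance in a set of incoming-light samples.

    `samples` is an N x 3 array of light-sample directions weighted by
    intensity (one possible encoding). Returns the top-k principal
    axes, which could replace the fixed basis for that area."""
    centred = samples - samples.mean(axis=0)
    cov = centred.T @ centred / len(centred)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues ascending
    return eigvecs[:, ::-1][:, :k].T          # top-k principal directions
```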
The more I look, the more PCA is looking like another 'magic' operator in CG. Similar to how everything can be solved with a matrix inversion, it seems PCA can reduce any problem's complexity...