a question about Valve's Radiosity Normal Mapping

shuipi

Newcomer
"The core idea of Radiosity Normal Mapping is the encoding of light maps in a novel basis which allows us to express directionality of incoming radiance, not just the total cosine weighted incident radiance, which is where most light mapping techniques stop." -- quote from Valve's Source Shading paper

What I don't understand is: if traditional light mapping techniques record "the total cosine weighted incident radiance", then they must already be using a normal when doing, say, the radiosity calculation, or else there would be no normal to cosine-weight against. What is this normal? The interpolated normal from the vertices? If you can use that, why not go a step further, sample the normal map, and use that normal for the radiosity calculation? Then there would be no need to calculate and store the results for the three basis directions and compute the lighting at run time from the normal map value. Am I missing something here?
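
For reference, this is roughly the run-time step I mean, as a minimal CPU-side sketch. The three basis directions are the ones published in the Half-Life 2 / Source Shading material; the clamped-dot weighting is just one simple reconstruction for illustration, not necessarily Valve's exact shipped shader code:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// The three tangent-space basis directions from the Source Shading paper.
// Each of the three lightmaps stores radiance cosine-weighted against one of
// these directions instead of against a single surface normal.
static const Vec3 kBasis[3] = {
    { -1.0f / std::sqrt(6.0f), -1.0f / std::sqrt(2.0f), 1.0f / std::sqrt(3.0f) },
    { -1.0f / std::sqrt(6.0f),  1.0f / std::sqrt(2.0f), 1.0f / std::sqrt(3.0f) },
    {  std::sqrt(2.0f / 3.0f),  0.0f,                   1.0f / std::sqrt(3.0f) },
};

// Run-time reconstruction for one pixel: lightmap[i] is the baked colour for
// basis i at this lightmap texel, tangentNormal is the per-pixel normal
// fetched from the normal map (tangent space, unit length). Weight each
// lightmap by the clamped cosine against its basis direction and renormalise.
Vec3 RadiosityNormalMapLighting(const Vec3 lightmap[3], const Vec3& tangentNormal) {
    float w[3], sum = 0.0f;
    for (int i = 0; i < 3; ++i) {
        w[i] = std::max(0.0f, Dot(tangentNormal, kBasis[i]));
        sum += w[i];
    }
    Vec3 result = { 0.0f, 0.0f, 0.0f };
    if (sum > 0.0f) {
        for (int i = 0; i < 3; ++i) {
            const float k = w[i] / sum;
            result.x += k * lightmap[i].x;
            result.y += k * lightmap[i].y;
            result.z += k * lightmap[i].z;
        }
    }
    return result;
}
```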
 
Classical radiosity applied to BSP structures, like in HL1, used the face normal, which always points perpendicular to the face.

The lightmap is at a much lower resolution than the normal map, so you can't just sample the normal map and use that in your radiosity calculations. To make this look good you would need a lightmap texel size smaller than the normal map texel size.
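
A quick back-of-the-envelope sketch of that mismatch (the numbers are assumptions picked for illustration: a 256x256 world-unit wall, a luxel every 16 units, which is a common lightmap scale in Source maps, and a normal map texel every 1 unit):

```cpp
#include <cstdio>

int main() {
    const float wallSize    = 256.0f;  // world units per side (assumed)
    const float luxelSize   = 16.0f;   // world units per lightmap sample (assumed)
    const float normalTexel = 1.0f;    // world units per normal map texel (assumed)

    const float lightmapSamples = (wallSize / luxelSize) * (wallSize / luxelSize);
    const float normalMapTexels = (wallSize / normalTexel) * (wallSize / normalTexel);

    // Baking lighting against the normal map directly would need one
    // radiosity sample per normal map texel instead of one per luxel.
    std::printf("lightmap samples:  %.0f\n", lightmapSamples);                    // 256
    std::printf("normal map texels: %.0f\n", normalMapTexels);                    // 65536
    std::printf("bake/storage blowup: %.0fx\n", normalMapTexels / lightmapSamples); // 256x
    return 0;
}
```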
 