"The core idea of Radiosity Normal Mapping is the encoding of light maps in a novel basis which allows us to express directionality of incoming radiance, not just the total cosine weighted incident radiance, which is where most light mapping techniques stop." -- quote from Valve's Source Shading paper
What I don't understand is: if traditional light mapping techniques record "the total cosine weighted incident radiance", then they must already be using a normal when doing, say, the radiosity calculation, or else there would be no normal to cosine-weight against. What is this normal? The interpolated normal from the vertices? If you can use that, why not go a step further, sample the normal map, and use that normal for the radiosity calculation? Then there would be no need to calculate and store the results for the 3 basis directions and then compute the lighting at run time from the normal map value. Am I missing something here?
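For concreteness, here is a small sketch of the runtime step the question refers to: combining three directional lightmap samples using the per-pixel tangent-space normal. The basis vectors are the ones given in Valve's paper; the weighting shown (a simple clamped cosine per basis direction, with no renormalization) is a simplified illustration, not the exact Source shader.

```python
import math

# The Half-Life 2 / Source basis from Valve's paper, in tangent space.
# The three directions are mutually orthogonal unit vectors tilted away
# from the surface normal (0, 0, 1).
HL2_BASIS = [
    (-1 / math.sqrt(6),  1 / math.sqrt(2), 1 / math.sqrt(3)),
    (-1 / math.sqrt(6), -1 / math.sqrt(2), 1 / math.sqrt(3)),
    ( math.sqrt(2 / 3),  0.0,              1 / math.sqrt(3)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(lightmaps, normal_ts):
    """Combine the three lightmap samples (one RGB tuple per basis
    direction) using a tangent-space normal sampled from the normal map.
    Each sample is weighted by the clamped cosine between the normal
    and that sample's basis direction."""
    color = [0.0, 0.0, 0.0]
    for lm, b in zip(lightmaps, HL2_BASIS):
        w = max(0.0, dot(normal_ts, b))  # clamped cosine weight
        for c in range(3):
            color[c] += w * lm[c]
    return tuple(color)
```

The point of the question, then, is why this runtime blend is needed at all, given that the offline solver could in principle have evaluated the normal-map normal directly and baked a single map.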