Does anybody have a link to a whitepaper/presentation of the method used by Insomniac to implement radiosity normal mapping in the Resistance: Fall of Man game?
Radiosity normal mapping, as implemented in the Source engine, uses three lightmaps, calculated along three perpendicular axes. In Resistance, they seem to get by with a single texture which isn't really a lightmap, since its texels store the direction of the incoming light. What about light color? Do they use a second texture, which is just a matrix of RGB values? Any info would be appreciated.
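For context, my (rough) understanding of the Source approach, as described in Valve's SIGGRAPH course notes, looks something like the sketch below in plain C. The function and variable names are just mine for illustration, and the exact weighting Valve ships may differ:

```c
/* Tangent-space basis vectors from Valve's "Shading in Valve's Source Engine"
   course notes: ( sqrt(2/3), 0, 1/sqrt(3) ), ( -1/sqrt(6), +-1/sqrt(2), 1/sqrt(3) ). */
static const float kBasis[3][3] = {
    {  0.8164966f,  0.0f,        0.5773503f },
    { -0.4082483f,  0.7071068f,  0.5773503f },
    { -0.4082483f, -0.7071068f,  0.5773503f },
};

/* Blend the three directional lightmap samples by the tangent-space normal.
   lightmap[i] is the RGB lightmap texel baked along kBasis[i],
   normal is the tangent-space normal from the normal map, out is RGB. */
void radiosity_normal_map(const float lightmap[3][3],
                          const float normal[3],
                          float out[3])
{
    float weight[3];
    float sum = 0.0f;

    for (int i = 0; i < 3; ++i) {
        float d = kBasis[i][0] * normal[0]
                + kBasis[i][1] * normal[1]
                + kBasis[i][2] * normal[2];
        if (d < 0.0f) d = 0.0f;   /* clamp back-facing contribution */
        weight[i] = d * d;        /* squared weights, as in the later Valve notes */
        sum += weight[i];
    }

    for (int c = 0; c < 3; ++c) {
        out[c] = 0.0f;
        for (int i = 0; i < 3; ++i)
            out[c] += lightmap[i][c] * weight[i];
        if (sum > 0.0f)
            out[c] /= sum;        /* normalize so the weights sum to one */
    }
}
```

What I can't figure out is how you'd get an equivalent result from the single direction-plus-color layout that Resistance apparently uses.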
PS: Sometimes I feel I'm still living in the stone age of 3D computer graphics (and, on top of that, I haven't really coded anything since the days of fixed-function cards).