Do Irradiance Volumes combined with PRT imply HDR?

Jawed

Reading (and, ahem, only moderately understanding the pages with no maths and no code on them!):

http://www.ati.com/developer/gdc/GDC2005_PracticalPRT.pdf

makes me wonder if, effectively, a mesh of pre-computed irradiance samples with PRT for the objects in the scene means that high dynamic range lighting is implicit.

What I'm thinking is that since the irradiance samples are derived by pre-computation, high dynamic range (specifically over-bright light sources) is baked into the sampled lighting.

Jawed
 
The technique described in the slides is very compatible with HDR. My demo app didn't actually capture HDR irradiance volumes but it would have been trivial to do (just switch the source art to use HDR light maps).

--Chris
 
Wow, the technique described in that paper is almost exactly the one I described in my MIT application 2 years ago; the only difference is that the one in the application was intended for fully dynamic interactions.

'course I never got around to implementing it..
 
Jawed said:
What I'm thinking is that since the irradiance samples are derived by pre-computation, high dynamic range (specifically over-bright light sources) is baked into the sampled lighting.

There's some confusion here. An irradiance sample is no different with regard to overbright light sources than any other form of lighting. You can have an overbright light source as a point light or a directional light in the regular OpenGL model.

What HDR really means: all calculations, including intermediate steps, are conducted so as to preserve the high dynamic range, and usually some final step (tone mapping) adapts the high-range/high-precision resulting image to the low-range/low-precision display.

So whether or not the irradiance samples method preserves the high dynamic range is really a matter of implementation, which doesn't make it so different in this regard from every other form of lighting (including reflection cube maps, semi-transparent transmission, point lights, directional lights, Lambertian and Blinn shading, etc.).

See:

Code:
float red = float(dot4(floatcoefs1, floatcoefs2));
This form may preserve the range and some precision (as long as we are not over/underflowing the float range).

Code:
int8 red = int8(255 * min(1.0, dot4(floatcoefs1, floatcoefs2)));
This one doesn't preserve the range: any result of dot4 above 1.0 is clamped. And it grossly cuts precision as the result nears zero. Plus you don't necessarily have the leisure to store float coefficients.
Of course you can arrange the inputs so that the result fits, but that reduces the "dynamic" side of things (and becomes increasingly difficult as you add more steps to the computation of the final image).
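To make the comparison above concrete, here's a runnable sketch of the two storage options; `coefs1`/`coefs2` are hypothetical stand-ins for the two coefficient vectors, and `dot4` is just a 4-component dot product:

```python
def dot4(a, b):
    """4-component dot product of two coefficient vectors."""
    return sum(x * y for x, y in zip(a, b))

coefs1 = [1.5, 0.2, 0.1, 0.05]   # e.g. overbright lighting coefficients
coefs2 = [2.0, 0.3, 0.1, 0.02]   # e.g. transfer coefficients

# Float storage: the overbright result survives intact.
red_float = dot4(coefs1, coefs2)   # ~3.07, range preserved

# int8 storage: clamp to [0, 1], then quantize to 8 bits.
red_int8 = int(255 * min(1.0, max(0.0, dot4(coefs1, coefs2))))  # 255: range lost

# Precision loss near zero: small values collapse onto 1/255 steps.
small_int8 = int(255 * 0.004)      # only coarse steps survive quantization
print(red_float, red_int8, small_int8)
```

Any value above 1.0 collapses to the same code (255), and anything below about 1/255 collapses to 0 or 1, which is exactly the range and precision loss being described.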
 
I'm presuming that this technique means that the only time at which int8 clamping occurs is when the final radiance transfer calculation is performed on the surface of the object.

So it seems to me that lighting resolution and range are maintained from pre-computation until final lighting. The final step is the PRT, which is, in effect, performing a tone mapping.
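Tone mapping in the sense Ingenu describes is just a final compression of the float result into display range; a minimal sketch using the Reinhard curve x / (1 + x) (one of many possible operators, chosen here purely for illustration):

```python
def reinhard(x):
    """Reinhard tone mapping: compresses [0, inf) into [0, 1)."""
    return x / (1.0 + x)

# An overbright HDR lighting result from the float pipeline...
hdr_value = 3.0
# ...is compressed into displayable range only at the very end,
ldr_value = reinhard(hdr_value)    # 0.75; still distinguishes 3.0 from e.g. 10.0
# ...and quantized to 8 bits once, after tone mapping.
display = int(255 * ldr_value)
print(display)
```

The point is that because clamping/quantization happens only after the curve, an overbright source at 10.0 still ends up brighter on screen than one at 3.0, which a straight clamp to 1.0 would have destroyed.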

Well, that's my understanding. But I'm just an onlooker, not a coder...

Jawed
 
This technique should be HDR compatible, but it's only part of the rendering pipeline; using this technique on an R420 wouldn't magically make it able to do float blends.
 