The possible future of 3D engines!

That spatial sampling of the scene volume from multiple points is curious.
I wonder how much demand this method will place on depth-buffer sampling performance?
 
It appears caustics don't occur in reflections; possibly indirect shadows won't occur in reflections either.

It looks like a bitch to control for desired quality. Reminds me of AR (one) GI radiosity from Cinema 4D: spending 8 hours just to make the approximation behave well enough (for a given single static scene) to enter post-processing. :cry: Let's not talk about animations.
 
It looks like a bitch to control for desired quality.

Doesn't look so bad to me. Their heuristic approach to spatial sampling seems pretty fast and robust; from there it's just a matter of making performance tradeoffs among the number of rays, environment-map size, and reconstruction quality.

This raises a number of questions for me for realtime applications, though. How does performance depend on the number of spatial samples, the environment-map resolution, and the number of rays? Would it be possible to use shadow maps for primary lighting, and then a small number of low-resolution environment-map samples with a small number of rays for a low-frequency diffuse approximation of GI? It also seems like the depth-map creation would depend strongly on scene complexity, but graphics hardware is well suited to that.
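To make the "few rays against a low-resolution environment map" idea concrete, here's a minimal sketch of a low-frequency diffuse gather. Everything here is hypothetical illustration, not the paper's method: `env_lookup` stands in for whatever radiance lookup the environment-map sample provides, and the ray count is the knob the post is asking about.

```python
import math
import random

def cosine_sample_hemisphere(u1, u2):
    # Cosine-weighted direction about the +z axis (Malley's method):
    # sample a disk, then project up onto the hemisphere.
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi),
            math.sqrt(max(0.0, 1.0 - u1)))

def diffuse_gather(env_lookup, n_rays=8, rng=random.random):
    # Monte Carlo estimate of diffuse lighting from an environment map.
    # With cosine-weighted sampling the cosine and pdf terms cancel, so
    # the estimator is just the average of the sampled radiance values.
    total = 0.0
    for _ in range(n_rays):
        d = cosine_sample_hemisphere(rng(), rng())
        total += env_lookup(d)
    return total / n_rays
```

The point of the sketch is that cost scales linearly in `n_rays` per shading point, while a low map resolution only blurs the result, which is exactly what you want for a low-frequency diffuse term.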

Still, though, seeing such high-quality caustics and occluded, multi-bounce GI calculated at interactive frame rates is very cool.
 
The devil is in the detail. :) And he mentions the crux of the problems in the paper, which you can't get rid of easily: raising the number of environment maps drops you into a performance hole.

Maybe one could use hemispherical depth-maps instead of environmental light-maps ...
 
The devil is in the detail. :)


You're probably right. I was thinking about it more, and ensuring good coverage with the environment maps in a large, complex environment would be a pain. And you're right to point out the issues they demonstrate. I was thinking they were a reasonable tradeoff for speed, but situations may pop up where they're unacceptable, and it would be a pain to adjust things to fix them.


Maybe one could use hemispherical depth-maps instead of environmental light-maps ...

I figured from the wording of the presentation that they were using cube depth maps, but I might be wrong. A hemisphere map would be interesting, though. It might give more uniform coverage errors, but possibly the angular representation would complicate ray traversal?
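A quick sketch of why the traversal might get harder: compare the standard cube-map lookup with a paraboloid-style hemisphere mapping. Both functions below are generic illustrations, not anything from the paper; the cube-map face conventions are one common choice among several.

```python
def cubemap_face_uv(d):
    # Standard cube-map lookup: pick the dominant axis of direction d,
    # then project onto that face. Within a face the projection is a
    # perspective projection, so a straight ray stays a straight line
    # in texture space, which keeps per-face ray marching simple.
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = '+x' if x > 0 else '-x'
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:
        face = '+z' if z > 0 else '-z'
        u, v = (x / az if z > 0 else -x / az), -y / az
    return face, 0.5 * (u + 1.0), 0.5 * (v + 1.0)

def paraboloid_uv(d):
    # Hemisphere (paraboloid-style) mapping for directions with z >= 0:
    # one texture covers the whole hemisphere, but the mapping is
    # nonlinear, so a straight ray traces a curved path in texture
    # space -- that is the traversal complication in question.
    x, y, z = d
    denom = 1.0 + z
    return 0.5 * (x / denom + 1.0), 0.5 * (y / denom + 1.0)
```

So the tradeoff would be one map instead of six faces (and possibly smoother coverage), at the price of curved ray footprints in the map.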
 