All the so-called realtime global illumination methods are just too limited to be of any use in practice in actual games. They only work in confined spaces with few lights (typically short-range local lights) and in small scenes with relatively few elements; they scale poorly with scene complexity (particularly as the number of dynamic objects grows); they rely heavily on a priori knowledge (whether from precomputation or hand-placed markup); and, without exception, they have failure cases which are all too common in real-world usage.
There are plenty of tricks out there we all use to get GI-ish effects in realtime, and they work beautifully for the limited range of cases that they're capable of solving. Step outside of that just a little bit, and all bets are off.
Is the radiosity propagation done with the GPU? How do you eliminate the n^2 dependency on vertex count when you have dynamic objects? Is this similar to the radiosity demo on NVidia's site?
Also, do you take occlusion into account with the radiosity propagation?
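To make the n^2 question concrete, here is a minimal sketch of classic gathering radiosity on the CPU (hypothetical code, not from the demo under discussion): the inner loop over all patch pairs is where the quadratic cost lives, and the visibility() stub marks exactly where occlusion would have to be tested.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Patch {
    Vec3  position, normal;
    float area;
    float emitted;    // direct illumination, e.g. from a shadow-map pass
    float radiosity;  // direct + gathered indirect light
};

// Placeholder visibility term: a real implementation would ray-cast or
// reuse shadow maps here; returning 1.0f means occlusion is NOT handled.
static float visibility(const Patch&, const Patch&) { return 1.0f; }

// One gathering pass: every patch collects light from every other patch.
void gatherPass(std::vector<Patch>& patches, float reflectance)
{
    for (Patch& receiver : patches) {
        float gathered = 0.0f;
        for (const Patch& sender : patches) {        // the O(n^2) pair loop
            if (&sender == &receiver) continue;
            Vec3  d     = sub(sender.position, receiver.position);
            float dist2 = dot(d, d);
            if (dist2 < 1e-6f) continue;
            float inv  = 1.0f / std::sqrt(dist2);
            Vec3  dir  = { d.x*inv, d.y*inv, d.z*inv };
            float cosR =  dot(receiver.normal, dir);   // receiver faces sender?
            float cosS = -dot(sender.normal, dir);     // sender faces receiver?
            if (cosR <= 0.0f || cosS <= 0.0f) continue;
            // Point-to-point form factor approximation:
            float ff = cosR * cosS * sender.area / (3.14159265f * dist2);
            gathered += sender.emitted * ff * visibility(receiver, sender);
        }
        receiver.radiosity = receiver.emitted + reflectance * gathered;
    }
}

Hierarchical methods and precomputed form factor tables both exist largely to avoid paying that pair loop in full every frame.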
"Great... examples built on dated tech. None of those are '2007' games. The closest thing you've got is a mod project started in 2001 that was finally rolled out as freeware in 2007. And it's still an example that uses pretty slim geometric complexity by today's standards, and uses cramped spaces and high contrast between surfaces (which happens to be very convenient for GI, since you don't need to consider many bounces to get some pretty significant results)."

You say it's not viable for games, but it DOES run in an original, unmodified level from a 2007 game (World of Padman), without any human help and without handmade tricks. It's not documented, but the demo can load any Quake3, Collada, or 3ds scene, plus many 3ds objects, without a big performance hit. I'd add lots of cool stuff and document the built-in editor if I had more time, but I'm not a cyborg.
"So you do shadow mapping to determine the direct illumination at each vertex, and then read it back to the CPU to do the radiosity propagation?"

I have to make some measurements; it's pretty difficult to deduce the time complexity... but yes, it's done by the CPU. I calculated that I could do similar work with a bigger GPU, but with an energy crisis at the door, I think in terms of performance per watt.
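As a rough illustration of the split being asked about (an assumed pipeline, not the author's confirmed design, with all names hypothetical), a frame would look something like this:

#include <cstddef>
#include <vector>

struct VertexLight { float r, g, b; };

// Stage 1 (GPU, not shown): render a shadow map per light, then shade
// each vertex with N.L * shadowTest(vertex) into a buffer.
// Stage 2: read that buffer back to system memory; in real code this is
// a glReadPixels-style readback, here just a stub returning zeros.
std::vector<VertexLight> readBackDirectLight(std::size_t vertexCount)
{
    return std::vector<VertexLight>(vertexCount);
}

// Stage 3 (CPU): propagate bounces from the direct term; this is where
// the radiosity propagation discussed above would run.
void propagateBounces(std::vector<VertexLight>& light)
{
    for (VertexLight& v : light) { (void)v; /* gather indirect light */ }
}

void frame(std::size_t vertexCount)
{
    std::vector<VertexLight> light = readBackDirectLight(vertexCount);
    propagateBounces(light);
    // Stage 4: upload 'light' (now direct + indirect) as per-vertex
    // colors for the final GPU render of the frame.
}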
"Thank you for that video. It looks very nice, but it also shows the limitations of your technique."

I account for only the most important light paths, so if you look closely you can find some ignored paths, but the global picture should be correct; see http://www.youtube.com/watch?v=lB5_x2BVRH0, approximately 15 seconds from the beginning.
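One simple way to keep only "the most important light paths" is to rank candidate senders by a cheap contribution estimate and gather from the strongest k alone; the following is an illustrative sketch of that general idea, not the actual algorithm used here (all names hypothetical).

#include <algorithm>
#include <cstddef>
#include <vector>

struct Sender {
    float estimate;   // cheap upper bound on this path's contribution
    // ... geometry, current radiosity, etc.
};

// Keep only the k strongest senders; everything below the cut becomes
// one of the "ignored paths" visible on close inspection.
void keepImportant(std::vector<Sender>& senders, std::size_t k)
{
    if (senders.size() <= k) return;
    std::nth_element(senders.begin(), senders.begin() + k, senders.end(),
                     [](const Sender& a, const Sender& b) {
                         return a.estimate > b.estimate;
                     });
    senders.resize(k);
}

The paths dropped by the resize are exactly the kind of subtle omissions that only show up on close inspection of the image.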
"Damn SMM, now tell us how you REALLY feel!"

I'm still waiting to see a scene with ~800k *visible* tris per frame that covers a square mile of pretty wide-open space, with 100-200 moving objects of all scales and 4 or 5 dynamic lights, plus maybe one or two static lights (which still cast realtime shadows). If you can do that and still get 240+ fps (note that I say that because there's more than rendering going on in a frame), that's what I'd consider a starting point for in-game viability in this day and age. Ask me again in a couple of years, and I'd give you an even bigger set of figures. The numbers thrown around on your site... they're certainly better than anything an academic research project would yield, and I'd have been very interested if you could have produced those kinds of results in 2002.
Secondly, there are the failure cases that would present themselves very easily in a game. Anyone who says that their "realtime GI" approach is without limitations is either an utter liar or someone who never even bothered to consider a meaningful range of use cases. After a few escapades with various "GI" middleware providers, I can say it's almost always the latter.
You mentioned yourself that the static parts of the scene rely on precomputed form factor relationships, which is again knowledge prior to render time. That, to me, says that performance wouldn't necessarily scale up all that well with the number of dynamic objects. Where's your "Gears-Of-War-scale" example, for instance?
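For reference, "precomputed form factor relationships" typically means something like the offline-built table below (a hypothetical structure; the actual data layout isn't documented here). A runtime bounce over static geometry then reduces to a sparse matrix-vector product; dynamic objects have no rows in the table, which is exactly why scaling with their count is the open question.

#include <cstddef>
#include <vector>

struct Link { int sender; float formFactor; };

// One row per receiving patch, listing only its significant senders;
// built offline for the static geometry, then reused every frame.
using FormFactorTable = std::vector<std::vector<Link>>;

// One bounce over static geometry: a sparse matrix-vector product.
std::vector<float> bounce(const FormFactorTable& table,
                          const std::vector<float>& emitted,
                          float reflectance)
{
    std::vector<float> out(emitted.size());
    for (std::size_t r = 0; r < table.size(); ++r) {
        float gathered = 0.0f;
        for (const Link& l : table[r])
            gathered += emitted[l.sender] * l.formFactor;
        out[r] = emitted[r] + reflectance * gathered;
    }
    return out;
}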
Couldn't get it to work: Catalyst 7.10, XP, X1900 GT. :\
A "DOS terminal" pops up for a split second and then closes; nothing happens.