Lightsmark - new realtime global illumination demo

Dee.cz

Hello, I just uploaded Lightsmark, a new realtime global illumination demo.

Stepan

(Screenshots: Lightsmark1.jpg, Lightsmark2.jpg, Lightsmark3.jpg)
 
That looks fantastic.

57 fps at 1600x1200 on an X1950 Pro (no AA/no AF in CP).

I think the music is prolly a bad idea, it's just going to annoy on repeated runs - and there's no obvious volume control.

Jawed
 
Interesting. Care to share any of the details with us? Is it fully compatible with dynamic objects?

I noticed some substantial temporal "shivering" of the light now and then. Other than that it looked pretty decent.

It seems like you have a good base to build on for photorealism. You should try making some reference scenes with a good offline renderer to compare with your realtime engine and tweak it to match. Then you can really make some noise with your work.
 
Sorry for the music ;)

The shivering is an unfortunate result of a driver bug; I had to decrease quality so some lower-end Radeons don't crash while compiling the shader.

Technique:
- measure per-triangle direct illumination in the scene
- propagate it using radiosity -> result is per-vertex indirect lighting
- shoot rays from the character(s) -> result is reflection map(s)
- render the final image with the indirect lighting and reflection maps
I posted more about it with previous demos. The major difference this time is that form factors in the static part of the scene are precomputed.
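
Roughly, the propagation step works like the sketch below; all types and names here are illustrative, not the actual Lightsmark code. Direct light measured per triangle is gathered into per-vertex indirect light through the precomputed form-factor links:

Code:
#include <cstddef>
#include <vector>

struct Rgb { float r, g, b; };

// One precomputed link: how strongly a source triangle contributes to a vertex.
struct FormFactor {
    unsigned sourceTriangle;
    float    weight;   // precomputed geometry term (static scene only)
};

// One gathering pass: every vertex sums the direct light measured on the
// triangles it "sees", weighted by its precomputed form factors. The result
// is the per-vertex indirect lighting used in the final render pass.
std::vector<Rgb> propagateIndirect(
    const std::vector<Rgb>& directPerTriangle,
    const std::vector<std::vector<FormFactor>>& factorsPerVertex)
{
    std::vector<Rgb> indirect(factorsPerVertex.size(), Rgb{0, 0, 0});
    for (std::size_t v = 0; v < factorsPerVertex.size(); ++v)
        for (const FormFactor& ff : factorsPerVertex[v]) {
            const Rgb& e = directPerTriangle[ff.sourceTriangle];
            indirect[v].r += ff.weight * e.r;
            indirect[v].g += ff.weight * e.g;
            indirect[v].b += ff.weight * e.b;
        }
    return indirect;
}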

The demo can work with any number of dynamic objects. The lighting is compatible with any form of deformation/skinning, but I don't have skinning implemented.
 
Is the radiosity propagation done with the GPU? How do you eliminate the n^2 dependency on vertex count when you have dynamic objects? Is this similar to the radiosity demo on NVidia's site?

Also, do you take occlusion into account with the radiosity propagation?
 
All the so-called realtime global illumination methods are just too limited to be of any use in practice in actual games. They work in confined spaces with few lights (typically short-range local lights), small scenes of relatively few elements, scale poorly with scene complexity (particularly when number of dynamic objects grows), rely heavily on a priori knowledge (whether by precomputation or hand-placed markups), and without exception, have failure cases which are all too common in real world usage.

There are plenty of tricks out there we all use to get GI-ish effects in realtime, and they work beautifully for the limited range of cases that they're capable of solving. Step outside of that just a little bit, and all bets are off.

You say it's not viable for games, but it DOES run in an original, unmodified level from a 2007 game (World of Padman), without any human help, without handmade tricks. It's not documented, but the demo can load any Quake 3, Collada or 3ds scene and many 3ds objects without a big performance hit. I'd add lots of cool stuff and document the built-in editor if I had more time, but I'm not a cyborg :)
 
Is the radiosity propagation done with the GPU? How do you eliminate the n^2 dependency on vertex count when you have dynamic objects? Is this similar to the radiosity demo on NVidia's site?

Also, do you take occlusion into account with the radiosity propagation?

I have to make some measurements, it's pretty difficult to deduce the time complexity... but it's done on the CPU. I calculated that I could do similar work with a bigger GPU, but with the energy crisis at the door, I think in perf per watt ;)

Which Nvidia demo, link? I only know that this is completely different from the AMD demo.

I account only for the most important light paths, so if you look closely you can find some paths ignored, but the overall picture should be correct; see http://www.youtube.com/watch?v=lB5_x2BVRH0, approx. 15 seconds from the beginning.
 
Pretty cool demo!!

But.... it's 24 fps at 1680x1050 on a Mobility FireGL V5200 (X1600) in my NW8440 with a Core Duo T7200 CPU. The driver is only 6.14, as it was supplied by HP :mad:
 
You say it's not viable for games, but it DOES run in an original, unmodified level from a 2007 game (World of Padman), without any human help, without handmade tricks. It's not documented, but the demo can load any Quake 3, Collada or 3ds scene and many 3ds objects without a big performance hit. I'd add lots of cool stuff and document the built-in editor if I had more time, but I'm not a cyborg :)
Great... examples built on dated tech. None of those are "2007" games. The closest thing you've got is a mod project started in 2001 that was finally rolled out as freeware in 2007. And it's still an example that uses pretty slim geometric complexity by today's standards, and uses cramped spaces and high contrast between surfaces (which happens to be very convenient for GI since you don't need to consider many bounces to get some pretty significant results).

You mentioned yourself that the static parts of the scene rely on precomputed form factor relationships, which is again knowledge prior to render time. That, to me, says that performance wouldn't necessarily scale up all too well for number of dynamic objects. Where's your "Gears-Of-War-scale" example, for instance?

I'm still waiting to see the scene with ~800k *visible* tris per frame that covers a square mile of pretty wide open space with 100-200 moving objects of all scales and 4 or 5 dynamic lights plus maybe one or two static lights (which still cast realtime shadows). If you can do that as well and still get 240+ fps (note that I say that because there's more than rendering going on in a frame), that's what I'd consider a starting point of in-game viability in this day and age. Ask me again in a couple of years, and I'd give you an even bigger set of figures. The numbers thrown around on your site... They're certainly better than anything that an academic research project would yield, and I'd have been very interested if you could produce those kinds of results in 2002.

Secondly, there's just the failure cases that would present themselves very easily in a game. Anyone who says that their "realtime GI" approaches are without limitations is either an utter liar or someone who never even bothered to consider a meaningful range of use cases. After a few escapades with various "GI" middleware providers, it's almost always the latter.
 
I have to make some measurements, it's pretty difficult to deduce the time complexity... but it's done on the CPU. I calculated that I could do similar work with a bigger GPU, but with the energy crisis at the door, I think in perf per watt ;)
So you do shadow mapping to determine the direct illumination at each vertex, and then read it back to the CPU to do the radiosity propagation?
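
In other words, something like this per-frame loop? (Every function here is a placeholder stub just to illustrate the question; a real version would render with shadow maps into an offscreen buffer and read the result back with glReadPixels or similar.)

Code:
#include <vector>

struct Rgb { float r, g, b; };

// Placeholder stubs for the hypothetical pipeline being asked about.
std::vector<Rgb> renderAndReadBackDirectLight() { return {}; }           // GPU lighting pass + readback
std::vector<Rgb> propagateOnCpu(const std::vector<Rgb>& d) { return d; } // CPU radiosity step
void uploadPerVertexIndirect(const std::vector<Rgb>&) {}                 // indirect light back to the GPU
void renderFinalImage() {}                                               // final pass: direct + indirect + reflections

void frame()
{
    std::vector<Rgb> direct   = renderAndReadBackDirectLight();
    std::vector<Rgb> indirect = propagateOnCpu(direct);
    uploadPerVertexIndirect(indirect);
    renderFinalImage();
}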

I account only for the most important light paths, so if you look closely you can find some paths ignored, but the overall picture should be correct; see http://www.youtube.com/watch?v=lB5_x2BVRH0, approx. 15 seconds from the beginning.
Thank you for that video. It looks very nice, but also shows the limitations of your technique.

It looks like you don't have any occlusion of indirect lighting. IMHO, this is a very important effect. For example, if a character or object is near a wall, it should be darker on the wall behind that object because it can't receive as much indirect lighting.

Nonetheless, it's very good work. It shows that much of the radiosity effect can be achieved without complicated occlusion tests, and I've been inspired to pursue some ideas of my own.
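
To be concrete about what I mean: the missing occlusion is essentially the visibility term in the point-to-point form factor. A toy sketch, assuming unit-length normals; unoccluded() is just a stub standing in for a real ray/scene intersection test:

Code:
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Stub: a real implementation would trace a ray and return false
// when something blocks the path between the two points.
static bool unoccluded(Vec3 /*from*/, Vec3 /*to*/) { return true; }

// Form factor from a small source patch (srcPos, srcN, srcArea) to a receiver
// point (recvPos, recvN). With occlusion enabled, a blocker between the two
// kills the transfer -- that's the darkening on a wall behind a nearby object.
static float formFactor(Vec3 recvPos, Vec3 recvN, Vec3 srcPos, Vec3 srcN,
                        float srcArea, bool withOcclusion)
{
    Vec3  d  = sub(srcPos, recvPos);
    float r2 = dot(d, d);
    float r  = std::sqrt(r2);
    float cosRecv = std::max(0.0f,  dot(recvN, d)) / r;   // angle at the receiver
    float cosSrc  = std::max(0.0f, -dot(srcN, d)) / r;    // angle at the source patch
    float vis = withOcclusion ? (unoccluded(recvPos, srcPos) ? 1.0f : 0.0f) : 1.0f;
    return vis * cosRecv * cosSrc * srcArea / (3.14159265f * r2);
}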
 
I'm still waiting to see the scene with ~800k *visible* tris per frame that covers a square mile of pretty wide open space with 100-200 moving objects of all scales and 4 or 5 dynamic lights plus maybe one or two static lights (which still cast realtime shadows). If you can do that as well and still get 240+ fps (note that I say that because there's more than rendering going on in a frame), that's what I'd consider a starting point of in-game viability in this day and age. Ask me again in a couple of years, and I'd give you an even bigger set of figures. The numbers thrown around on your site... They're certainly better than anything that an academic research project would yield, and I'd have been very interested if you could produce those kinds of results in 2002.

Secondly, there's just the failure cases that would present themselves very easily in a game. Anyone who says that their "realtime GI" approaches are without limitations is either an utter liar or someone who never even bothered to consider a meaningful range of use cases. After a few escapades with various "GI" middleware providers, it's almost always the latter.
Damn SMM, now tell us how you REALLY feel! :oops:

;)

Seriously, that's about the best brainiac putdown I've seen since the FX era. Kudos. :cool:
 
Couldn't get it to work, cat 7.10, XP, x1900gt. :\

"DOS terminal" pops up for a split second and closes, nothing happens.
 
Great... examples built on dated tech. None of those are "2007" games. The closest thing you've got is a mod project started in 2001 that was finally rolled out as freeware in 2007. And it's still an example that uses pretty slim geometric complexity by today's standards, and uses cramped spaces and high contrast between surfaces (which happens to be very convenient for GI since you don't need to consider many bounces to get some pretty significant results).

You mentioned yourself that the static parts of the scene rely on precomputed form factor relationships, which is again knowledge prior to render time. That, to me, says that performance wouldn't necessarily scale up all too well for number of dynamic objects. Where's your "Gears-Of-War-scale" example, for instance?

I'm still waiting to see the scene with ~800k *visible* tris per frame that covers a square mile of pretty wide open space with 100-200 moving objects of all scales and 4 or 5 dynamic lights plus maybe one or two static lights (which still cast realtime shadows). If you can do that as well and still get 240+ fps (note that I say that because there's more than rendering going on in a frame), that's what I'd consider a starting point of in-game viability in this day and age. Ask me again in a couple of years, and I'd give you an even bigger set of figures. The numbers thrown around on your site... They're certainly better than anything that an academic research project would yield, and I'd have been very interested if you could produce those kinds of results in 2002.

Secondly, there's just the failure cases that would present themselves very easily in a game. Anyone who says that their "realtime GI" approaches are without limitations is either an utter liar or someone who never even bothered to consider a meaningful range of use cases. After a few escapades with various "GI" middleware providers, it's almost always the latter.

Hey, I can't imagine how you would respond if someone said he was going to replace rasterization in 2 years and showed you a video ;) I just demonstrated RTGI in this game level, I never said it's for outdoor scenes. Some features were not demonstrated, but I know they work, and the only quality issues are related to ugly tessellation (easily fixed by game makers), so IMHO it is a viable future for indoor games. Why don't you render everything flat-shaded in realtime? Because textures, although 2x more expensive, look better than a flat-shaded scene with 2x more polygons. The same applies to lighting. At some point it's better to improve the lighting than to add polygons.
 
Couldn't get it to work, cat 7.10, XP, x1900gt. :\

"DOS terminal" pops up for a split second and closes, nothing happens.

What CPU? (Only a historical one without SSE would be a problem.)

Do you see the initial window with several buttons? If you do, try clicking the image, then select penumbra quality = "4 samples", then START. Is it better? Maybe the driver is upset that I'm trying to compile a long shader...

Anyone else with a successful run on an X1900?
 