Lightsmark - new realtime global illumination demo

Hey, I can't imagine how you would respond if someone said he was going to replace rasterization in 2 years and showed you a video ;)
How are you replacing rasterization? You're still putting the task of drawing on the GPU. Hell, if there was someone *actually* replacing rasterization, I'd be quite happy.

I just demonstrated RTGI in this game level; I never said it's for outdoor use. Some features were not demonstrated, but I know they work, and the only quality issues are related to ugly tessellation (which can easily be fixed by game makers), so IMHO it is a viable future for indoor games.
Not future... not present, either. The kinds of levels you demonstrate are all dated by today's standards. The kinds of numbers I threw at you were not throughput demands 2 or 3 or 10 years down the line, but the kinds of things that are perfectly ordinary right now. You don't have to look to an extraordinary or even necessarily outdoor scene to find stuff like that. That's what makes it not viable right now.

Also, just out of curiosity... when you mentioned those polycounts, did you mean that the whole scene is about that size, or that the number of tris actually processed in a given frame is about that many?

Why don't you render everything flat-shaded in realtime? Because textures, although 2x more expensive, look better than a flat-shaded scene with 2x more polygons. The same applies to lighting. At some point, it's better to improve lighting than to add polygons.
Don't pretend that the difference is as small as 2:1.
I never said better lighting isn't important. I'm saying it's not a reason to step backwards. When we first went from flat shading to texturing, we did it with the same geometry throughput as before, and that's what made it acceptable at the time.

One little thing you seem to be missing is that there are a wide array of fake GI hacks and tricks that people use already. I myself wrote a system for irradiance volumes w/ gradients (and it takes advantage of baked radiosity lightmaps on static geometry) -- is it limited in its use? You bet it is (although it does work equally well for both indoor and outdoor ;) ). But since it's only applied to a specific subset of the overall problem space, it works well, and it's hundreds of times faster than any of the middleware "GI" solutions out there, which means it has minimal impact on the time available for other work. And this is the sort of thing that everybody and his brother is already doing if not looking into. Sure it's not as flexible as a totally dynamic thing, but the point is that the cost is small for the effect.
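
To give a flavour of how cheap the runtime side is: sampling such a volume is essentially a first-order Taylor expansion around the nearest baked sample. A toy sketch of the general idea (made-up names, not our actual code):

```cpp
// Toy sketch: evaluating an irradiance volume with gradients.
// Offline, each grid sample bakes RGB irradiance plus its spatial
// gradient; at runtime a dynamic object extrapolates from the
// nearest sample(s) with a first-order Taylor expansion.

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct IrradianceSample {
    Vec3 center;               // world-space position of the baked sample
    Vec3 irradiance;           // baked RGB irradiance at that position
    Vec3 gradX, gradY, gradZ;  // d(irradiance)/dx, /dy, /dz (RGB each)
};

// E(p) ~= E(c) + (p - c) . grad E  -- a handful of multiply-adds
Vec3 evalIrradiance(const IrradianceSample& s, Vec3 p)
{
    Vec3 e = s.irradiance;
    e = add(e, scale(s.gradX, p.x - s.center.x));
    e = add(e, scale(s.gradY, p.y - s.center.y));
    e = add(e, scale(s.gradZ, p.z - s.center.z));
    return e;
}
```

In practice you'd blend the eight samples surrounding the point trilinearly, but that's the whole trick -- no runtime visibility queries anywhere.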

Sure we're at a point today where trying to throw on more polygons would be pointless in many cases... that doesn't mean we should feel free to drop polycounts to 1/4 of what would be considered average (not even particularly high), and also have to cut back on dynamic objects in the scene, and still also have to sacrifice a pretty sizable chunk of frametime which could be spent on other non-graphical work. Explain to me how this is reasonable.
 
How are you replacing rasterization? You're still putting the task of drawing on the GPU. Hell, if there was someone *actually* replacing rasterization, I'd be quite happy.

Sorry, I was just making fun of third parties with revolutionary plans :)

Not future... not present, either. The kinds of levels you demonstrate are all dated by today's standards. The kinds of numbers I threw at you were not throughput demands 2 or 3 or 10 years down the line, but the kinds of things that are perfectly ordinary right now. You don't have to look to an extraordinary or even necessarily outdoor scene to find stuff like that. That's what makes it not viable right now.

There's no law that an indoor game must have tons of polygons and primitive lighting.
People's wishes are created by marketing machines.

Also, just out of curiosity... when you mentioned those polycounts, did you mean that the whole scene is about that size, or that the number of tris actually processed in a given frame is about that many?

Simply the WoP map + robot, not multiplied by multipass.

Don't pretend that the difference is as small as 2:1.
I never said better lighting isn't important. I'm saying it's not a reason to step backwards. When we first went from flat shading to texturing, we did it with the same geometry throughput as before, and that's what made it acceptable at the time.

Sometimes it is 2:1, but mostly it's worse; it depends on CPU/GPU speeds.
I wanted to say it is similar in principle, not an exact 2:1 ratio (I don't even insist that texturing is 2x slower than flat shading).

One little thing you seem to be missing is that there are a wide array of fake GI hacks and tricks that people use already. I myself wrote a system for irradiance volumes w/ gradients (and it takes advantage of baked radiosity lightmaps on static geometry) -- is it limited in its use? You bet it is (although it does work equally well for both indoor and outdoor ;) ). But since it's only applied to a specific subset of the overall problem space, it works well, and it's hundreds of times faster than any of the middleware "GI" solutions out there, which means it has minimal impact on the time available for other work. And this is the sort of thing that everybody and his brother is already doing if not looking into. Sure it's not as flexible as a totally dynamic thing, but the point is that the cost is small for the effect.

So what's your trick to fake indoor light controlled by the player?
I haven't seen Crysis yet, but I guess it was pretty difficult for the designers to mask all the ambient map artefacts.

Sure we're at a point today where trying to throw on more polygons would be pointless in many cases... that doesn't mean we should feel free to drop polycounts to 1/4 of what would be considered average (not even particularly high), and also have to cut back on dynamic objects in the scene, and still also have to sacrifice a pretty sizable chunk of frametime which could be spent on other non-graphical work. Explain to me how this is reasonable.

Dynamic objects are cheap. An indoor game with simpler geometry but better RTGI lighting is possible, but how to sell it is a question for the marketing guys. They'd have an easy job with me: I like WoP and I'd buy it if it had RTGI.
 
I feel sorry for this guy; he's uploaded a decent bit of work and it gets shit on by every man and his dog. OK, so it's not replacing the Crysis engine, which was built by many people over a long time. But this is an individual's work, correct?

Can some of the guys who slated this please show me their better, more flexible, more expansive, open-area GI engines? Because I'm excited by the fact that this is actually poor and there are so many better systems out there!!
 
No one's shitting on his work. SMM is rather critical of it, but I don't think it's fair to say he's shitting on it. Everyone else has been rather complimentary, I'd say.
 
I really liked the amazingly not overdone lighting...

/all modern HDR games wave hello...
//seriously, I feel like I'm going to a tanning salon when I fire up anything recent.
///how hard can it be?
 
So what's your trick to fake indoor light controlled by the player?
I haven't seen Crysis yet, but I guess it was pretty difficult for the designers to mask all the ambient map artefacts.
The problem is that this isn't a common problem for game designers. Your demo looks great for this situation, but usually lights don't move. Only a few games spend much time with a flashlight.

Normally the light is stationary and the objects move. Unfortunately, your method only produces a reaction in lighting if the moving objects are well lit and/or cast big shadows. Occlusion caused by the objects isn't accounted for.

Dynamic objects are cheap. An indoor game with simpler geometry but better RTGI lighting is possible, but how to sell it is a question for the marketing guys. They'd have an easy job with me: I like WoP and I'd buy it if it had RTGI.
It's not just marketing. Many people complained about the low-polygon models in Doom 3, despite the better lighting compared to earlier games.

By the way, would you mind answering the questions above in my previous post?
 
The problem is that this isn't a common problem for game designers. Your demo looks great for this situation, but usually lights don't move. Only a few games spend much time with a flashlight.

It's partially a chicken-and-egg problem: lights don't move because the GI would be wrong. With GI solved, lights would move more often. Of course I agree there are many scenarios that don't need it. But if it looks good and the engine can do it, why not use it even if it's unexpected? I remember some fantastic movie scenes created this way, e.g. two guys talking in a small room; one hits the light bulb hanging on a long cable between them and makes it swing... it's not important for the movie, but the movie supports it. (This one is from Alphaville, Godard.)

Normally the light is stationary and the objects move. Unfortunately, your method only produces a reaction in lighting if the moving objects are well lit and/or cast big shadows. Occlusion caused by the objects isn't accounted for.

By the way, would you mind answering the questions above in my previous post?

You are right. But you've inspired me - I can fake this type of occlusion :)

It's not just marketing. Many people complained about the low-polygon models in Doom 3, despite the better lighting compared to earlier games.
Ok, some balance is important. But it's difficult to satisfy everyone. History is made by brave developers :)
 
So you do shadow mapping to determine the direct illumination at each vertex, and then read it back to the CPU to do the radiosity propagation?

It's detected with 10 or 36 samples per triangle. For now it's only 10, because the Radeon X300 driver crashes while compiling the 36-sample version. Then it's read back to the CPU - yes.
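
Simplified, the CPU side is then just a classic gathering loop over the precomputed form factors, something like this (illustrative sketch, not the real code):

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch of CPU-side radiosity propagation. 'direct' is
// the per-triangle direct illumination read back from the GPU;
// 'formFactor[i][j]' is the precomputed form factor from triangle j
// to triangle i. Each bounce is an O(n^2) gather, which is why the
// form factors for static geometry are precalculated.

struct RGB { float r, g, b; };

std::vector<RGB> propagate(const std::vector<RGB>& direct,
                           const std::vector<std::vector<float>>& formFactor,
                           const std::vector<RGB>& reflectance,
                           int bounces)
{
    const size_t n = direct.size();
    std::vector<RGB> radiosity = direct;  // bounce 0 = direct light
    std::vector<RGB> incoming  = direct;
    for (int b = 0; b < bounces; ++b) {
        std::vector<RGB> next(n, RGB{0, 0, 0});
        for (size_t i = 0; i < n; ++i)    // gather from every emitter j
            for (size_t j = 0; j < n; ++j) {
                const float f = formFactor[i][j];
                next[i].r += f * incoming[j].r * reflectance[i].r;
                next[i].g += f * incoming[j].g * reflectance[i].g;
                next[i].b += f * incoming[j].b * reflectance[i].b;
            }
        for (size_t i = 0; i < n; ++i) {  // accumulate this bounce
            radiosity[i].r += next[i].r;
            radiosity[i].g += next[i].g;
            radiosity[i].b += next[i].b;
        }
        incoming.swap(next);
    }
    return radiosity;  // uploaded back to the GPU as vertex colors
}
```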
 
Sorry, I was just making fun of third parties with revolutionary plans :)
Ah... yes, well, that's a different problem altogether. Misinformation can often be the best marketing tool.

There's no law that an indoor game must have tons of polygons and primitive lighting.
People's wishes are created by marketing machines.
Yeah, well, one vicious cycle which can never be escaped is the fact that once something is shown to be technically feasible, the consumer sets that as a minimum expectation. If somebody demonstrates absurd polygon counts, then the bar is raised to that level, and not meeting that is commercial suicide in 95% of all significant cases (leaving aside things like XBLA/PSN games which are their own microcosm). If somebody else demonstrates RTGI, everybody will want it. And in a major title, the law will thereafter become that both must be achieved. It's not a pretty thing, but you can't escape the fact that people are just that stupid and at the same time unforgiving since you're talking about something they spend money on for leisure. Because of that, the consumer feels that any level of self-centeredness is justifiable.

Simply the WoP map + robot, not multiplied by multipass.
Are you propagating/accumulating on every surface in the scene, or just on those which are visible for that frame?

So what's your trick to fake indoor light controlled by the player?
Depends on which project you're talking about (I happen to be supporting development on 3 at the same time -- at some point it will probably be 5). In one case, we're not particularly worried about it because almost all dynamic lights are specifically very localized in their impact, and we have illumination models with shadow-escape components to fake some bleeding. If we really need it, we do have fill lights in some cases, which are fast since they don't cast shadows. That project is not really intended to be "photorealistic" to begin with, but more of a case of wanting an evocative palette.

In another case, we have things like grouping mechanisms for dynamic lights, and multiple lightmap layers and irradiance volume layers (and doing things like shooting out individual lights introduces negative lights which locally attenuate the lightmap and irradiance volume contributions). Obviously, we don't have many groups, but another big reason for having groups in the first place is so that levels can be reused in radically different lighting conditions and the scene only need be built and lit once (i.e., the concern is workflow, not really performance per se).
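
The layer blend itself is about as cheap as it sounds -- conceptually something like this per texel (toy sketch with made-up names; the real thing lives in a shader):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Toy sketch of blending lightmap layers per light group. Each group
// bakes its own layer; at runtime a weight scales each contribution,
// and a shot-out light is modeled as a layer with a negative weight
// that locally attenuates the result. Clamping keeps texels >= 0.

struct RGB { float r, g, b; };

struct Layer {
    std::vector<RGB> texels;  // baked contribution of one light group
    float weight;             // runtime intensity; negative = subtract
};

RGB shadeTexel(const std::vector<Layer>& layers, size_t t)
{
    RGB sum{0, 0, 0};
    for (const Layer& l : layers) {
        sum.r += l.weight * l.texels[t].r;
        sum.g += l.weight * l.texels[t].g;
        sum.b += l.weight * l.texels[t].b;
    }
    sum.r = std::max(sum.r, 0.0f);  // negative layers may overshoot
    sum.g = std::max(sum.g, 0.0f);
    sum.b = std::max(sum.b, 0.0f);
    return sum;
}
```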

Very cheap hacks and such, and all very open to change in the long run, but that's fine as long as it works and artists know what they're doing with it. That's one of the upsides of having a lot of artists who eat multivariable calculus textbooks for breakfast like the rest of us geeks.

Dynamic objects are cheap.
I haven't seen evidence of that. Seeing one or two dynamic objects move doesn't do much to convince me. You mentioned needing to precalculate form factors for the static geometry. That to me says either that form factors for dynamic objects are ass-slow, or that you're severely simplifying what you do on dynamic objects and what their contribution is... which is a bit difficult to judge when you're showing a dynamic object that is basically chrome (why not chrome Hitler? :p). For that matter, cubemaps don't come cheap and never will.
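
For reference, even the unoccluded point-to-point form factor term costs this much per pair -- and that's before visibility, which is the expensive part (standard textbook formula, nobody's shipping code):

```cpp
#include <cmath>

// Unoccluded differential form factor between two patches i and j:
//   F ~= (cos(theta_i) * cos(theta_j) * A_j) / (pi * r^2)
// Visibility between the patches (the expensive part) is omitted.
// Re-evaluating this every frame for each dynamic-object/static-patch
// pair is what makes fully dynamic objects costly.

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

float formFactor(Vec3 pi, Vec3 ni,   // position/unit normal of patch i
                 Vec3 pj, Vec3 nj,   // position/unit normal of patch j
                 float areaJ)        // area of patch j
{
    const Vec3  d  = sub(pj, pi);
    const float r2 = dot(d, d);
    if (r2 <= 0.0f) return 0.0f;
    const float r    = std::sqrt(r2);
    const float cosI =  dot(ni, d) / r;
    const float cosJ = -dot(nj, d) / r;
    if (cosI <= 0.0f || cosJ <= 0.0f) return 0.0f;  // facing away
    return cosI * cosJ * areaJ / (3.14159265f * r2);
}
```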

An indoor game with simpler geometry but better RTGI lighting is possible, but how to sell it is a question for the marketing guys. They'd have an easy job with me: I like WoP and I'd buy it if it had RTGI.
Especially for people who are trying to work on major titles. It'll fly for a little downloadable title with a below-$1-million budget that people will pay $10 for and pick up at random. Not likely at all for a title that, even at $60 a pop, needs to sell 2 million units off store shelves just to break even. People with a geeky curiosity in RTGI may be intrigued, but there aren't 2 million more people like that.
 
Can some of the guys who slated this please show me their better, more flexible, more expansive, open-area GI engines? Because I'm excited by the fact that this is actually poor and there are so many better systems out there!!

I would also like to see their GI engine solutions, and the framerate too. :smile:
 
That is absurdly too cynical.
Okay then, 92%.

Obviously, when I say "meeting" I don't mean exactly -- there's an acceptable range, and you generally need to stay within 1 standard deviation. It isn't necessarily genre-agnostic (though sometimes it can be, and as stupidly as possible), but that doesn't mean the bar isn't raised -- it just means there are several bars, and an individual project has to worry about a specific set of them.
 
Can some of the guys who slated this please show me their better, more flexible, more expansive, open-area GI engines? Because I'm excited by the fact that this is actually poor and there are so many better systems out there!!
I would also like to see their GI engine solutions, and the framerate too. :smile:
Wow, that's not only completely missing the point but presuming it to be the exact opposite thereof. It's not about there being better GI engines, but that RTGI frameworks are simply not ready/feasible for real in-game application. The best you can do is cheap hacks that work in a limited bunch of cases, but at least have an effect for very little cost.

I think games like Little Big Planet show you can buck the trend.
No, it most certainly does not. It only shows that the trend is not necessarily at the same distance along the infinite spiral for every genre of game.
 
Wow, that's not only completely missing the point but presuming it to be the exact opposite thereof. It's not about there being better GI engines, but that RTGI frameworks are simply not ready/feasible for real in-game application. The best you can do is cheap hacks that work in a limited bunch of cases, but at least have an effect for very little cost.

Once upon a time, I marvelled at the bump-mapped torus demo that came with the original GeForce 256 (if memory serves). That was hardly achievable in gaming environments at the time, and the same goes for the cubic environment map reflection demo. It was certainly limited in scope. But what was great was seeing a taste of the future and knowing that progress was in the pipeline.

So the fact that in the last year RTGI has begun to get more focus, has been demoed in limited scenarios, and is making progress is great. We know it will be solved in real time one day; half the fun is seeing the journey unfold.

So just how much power and flexibility will be needed before it is part of full gaming environmental rendering? 5-6 years... maybe more?
 
How about WoW, EQI, Nintendo GBA, DS, Wii, the PS2, GoW2, System Shock2, PSP, Guild Wars, Final Fantasy XI ffs, etc...

I find it strange, given all the stats indicating that the vast majority of gamers are playing on outdated (at least one generation old) midrange or lower hardware, at resolutions of 1280x1024 or lower, sans HDR, with low-res textures, that these guys are supposedly all raging graphics whores. This culture of keeping up with the Joneses is an absurd falsification, much more likely perpetuated by misguided producers, IHVs trying to ensure they'll be able to pimp their next hardware, and guys designing million-dollar engines than by these "graphics-crazed consumers". Or maybe people actually believe all these douchebags on internet forums who have nothing better to do than bitch about upcoming titles' graphics.

Hell, even many of the high-end consumers are far from mindless slaves to graphics... The first thing going through their minds when they pick up a shiny new copy of UT3 is: what useless stuff can I turn off to get a smoother framerate? I just wish there were something I could turn off in UT3 (other than lowering the resolution) to get a decent framerate on my single-core machine.
 