Could PCI-Express enable "animated lightmaps"?

Daliden

Newcomer
So, pre-calculated textures with radiosity lighting and what not look nice but are static. Would it be possible to allow some simple animation to the lighting by pre-calcing all the different frames and loading them via the hyped bandwidth provided by PCI Express as required? :)

I suppose effects like searchlights and swinging lamps could be done quite prettily with something like this -- but would it be worth the huge amount of extra textures it would require?

(I don't really know anything about 3D rendering, so try not to snort coffee on the screen when you read the above . . . )
 
Daliden said:
I suppose effects like searchlights and swinging lamps could be done quite prettily with something like this -- but would it be worth the huge amount of extra textures it would require?
No need for PCI-E for this, I believe at least FarCry and Doom3 already use this to some extent. The UE3 demo movie also shows an animated light map (actually two cubemaps) being used to produce very nice area lighting from a swinging lamp (the penumbra gets bigger as the light moves further from a surface, a very nice effect).
 
By the time PCIe is common in systems out there, 3D hardware will have evolved to the point where high-quality animated shadow maps can be rendered in realtime. There's no need to store them as animation sequences -- which would work poorly in games anyway, since the swinging light would have to match up exactly with the shadows it casts, causing hitches whenever the swinging motion runs slower than the game update rate. Also, either the shadow animation would be jerky, or the storage and precomputation costs of generating 30+ fps shadow motion would be vast.
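To put rough numbers on the storage argument, here's a back-of-the-envelope sketch; the resolution, frame rate, and loop length are made-up assumptions, not figures from any actual game:

```python
# Back-of-the-envelope storage cost for one precomputed lightmap animation.
# All parameters below are illustrative assumptions.
width, height = 1024, 1024   # lightmap resolution in texels
bytes_per_texel = 3          # uncompressed RGB
fps = 30                     # frames needed for smooth shadow motion
loop_seconds = 4             # length of one full swing of the lamp

frame_bytes = width * height * bytes_per_texel
total_bytes = frame_bytes * fps * loop_seconds
print(f"one frame:  {frame_bytes / 2**20:.1f} MiB")
print(f"whole loop: {total_bytes / 2**20:.1f} MiB")
```

Even with these modest assumptions a single swinging lamp eats hundreds of MiB, and that's before you add a second light to the room.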

A wholly unrealistic approach I would think - but a nice idea nonetheless so don't feel bad for thinking it up! :)
 
Daliden said:
So, pre-calculated textures with radiosity lighting and what not look nice but are static. Would it be possible to allow some simple animation to the lighting by pre-calcing all the different frames and loading them via the hyped bandwidth provided by PCI Express as required? :)

I suppose effects like searchlights and swinging lamps could be done quite prettily with something like this -- but would it be worth the huge amount of extra textures it would require?

(I don't really know anything about 3D rendering, so try not to snort coffee on the screen when you read the above . . . )

A big problem would be that you cannot calc radiosity in realtime, so unless you have a static animation sequence, it will be impossible to create the lightmaps for every frame at an interactive framerate (with global illumination, everything depends on everything).
Also, if you want to precalc lightmaps for an animation sequence, you will quickly run into trouble with storage requirements.
 
The original Unreal had animated lightmaps. It was perfectly doable on a Riva 128 and a PII at 300 MHz; there's no need for PCI Express. Besides, animated lightmaps don't really solve anything, since they still won't integrate well with dynamic objects. Shadow maps all the way, baby.
 
DemoCoder said:
How about Monte Carlo Radiosity in the near term future (say, circa 2007 CPUs and GPUs)?
Well, the only problem with actually using a Monte Carlo algorithm for rendering is that the use of Monte Carlo is really more art than science. What this basically means is that you have no idea how long it will take for the result to converge properly until you actually run through it, so an engine may have a hard time automatically using the right number of iterations for convergence. There are also situations where Monte Carlo really breaks down, but I have no idea what they'd look like in a 3D system...
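As a toy illustration of that convergence problem (a generic pi-estimation example, nothing to do with any particular renderer): a Monte Carlo estimate's error shrinks only as 1/sqrt(N), and you can only estimate that error from the samples themselves, which is why fixing the iteration count up front is guesswork.

```python
import random

# Estimate pi by sampling the unit quarter-circle, and report the
# estimated standard error alongside the result.
def estimate_pi(n, seed=0):
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    p = hits / n                       # fraction of points inside the quarter circle
    est = 4 * p
    err = 4 * (p * (1 - p) / n) ** 0.5 # binomial standard error, scaled by 4
    return est, err

for n in (100, 10_000, 1_000_000):
    est, err = estimate_pi(n)
    print(f"N={n:>9}: pi ~ {est:.4f} +/- {err:.4f}")
```

Ten-thousandfold more samples buys you only a hundredfold less error, and an engine would have to decide N before seeing any of it.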
 
Daliden said:
So, pre-calculated textures with radiosity lighting and what not look nice but are static. Would it be possible to allow some simple animation to the lighting by pre-calcing all the different frames and loading them via the hyped bandwidth provided by PCI Express as required? :)

....
(I don't really know anything about 3D rendering, so try not to snort coffee on the screen when you read the above . . . )

The thing is that even at the max theoretical bandwidth of PCIe x16, roughly ~8 GB/s, we're still far below the onboard 3D-card bandwidth of ~25-35 GB/s, so I'm not sure what role you think PCIe might play in this (256 MB cards are common today, and 512 MB cards are probably only a year or two out).
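Playing with the figures quoted above (theoretical peaks; the 4 MiB frame size is my own assumption for an uncompressed 1024x1024 RGBA lightmap):

```python
# Rough comparison of PCIe x16 vs. onboard memory bandwidth for streaming
# lightmap frames.  Bandwidth figures are the theoretical peaks from the
# post; the per-frame size is an assumption.
pcie_gib_per_s = 8        # PCIe x16 peak
onboard_gib_per_s = 30    # midpoint of the quoted 25-35 range

frame_mib = 4             # assumed uncompressed 1024x1024 RGBA lightmap frame
frames_per_s = pcie_gib_per_s * 1024 / frame_mib
print(f"PCIe x16 could stream at most ~{frames_per_s:.0f} frames/s,")
print(f"while local video memory is ~{onboard_gib_per_s / pcie_gib_per_s:.1f}x faster still.")
```

So the bus isn't even the first bottleneck -- you'd be trading the card's own bandwidth for a slower pipe, on top of the storage problem.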
 
Goragoth said:
No need for PCI-E for this, I believe at least FarCry and Doom3 already use this to some extent. The UE3 demo movie also shows an animated light map (actually two cubemaps) being used to produce very nice area lighting from a swinging lamp (the penumbra gets bigger as the light moves further from a surface, a very nice effect).

Heh, and unfortunately the penumbra is supposed to get bigger with distance from the occluder, not distance from the light source (which would actually make the penumbra smaller, since the size ratio is approximately D_occluder/D_lightsource).
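For what it's worth, the similar-triangles sketch behind that ratio (idealized point-free geometry; all sizes and distances below are made up):

```python
# Similar-triangles model of a penumbra cast by an area light:
# a light of a given width, an occluder edge some distance from the
# light, and a receiving surface some distance behind the occluder.
def penumbra_width(light_size, d_light_to_occluder, d_occluder_to_receiver):
    # The penumbra on the receiver scales with the occluder-to-receiver
    # distance and inversely with the light-to-occluder distance.
    return light_size * d_occluder_to_receiver / d_light_to_occluder

# Occluder near the receiver, far from the light -> sharp shadow edge.
print(penumbra_width(0.2, 1.0, 0.5))
# Occluder near the light, far from the receiver -> blurry shadow edge.
print(penumbra_width(0.2, 0.5, 1.0))
```

Same light, same occluder: only the distances change, and the shadow softens as the occluder moves away from the surface it shadows.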

Heh, well, Doom 3 doesn't quite do this so much as play short animation sequences -- same idea, of course, but there isn't nearly as much data to swap through as with a full lightmap.

Anyway, surprised no one showed you Humus's demo, which does this, over here: http://esprit.campus.luth.se/~humus/3D/index.php

Interesting to find out the use of Monte Carlo in science is an art all of a sudden :p Anyway, when we use it, we definitely lean on the heavy side of ensuring enough sampling is taking place. Doesn't matter, since we can wait for it to complete, unlike in realtime graphics.
 
Cryect said:
Interesting to find out the use of Monte Carlo in science is an art all of a sudden :p Anyway, when we use it, we definitely lean on the heavy side of ensuring enough sampling is taking place. Doesn't matter, since we can wait for it to complete, unlike in realtime graphics.
Well, if you just want to get results, you can do that. But if you are working at the limits of computing technology (i.e. your Monte Carlo problem takes days or weeks to complete), or you want to ensure that the statistics are satisfactory -- that you've sampled the entire relevant space -- you will need to do some problem-specific tweaking. That's where the art lies.
 
WaltC said:
The thing is that even at the max theoretical bandwidth of PCIe x16, roughly ~8 GB/s, we're still far below the onboard 3D-card bandwidth of ~25-35 GB/s, so I'm not sure what role you think PCIe might play in this (256 MB cards are common today, and 512 MB cards are probably only a year or two out).

Well, 256 MB is a lot less than, say, a disk cache of a couple of gigs of pre-generated data. And there's always enough stuff that has to reside on the display card anyway.

But it does appear that this idea was both not that new and not that good :)

However, on a slightly different note, I'm a bit ambivalent about "unified" solutions. I mean, what's wrong with pre-computing as much as possible? After all, the levels must be carefully designed (for playability purposes) in the first place -- why not "fake" as much as you can? It might mean more work for modders, but it would almost certainly mean an end result that is both prettier and faster than anything with a unified solution.

And the physics engines? Do you really need an engine that is general-purpose, if your enemies will stand in a limited number of positions and can be shot in a limited number of locations? You could pack a huge amount of death animations in a small space (if you do it with vertices). Just send the required animation sequence to the GPU.

Playing a game like this would be like watching a Hong Kong movie. You might know the wires are there, but you still cannot see them. :)

(Of course, I just basically described the majority of games out there . . . but is there a *really* good reason for getting rid of "faking it"?)
 
Daliden said:
However, on a slightly different note, I'm a bit ambivalent about "unified" solutions. I mean, what's wrong with pre-computing as much as possible? After all, the levels must be carefully designed (for playability purposes) in the first place -- why not "fake" as much as you can? It might mean more work for modders, but it would almost certainly mean an end result that is both prettier and faster than anything with a unified solution.

I couldn't agree more on that one :)

(Of course, I just basically described the majority of games out there . . . but is there a *really* good reason for getting rid of "faking it"?)

Actually, calcing something in realtime can be more flexible, but the only good reason to do it would be to actually use that extra flexibility, of course :)
I suppose that is pretty much the converse of the original statement: if you want to fake as much as you can, then apparently you will precalc everything that doesn't require that flexibility.
 
Daliden said:
However, on a slightly different note, I'm a bit ambivalent about "unified" solutions. I mean, what's wrong with pre-computing as much as possible? After all, the levels must be carefully designed (for playability purposes) in the first place -- why not "fake" as much as you can? It might mean more work for modders, but it would almost certainly mean an end result that is both prettier and faster than anything with a unified solution.
This is the whole point. For most games, it was absolutely necessary to preprocess tons of things to get it to look good. Doom 3 shows us that lighting is no longer one of them. As we move into the future, dropping pre-processed content will allow game developers more and more freedom in creating dynamic content, and will make the user experience subtly better (by removing the disconnect between pre-processed content and on-the-fly content).

Consider, for example, the significant improvement made when UT moved away from canned death animations, and implemented a physics system to do them instead. When it can be done adequately, processing on the fly is always better. But, unfortunately, it can't always be done adequately.
 
Chalnoth said:
This is the whole point. For most games, it was absolutely necessary to preprocess tons of things to get it to look good. Doom 3 shows us that lighting is no longer one of them.

In the case of Doom 3, no. But I am quite sure that there are plenty of scenes where a precalced radiosity approach like Half-Life 2's will look better and/or be faster than rendering directly with point lights as Doom 3 does.
So I disagree that Doom 3 proves that lighting doesn't require preprocessing for all games. In fact, Doom 3 itself still preprocesses certain lightmaps. So we're not quite there yet, not even with Doom 3.
 
Scali said:
In fact, Doom3 still preprocesses certain lightmaps itself.

I've heard you say this before, and I was wondering if you could clarify more specifically what you mean. As far as I could tell, in the past you were referring to projected texture mapping, which I would argue isn't preprocessing -- unless artists are preprocessors now. But at this point I don't feel like arguing about it; I'm just trying to understand your view.
 
Right, because there are situations where the shadow volumes technique breaks down. In the near future, I expect all lighting in games to be dynamic, through a series of techniques designed to give the best look and performance for that particular situation (whether it be shadow mapping, shadow volumes, or some other technique).

The challenge will be, of course, in combining all of these techniques and keeping them all looking good together, but it will be a lot better to do things this way than it would be to have precalculated radiosity, for example, since any dynamic lights destroy the illusion.
 
Cryect said:
I've heard you say this earlier and wondering if you could clarify more specifically what you mean? As far as I could tell in the past you are referring to projected texture mapping which I would argue isn't preprocessing besides unless artists are preprocessors now. But at this point I don't feel like arguing about it just trying to understand your view on this.

As far as I know, they are generated from the actual geometry with raycasting or such. I can't imagine an artist drawing these textures manually.
At any rate, they break the whole 'calculated at runtime' and 'flexible' idea, since these maps do not respond to changes in the positions of the light source or shadow-casting objects. So how they are created isn't really important in this case.
 
Chalnoth said:
The challenge will be, of course, in combining all of these techniques and keeping them all looking good together, but it will be a lot better to do things this way than it would be to have precalculated radiosity, for example, since any dynamic lights destroy the illusion.

I think it is highly debatable whether no radiosity at all looks better than radiosity + dynamic lights.
If 'better' means 'more realistic', then I would say that no radiosity at all is less realistic than radiosity that doesn't respond correctly to dynamic lights, since 'radiosity' obviously exists in reality.
 
Well, I'm not sure radiosity lighting is exact, but it does look pretty good. My point is that consistent but incorrect looks better than inconsistent but with correct parts.
 
Chalnoth said:
Well, I'm not sure radiosity lighting is exact, but it does look pretty good. My point is that consistent but incorrect looks better than inconsistent but with correct parts.

My point is that this is your opinion, hence subjective.
 