Ubisoft's Lighting And Physics Tech For Splinter Cell

Ambient occlusion was invented at ILM, originally for Pearl Harbor and Jurassic Park III. It works by examining the scene from the point of view of each pixel: casting rays in a quasi-random pattern and checking whether each ray hits anything, and how far away that other piece of geometry is. Summing up the results tells you how much of the ambient (diffuse) light from the surroundings is blocked - occluded - at that point.
ILM rendered the occlusion of each surface point into a separate pass - a render target, if you will - and multiplied it with the pass containing the ambient lighting for the object. In CGI, ambient lighting is usually constant throughout the entire scene, so it tends to wash out shading detail and make objects look flat. Combined with AO it changed dramatically and already gave pretty cool results; but ILM went further and replaced the constant value with a lookup into a spherical environment map. For movie VFX this means they photograph a chrome sphere on the set, which reflects practically the entire environment, and thus they can extract nearly all the lighting information of the real-world shooting location.
Another interesting technique is to manipulate the surface normal used for the environment map lookup in the lighting: bending it, based on the occlusion of the surface, towards the direction most of the light should be coming from at that surface point. Even though your surface is facing upwards, if it only gets light from the sides then it makes sense to modify your environment lookup to reflect this in the color and intensity of the incoming light.
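To make the bent-normal idea concrete, here's a minimal offline-style sketch in Python. The scene_hit() ray query is a hypothetical stand-in for whatever intersection test the renderer provides; a production tracer would also fold in the hit distance as described above, but the boolean version keeps the sketch short.

Code:
import math, random

def sample_hemisphere(normal):
    # Rejection-sample a unit direction, then flip it into the hemisphere
    # around 'normal'.
    while True:
        d = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        if 0.0 < sum(c * c for c in d) <= 1.0:
            break
    length = math.sqrt(sum(c * c for c in d))
    d = tuple(c / length for c in d)
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = tuple(-c for c in d)
    return d

def occlusion_and_bent_normal(point, normal, scene_hit, rays=64):
    # scene_hit(origin, direction) -> True if the ray hits any geometry.
    open_dirs = [d for d in (sample_hemisphere(normal) for _ in range(rays))
                 if not scene_hit(point, d)]
    ao = len(open_dirs) / rays             # fraction of the hemisphere left open
    if not open_dirs:
        return 0.0, normal                 # fully occluded: keep the real normal
    avg = tuple(sum(d[i] for d in open_dirs) for i in range(3))
    length = math.sqrt(sum(c * c for c in avg)) or 1.0
    bent = tuple(c / length for c in avg)  # average unoccluded direction
    return ao, bent

The ambient term then becomes ao * envmap_lookup(bent) instead of envmap_lookup(normal), which is exactly the "bend it towards where the light actually comes from" trick described above.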

AO usually has two main frequencies of detail: one is the result of large objects interacting with each other, like a character casting very soft 'contact shadows' onto the ground around its legs, or the two legs occluding the crotch area and the inner sides of the thighs. This effect really helps to integrate objects into an environment.
The other part is the small surface details, like various pieces of equipment casting soft shadows on the clothing, or soft darkening in the cracks and other cavities of very rough surfaces.


For realtime applications, there's been a Microsoft Research publication that replaced/approximated objects with spheres and created some low-frequency soft shadows that actually looked quite good. It's obviously not as precise and detailed as real, raytraced AO, but it's probably very fast compared to that. I'll try to look up this presentation; I have the video on my computer somewhere.
Using this method, objects could have a kind of occlusion effect on the environment, though they obviously can't occlude themselves. Part of the self-occlusion can be painted into the textures; it'll be static, but it'll still have some effect.
Splinter Cell seems to be using this technique, combined with some sort of dynamic environment maps for the ambient lighting, where they can render bright objects into the ambient sphere/cubemap. A rough approximation, but it should work.
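For the curious, the occlusion cast by a single sphere proxy can be computed analytically; the sketch below is one plausible formulation in the spirit of the sphere-approximation work mentioned above, not anyone's shipping code. The solid-angle formula for a sphere is exact, while the cosine weighting and the multiplicative combination are common heuristics.

Code:
import math

def sphere_occlusion(point, normal, center, radius):
    to_c = tuple(c - p for c, p in zip(center, point))
    d = math.sqrt(sum(c * c for c in to_c))
    if d <= radius:
        return 1.0                         # receiver inside the sphere
    # Solid angle of a sphere at distance d is 2*pi*(1 - sqrt(1 - (r/d)^2));
    # dividing by the 2*pi hemisphere gives the occluded fraction.
    occluded = 1.0 - math.sqrt(1.0 - (radius / d) ** 2)
    # Weight by how far above this point's horizon the sphere sits.
    cos_term = max(sum(n * c / d for n, c in zip(normal, to_c)), 0.0)
    return occluded * cos_term

# Several proxies are typically combined multiplicatively:
# visibility = product of (1 - sphere_occlusion(...)) over all spheres.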

Another interesting thing is that Crysis seems to be using the 'bent normals' technique for its environment lighting.
 
http://en.wikipedia.org/wiki/Ambient_occlusion



I suspect they are commenting on Cell's NUMA (local memory) architecture because the algorithm needs to check global state in a tree walk regularly.

Do you mean disc trees? In implementations like Bunnell's, though, it's the GPU that walks the hierarchy (and builds it, IIRC). I've seen it suggested that it may be a win to have the CPU build it instead, though (as the GPU is relatively poor at building trees versus traversing them). Of course, you could have the CPU do both and go all the way in calculating the AO term for each pixel, in order to totally offload that from the GPU.
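As a rough illustration of why the build step suits the CPU, here's a toy Python sketch of a median-split build over disc elements (my own construction for illustration, not Bunnell's actual code). Each inner node aggregates the total area and an area-weighted position and normal of the discs below it:

Code:
import math

def build(discs, axis=0):
    # discs: list of (position, normal, area) tuples for the leaf elements.
    if len(discs) == 1:
        p, n, a = discs[0]
        return {"position": p, "normal": n, "area": a, "children": []}
    discs = sorted(discs, key=lambda d: d[0][axis])   # median split on one axis
    mid = len(discs) // 2
    kids = [build(discs[:mid], (axis + 1) % 3),
            build(discs[mid:], (axis + 1) % 3)]
    area = sum(k["area"] for k in kids)
    pos = tuple(sum(k["position"][i] * k["area"] for k in kids) / area
                for i in range(3))
    nrm = tuple(sum(k["normal"][i] * k["area"] for k in kids) for i in range(3))
    length = math.sqrt(sum(c * c for c in nrm)) or 1.0
    return {"position": pos, "normal": tuple(c / length for c in nrm),
            "area": area, "children": kids}

Pointer-chasing builds like this are cheap on a CPU but awkward on a GPU, which is exactly the division-of-labour argument.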
 
Uncharted does NOT have real GI.

I don't think any game has provided us with real Global Illumination yet. There are "cheats" like ambient occlusion that give you less detailed and less advanced effects that are similar to GI, but it's still rather far from the real deal.


GI is very very very very expensive in terms of computing power - something we only see in CG renders.

Yes, of course. I know it can't be done yet. But it says "Global Illumination" in the press release... so I was wondering what type of GI it is using, or what method it uses to approximate it.
 
If anyone is interested, Edge magazine has had some great little screenshots of this game in action for quite some time...

The lighting looks absolutely amazing; however, I know Edge are known for "spicing up" their screens for print, so I'm probably going to reserve judgement on exactly how good the game/tech looks until I see it in action...

EDIT:

On another note...:

The lead programmer told the magazine he doubts they would have been able to pull off the lightning effects they have right now on the PS3. That's the only thing they say straight out, but they do state they believe they're much better off on the 360.
Maybe we're misinterpreting what he's actually saying...

Maybe it's less of an issue of "whether it could be done on the PS3" and more of an issue of "a developer expressing his own inadequacies with respect to his own ability to get his head around and leverage the performance of the PS3 architecture"..?

:smile:
 
P.S.A. (Public Service Announcement)

This is pretty much an asshole thing to say... but will people please stop using the term "lightning" when they are really referring to "lighting" techniques?

It's like when people say "liberry" instead of "library," or "pacific" instead of "specific." Ugh.

Back to the regularly scheduled program...

End PSA

:devilish:
 
Maybe it's less of an issue of "whether it could be done on the PS3" and more of an issue of "a developer expressing his own inadequacies with respect to his own ability to get his head around and leverage the performance of the PS3 architecture"..?
That, and the universally applicable misconception of "they had money stuffed into their wallets" are the rationalizations every PS3 fanboy will use in conjunction with a complete refusal to use brain cells thereafter.

The closest thing in a realistic setting would be that they used a specific technique that relies on things unique to the Xbox 360. Whether that be UMA, or MEMEXPORT, or rendering lots of cubemap passes (which means lots of geometry throughput necessary), it's all just pointless guessing at this stage.*

* referring to it being pointless specifically to try and think about 360-specialization.
 
That, and the universally applicable misconception of "they had money stuffed into their wallets" are the rationalizations every PS3 fanboy will use in conjunction with a complete refusal to use brain cells thereafter.

The closest thing in a realistic setting would be that they used a specific technique that relies on things unique to the Xbox 360. Whether that be UMA, or MEMEXPORT, or rendering lots of cubemap passes (which means lots of geometry throughput necessary), it's all just pointless guessing at this stage.*

* referring to it being pointless specifically to try and think about 360-specialization.

Wow SMM,

That was meant to be a joke...

Clearly didn't make it obvious enough.. :oops:
 
Wow SMM,

That was meant to be a joke...

Clearly didn't make it obvious enough.. :oops:
No, it was clear all right, and I knew you were trying to bring up an example of how someone might respond. The smiley was there before I responded, after all. You just brought up a classic example of one of those things that does bring forth a rise in my urge to kill.
 
But still a very dubious comment by the developer. After all, you only have to look at nVidia's own tech demos for the GF4 upwards to see that such lighting effects are readily available on their hardware.
 
But still a very dubious comment by the developer. After all, you only have to look at nVidia's own tech demos for the GF4 upwards to see that such lighting effects are readily available on their hardware.

No, it's not.

Simply by using a hardware-specific feature, it would be "impossible" on another console.
 
But still a very dubious comment by the developer. After all, you only have to look at nVidia's own tech demos for the GF4 upwards to see that such lighting effects are readily available on their hardware.
More on the side of the shader techniques they use to get realtime GI-like effects: they're all little more than academic exercises even now. I mean, the dynamic AO stuff that was in GPU Gems (2, IIRC) was nice and all, and pretty clearly derived from radiosity form factors, but it only scaled up to something like 150,000 elements per *second*... Even if you were to have something 50x faster than a GeForce 6-series GPU, that's still not good enough. And you can't tell me that somebody just suddenly cracked a nut that makes it 100x better throughput than before. The more likely thing is that somebody cracked a nut that made 1% of the throughput you'd really need to get a "proper" solution look passable.

I only give Ubi and maybe a few others the benefit of the doubt because they're not making claims that they compute the solution "to convergence" without using any sort of PRT, IBL, SH, or AO. Or that their solution is completely generic and has no limitations since it's computed every frame. They might be saying things which are vague and ambiguous, but you can always blame that on NDAs.
 
Simply by using a hardware-specific feature, it would be "impossible" on another console.

The implementation, not the end result.

The comment is in question, or has been brought into question by some. If anyone understands Finnish, the article is 'out there' and they could check for themselves. I guess that is an aside from the main thread of conversation here, which for the most part has been productive, I think.

I mean, the dynamic AO stuff that was in GPU Gems (2, IIRC) was nice and all, and pretty clearly derived from radiosity form factors, but it only scaled up to something like 150,000 elements per *second*... Even if you were to have something 50x faster than a GeForce 6-series GPU, that's still not good enough. And you can't tell me that somebody just suddenly cracked a nut that makes it 100x better throughput than before. The more likely thing is that somebody cracked a nut that made 1% of the throughput you'd really need to get a "proper" solution look passable.

Actually, the Gems paper claims 150 million element-to-element comparisons per second on a 6800 Ultra ;) It is claimed that performance scales quite well with it, being O(n log n). I suppose it should also be said, in fairness, that this work did not end with that paper - I've no doubt the author has extended it and improved upon it with his work at Fantasy Lab as well.
 
Actually, the Gems paper claims 150 million element-to-element comparisons per second on a 6800 Ultra ;) It is claimed that performance scales quite well with it, being O(n log n). I suppose it should also be said, in fairness, that this work did not end with that paper - I've no doubt the author has extended it and improved upon it with his work at Fantasy Lab as well.
If that's the technique that Ubisoft is using, there is a very good reason for Xenos to be a lot faster: dynamic branching.

Using brute force (i.e. comparing every element against every other element) you can only have a couple thousand elements at 30fps, and even then only if this technique is consuming 100% of your render time. If you want to cut the comparisons from O(n^2) to O(n log n) and hence boost the number of elements you can handle, you're relying on dynamic branching to cut out most of that work by intelligently traversing a hierarchy.

Moreover, it's the worst kind of branching for RSX: poor coherency and lots of looping. You could easily have a 10x performance boost on Xenos.
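To illustrate where that branching comes from, here's a rough CPU-side Python sketch of the per-receiver hierarchy walk in a Bunnell-style disc-element scheme. The per-pair term is a simplified stand-in for the disc-to-disc form factor rather than the exact GPU Gems formula, and the open_threshold heuristic is made up for illustration:

Code:
import math

class Node:
    def __init__(self, position, normal, area, children=()):
        self.position = position   # disc (or aggregate) center
        self.normal = normal       # unit orientation
        self.area = area           # total area of the discs below
        self.children = children   # empty for leaf discs

def element_shadow(rp, rn, node):
    # Approximate form factor between receiver (rp, rn) and an emitter disc.
    v = tuple(a - b for a, b in zip(node.position, rp))
    d2 = sum(c * c for c in v) + 1e-8
    inv_d = 1.0 / math.sqrt(d2)
    cos_e = max(-sum(n * c for n, c in zip(node.normal, v)) * inv_d, 0.0)
    cos_r = max(sum(n * c for n, c in zip(rn, v)) * inv_d, 0.0)
    return node.area * cos_e * cos_r / (math.pi * d2 + node.area)

def occlusion(rp, rn, root, open_threshold=4.0):
    total, stack = 0.0, [root]
    while stack:                   # data-dependent loop: the branching that
        node = stack.pop()         # hurts on incoherent hardware
        d2 = sum((a - b) ** 2 for a, b in zip(node.position, rp))
        if node.children and d2 < open_threshold * node.area:
            stack.extend(node.children)            # too close: open the node
        else:
            total += element_shadow(rp, rn, node)  # far enough: use aggregate
    return max(1.0 - total, 0.0)   # crude visibility estimate

Receivers near lots of geometry open many nodes while others terminate early, so neighbouring pixels take wildly different paths; that divergence is precisely what the post above says Xenos tolerates better than RSX.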
 
If that's the technique that Ubisoft is using

I'm not sure if it is or isn't; it just arose in conversation because someone mentioned hierarchical data structures, which this technique does use.

As for the PS3's potential with Bunnell's technique, he did discuss it himself in an interview some time back, and he seemed particularly enthusiastic about it, though not solely because of RSX, it has to be said...
 
Actually, the Gems paper claims 150 million element-to-element comparisons per second on a 6800 Ultra ;) It is claimed that performance scales quite well with it, being O(n log n). I suppose it should also be said, in fairness, that this work did not end with that paper - I've no doubt the author has extended it and improved upon it with his work at Fantasy Lab as well.
I know it said that of element-to-element comparisons, but one pair comparison does not equal a solution for a given point. And now that I look back at it, I do see that I was mixing up my figures - the numbers came out to 510,000 results actually computed per second on a 6800, not 150,000... which is still total crap, but not as bad as I had in my mind. It still doesn't change the real problem it had when I tried it (and I haven't really thought about it since, so I'd totally forgotten what sort of throughput I got out of it) - it could not maintain that sort of throughput beyond scene complexities of a certain size. While it supposedly does scale O(n log n) in the algorithmic sense, it doesn't scale well at all when that hierarchy gets big and deep, because you end up memory-access bound really fast. You might have noticed that the demonstrations don't go beyond a scene containing even 20k verts, and that's probably also true of the Fantasy Lab demonstrations.

The only major problems he claims to have fixed in the newer version are not needing multiple passes to get multiple bounces, and working in conjunction with the displacement mapping technique.
 
You are talking about ambient occlusion fields, right? We've done some playtesting too.
Simple primitives and a distance function. Nice for simple, low-complexity cases. We may (or may not) add this to handle dynamic objects and to complement our static occlusion lightmap pass.
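In case it helps anyone picture that combination, a hedged sketch of how a dynamic primitive term could sit on top of a baked lightmap term (the per-sphere function is passed in, e.g. the hypothetical sphere_occlusion() sketched earlier in the thread):

Code:
def combined_ao(static_lightmap_ao, point, normal, dynamic_spheres, sphere_occlusion):
    # static_lightmap_ao: baked visibility from the offline lightmap pass.
    visibility = static_lightmap_ao
    for center, radius in dynamic_spheres:
        # Each dynamic proxy darkens the baked term multiplicatively.
        visibility *= 1.0 - sphere_occlusion(point, normal, center, radius)
    return visibility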
 