Halo 3 Global Illumination Engine: can it be a UE3 killer on 360?

Full Auto does not use a tiling engine to my knowledge. Forza 2 will be the first.

Unless David lied to me last time, FA does tiling, which they had to implement pretty late in the production cycle. You may be thinking of one other title by a 'privileged' studio who got away with a downsized framebuffer.
 
Almost makes me wonder if they're doing some kind of skydome lighting. i.e. global illumination as in "it comes from a globe." They do mention PRT on the Chief, though, so it's a more likely conclusion that they have some sort of irradiance volume type of technique (precalculating SH coefficients for deformable geometry is far more ugly). Again, just plain SH or something similar in the end.
I was thinking of using HDR lightmaps for illumination in pixel shaders. Combined with PRT shadowing it could be pretty realistic, especially in rather open areas, and quick, in my theoretical universe. Anyone given this idea a try?
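For what it's worth, here's roughly what I mean as a minimal CPU-side sketch (all names hypothetical, not from any shipped engine): fetch incoming radiance from the HDR lightmap, then attenuate it by a PRT visibility term computed as an SH dot product between the texel's baked transfer vector and the environment lighting.

```cpp
// Minimal sketch of the idea (hypothetical names): light a surface texel from
// an HDR lightmap, attenuated by a PRT self-shadowing factor expressed as an
// SH dot product between the texel's precomputed transfer vector and the
// environment lighting projected into SH.
#include <algorithm>
#include <array>

struct Color { float r, g, b; };

constexpr int kShCoeffs = 9;                  // order-2 SH, 9 coefficients
using ShVector = std::array<float, kShCoeffs>;

float shDot(const ShVector& a, const ShVector& b)
{
    float sum = 0.0f;
    for (int i = 0; i < kShCoeffs; ++i) sum += a[i] * b[i];
    return sum;
}

// hdrLightmapSample: radiance fetched from the HDR lightmap at this texel.
// transfer:          PRT transfer vector baked for this texel (visibility * cosine lobe).
// envSh:             environment lighting projected into SH.
// unshadowedSh:      the same transfer without occlusion, used to normalise
//                    the PRT result into a [0,1] shadow factor.
Color shadeTexel(const Color& hdrLightmapSample,
                 const ShVector& transfer,
                 const ShVector& envSh,
                 const ShVector& unshadowedSh)
{
    const float shadowed   = std::max(shDot(transfer, envSh), 0.0f);
    const float unshadowed = std::max(shDot(unshadowedSh, envSh), 1e-4f);
    const float shadow     = std::min(shadowed / unshadowed, 1.0f);
    return { hdrLightmapSample.r * shadow,
             hdrLightmapSample.g * shadow,
             hdrLightmapSample.b * shadow };
}
```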
 
Bungie have claimed the game will be running at 720p with HDR and 4xMSAA. That indicates they are tiling; now, when tiling is in use, doesn't it disable the ability to use memexport?
When did Bungie claim 4xMSAA? I've been following H3 pretty closely, and I have not seen this claim. Can you provide a link?
 
GI is extremely computationally intensive; it's a complex form of raytracing. In other words, we can't expect to see GI, as found in offline renderers, running in real time for quite a while.

To be honest, the definition of GI does not require raytracing, just that it has to account for diffuse light transfer between objects. Most implementations use raytracing though, in the form of MC sampling, photon mapping, and so on.
But you're of course right, using raytracing is computationally intensive, and it's hard to imagine any practical method that can sample an object's surrounding space without raytracing.


Some developers claimed that they have a running engine featuring some sort of real time GI, but given that they didn't expose their method or disclose clear details on how their actual implementation works, we can't really draw any conclusions about it. It's surely a new type of real time approximation, like PRT via SH is.

There is one notable example I can recall, with that hippo in the desert tech demo, but it's not entirely convincing - far too cartoony shading, with simple scenery, basically two objects. And there's no subtle shadowing... Yes, ambient occlusion in itself is a hack; but it is used because it's an important part of the full GI solution to account for light blocked from a point by the surrounding geometry. GI without the AO part won't look convincing enough.
Edit: yeah I'm talking about the Fantasy Lab stuff here.
 
Almost makes me wonder if they're doing some kind of skydome lighting. i.e. global illumination as in "it comes from a globe."

Yeah, that's very likely - in real life the only case where you have one main light source is a sunny day. There you have direct sunlight, scattered sunlight from the atmosphere, and bounced sunlight from the ground and surrounding objects. You can actually use a single spherical mapped texture (preferably HDR) to account for all of this, though it will heavily simplify the bounced light.
This can give you very nice and quick results but it's not too dynamic. You can also use this texture as a reflection map, although for lighting it's better to use a low-res, heavily filtered texture - at least in offline rendering. Half-life used this approach to replace specular lighting on level geometry and it looked far better than Doom3's shiny surfaces IMHO - but it probably costs a lot more texture memory too, especially with HDR maps (you can probably cheat with 8-bit though).

In offline, we usually mask this skylight with the AO term, and use a simple direct light for the sun to have better artistic control, plus it'll provide cast shadows as well. I think a game engine can skip the direct light for the lighting and use a generic light position for shadows only.

But indoors is quite different :) and not that suitable for such an approach, unless they're willing to create dozens of such environment maps for all the various rooms.
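To make the outdoor case above concrete, here's a rough sketch of the shading it boils down to (hypothetical names and data layout, not Bungie's code): diffuse lighting fetched from a heavily prefiltered lat-long HDR environment map along the surface normal, masked by a baked AO term, plus a single directional sun whose main job is providing cast shadows.

```cpp
// Rough sketch (hypothetical names) of skylight-style image based lighting:
// a prefiltered lat-long HDR environment map is sampled along the surface
// normal for diffuse light, masked by a baked ambient occlusion term, and a
// single directional "sun" light is added on top for cast shadows.
#include <algorithm>
#include <cmath>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// Nearest-texel fetch from a prefiltered (diffuse-convolved) latitude/longitude
// HDR environment map; n is assumed to be normalised.
Color sampleLatLongEnv(const float* hdrPixels, int width, int height, Vec3 n)
{
    const float pi = 3.14159265f;
    const float u = std::atan2(n.z, n.x) / (2.0f * pi) + 0.5f;
    const float v = std::acos(std::clamp(n.y, -1.0f, 1.0f)) / pi;
    const int px = std::clamp(int(u * width),  0, width  - 1);
    const int py = std::clamp(int(v * height), 0, height - 1);
    const float* texel = hdrPixels + 3 * (py * width + px);
    return { texel[0], texel[1], texel[2] };
}

// sunDir: unit direction from the surface toward the sun.
// sunShadow: 0..1 visibility from a shadow map (or similar).
Color shade(const float* envMap, int w, int h,
            Vec3 normal, float bakedAO,
            Vec3 sunDir, Color sunColor, float sunShadow,
            Color albedo)
{
    // Skylight / bounced light term, masked by the AO bake.
    const Color sky = sampleLatLongEnv(envMap, w, h, normal);
    const float nDotL = std::max(normal.x * sunDir.x +
                                 normal.y * sunDir.y +
                                 normal.z * sunDir.z, 0.0f);
    return {
        albedo.r * (sky.r * bakedAO + sunColor.r * nDotL * sunShadow),
        albedo.g * (sky.g * bakedAO + sunColor.g * nDotL * sunShadow),
        albedo.b * (sky.b * bakedAO + sunColor.b * nDotL * sunShadow),
    };
}
```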
 
I was thinking of using HDR lightmaps for illumination in pixel shaders. Combined with PRT shadowing it could be pretty realistic, especially in rather open areas, and quick, in my theoretical universe. Anyone given this idea a try?

Half-life uses an 'ambient cube map' to light their characters, without any PRT though.


The thing is that PRT is limited to non-deforming geometry, and if you want to move around something (like a vehicle) it'll somewhat limit how much info you can store in your PRT solution. As far as I know, PRT is good for working with moving dynamic light sources and static geometry; for example, moving a character with a torch around a complex structure.

On the other hand, image based lighting is nice because you can move objects lit with it around and the results will still look pretty good, but the lighting itself remains static unless you replace the environment texture. So it's best for objects moving around in a static lighting environment. You can also use it without PRT to get good results.
If you want to light a static object, it's better to bake the results into a lightmap texture because it'll get you more detailed results and it'll be faster than doing all the lookups into the environment map.

Bungie probably decided on the image based lighting solution (if they did) because they'll have a lot of moving stuff scattered around large outdoor environments. They can also add PRT for the buildings, but I'd think they're cheaper to light with a static lightmap... we'll see if they publish details about the engine in the end.
 
Thanks for proving my point, namely:

My point is simple: When PRT becomes common, or at least a trend in newly released products, you can say "it isn't new". Until then PRT is very new to realtime graphics in games.

Your constant poo-pooing doesn't change the fact that PRT isn't being widely used in commercial game products.

Hey, you can see things however it suits you...
My point is that game code running PRT is at least two years old, so I'm familiar with it and saw it a long time ago.

BTW, after some in-house engine evaluations (we), I'm not sure PRT will ever be common, and even less a trend.
And please be nice; don't call my opinion poo-pooing ;) - the reasons to blame me are in your imagination only.
 
Hey, you can see things however it suits you...
My point is that game code running PRT is at least two years old, so I'm familiar with it and saw it a long time ago.

BTW, after some in-house engine evaluations (we), I'm not sure PRT will ever be common, and even less a trend.
And please be nice; don't call my opinion poo-pooing ;) - the reasons to blame me are in your imagination only.

Well you've been repeatedly and consistently downplaying the Halo 3 GFX since the first day the trailer was released.
 
But indoors is quite different and not that suitable for such an approach, unless they're willing to create dozens of such environment maps for all the various rooms.
There are examples of people doing this, though strictly for indirect and ambient lighting type purposes. Source, for instance, does exactly this, though their "cubemap" for this specific purpose is just six colors which correspond to each axis direction and go in as pixel shader constants. Though this pretty much only works for dynamic geometry, the fact that they've already got radiosity lightmaps means they don't really need to worry about it for scene geometry.
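Roughly, that six-colour scheme boils down to something like the following (a sketch based on how Valve have described the ambient cube, not their actual code): pick the colour on each axis according to the sign of the normal component and weight it by that component squared.

```cpp
// Sketch of a Source-style "ambient cube": six colours, one per axis
// direction, blended by the squared components of the surface normal.
// (Based on Valve's published description; not their actual code.)
struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// cube[0]=+X, cube[1]=-X, cube[2]=+Y, cube[3]=-Y, cube[4]=+Z, cube[5]=-Z
Color evalAmbientCube(const Color cube[6], Vec3 n)
{
    const float nx2 = n.x * n.x, ny2 = n.y * n.y, nz2 = n.z * n.z;
    const Color& cx = (n.x >= 0.0f) ? cube[0] : cube[1];
    const Color& cy = (n.y >= 0.0f) ? cube[2] : cube[3];
    const Color& cz = (n.z >= 0.0f) ? cube[4] : cube[5];
    return { nx2 * cx.r + ny2 * cy.r + nz2 * cz.r,
             nx2 * cx.g + ny2 * cy.g + nz2 * cz.g,
             nx2 * cx.b + ny2 * cy.b + nz2 * cz.b };
}
```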

There is also a lot of research lying around about irradiance volumes and the idea of authoring a loose grid of points at which SH coefficients are computed, and you interpolate between the nearest neighbouring sample points. This is about the only technique I know of that works in general for indoor lighting and reasonably well for outdoor lighting. However, because the grid points are basically lying out in space, it doesn't really solve any shadowing except for very broad scale shadowing, so it's generally best left for indirect lighting components alone. Of course, the ugliest part of this is having to query those points for lots and lots of dynamic objects and hence, jumping around in memory (which is tantamount to suicide).
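For what it's worth, the lookup itself is trivial; here's a minimal sketch (hypothetical grid layout) of trilinearly blending the SH coefficients of the eight grid samples surrounding an object's position. The maths isn't the problem - the scattered memory access for hundreds of objects is.

```cpp
// Minimal sketch (hypothetical grid layout) of an irradiance-volume lookup:
// trilinearly blend the SH coefficients of the eight grid samples surrounding
// an object's position. Assumes a regular grid with at least 2 samples per axis.
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

constexpr int kShCoeffs = 9;                  // order-2 SH
using ShProbe = std::array<float, kShCoeffs>; // one colour channel shown

struct IrradianceVolume
{
    int nx, ny, nz;                 // grid resolution
    float cellSize;                 // world-space spacing between samples
    float originX, originY, originZ;
    std::vector<ShProbe> probes;    // nx*ny*nz samples, x-major

    const ShProbe& probe(int x, int y, int z) const
    {
        return probes[(z * ny + y) * nx + x];
    }

    ShProbe sample(float wx, float wy, float wz) const
    {
        const float gx = (wx - originX) / cellSize;
        const float gy = (wy - originY) / cellSize;
        const float gz = (wz - originZ) / cellSize;
        const int x0 = std::clamp(int(std::floor(gx)), 0, nx - 2);
        const int y0 = std::clamp(int(std::floor(gy)), 0, ny - 2);
        const int z0 = std::clamp(int(std::floor(gz)), 0, nz - 2);
        const float fx = std::clamp(gx - float(x0), 0.0f, 1.0f);
        const float fy = std::clamp(gy - float(y0), 0.0f, 1.0f);
        const float fz = std::clamp(gz - float(z0), 0.0f, 1.0f);

        ShProbe out{};
        for (int i = 0; i < kShCoeffs; ++i)
        {
            // Lerp along x, then y, then z.
            const float c00 = probe(x0, y0,     z0    )[i] * (1 - fx) + probe(x0 + 1, y0,     z0    )[i] * fx;
            const float c10 = probe(x0, y0 + 1, z0    )[i] * (1 - fx) + probe(x0 + 1, y0 + 1, z0    )[i] * fx;
            const float c01 = probe(x0, y0,     z0 + 1)[i] * (1 - fx) + probe(x0 + 1, y0,     z0 + 1)[i] * fx;
            const float c11 = probe(x0, y0 + 1, z0 + 1)[i] * (1 - fx) + probe(x0 + 1, y0 + 1, z0 + 1)[i] * fx;
            const float c0  = c00 * (1 - fy) + c10 * fy;
            const float c1  = c01 * (1 - fy) + c11 * fy;
            out[i] = c0 * (1 - fz) + c1 * fz;
        }
        return out;
    }
};
```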
 
There are examples of people doing this, though strictly for indirect and ambient lighting type purposes. Source, for instance, does exactly this, though their "cubemap" for this specific purpose is just six colors which correspond to each axis direction and go in as pixel shader constants. Though this pretty much only works for dynamic geometry, the fact that they've already got radiosity lightmaps means they don't really need to worry about it for scene geometry.

Well as you also mention, they're not placing even a 256*256 8-bit lightmap there, just 6 colors for the cubemap. Adding nice HDR environment maps for every new room would take its toll on texture memory.

Then again, I firmly believe that Valve has made some very good calls with Source. Man-made environments were very cool, with lots of subtle and nice things going on from the lighting through the reflections, and the features have served the entire art direction very well. It's mostly static lighting and lots of precalculated data, but it worked better than building a fully dynamic engine just to brag about - at least I've really liked the results. And for the record, I think Carmack made the right choices for Doom3, too :)

I've been a bit surprised, though, to learn that they've only added specular lighting for characters in Episode 1 - I always thought their shading and the photo-based textures could be better, so why leave out this one texture layer? I suspect it wasn't a resource thing but lack of artist time...

There is also a lot of research lying around about irradiance volumes and the idea of authoring a loose grid of points at which SH coefficients are computed, and you interpolate between the nearest neighbouring sample points.

I think that image based stuff will probably be a preferred method for most lighting artists; it's easier to understand the process and estimate the results, and it does not take hours or even days to recalculate the lighting (without PRT it should be almost realtime, right?).
 
acert93 said:
but no game has used it yet.
AFAIK that is quite false (though don't go asking me for title lists; I'm just going by what I remember people in the dev community talking about for the last few years, and besides, they can do their own PR).

Your constant poo-pooing doesn't change the fact that PRT isn't being widely used in commercial game products.
It may not be widely advertised - but then many features aren't, and they are used extensively (e.g. various clever compression techniques - it's always fun to read posts from people talking as if compression will solve the problems of data storage NOW, as if we 'clearly' never used it before).

It's not a question of PRTs being new or old though, it's people expecting any new buzzword to radically change the face of games - the last time that actually happened the buzzword was "3d graphic accelerator".
 
Your constant poo-pooing doesn't change the fact that PRT isn't being widely used in commercial game products.

It's in use all over the place. People just expect a lot more than they see; A/Bs are night and day, but you don't get many A/B comparisons while playing a game.

Although, the more I play with it, the less I think precomputed visibility will ever be a general solution; the basis functions that provide sparseness for high-frequency information leave the data in a GPU-unfriendly state. Plus all the issues with moving objects, never mind a skinned character.
 
I think that image based stuff will probably be a preferred method for most lighting artists; it's easier to understand the process and estimate the results, and it does not take hours or even days to recalculate the lighting (without PRT it should be almost realtime, right?).
Actually, the stuff I was thinking of did involve cubemaps and generating cubemaps for a given probe point. It's just that for runtime, the cubemap was reduced to a spherical harmonics representation, which is a fair bit more compact and can go through as constant registers. And that would be used strictly for indirect lighting approximations.
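For reference, the offline step is roughly the following (a sketch with an illustrative cube-face convention, not anyone's actual tool code): walk the cubemap texels, weight each by its solid angle, and accumulate against the nine order-2 SH basis functions; the nine resulting RGB coefficients are what end up in constant registers. For diffuse irradiance you'd additionally scale each band by the usual clamped-cosine coefficients.

```cpp
// Sketch (illustrative data layout and face convention) of the offline step:
// project a cubemap into order-2 spherical harmonics, giving 9 RGB
// coefficients that can be uploaded as shader constants.
#include <array>
#include <cmath>

struct Color { float r = 0, g = 0, b = 0; };

// Evaluate the 9 real SH basis functions for a unit direction.
std::array<float, 9> shBasis(float x, float y, float z)
{
    return {
        0.282095f,
        0.488603f * y, 0.488603f * z, 0.488603f * x,
        1.092548f * x * y, 1.092548f * y * z,
        0.315392f * (3.0f * z * z - 1.0f),
        1.092548f * x * z,
        0.546274f * (x * x - y * y)
    };
}

// faces[f] points at an RGB float image of size*size texels for cube face f.
// The face axis assignment below is illustrative only; a real implementation
// must match the cubemap convention of the source data.
std::array<Color, 9> projectCubemapToSH(const float* faces[6], int size)
{
    std::array<Color, 9> sh{};
    for (int f = 0; f < 6; ++f)
        for (int ty = 0; ty < size; ++ty)
            for (int tx = 0; tx < size; ++tx)
            {
                // Texel centre in [-1,1] on the face plane.
                const float u = 2.0f * (tx + 0.5f) / size - 1.0f;
                const float v = 2.0f * (ty + 0.5f) / size - 1.0f;

                // Build a direction for this face (+X,-X,+Y,-Y,+Z,-Z).
                float dir[3] = {};
                dir[f / 2] = (f & 1) ? -1.0f : 1.0f;
                dir[(f / 2 + 1) % 3] = u;
                dir[(f / 2 + 2) % 3] = v;
                const float len = std::sqrt(dir[0] * dir[0] + dir[1] * dir[1] + dir[2] * dir[2]);
                const float x = dir[0] / len, y = dir[1] / len, z = dir[2] / len;

                // Solid angle of this texel: texelArea * cos(theta) / dist^2.
                const float w = (4.0f / (size * size)) / (len * len * len);

                const float* texel = faces[f] + 3 * (ty * size + tx);
                const auto basis = shBasis(x, y, z);
                for (int i = 0; i < 9; ++i)
                {
                    sh[i].r += texel[0] * basis[i] * w;
                    sh[i].g += texel[1] * basis[i] * w;
                    sh[i].b += texel[2] * basis[i] * w;
                }
            }
    return sh;
}
```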

There are also papers that talk of doing this whole process every frame, but it's really not even within a few miles of feasible. When we can afford to do 100 passes per frame, it might be all right.
 
Of course, the ugliest part of this is having to query those points for lots and lots of dynamic objects and hence, jumping around in memory (which is tantamount to suicide).
SH interpolation is solvable (though not very easily) using Delaunay triangulation (tetrahedrization in this case) and barycentric interpolation within a single tetrahedron at run time.
You don't need to "jump around" in memory, as your SH coefficients will be mostly static, so they're perfect candidates to be inserted into some kind of spatial subdivision structure such as octrees, kd-trees, etc.
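Once the enclosing tetrahedron has been found, the interpolation itself is just barycentric weighting of the four corner probes. A minimal sketch (hypothetical types), leaving the Delaunay build and the containing-tetrahedron search to whatever spatial structure you use:

```cpp
// Minimal sketch (hypothetical types): interpolate SH probes inside a single
// tetrahedron using barycentric weights. Building the Delaunay
// tetrahedrization and finding which tetrahedron contains the query point
// (octree/kd-tree walk) are left out here.
#include <array>

constexpr int kShCoeffs = 9;
using ShProbe = std::array<float, kShCoeffs>;
struct Vec3 { float x, y, z; };

// Determinant of the 3x3 matrix with rows a, b, c.
static float det3(Vec3 a, Vec3 b, Vec3 c)
{
    return a.x * (b.y * c.z - b.z * c.y)
         - a.y * (b.x * c.z - b.z * c.x)
         + a.z * (b.x * c.y - b.y * c.x);
}

// p0..p3: tetrahedron corners, sh0..sh3: the SH probes stored at each corner.
ShProbe interpolateInTetrahedron(Vec3 p, Vec3 p0, Vec3 p1, Vec3 p2, Vec3 p3,
                                 const ShProbe& sh0, const ShProbe& sh1,
                                 const ShProbe& sh2, const ShProbe& sh3)
{
    const Vec3 d1{p1.x - p0.x, p1.y - p0.y, p1.z - p0.z};
    const Vec3 d2{p2.x - p0.x, p2.y - p0.y, p2.z - p0.z};
    const Vec3 d3{p3.x - p0.x, p3.y - p0.y, p3.z - p0.z};
    const Vec3 dp{p.x  - p0.x, p.y  - p0.y, p.z  - p0.z};

    const float vol = det3(d1, d2, d3);        // 6x signed volume of the tet
    const float w1  = det3(dp, d2, d3) / vol;  // Cramer's rule
    const float w2  = det3(d1, dp, d3) / vol;
    const float w3  = det3(d1, d2, dp) / vol;
    const float w0  = 1.0f - w1 - w2 - w3;

    ShProbe out{};
    for (int i = 0; i < kShCoeffs; ++i)
        out[i] = w0 * sh0[i] + w1 * sh1[i] + w2 * sh2[i] + w3 * sh3[i];
    return out;
}
```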
 
To be honest, the definition of GI does not require raytracing, just that it has to account for diffuse light transfer between objects. Most implementations use raytracing though, in the form of MC sampling, photon mapping, and so on.
But you're of course right, using raytracing is computationally intensive, and it's hard to imagine any practical method that can sample an object's surrounding space without raytracing.

See this old IOTD:
http://www.flipcode.com/cgi-bin/fcarticles.cgi?show=63153
I'm not using any raytracing in that GI shot,
then again it's pathetic geometry, just a few boxes.
I'm thinking of adding a raytracing and/or GI renderer to my game as the next major addition (though only once I get a new cutting-edge graphics card).
 
SH interpolation is solvable (though not very easily) using Delaunay triangulation (tetrahedrization in this case) and barycentric interpolation within a single tetrahedron at run time.
I believe all the papers that were concerned with realtime simulations just accepted the errors inherent in lerping per vertex. Since it was only used for indirect lighting, anything [within reason] looks better than nothing, though it does kind of demand that the environment have a somewhat higher polygon density than the SH sampling grid, so even a flat floor needs to be subdivided.

You don't need to "jump around" in memory, as your SH coefficients will be mostly static, so they're perfect candidates to be inserted into some kind of spatial subdivision structure such as octrees, kd-trees, etc.
I get the feeling we're not talking about the same problem. I was more thinking about the common problem that a lot of these papers don't really worry about which is that of having to do it for 100 moving objects that could all have different interpolation neighbors for SH coefficients. It's one of those things that's bound to happen if your test cases get no larger than a single room with two objects moving around.

As nice as the idea sounds on paper, I have my reservations about anything so dependent on precalculation -- it made perfect sense for demoscene-type things, but when you pile seemingly nice features that are innately limited in their usability onto a big project and a codebase that is intended to live through multiple big projects, Murphy's Law will chew you up and spit you out at some point. I've seen the results on characters using actual low-res cubemaps as opposed to low-detail representations thereof, and it's admittedly far better than constant ambient or (constant * AO) or something similar, but if it were useful for the environment and didn't have to take up texture reads, that would be nice.
 