Halo3 Global Illumination Engine: can it be a UE3 killer on 360?

:LOL: This got me thinking last night during smoko. I had a brainwave for a semi-original GI technique. It's still gonna be slow, as it's very computation intensive (what's new). Since I'm gonna have a break from my game for a couple of days, I'll code up a proof-of-concept app this weekend, and if it works I'll write a paper.
 
memexport lighting

Will it use memexport to do the lighting?

I do not understand this, my friend. Is this for per-vertex lighting calculated on the CPU? The Xbox 360 already has the powerful Xenos GPU for per-pixel lighting, no? If you explain this idea to me I will appreciate it.


Will it use the new updated tiling routines from Microsoft?

I did not know about this. Is this a driver change?
 
Hey, you can see things however it suits you...
My point is that game code running PRT is at least two years old, so I'm familiar with it and saw it a long time ago.

BTW, after some in-house engine evaluations, I'm not sure PRT will ever be common, and even less a trend.
And please be nice, don't call my opinion 'poopooing'; the reasons to blame me are in your imagination only. ;)

Allow me to dig up an old thread, because it's interesting. ;)

Well, count me as one not sold on the PRT thing. Actually, I consider PRT one of the most useless techs for video game graphics, and definitely not the trend of future graphics. I would be amazed if any game has utilized (is utilizing or will utilize) it WIDELY in its lighting engine. After reading through this thread, I was also surprised nobody mentioned the real reason why PRT is such a craptastic tech: it's a very, very efficient way to kill productivity. The fact that a simple teapot model requires hours of precomputing time just doesn't make PRT a practical way to create content.

So what's the general scenario if you want to put PRT in your engine? The producer: "I don't get it. I can't see the difference. Can it improve the gameplay?" The artist: "What? I just changed a mesh and need to wait another two hours to see the result?" The manager: "Do you consider spending $2,000,000 on a cluster system to improve a SUBTLE lighting effect worth it? We need cost control."

The truth is, we will definitely not choose any algorithm demanding more than 10 minutes of preprocessing time. I don't want my head to get smacked by our artists and producers constantly. Besides the productivity and cost-control issues, there are also many minor annoying aspects of PRT. Sure, it's hard to apply to dynamic objects and to complex scenes illuminated from a finite distance, but those technical issues are far from the unacceptable ones. The artist-unfriendliness is much worse. Requiring artists to work with SH lighting is like making them work with images in the frequency domain after an FFT. Think about the way artists do their work, people, not about your little tech gimmick.

I also wonder whether those who advocate PRT have ever developed an actual game. And all that funny logic behind "precomputing time is not important for real-time graphics" and the "once-for-all" nonsense in a lot of papers.

Guys, PRT is a big joke. :devilish:
 
The truth is, we will definitely not choose any algorithm demanding more than 10 minutes of preprocessing time. I don't want my head to get smacked by our artists and producers constantly.
Maybe someone should rewrite that preprocessing code; it seems to me it's a bit too slow :)
Also, to preview stuff you really don't need to fill your whole SH basis; just fill the lowest-order coefficients.
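To make that concrete, here's a minimal sketch of the kind of preview projection I mean. All the names are made up, and I'm assuming uniform sphere samples; a full bake would just evaluate more bands:

```cpp
// Sketch: project radiance samples onto bands 0 and 1 only (4 coefficients),
// which is plenty for a quick diffuse preview.
#include <vector>

struct Vec3 { float x, y, z; };

// Real SH basis for the two lowest bands.
inline void shBasis2(const Vec3& d, float out[4]) {
    out[0] = 0.282095f;        // Y(0,0)
    out[1] = 0.488603f * d.y;  // Y(1,-1)
    out[2] = 0.488603f * d.z;  // Y(1,0)
    out[3] = 0.488603f * d.x;  // Y(1,1)
}

// Monte Carlo projection: dirs are uniform over the sphere, radiance[i] is
// the value sampled along dirs[i].
std::vector<float> projectPreviewSH(const std::vector<Vec3>& dirs,
                                    const std::vector<float>& radiance) {
    std::vector<float> coeffs(4, 0.0f);
    for (size_t i = 0; i < dirs.size(); ++i) {
        float basis[4];
        shBasis2(dirs[i], basis);
        for (int k = 0; k < 4; ++k)
            coeffs[k] += radiance[i] * basis[k];
    }
    // Each uniform sample stands in for 4*pi / N steradians.
    const float w = 4.0f * 3.14159265f / float(dirs.size());
    for (float& c : coeffs) c *= w;
    return coeffs;
}
```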

The artist-unfriendliness is much worse. Requiring artists to work with SH lighting is like making them work with images in the frequency domain after an FFT. Think about the way artists do their work, people, not about your little tech gimmick.
If they are working 'directly' with SH lighting then there's something deeply wrong; they should not even know that SH lighting exists in your engine, they should not even care. Hide any technical detail from them if you can... and in this case you can do that!
Guys, PRT is a big joke. :devilish:
Umh..no it's not :)
 
Laa-Yosh said:
To be honest, the definition of GI does not require raytracing, just that it has to account for diffuse light transfer between objects. Most implementations use raytracing though, in the form of MC sampling, photon mapping, and so on.
But you're of course right, using raytracing is computationally intensive, and it's hard to imagine any practical method that can sample an object's surrounding space without raytracing.
Diffuse light inter-reflection is not exactly global illumination. The meaning of the term "global illumination" has been blurred over the years, just like the term "HDR" (many people think the bloom effect is HDR). Maybe it's a fancy word? :) I believe the term originally referred to the calculation of light transport between objects, as distinguished from surface shading; a few years later it came to refer specifically to algorithms that aim to solve the Rendering Equation. However, different people have their own interpretations. Henrik Wann Jensen, who brought photon mapping to the CG industry, insisted on calling diffuse inter-reflection "global illumination" in all his publications. He worked for Mental Images, and their product Mental Ray followed this interpretation; that's why most artists equate diffuse inter-reflection with GI. After the boom of the programmable hardware pipeline, many empirical methods, tricks, and hacks that approximate certain aspects of GI rendering (namely ambient occlusion, irradiance volumes, PRT, etc.) came to claim they are "global illumination", albeit without solving the Rendering Equation. Some of them are very useful for game making and widely accepted.
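For reference, this is the Rendering Equation (Kajiya, 1986) I mean: outgoing radiance equals emitted radiance plus the BRDF-weighted incoming radiance integrated over the hemisphere:

```latex
L_o(x,\omega_o) = L_e(x,\omega_o)
  + \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,
    (n \cdot \omega_i)\, \mathrm{d}\omega_i
```

The recursion hidden in L_i (incoming radiance at one point is outgoing radiance from another) is exactly what makes the problem "global" and so expensive to solve.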

Also, not all GI algorithms use raytracing. :) Most radiosity algorithms don't need it; hierarchical radiosity and wavelet radiosity, for example, use a push-pull scheme. Even a few MC algorithms don't use raytracing; they render with a splatting method instead.

_phil_ said:
Reality Engine is a licensed game engine.
http://artificialstudios.com/media.php
Unigine supports PRT too.
Demo there: http://unigine.com ... Generally P... looks fine. Many games are using this scheme.
 
If they are working 'directly' with SH lighting then there's something deeply wrong; they should not even know that SH lighting exists in your engine, they should not even care. Hide any technical detail from them if you can... and in this case you can do that!
No, they want full control. Manipulating lighting is their daily work.:mad::LOL: They're always whining and bitching and ranting that the content creation tools are not well implemented, that lighting effects don't match from here to there, and demanding more parameters to manipulate more stuff.
 
No, they want full control. Manipulating lighting is their daily work.:mad::LOL: They're always whining and bitching and ranting that the content creation tools are not well implemented, that lighting effects don't match from here to there, and demanding more parameters to manipulate more stuff.
They should manipulate light; as you say, it's their daily work. What they should not do is manipulate SH lighting; they really don't need to. Your lighting implementation should be abstracted and hidden from them as much as possible. Why should they care about SH coefficients? They should just work with the same types of lights (or better, a subset of them ;)) they can use in their favourite DCC packages, imho.
 
They should manipulate light; as you say, it's their daily work. What they should not do is manipulate SH lighting; they really don't need to. Your lighting implementation should be abstracted and hidden from them as much as possible. Why should they care about SH coefficients? They should just work with the same types of lights (or better, a subset of them ;)) they can use in their favourite DCC packages, imho.

It's hard to adjust meshes and cubemaps to get a better result because of the long precomputing time (meshes don't always naturally look good with PRT). So someone thought about manual editing. And our final solution was to use low-order SHM for character lighting only.
 
So what's the general scenario if you want to put PRT in your engine? The producer: "I don't get it. I can't see the difference. Can it improve the gameplay?" The artist: "What? I just changed a mesh and need to wait another two hours to see the result?" The manager: "Do you consider spending $2,000,000 on a cluster system to improve a SUBTLE lighting effect worth it? We need cost control."
I wonder what you have in mind when you think of the preprocessing phase for PRT. MC raycast samples for every texel? To give you an example of how I'm working it in: it is, as you say, a very subtle effect used as an alternative to constant ambient, and we just sample at points representing an irradiance volume. Doing that, it's nowhere near two hours, because we just render temporary cubemaps and treat every texel as a ray sample. It's not even one hour to sample all the points in the irradiance volume for a normal-sized level on an ordinary PC. Granted, I may have to eat my words on this in the long run, as it's still a work in progress and feature after feature gets thrown in... But the idea is that artists shouldn't worry about SH coefficients; they just worry about where they're sampled. And since it's a per-scene thing that only a few people really have to deal with (everybody else just uses the data generated by it), it's a pretty small cost.
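Roughly, the per-texel accumulation would look like the sketch below. Vec3, faceDir() and shBasis3() are assumed helpers, but the solid-angle weighting of each texel is the real formula:

```cpp
#include <cmath>

// Each texel of an N x N cube face subtends a solid angle of
// (2/N)^2 / (u^2 + v^2 + 1)^(3/2), with (u,v) in [-1,1] face coordinates.
float texelSolidAngle(float u, float v, int N) {
    float duv = 2.0f / float(N);
    return (duv * duv) / powf(u * u + v * v + 1.0f, 1.5f);
}

// Accumulate one rendered face into 9 SH coefficients (3 bands, one channel).
// faceDir() maps (face, u, v) to a world direction; shBasis3() evaluates
// the 9 real SH basis functions; both assumed to exist elsewhere.
void accumulateFaceSH(const float* radiance, int face, int N, float coeffs[9]) {
    for (int y = 0; y < N; ++y) {
        for (int x = 0; x < N; ++x) {
            float u = (x + 0.5f) * 2.0f / N - 1.0f;  // texel center
            float v = (y + 0.5f) * 2.0f / N - 1.0f;
            Vec3 dir = faceDir(face, u, v);
            float w = texelSolidAngle(u, v, N);
            float basis[9];
            shBasis3(dir, basis);
            for (int k = 0; k < 9; ++k)
                coeffs[k] += radiance[y * N + x] * basis[k] * w;
        }
    }
}
```

Run over all six faces, that gives you the SH for one sample point of the volume; no raycasting at all, the rasterizer does the visibility work for you.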

The truth is, we will definitely not choose any algorithm demanding more than 10 minutes of preprocessing time.
I'm inclined to agree with the general gist of this, but I'm not so anal about the "10 minutes" figure. I mean, it's not uncommon to have enough of a mass of data for one level that the *builds* take an hour or more (for a full build, not an incremental one, obviously). So long preprocesses, particularly ones that won't be executed very often, and more importantly won't be executed by *everybody*, are not a big concern to me. Moreover, think of the time added per person: sure, one guy may sit through a 2-hour process, but if that becomes a resource for everybody else, it's only an additional 10 seconds on the build times for everyone else...

I also wonder whether those who advocate PRT have ever developed an actual game. And all that funny logic behind "precomputing time is not important for real-time graphics" and the "once-for-all" nonsense in a lot of papers.
I agree that academic papers overstate things, but I think you're looking at it the wrong way. PRT isn't viewed as some great thing or some solution that will give us realtime GI. It's just one of those things that we have no choice but to use in a limited fashion to get us a little closer to a good end result. Nobody is claiming it as the future so much as... well... the present.

It's hard to adjust meshes and cubemaps to get a better result because of the long precomputing time (meshes don't always naturally look good with PRT). So someone thought about manual editing. And our final solution was to use low-order SHM for character lighting only.
I find it weird that people actually *wanted* to manually edit SH data, unless I'm misunderstanding you. It's not exactly the most straightforward data to edit... I'd liken it to editing rotations as normalized quaternions as opposed to something more obvious like Euler angles.
 
SMM, I think he's talking about the precomputation for the models, not the level. The SH coefficients for each point on each model probably take more time to compute than the SH coefficients for points in an irradiance volume.

Still, I don't see how this is unscalable to the point of unusability like Cal is saying. There's no need to be so precise with the lighting while you're still tweaking the geometry. Just use an approximation at first (say, with computation time comparable to AO) and do the more detailed precomputation overnight.
 
SMM, I think he's talking about the precomputation for the models, not the level. The SH coefficients for each point on each model probably take more time to compute than the SH coefficients for points in an irradiance volume.

Still, I don't see how this is unscalable to the point of unusability like Cal is saying. There's no need to be so precise with the lighting while you're still tweaking the geometry. Just use an approximation at first (say, with computation time comparable to AO) and do the more detailed precomputation overnight.
I'm saying: why should he have to? That seems a hairy way of going about it once you consider that each model is likely to animate, move around, and be surrounded by things with different reflectance characteristics at different times (i.e., you need different SH depending on the surroundings), so trying to do it on a per-model basis is indeed a mess for content creators. PRT on a per-model basis, to me, only makes sense for a single instance of a totally static model, and yet he mentions doing it for characters.

Though it still raises the question of why anyone would want to manually edit SH coefficients.
 
I'm saying: why should he have to?
I guess you're using SH lighting for a different purpose than I was envisioning.

If you're calculating the SH coefficients for the lighting in your irradiance volume, then what are you using them for? Objects that move around in the volume, right? So you need to calculate the SH coefficients for each object's response to that lighting. What exactly am I missing here?
 
I guess you're using SH lighting for a different purpose than I was envisioning.

If you're calculating the SH coefficients for the lighting in your irradiance volume, then what are you using them for? Objects that move around in the volume, right? So you need to calculate the SH coefficients for each object's response to that lighting. What exactly am I missing here?
The way I'm using it, it's essentially localized ambient lighting. The SH at a point in the volume is just a low-frequency representation of what is "seen" from that point in all directions. So any given object moving around in the volume gets some interpolated SH coefficients, which provide information about the indirect radiance (so it's just an "added-on" effect), while direct lighting is done the old-fashioned way. Assuming the environment already has dynamic lighting as well as radiosity lightmapping for its static lights, you don't really need to worry about the environment so much.
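In case it helps picture it, the interpolation step is trivial precisely because SH is a linear basis: blending coefficients is the same as blending the reconstructed radiance. A sketch, assuming a regular probe grid, with all names illustrative:

```cpp
#include <algorithm>

struct Probe { float sh[9]; };  // 3 SH bands, one channel for brevity

// Trilinearly blend the 8 probes around a position given in grid space.
void sampleVolumeSH(const Probe* grid, int nx, int ny, int nz,
                    float px, float py, float pz, float out[9]) {
    int x0 = std::clamp(int(px), 0, nx - 2);
    int y0 = std::clamp(int(py), 0, ny - 2);
    int z0 = std::clamp(int(pz), 0, nz - 2);
    float tx = std::clamp(px - x0, 0.0f, 1.0f);
    float ty = std::clamp(py - y0, 0.0f, 1.0f);
    float tz = std::clamp(pz - z0, 0.0f, 1.0f);
    for (int k = 0; k < 9; ++k) out[k] = 0.0f;
    for (int dz = 0; dz < 2; ++dz)
        for (int dy = 0; dy < 2; ++dy)
            for (int dx = 0; dx < 2; ++dx) {
                float w = (dx ? tx : 1 - tx) * (dy ? ty : 1 - ty)
                        * (dz ? tz : 1 - tz);
                const Probe& p =
                    grid[((z0 + dz) * ny + (y0 + dy)) * nx + (x0 + dx)];
                for (int k = 0; k < 9; ++k) out[k] += w * p.sh[k];
            }
}
```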

Admittedly, it's not the best example of PRT, but the point is that it's a step forward from a simple constant ambient term, and it's a lot easier on the content creation side of things than trying to compute per-texel SH on animated characters for every condition. It's a subtle effect all right, but you'd get a very subtle effect no matter what, so I don't see why one should take the more troublesome path.
 
I wonder what you have in mind when you think of the preprocessing phase for PRT. MC raycast samples for every texel? To give you an example of how I'm working it in: it is, as you say, a very subtle effect used as an alternative to constant ambient, and we just sample at points representing an irradiance volume. Doing that, it's nowhere near two hours, because we just render temporary cubemaps and treat every texel as a ray sample. It's not even one hour to sample all the points in the irradiance volume for a normal-sized level on an ordinary PC. Granted, I may have to eat my words on this in the long run, as it's still a work in progress and feature after feature gets thrown in... But the idea is that artists shouldn't worry about SH coefficients; they just worry about where they're sampled. And since it's a per-scene thing that only a few people really have to deal with (everybody else just uses the data generated by it), it's a pretty small cost.
I wasn't referring to any form of irradiance volume in my previous posts; I didn't consider it a part of PRT and hadn't even mentioned it. I actually think the irradiance volume (from Gene Greger's paper) is a good approach for applying environmental lighting to movable objects, with or without PRT (the PRT algorithms from Sloan's paper, shown in the DX SDK demo). Considering that the irradiance spheres at most sampling points are low resolution and can be calculated from a cubemap via a simple convolution, it's totally feasible for content creation. Our implementation adds a cubemap rendering function to the level editor, allowing artists to generate a cubemap at any location they specify, then converts these cubemaps to irradiance cubes. A character's environmental lighting can be interpolated between regions (usually 3~10 cubes are enough).

Back to the per-model PRT, meaning self inter-reflection, neighborhood transfer, subsurface self-transfer, etc.; that's what I was concerned about in my previous post. Given all the discussion in the previous posts, I think we have to agree that PRT has a lot of constraints and can only be applied to a limited set of objects, so we must re-evaluate whether it's worth putting so much effort into it before actual production. And I believe that's the reason why few games have used it.

EDIT: yeah, the terms are confusing. :) The irradiance volume is a kind of "precomputed irradiance distribution", not "precomputed radiance transfer". The PRT Microsoft heavily pushed is the latter (they added a set of APIs to the D3DX library). I remember two years ago in a meeting, Microsoft staff told us something like "do not waste your time on cliché effects like lens flare; realistic lighting like PRT is the bar for next-gen..."
 
The way I'm using it, it's essentially localized ambient lighting. The SH at a point in the volume is just a low-frequency representation of what is "seen" from that point in all directions. So any given object moving around in the volume gets some interpolated SH coefficients, which provide information about the indirect radiance (so it's just an "added-on" effect), while direct lighting is done the old-fashioned way. Assuming the environment already has dynamic lighting as well as radiosity lightmapping for its static lights, you don't really need to worry about the environment so much.

Admittedly, it's not the best example of PRT, but the point is that it's a step forward from a simple constant ambient term, and it's a lot easier on the content creation side of things than trying to compute per-texel SH on animated characters for every condition. It's a subtle effect all right, but you'd get a very subtle effect no matter what, so I don't see why one should take the more troublesome path.
Okay, the radiance part is exactly what I thought, but how can you possibly do any lighting without SH terms for the models?

When rendering the models, what are you doing with the SH terms that describe the incoming light? You need to know the response of the object to each of the SH basis functions, which is the per-model precomputation that I'm talking about. Then it's a simple dot product to figure out the lighting term. I don't see how you can use the SH terms in the irradiance volume without any precomputation on the model.
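Just to spell out the dot product in question (a sketch, one channel, all names made up): the offline pass bakes a transfer vector per vertex, and at runtime shading collapses to this:

```cpp
const int kNumCoeffs = 9;  // 3 SH bands

// Precomputed offline: the vertex's response to each SH basis function
// (visibility, cosine term, and optionally inter-reflection folded in).
struct PrtVertex {
    float transfer[kNumCoeffs];
};

// envSH is the incoming lighting projected into the same SH basis, e.g.
// interpolated out of an irradiance volume.
float shadeDiffusePRT(const PrtVertex& v, const float envSH[kNumCoeffs]) {
    float radiance = 0.0f;
    for (int k = 0; k < kNumCoeffs; ++k)
        radiance += v.transfer[k] * envSH[k];
    return radiance;  // repeat per colour channel in practice
}
```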
 
You need to know the response of the object to each of the SH basis functions, which is the per-model precomputation that I'm talking about. Then it's a simple dot product to figure out the lighting term. I don't see how you can use the SH terms in the irradiance volume without any precomputation on the model.
You can do that simply by making up a 'faked' response that depends on your local (per-vertex, per-pixel, or whatever) normal. It does work :)
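Something along these lines; this is just the standard clamped-cosine (irradiance) evaluation from Ramamoorthi and Hanrahan's paper rather than anyone's actual engine code, with L[] being the 9 SH lighting coefficients for one channel:

```cpp
struct Vec3 { float x, y, z; };

// Irradiance at a surface with unit normal n, from 3-band SH lighting L.
// Per-band convolution weights for the clamped cosine: pi, 2*pi/3, pi/4.
float irradianceFromSH(const float L[9], const Vec3& n) {
    const float pi = 3.14159265f;
    const float a0 = pi, a1 = 2.0f * pi / 3.0f, a2 = pi / 4.0f;
    float e = 0.0f;
    e += a0 * 0.282095f * L[0];
    e += a1 * 0.488603f * (n.y * L[1] + n.z * L[2] + n.x * L[3]);
    e += a2 * 1.092548f * (n.x * n.y * L[4] + n.y * n.z * L[5]
                                            + n.x * n.z * L[7]);
    e += a2 * 0.315392f * (3.0f * n.z * n.z - 1.0f) * L[6];
    e += a2 * 0.546274f * (n.x * n.x - n.y * n.y) * L[8];
    return e;
}
```

No per-model data needed at all; what you give up is exactly the self-occlusion and inter-reflection a real transfer vector would capture.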
 
Yup, I know that works, and if that's what SMM is talking about, I'm all clear. Normal maps are pretty much identical to the l=1 SH coefficients unless there's some heavy asymmetric occlusion nearby. But if you have the SH framework there, you might as well precompute the real coefficients for a better image. You'll definitely notice it in the nooks and crannies.

AO can capture a decent chunk of PRT's effects, but I don't see how it's so much cheaper to precompute than 0-bounce SH terms. Both need to cast a bunch of rays and see whether they escape the model. Once you have that, it's a simple convolution of these boolean results with the different SH basis functions, which should take a fraction of the time involved in the ray tests.
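I.e., something like this sketch, reusing the visibility test from the AO pass. rayEscapesModel(), shBasis3() and Vec3 are assumed from elsewhere, and the albedo/pi factor is left out:

```cpp
#include <vector>

// Fold boolean visibility results into a 9-coefficient 0-bounce transfer
// vector for one surface point. sampleDirs are uniform over the sphere.
void bakeTransferVector(const Vec3& pos, const Vec3& normal,
                        const std::vector<Vec3>& sampleDirs,
                        float transfer[9]) {
    for (int k = 0; k < 9; ++k) transfer[k] = 0.0f;
    for (const Vec3& d : sampleDirs) {
        float cosTheta = d.x * normal.x + d.y * normal.y + d.z * normal.z;
        if (cosTheta <= 0.0f) continue;         // below the surface
        if (!rayEscapesModel(pos, d)) continue; // occluded: contributes nothing
        float basis[9];
        shBasis3(d, basis);
        for (int k = 0; k < 9; ++k)
            transfer[k] += cosTheta * basis[k];
    }
    // Uniform-sphere Monte Carlo weight, over all candidate rays.
    const float w = 4.0f * 3.14159265f / float(sampleDirs.size());
    for (int k = 0; k < 9; ++k) transfer[k] *= w;
}
```

The SH projection adds a handful of multiply-adds per ray, so the ray tests should indeed dominate either way.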
 