Will it use memexport to do the lighting?
Will it use the new, updated tiling routines from Microsoft?
I did not know about this. Is this a driver change?
Hey, you can see things however it fits you...
My point is that game code running PRT is at least two years old, so I'm familiar with it and saw it a long time ago.
BTW, after some in-house engine evaluations, I'm not sure PRT will ever be common, even less a trend.
And please be nice. Don't call my opinion "poo-pooing"; reasons to blame me exist in your imagination only.
> The truth is, we will definitely not choose any algorithm demanding more than 10 minutes of preprocessing time. I don't want my head getting smacked by our artists and producers constantly.

Maybe someone should rewrite that preprocessing code; it seems a bit too slow to me.
> The artist-unfriendliness issue is much worse. Requiring artists to work with SH lighting is like making them work with images in the frequency domain after an FFT. Think about the way artists do their work; they are people, not your little tech gimmick.

If they are working 'directly' with SH lighting then there's something deeply wrong; they should not even know that SH lighting exists in your engine, and they should not care. Hide every technical detail from them if you can... and in this case, you can!
> Guys, PRT is a big joke.

Umh... no, it's not.
Laa-Yosh said:
> To be honest, the definition of GI does not require raytracing, just that it has to account for diffuse light transfer between objects. Most implementations use raytracing, though, in the form of MC sampling, photon mapping, and so on.

Diffuse light inter-reflection is not exactly global illumination. The meaning of the term "global illumination" has been blurred over the years, just like the term "HDR" (many people think the bloom effect is HDR). Maybe it's just a fancy word?
But you're of course right: using raytracing is computationally intensive, and it's hard to imagine any practical method that can sample an object's surrounding space without it.
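To illustrate roughly what the MC sampling mentioned above amounts to at a single surface point, here is a minimal sketch of cosine-weighted hemisphere sampling of incoming radiance. The `radiance` callback is an assumption standing in for whatever visibility or environment query (the expensive raytracing part) a real implementation would perform.

```python
import math
import random

def sample_cosine_hemisphere(rng):
    """Cosine-weighted random direction on the z-up hemisphere."""
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def estimate_irradiance(radiance, n_samples=10000, seed=1):
    """Monte Carlo estimate of irradiance E = integral of L(w) cos(theta) dw.

    With cosine-weighted sampling the pdf is cos(theta)/pi, so each
    sample contributes pi * L(w) and we simply average.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        d = sample_cosine_hemisphere(rng)
        total += radiance(d)
    return math.pi * total / n_samples

# For a constant environment L(w) = 1, the estimator returns exactly pi,
# the analytic irradiance over the hemisphere.
e = estimate_irradiance(lambda d: 1.0)
```

The point of the sketch is the cost model: each sample is one ray query against the scene, which is why doing this per texel at runtime is out of reach.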
_phil_ said:
> Reality engine is a licensed game engine.
> http://artificialstudios.com/media.php

Unigine supports PRT too. Demo here: http://unigine.com. Generally, P... looks fine. Many games are using this scheme.
> If they are working 'directly' with SH lighting then there's something deeply wrong; they should not even know that SH lighting exists in your engine, and they should not care. Hide every technical detail from them if you can... and in this case, you can!

No, they want to obtain full control. Manipulating lighting is their daily work.
> No, they want to obtain full control. Manipulating lighting is their daily work. They're always whining and bitching and ranting that the content creation tools are not well implemented, that lighting effects don't match from here to there, and demanding more parameters to manipulate more stuff.

They should manipulate light; as you say, it's their daily work. What they should not do is manipulate SH lighting, because they really don't need to. Your lighting implementation should be abstracted and hidden from them as much as possible. Why should they care about SH coefficients? They should just work with the same types of lights (or better, a subset of them) that they can use in their favourite DCC packages, imho.
> So what's the general scenario if you want to put PRT in your engine? The producer: "I don't get it. I can't see the difference. Can it improve the gameplay?" The artist: "What? I just changed a mesh and need to wait another 2 hours to see the result?" The manager: "Do you consider spending $2,000,000 on a cluster system to improve a SUBTLE lighting effect worth it? We need cost control."

I wonder what you have in mind when you think of the preprocessing phase for PRT? MC raycast samples for every texel? To give you an example of how I'm working it in: it is, as you say, a very subtle effect, used as an alternative to constant ambient, and we just sample at points representing an irradiance volume. Doing that, it's nowhere near two hours, because we just render temporary cubemaps and treat every texel as a ray sample. It doesn't even take an hour to sample all the points in the irradiance volume for a normal-sized level on an ordinary PC. Granted, I may have to eat my words on this in the long run, as it is still a work in progress and feature after feature is being thrown in... But the idea is that artists shouldn't worry about SH coefficients; they just worry about where they're sampled. And since it's a per-scene thing that only a few people really have to deal with (everybody else just uses the data generated by it), it's a pretty small cost.
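The "treat every texel as a ray sample" step can be sketched like this: project a small cubemap into order-2 SH (9 coefficients), weighting each texel by the solid angle it subtends. This is a minimal illustration under assumed face layout and an assumed `radiance` callback, not anyone's production code.

```python
import math

# Real SH basis for orders 0-2 (standard normalization constants).
SH_BASIS = [
    lambda x, y, z: 0.282095,
    lambda x, y, z: 0.488603 * y,
    lambda x, y, z: 0.488603 * z,
    lambda x, y, z: 0.488603 * x,
    lambda x, y, z: 1.092548 * x * y,
    lambda x, y, z: 1.092548 * y * z,
    lambda x, y, z: 0.315392 * (3.0 * z * z - 1.0),
    lambda x, y, z: 1.092548 * x * z,
    lambda x, y, z: 0.546274 * (x * x - y * y),
]

# Unnormalized direction of a texel on each of the 6 cube faces,
# for face-local (u, v) in [-1, 1]. The exact layout is an assumption.
FACE_DIRS = [
    lambda u, v: (1.0, -v, -u),   # +X
    lambda u, v: (-1.0, -v, u),   # -X
    lambda u, v: (u, 1.0, v),     # +Y
    lambda u, v: (u, -1.0, -v),   # -Y
    lambda u, v: (u, -v, 1.0),    # +Z
    lambda u, v: (-u, -v, -1.0),  # -Z
]

def project_cubemap_to_sh(radiance, size=16):
    """Project an environment into 9 SH coefficients, treating every
    cubemap texel as one ray sample weighted by its solid angle."""
    coeffs = [0.0] * 9
    for face in FACE_DIRS:
        for j in range(size):
            for i in range(size):
                u = 2.0 * (i + 0.5) / size - 1.0
                v = 2.0 * (j + 0.5) / size - 1.0
                # Solid angle subtended by this texel.
                d_omega = (2.0 / size) ** 2 / (u * u + v * v + 1.0) ** 1.5
                dx, dy, dz = face(u, v)
                inv_len = 1.0 / math.sqrt(dx * dx + dy * dy + dz * dz)
                x, y, z = dx * inv_len, dy * inv_len, dz * inv_len
                L = radiance(x, y, z)
                for k in range(9):
                    coeffs[k] += L * SH_BASIS[k](x, y, z) * d_omega
    return coeffs

# A constant environment projects onto the DC term only:
# coeffs[0] is close to 4*pi*0.282095, the rest are close to 0.
sh = project_cubemap_to_sh(lambda x, y, z: 1.0, size=32)
```

For a whole irradiance volume you would render one temporary cubemap per sample point and run this projection on each, which is why the cost stays modest.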
> The truth is, we will definitely not choose any algorithm demanding more than 10 minutes of preprocessing time.

I'm inclined to agree with the general gist of this, but I'm not so anal about the "10 minutes" figure. I mean, it's not uncommon to have enough of a mass of data for one level that the *builds* take an hour or more (for a full build, not incremental, obviously). So long preprocesses, particularly ones that won't be executed very often, and, more importantly, won't be executed by *everybody*, are not a big concern to me. Moreover, think of the time added per person: sure, one guy may see a 2-hour process, but if that becomes a resource for everybody else, it's only an additional 10 seconds on the build time for everybody else...
> I also wonder whether those who advocate PRT have ever developed an actual game. And all those funny logics behind it: "precomputing time is not important for real-time graphics", and the "once-for-all" nonsense in a lot of papers.

I agree that academic papers overstate things, but I think you're looking at it the wrong way. PRT isn't viewed as some great thing or some solution that will give us realtime GI. It's just one of those things that we have no choice but to use in a limited fashion to get a little closer to a good end result. Nobody is claiming it as the future so much as... well... the present.
> It's hard to adjust meshes and cubemaps to get better results due to the long precomputation time (meshes don't always look good with PRT naturally). So someone thought about manual editing. And our final solution was to use low-order SHM for character lighting only.

I find it weird that people actually *wanted* to manually edit SH data, unless I'm misunderstanding you. It's not exactly the most straightforward data to edit... I'd liken it to editing rotations in normalized quaternion format as opposed to something more obvious like Euler parameters.
> SMM, I think he's talking about the precomputation for the models, not the level. The SH coefficients for each point on each model probably take more time than figuring out the SH coefficients for points in an irradiance volume.

I'm saying: why should he have to? It seems a hairy way of going about it, as long as you consider that each model is likely to animate, move around, and be surrounded by things of different reflectance characteristics at different times (i.e., you need different SH based on the surroundings), so trying to do it on a per-model basis is indeed a mess for content creators. PRT on a per-model basis, to me, only makes sense for a single instance of a totally static model, and yet he mentions doing it for characters.
Still, I don't see how this is unscalable to the point of unusability, as Cal is saying. There's no need to be so precise with lighting while you're still tweaking the geometry. Just use an approximation at first (say, equal in computation time to AO) and do the more detailed precomputation overnight.
> Though it still begs the question of why anyone should want to manually edit SH coefficients.

The first order acts as an ambient term, so it might be useful to change it manually sometimes, but it's really a corner case.
> I'm saying: why should he have to?

I guess you're using SH lighting for a different purpose than I was envisioning.
> I guess you're using SH lighting for a different purpose than I was envisioning.

The way I'm using it, it's essentially localized ambient lighting. The SH at a point in the volume is just a low-frequency representation of what is "seen" from that point in all directions. So any given object moving around in the volume gets some interpolated SH coefficients, which provide information about the indirect radiance (so it's just an "added-on" effect), while direct lighting is done the old-fashioned way. Assuming the environment already has dynamic lighting, as well as radiosity lightmapping for its static lights in the first place, you don't really need to worry about the environment so much.
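A minimal sketch of the "interpolated SH coefficients" idea: keep a regular grid of SH sample points, and let a moving object fetch trilinearly interpolated coefficients at its position. The grid layout and the class name `IrradianceVolume` are illustrative assumptions, not anyone's actual engine code.

```python
def lerp(a, b, t):
    """Componentwise linear interpolation of two coefficient lists."""
    return [ca + (cb - ca) * t for ca, cb in zip(a, b)]

class IrradianceVolume:
    """Regular grid of SH coefficient vectors; a moving object samples
    trilinearly interpolated coefficients at its world position."""

    def __init__(self, origin, cell_size, dims, samples):
        # samples[ix][iy][iz] is a list of SH coefficients (e.g. 9 floats).
        self.origin = origin
        self.cell_size = cell_size
        self.dims = dims
        self.samples = samples

    def sample(self, pos):
        fx = [(p - o) / self.cell_size for p, o in zip(pos, self.origin)]
        # Clamp to the last full cell so ix+1 stays in range.
        ix = [min(max(int(f), 0), d - 2) for f, d in zip(fx, self.dims)]
        tx = [min(max(f - i, 0.0), 1.0) for f, i in zip(fx, ix)]
        s = self.samples

        def corner(dx_, dy_):
            # Interpolate along z for one (x, y) corner pair.
            a = s[ix[0] + dx_][ix[1] + dy_][ix[2]]
            b = s[ix[0] + dx_][ix[1] + dy_][ix[2] + 1]
            return lerp(a, b, tx[2])

        y0 = lerp(corner(0, 0), corner(0, 1), tx[1])
        y1 = lerp(corner(1, 0), corner(1, 1), tx[1])
        return lerp(y0, y1, tx[0])

# A 2x2x2 volume whose first coefficient equals the sample's grid x index:
samples = [[[[float(ix)] + [0.0] * 8 for _iz in range(2)]
            for _iy in range(2)] for ix in range(2)]
vol = IrradianceVolume((0.0, 0.0, 0.0), 1.0, (2, 2, 2), samples)
mid = vol.sample((0.5, 0.5, 0.5))  # first coefficient interpolates to 0.5
```

The object never sees raw SH data; it just asks the volume for coefficients at its position, which is exactly why artists only need to care about where the sample points go.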
If you're calculating the SH coefficients for the lighting in your irradiance volume, then what are you using them for? Objects that move around in your volume, right? So you need to calculate the SH coefficients for each object's response to that lighting. What exactly am I missing here?
> I wonder what you have in mind when you think of the preprocessing phase for PRT? MC raycast samples for every texel? To give you an example of how I'm working it in: it is, as you say, a very subtle effect, used as an alternative to constant ambient, and we just sample at points representing an irradiance volume. Doing that, it's nowhere near two hours, because we just render temporary cubemaps and treat every texel as a ray sample. It doesn't even take an hour to sample all the points in the irradiance volume for a normal-sized level on an ordinary PC. Granted, I may have to eat my words on this in the long run, as it is still a work in progress and feature after feature is being thrown in... But the idea is that artists shouldn't worry about SH coefficients; they just worry about where they're sampled. And since it's a per-scene thing that only a few people really have to deal with (everybody else just uses the data generated by it), it's a pretty small cost.

I hadn't referred to any form of irradiance volume in the previous posts; I didn't assume it to be a part of PRT and hadn't even mentioned it. I actually think the irradiance volume (from Gene Greger's paper) is a good approach for applying environmental lighting to movable objects, with or without PRT (the PRT algorithms from Sloan's paper, as shown in the DX SDK demo). Considering that the irradiance spheres at most sampling points are low-resolution and can be calculated from a cubemap via a simple convolution, it's totally feasible for content creation. Our implementation adds a cubemap-rendering function to the level editor, allowing artists to generate a cubemap at any location they specify, and then converts these cubemaps to irradiance cubes. A character's environmental lighting can be interpolated between regions (usually 3~10 cubes are enough).
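The "simple convolution" from a radiance cubemap to an irradiance cube is especially cheap if done in SH space: convolving with the clamped-cosine kernel just scales each SH band by a constant (pi, 2*pi/3, pi/4 for orders 0, 1, 2). A sketch, assuming 9 coefficients in the usual band order:

```python
import math

# Clamped-cosine band factors for SH orders 0, 1, 2.
A_BAND = [math.pi, 2.0 * math.pi / 3.0, math.pi / 4.0]

def radiance_sh_to_irradiance_sh(sh):
    """Convolve 9 radiance SH coefficients with the cosine lobe;
    in SH, convolution reduces to a per-band scale."""
    bands = [0, 1, 1, 1, 2, 2, 2, 2, 2]
    return [c * A_BAND[b] for c, b in zip(sh, bands)]

# A constant unit environment has only a DC term, c0 = 2*sqrt(pi);
# after convolution it reconstructs to irradiance pi in every direction.
irr_sh = radiance_sh_to_irradiance_sh([2.0 * math.sqrt(math.pi)] + [0.0] * 8)
```

Nine multiplies per sample point is why this step is never the bottleneck; rendering the cubemaps is.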
> The way I'm using it, it's essentially localized ambient lighting. The SH at a point in the volume is just a low-frequency representation of what is "seen" from that point in all directions. So any given object moving around in the volume gets some interpolated SH coefficients, which provide information about the indirect radiance (so it's just an "added-on" effect), while direct lighting is done the old-fashioned way. Assuming the environment already has dynamic lighting, as well as radiosity lightmapping for its static lights in the first place, you don't really need to worry about the environment so much.

Okay, the radiance part is exactly what I thought, but how can you possibly do any lighting without SH terms for the models?
Admittedly, it's not the best example of PRT, but the point is that it's a step forward from a simple constant ambient term, and it's a lot easier on the content-creation side of things than trying to compute per-texel SH on animated characters for every condition. It's a subtle effect, all right, but you'd get a very subtle effect no matter what, so I don't see why one should take the more troublesome path.
> You need to know the response of the object to each of the SH basis functions, which is the per-model precomputation I'm talking about. Then it's a simple dot product to figure out the lighting term. I don't see how you can use the SH terms in the irradiance volume without any precomputation on the model.

You can do that simply by making up a 'faked' response that depends on your local (per-vertex, per-pixel, or whatever) normal. It does work.
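A sketch of that 'faked' response: instead of a precomputed per-model transfer vector, use an analytic clamped-cosine lobe oriented along the local normal; shading is then the same dot product against the environment's SH coefficients, with zero per-model precomputation. Function names here are illustrative assumptions.

```python
import math

def sh9(x, y, z):
    """Order-2 real SH basis evaluated in direction (x, y, z)."""
    return [
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ]

# Clamped-cosine band factors for SH orders 0, 1, 2.
A_BAND = [math.pi, 2.0 * math.pi / 3.0, math.pi / 4.0]
BANDS = [0, 1, 1, 1, 2, 2, 2, 2, 2]

def cosine_transfer(normal):
    """'Faked' transfer vector: an analytic cosine lobe oriented along
    the (unit) normal -- no per-model precomputation required."""
    return [A_BAND[b] * y for y, b in zip(sh9(*normal), BANDS)]

def shade(transfer, light_sh):
    """Lighting term is the dot product of the transfer vector with the
    environment's SH coefficients (same math as real PRT shading)."""
    return sum(t * c for t, c in zip(transfer, light_sh))

# With a constant unit environment, any normal receives irradiance
# close to pi (exact up to the rounded basis constants).
light = [2.0 * math.sqrt(math.pi)] + [0.0] * 8
e = shade(cosine_transfer((0.0, 0.0, 1.0)), light)
```

The trade-off is that the faked transfer ignores self-shadowing and inter-reflection on the model, which is exactly what the real per-model precomputation would add.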