Real-Time Ray Tracing: Holy Grail or Fools' Errand? *Partial Reconstruction*

With modern techniques like frustum/face partitioning and warping as well as some sort of nice filtering (PCF with proper filter regions, VSM, CSM, etc), and especially with shadow MSAA (VSM/CSM) it's not terribly difficult to get sub-camera-pixel accuracy with shadow maps, at which point you're already doing better than ray-traced shadows. This is particularly true if you're doing an "equal cost" comparison in which you could easily afford 4 or more 2048^2 shadow map partitions at the same cost as even the fastest GPU/Cell/CPU packet raytracer.
Warping has loads of problems due to the fundamental flaw of linear interpolation in screen space (unless we're still talking about loads of subpixel polygons). Partitioning and cascading have their share of problems, especially as you cross boundaries, and in my experience these problems are massively magnified as detail levels go up within the scene. It's harmless when there is geometry for geometry's sake, but when geometric density is actually used purposefully, any type of shadowmap is not a very good source to look at for the "gospel" rendering of the distance from the light. And this is partly because quality filtering of images is just not a strong suit of GPUs -- I don't mean this in the sense that they lack in power, but that they do stuff which is just plain wrong.

And while VSM-type methods with shadow MSAA are nice for hiding shadow aliasing problems and theoretically look good on their own, in dense geometry situations you end up with quite a bit of loss of self-shadowing detail (numerically no more than you'd lose anyway, but it looks more obvious in certain cases). The only remedies are a host of cheats many of which defeat the purpose of having VSMs in the first place.

I haven't seen anything that can tell me shadow maps can do anything and everything right -- just that we can keep grinding on it until we've gotten somewhere that can fool most people most of the time or that it can serve as a very good oracle to speed up something more exhaustive... whereas shooting rays is basically the exhaustive solution.

I dunno, I thought the cathedral example was pretty compelling on that front and had no noticeable GI artifacts, even with only 4-8 shadow maps updated per frame. Again with an equal-cost comparison you could easily be doing literally thousands of lights for the cost of photon mapping or similar, and at that point I don't think you'd have too much trouble with scene complexity.
Updating 4-8 lights per frame is equivalent to saying that it may take a few clicks for the indirect lighting to really catch up to a scene change. But then, that cathedral scene has been used as a test for many interactive GI papers, and I don't get the impression that 256 lights was enough for that one. Maybe they're just showing the scene with a much weaker light source than other people have demonstrated with. I was more impressed by the results they got in the "Maze" scene. Seems they kind of hit a sweet spot there, and it kind of makes sense that they would.
 
There's the academic goal of a near-perfect simulation of a camera in a virtual world, and there's the practical objective of being perceptibly close to reality that permeates all realtime 3D applications today. For the latter, rasterization is good enough.
"Good enough" is a moving target. 10 years ago straight texture mapping was good enough. Today environment maps are good enough. Tomorrow the bar will be higher again, and we'll see what new research brings to the table.
 
I don't want to get too off-topic here, but:

Warping has loads of problems due to the fundamental flaw of linear interpolation in screen space (unless we're still talking about loads of subpixel polygons).
This is definitely a problem for things like dual-paraboloid shadow mapping, but affine warps like PSM/TSM/LiPSM and so forth work fine. Hell even LogPSM *would* work fine with some minor tweaks to the rasterizer (see the GH paper).

Partitioning and cascading have their share of problems, especially as you cross boundaries, and in my experience these problems are massively magnified as detail levels go up within the scene.
Definitely you can get some flicker in the distance (with the low-res shadow map) if your projection is poor, but as I mentioned I've not seen significant artifacts with, say, 4 2048^2 VSMs w/ MSAA (hell even split the frustum faces if you want to get much better warpings and avoid any "dueling frusta" problems). That's plenty of resolution, and even a poor frustum partition choice will still be fine. With better choices, you can do with fewer and smaller shadow maps too.
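
As an aside, a minimal sketch of how split distances for such a frustum partition are commonly chosen -- the usual blend of logarithmic and uniform splits. The lambda weight and the cascade count are illustrative assumptions only, not anything claimed above:

```cpp
#include <cmath>
#include <vector>

// Blend of logarithmic and uniform split schemes for N cascades along view depth.
// lambda = 1.0 gives purely logarithmic (low perspective aliasing) splits,
// lambda = 0.0 gives uniform splits; something in between is a typical compromise.
std::vector<float> cascadeSplits(float zNear, float zFar, int numCascades, float lambda)
{
    std::vector<float> splits(numCascades + 1);
    for (int i = 0; i <= numCascades; ++i) {
        float t        = float(i) / float(numCascades);
        float logSplit = zNear * std::pow(zFar / zNear, t);   // logarithmic split
        float linSplit = zNear + (zFar - zNear) * t;          // uniform split
        splits[i]      = lambda * logSplit + (1.0f - lambda) * linSplit;
    }
    return splits;   // e.g. 4 cascades -> 5 boundaries, one 2048^2 map per slice
}
```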

It's harmless when there is geometry for geometry's sake, but when geometric density is actually used purposefully, any type of shadowmap is not a very good source to look at for the "gospel" rendering of the distance from the light. And this is partly because quality filtering of images is just not a strong suit of GPUs -- I don't mean this in the sense that they lack in power, but that they do stuff which is just plain wrong.
Not sure what you're saying here... texture filtering on GPUs looks pretty good IMHO, and you can even do whatever custom stuff you want (SAT, etc) and not sacrifice a whole lot of speed.

And while VSM-type methods with shadow MSAA are nice for hiding shadow aliasing problems and theoretically look good on their own, in dense geometry situations you end up with quite a bit of loss of self-shadowing detail (numerically no more than you'd lose anyway, but it looks more obvious in certain cases).
There are certainly problems with VSMs when you get multiple occluders in one filter. These can actually be addressed with adaptive sampling, or just falling back on PCF (i.e. using VSM as an "accelerator").

whereas shooting rays is basically the exhaustive solution.
Yes, but one which suffers from its share of significant problems. In particular you basically need to supersample the framebuffer (not just the shadow rays!) to get good results since otherwise you're not taking the receiver geometry over the filter region into account (a la PCF and all the associated artifacts). Furthermore simple shadow ray casting is totally unfiltered in screen space (look at how terrible it looks in the sunflower scene - anything beyond the first few rows of sunflowers is just a big aliasing mess!), so again you need to start supersampling the framebuffer to get reasonable results. Once you start adding up these costs, I don't think you're going to do much better than just using a friggin' huge shadow map, or several!
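
To make the supersampling cost concrete, here is a rough sketch of the brute-force approach being described: one binary shadow ray per primary ray, averaged over extra primary samples per pixel. Scene, Ray, Hit, makeRay and lightPosition are hypothetical placeholders, not any particular tracer's API:

```cpp
// One shadow ray per primary ray gives a binary lit/shadowed answer, so shadow
// edges alias exactly like unantialiased geometry edges; the generic fix is to
// shoot more primary rays per pixel and average, which multiplies the *whole*
// per-pixel cost, not just the shadow rays.
float pixelShadowFactor(const Scene& scene, int px, int py, int samplesPerAxis)
{
    float lit = 0.0f;
    for (int sy = 0; sy < samplesPerAxis; ++sy)
        for (int sx = 0; sx < samplesPerAxis; ++sx) {
            Ray primary = scene.cameraRay(px + (sx + 0.5f) / samplesPerAxis,
                                          py + (sy + 0.5f) / samplesPerAxis);
            Hit hit;
            if (!scene.intersect(primary, hit)) {
                lit += 1.0f;                       // background: treat as lit
                continue;
            }
            Ray shadow = makeRay(hit.position, scene.lightPosition() - hit.position);
            lit += scene.occluded(shadow) ? 0.0f : 1.0f;   // binary visibility
        }
    return lit / float(samplesPerAxis * samplesPerAxis);
}
```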

Updating 4-8 lights per frame is equivalent to saying that it may take a few clicks for the indirect lighting to really catch up to a scene change.
Sure, but I see no problem with that in most cases. In fact we already make much more significant assumptions of temporal coherence in many of the algorithms employed today (in particular, hierarchical occlusion culling a la GPU Gems 2 chapter).

But then, that cathedral scene has been used as a test for many interactive GI papers, and I don't get the impression that 256 lights was enough for that one.
Maybe not for a "perfect solution", but it looked pretty good to me! I've yet to see a solution running at that speed that looks better in any case.

I'm still unconvinced that we need raytracing for shadows, and the jury is still out on GI IMHO :)
 
"Good enough" is a moving target. 10 years ago straight texture mapping was good enough. Today environment maps are good enough. Tomorrow the bar will be higher again, and we'll see what new research brings to the table.
Straight texture mapping was never enough to look real. Environment maps for reflections are, because in many cases they are identical to raytracing. What ShootMyMonkey and I are saying is that the limitation here lies in human perception. The majority of the objects around us are not reflective, and when they are, we do not examine them closely enough to see where each ray came from. You could take a photo, warp the reflections to give the kinds of errors that environment mapping gives, and it's still photorealistic.

The only time we might notice is when objects in the reflections are the primary focus of the scene, and nearly all of those cases involve planar reflection maps (which are even closer to the raytraced ideal).
 
I haven't seen anything that can tell me shadow maps can do anything and everything right -- just that we can keep grinding on it until we've gotten somewhere that can fool most people most of the time or that it can serve as a very good oracle to speed up something more exhaustive... whereas shooting rays is basically the exhaustive solution.
I don't see why you think raytraced shadows are so much better than shadow maps. They're effectively the same thing aside from sample location and density.

Cascading/partitioning addresses the latter (admittedly in discrete steps), and the former is irrelevant in light of the aliasing associated with both. Rasterization is fast enough to give you gobs more sample density even in the worst-case scenario when done at equal cost to whatever raytracing solution you're comparing with, and that'll allow better antialiasing.

The only cons to shadow mapping are precision and storage space. Precision is a non-issue when partitioning, since you have bounds on both the depth range and the viewer's ability to discern distinct objects, and storage of shadow maps is a much simpler problem than dealing with non-immediate rendering, especially since it's also bounded.
 
This is definitely a problem for things like dual-paraboloid shadow mapping, but affine warps like PSM/TSM/LiPSM and so forth work fine. Hell even LogPSM *would* work fine with some minor tweaks to the rasterizer (see the GH paper).
You'll have to forgive me if I'm not going to give rasterization free points for things that I don't expect to exist outside of an entirely software rasterizer. To me, the argument of raytracing over rasterization tends to be more about the things that raytracing can do that rasterization will never be able to do. When you take all of that, there isn't anything compelling now or even within the near future. I guess I'm looking at it a little more like Lycium did, but then he was a little more adamant about the here and now than I am... But it's not a now or never problem. There's reason to look at it in the long run.

I don't see a lot of things as possible if the only thing we do is to try and throw silicon at the problem. There are things which are fundamentally limiting with the very idea of having pixels in the innermost loop.

Definitely you can get some flicker in the distance (with the low-res shadow map) if your projection is poor, but as I mentioned I've not seen significant artifacts with, say, 4 2048^2 VSMs w/ MSAA (hell even split the frustum faces if you want to get much better warpings and avoid any "dueling frusta" problems). That's plenty of resolution, and even a poor frustum partition choice will still be fine. With better choices, you can do with fewer and smaller shadow maps too.
My experience has typically been that you can say such things in ideal cases only. And Murphy's Law says that once you put it into practice you will hit the pathological failure case almost all the time. Bear in mind that the nature of the camera models you can use is such that the question of whether or not the projection is poor always has the same answer -- Yes, it IS poor, and any attempt to try and warp it will just move where the "poorness" lies. Any attempt to filter it away will just trade poorness of projection for destruction of information. Moreover, I'd add that PCF, VSM, shadow MSAA are all flawed by nature of the fact that you're not filtering the right thing -- you don't get softness in a shadow because the receiver distance from a single point light varies, after all.

Not sure what you're saying here... texture filtering on GPUs looks pretty good IMHO, and you can even do whatever custom stuff you want (SAT, etc) and not sacrifice a whole lot of speed.
I'm not sure how you can talk of SATs and speak of not sacrificing a lot of speed. In practice SATs cost far more than merely "a whole lot of speed." To me, texture filtering on GPUs looks good only when there is no magnification. Granted, this is a universal problem that applies to raytracing as much as anything else, but the thing is that for all you can complain about the problems that occur in filtering with raytracing, the problems that exist are universally unavoidable. You can't make the sunflowers scene look correct with rasterizers either no matter what you do, because a scene like that needs more samples than you'll ever be able to give and better filtering than we'll ever really have.
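
(For reference, the lookup side of a summed-area table really is just four fetches for an arbitrary-width box filter, as in the hypothetical CPU-side sketch below; the contention is about the cost of rebuilding the table every frame and the precision the running sums swallow, not the fetch count.)

```cpp
#include <algorithm>
#include <vector>

// Hypothetical CPU-side summed-area table: at(x, y) = sum of all texels with
// coordinates <= (x, y); kept in double here because the running sums are what
// eat precision in practice.
struct SummedAreaTable {
    int w = 0, h = 0;
    std::vector<double> s;                           // row-major prefix sums
    double at(int x, int y) const {                  // indices below zero read as 0
        if (x < 0 || y < 0) return 0.0;
        return s[std::min(y, h - 1) * w + std::min(x, w - 1)];
    }
};

// Average over the inclusive rectangle [x0..x1] x [y0..y1]: four fetches,
// regardless of how wide the filter is. The per-lookup cost is constant; the
// argument is about building `s` every frame and the precision that consumes.
double boxAverage(const SummedAreaTable& sat, int x0, int y0, int x1, int y1)
{
    double sum = sat.at(x1, y1) - sat.at(x0 - 1, y1)
               - sat.at(x1, y0 - 1) + sat.at(x0 - 1, y0 - 1);
    return sum / double((x1 - x0 + 1) * (y1 - y0 + 1));
}
```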

There are certainly problems with VSMs when you get multiple occluders in one filter. These can actually be addressed with adaptive sampling, or just falling back on PCF (i.e. using VSM as an "accelerator").
If you make use of a Z-prepass, then VSMs are usually only faster than PCF if the total shadow map resolution (including splits and multiple maps) is less than half that of the screen resolution -- which in turn means you're guaranteed not to have sufficient resolution anyway. Granted, we still use VSMs whenever possible because it's visually quite simply better than PCF, and the artifacts are more combatable when you expose stuff to the artists, but "accelerator" it is not.

Yes, but one which suffers from its share of significant problems. In particular you basically need to supersample the framebuffer (not just the shadow rays!) to get good results since otherwise you're not taking the receiver geometry over the filter region into account (a la PCF and all the associated artifacts).
I sort of agree and sort of disagree here. It's not that you need to filter both necessarily, but that you potentially have more than one thing to filter. The light representation is actually significant, whereas with rasterization you're "pretending" when you filter shadowmaps. If you start taking things the way they were meant to be, the story changes. You need to be concerned both with variance in lines of sight from the light to the receiver (since a pixel is not infinitesimal) and from the receiver to the light (since a light isn't necessarily infinitesimal). With raytracing, screen-space supersampling deals with one while multiple shadow ray samples solves the other. The two aren't really related per se, nor does one really affect the other directly... but the thing is that if you only have variance in one, everything looks wrong if you choose to filter the wrong thing -- so you filter both only because it's the most generic. With shadow maps, you can't *properly* have both anyway (short of having an absurd number of shadow maps which is not at all reasonable).

Again, this is less of a "better" vs. "worse" argument and more of a "possible" vs. "impossible" argument.
 
Straight texture mapping was never enough to look real. Environment maps for reflections are, because in many cases they are identical to raytracing. What ShootMyMonkey and I are saying is that the limitation here lies in human perception. The majority of the objects around us are not reflective, and when they are, we do not examine them closely enough to see where each ray came from. You could take a photo, warp the reflections to give the kinds of errors that environment mapping gives, and it's still photorealistic.
Straight texture mapping was enough to be immersive. There are actually very few cases where environment maps and raytracing are exactly identical. But I agree that in many cases it can be convincing. However...

The only time we might notice is when objects in the reflections are the primary focus of the scene, and nearly all of those cases involve planar reflection maps (which are even closer to the raytraced ideal).
... I disagree with this. Cube maps may be fine when you have a mainly convex object floating in space. Planar maps may be fine if the object is one-sided and about flat. Content creators carefully avoid the kinds of objects that would not look right with environment maps. But at some point on the road to realism we will have to add them, and we'll need something better than environment maps.
 
... I disagree with this. Cube maps may be fine when you have a mainly convex object floating in space.
They don't have to be mainly convex to look convincing. People were gawking at reflective water in DX8 games, yet they all used cube maps for some inexplicable reason. We have a good solution for that situation (planar reflection map), yet people still thought it was convincing.

ShootMyMonkey's example of a bar scene in a cave is very illustrative. Sure, looking down the opening of a tuba with a mirror-finish won't be correct with environment mapping, but it can still be convincing with AO and a shader hack to prevent reflecting things through it (e.g. flip the normal when necessary).

Content creators carefully avoid the kinds of objects that would not look right with environment maps. But at some point on the road to realism we will have to add them, and we'll need something better than environment maps.
Do you really think content creators avoid these things? I don't. They put them in, use incorrect reflections, and look great anyway. The aforementioned headlights in GT5 are a great example. Technically, the reflections are all wrong, but they look great anyway simply because the reflections behave qualitatively correctly as the light and object move.

Go through your typical day and the things around you. How often do you come across something that would strike you as out of place if the reflections weren't ray-trace accurate? I guess my goal is qualitative photorealism, not true camera simulation, and I'd argue that 99% of the industry interested in lifelike 3D graphics thinks similarly.
 
You'll have to forgive me if I'm not going to give rasterization free points for things that I don't expect to exist outside of an entirely software rasterizer.
That was just LogPSM; the rest naturally work fine with linear rasterization.

To me, the argument of raytracing over rasterization tends to be more about the things that raytracing can do that rasterization will never be able to do. [..] There's reason to look at it in the long run.
Definitely, we're on the same page here. I don't dismiss raytracing entirely - it is certainly necessary in some cases. I do, however, try to do something more efficient whenever I can (without sacrificing quality).

Yes, it IS poor, and any attempt to try and warp it will just move where the "poorness" lies.
That's not true though. Face/frustum partitioning + warping for instance can put a provable bound on the amount of aliasing you can get in a shadow map. You're not just "moving" the error, you're actually eliminating it! Check out the "Warping and partitioning for low error shadow maps" summary paper for more info/math.

Any attempt to filter it away will just trade poorness of projection for destruction of information.
That's definitely true, but filtering is an entirely orthogonal problem to finding a good projection. Both are 100% necessary though. Even though ray traced shadows avoid the projection problem, they still have to be filtered properly, which is difficult to do efficiently.

Moreover, I'd add that PCF, VSM, shadow MSAA are all flawed by nature of the fact that you're not filtering the right thing -- you don't get softness in a shadow because the receiver distance from a single point light varies, after all.
If by this you mean "clamping the minimum filter size" aka "edge softening", then of course they aren't correct in a "soft shadows" sense. There are techniques like PCSS that do something more physically plausible, but that's beside the point. "Filtering" the extents of the onscreen pixel however is completely necessary - just like texture maps - and to that end PCF/VSM attempt to compute P(x < t), which is exactly what you want...
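
For anyone following along, that probability is the quantity VSMs bound with the one-sided Chebyshev inequality from the two stored moments; a minimal CPU-side sketch (the minimum-variance clamp is the usual fix for numeric noise, not part of the math):

```cpp
#include <algorithm>

// Variance shadow map lookup: the pre-filtered map stores the moments E[z] and
// E[z^2] over the filter region. Chebyshev's inequality then bounds the fraction
// of that region whose stored depth is >= the receiver depth t, used as the lit fraction.
float vsmVisibility(float meanZ, float meanZSq, float t, float minVariance)
{
    if (t <= meanZ)
        return 1.0f;                                  // receiver in front of the mean occluder
    float variance = std::max(meanZSq - meanZ * meanZ, minVariance);
    float d = t - meanZ;
    return variance / (variance + d * d);             // upper bound on P(z >= t)
}
```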

To me, texture filtering on GPUs looks good only when there is no magnification. Granted, this is a universal problem that applies to raytracing as much as anything else, but the thing is that for all you can complain about the problems that occur in filtering with raytracing, the problems that exist are universally unavoidable.
I'm talking about minification though, since magnification has to be avoided in shadow maps by using a good projection. With ray traced shadows there's not a really good way to do this other than super-sampling the framebuffer. With shadow maps you can sample the filter region. That's all I'm saying, and I believe it trivially counts as an advantage of shadow mapping over ray traced shadows.

If you make use of a Z-prepass, then VSMs are usually only faster than PCF if the total shadow map resolution (including splits and multiple maps) is less than half that of the screen resolution -- which in turn means you're guaranteed not to have sufficient resolution anyway.
That's not true at all, since with PCF you either have to do random sampling (and take a LOT of samples), or sample the whole filter region, which could even be the whole shadow map! "Let's look at a 4x4 neighbourhood around our sample" is NOT real shadow filtering. I'm talking about real PCF/VSM here, not the "edge softening" implementations of late.
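
To spell out what "real PCF" means here, a sketch that averages the depth test over the entire projected footprint rather than a fixed neighbourhood; the flat depth array and the bias value are illustrative only:

```cpp
#include <algorithm>
#include <vector>

// "Real" PCF: average the depth comparison over the *entire* projected filter
// footprint in the shadow map, however large it is, rather than a fixed 4x4
// neighbourhood. depthMap is a square mapSize x mapSize depth buffer, row-major.
float percentageCloserFilter(const std::vector<float>& depthMap, int mapSize,
                             int x0, int y0, int x1, int y1,   // footprint in texels
                             float receiverDepth, float bias)
{
    float lit = 0.0f;
    int taps = 0;
    for (int y = std::max(y0, 0); y <= std::min(y1, mapSize - 1); ++y)
        for (int x = std::max(x0, 0); x <= std::min(x1, mapSize - 1); ++x) {
            lit += (depthMap[y * mapSize + x] + bias >= receiverDepth) ? 1.0f : 0.0f;
            ++taps;
        }
    // Tap count scales with the footprint; under heavy minification it can approach
    // the whole map, hence either random sampling or pre-filtering (VSM) instead.
    return taps > 0 ? lit / float(taps) : 1.0f;
}
```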

Granted, we still use VSMs whenever possible because it's visually quite simply better than PCF, and the artifacts are more combatable when you expose stuff to the artists, but "accelerator" it is not.
It definitely can be an accelerator. For instance SAVSM can conservatively cull out entire regions of light/shadow, and do PCF only on penumbrae regions. This implementation has actually been used in an offline renderer to "accelerate" PCF.
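
A rough sketch of that accelerator idea, reusing the vsmVisibility() and percentageCloserFilter() sketches from earlier in the thread; the 1% thresholds are made-up knobs, and a strictly conservative "fully lit" test would need extra data such as a min-depth map:

```cpp
#include <vector>

// VSM as an "accelerator": classify with the cheap pre-filtered bound first and
// only fall back to brute-force PCF where the answer is ambiguous (penumbrae).
float shadowWithVsmCull(float meanZ, float meanZSq, float receiverDepth,
                        const std::vector<float>& depthMap, int mapSize,
                        int x0, int y0, int x1, int y1)
{
    float bound = vsmVisibility(meanZ, meanZSq, receiverDepth, 1e-4f);
    if (bound <= 0.01f)
        return 0.0f;   // the upper bound says essentially fully shadowed: skip PCF
    if (bound >= 0.99f)
        return 1.0f;   // treat as fully lit (a strictly conservative lit test
                       // would need extra data, e.g. a min-depth map)
    return percentageCloserFilter(depthMap, mapSize, x0, y0, x1, y1,
                                  receiverDepth, 1e-3f);   // penumbra: exact answer
}
```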

The light representation is actually significant, whereas with rasterization you're "pretending" when you filter shadowmaps.
You keep saying that, but the filtering is actually computing meaningful values, so I'm not sure what you mean by "pretending". VSMs even take into account the receiver geometry which is entirely how they avoid the biasing issues that plague PCF.

You need to be concerned both with variance in lines of sight from the light to the receiver (since a pixel is not infinitesimal) and from the receiver to the light (since a light isn't necessarily infinitesimal).
Sure, but I'm not even talking about soft shadows here, so you can consider the light to be infinitesimal if you wish. At some point you need to know "what percentage of the area covered by this framebuffer pixel is in shadow?". For ray traced shadows, I know of no other way to compute this value than to super-sample the primary ray. PCF only has to read the samples from the projected area in the shadow map (random sampling or otherwise), but doesn't actually take into account that the receiver geometry may be non-constant over the relevant filter area. VSM actually does take the latter into account, although it will produce a potentially crude approximation in complex cases.

Now once you get into soft shadows land, typical shadow maps become quite inadequate for even simple cases. That said, rendering hundreds of shadow maps (super-sampling the light area) may still be faster than shooting hundreds of shadow rays...
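
For comparison's sake, the "hundreds of shadow rays" side looks roughly like the sketch below -- point-sampling a rectangular area light and shooting one shadow ray per sample; the shadow map equivalent replaces each light sample with a rendered map. Scene, Ray, Vec3, makeRay and occluded() are hypothetical placeholders:

```cpp
#include <random>

// Fraction of a rectangular area light visible from a shading point: pick points
// on the light area and shoot one shadow ray per sample. Noisy unless the sample
// count is large, which is exactly why soft shadows get expensive either way.
float areaLightVisibility(const Scene& scene, const Vec3& shadingPoint,
                          const Vec3& lightCorner, const Vec3& lightEdgeU,
                          const Vec3& lightEdgeV, int numSamples, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    float visible = 0.0f;
    for (int i = 0; i < numSamples; ++i) {
        Vec3 lightPoint = lightCorner + uniform(rng) * lightEdgeU
                                      + uniform(rng) * lightEdgeV;
        Ray shadowRay = makeRay(shadingPoint, lightPoint - shadingPoint);
        if (!scene.occluded(shadowRay))
            visible += 1.0f;
    }
    return visible / float(numSamples);
}
```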
 
I'm talking about minification though, since magnification has to be avoided in shadow maps by using a good projection. With ray traced shadows there's not a really good way to do this other than super-sampling the framebuffer. With shadow maps you can sample the filter region. That's all I'm saying, and I believe it trivially counts as an advantage of shadow mapping over ray traced shadows.
Well, minification still has problems with shadow maps if you pre-filter the map itself (vanilla VSMs). Since you run into the issue of fixed-size filter widths, you do get some artifacts that occur as a result of adjacent pixels projecting to samples more than 1 filter width apart. Extreme minification on textures in general (i.e. not enough miplevels) has problems as well because the filtering hardware doesn't consider enough samples in the process. It just works with the local quad(s), which is just plain dumb, but obviously done for a reason.

That's not true at all, since with PCF you either have to do random sampling (and take a LOT of samples), or sample the whole filter region, which could even be the whole shadow map! "Let's look at a 4x4 neighbourhood around our sample" is NOT real shadow filtering. I'm talking about real PCF/VSM here, not the "edge softening" implementations of late.
Okay, well that is a different story than I was referring to then. Though, for something like that, I'd look at hierarchical approaches relying on the mip-chain to store sample region info... the max number of samples you might ever need to take in those cases is known a priori.

The things I was referring to were the cases where you have a predetermined number of samples in either case, but with a Z-prepass, you're only concerned with the number of pixels on screen, and if it's about the same as your shadowmap resolution, VSMs will probably not perform quite as well for the same filter size. Up to a point, taking advantage of the linear separability of whatever filter you use can hurt performance as well because you're basically burning up twice the fillrate to do it over two passes (trading texel fillrate for pixel fillrate).
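
For reference, the separability trick being referred to is the standard two-pass arrangement sketched below (horizontal then vertical); it cuts the taps per texel from (2r+1)^2 to 2(2r+1), but the intermediate image costs a second full-resolution pass of fill, which is the trade-off being described:

```cpp
#include <algorithm>
#include <vector>

// Clamp-to-edge fetch for a single-channel image stored row-major.
static float texel(const std::vector<float>& img, int w, int h, int x, int y)
{
    x = std::max(0, std::min(x, w - 1));
    y = std::max(0, std::min(y, h - 1));
    return img[y * w + x];
}

// Separable box blur: a (2r+1)x(2r+1) filter done as two 1D passes. Arithmetic
// per texel drops, but the intermediate image has to be written and read back
// once more -- the extra fill mentioned above.
void separableBoxBlur(std::vector<float>& img, int w, int h, int r)
{
    std::vector<float> tmp(img.size());
    for (int y = 0; y < h; ++y)              // pass 1: horizontal
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int k = -r; k <= r; ++k) sum += texel(img, w, h, x + k, y);
            tmp[y * w + x] = sum / float(2 * r + 1);
        }
    for (int y = 0; y < h; ++y)              // pass 2: vertical
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int k = -r; k <= r; ++k) sum += texel(tmp, w, h, x, y + k);
            img[y * w + x] = sum / float(2 * r + 1);
        }
}
```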

It definitely can be an accelerator. For instance SAVSM can conservatively cull out entire regions of light/shadow, and do PCF only on penumbrae regions. This implementation has actually been used in an offline renderer to "accelerate" PCF.
Well, offline is fine, though in realtime apps the cost of generating the SATs themselves is the big one. The rest is comparatively small (also, I wouldn't shrug off how effective SATs are at utterly swallowing numerical precision).

You keep saying that, but the filtering is actually computing meaningful values, so I'm not sure what you mean by "pretending". VSMs even take into account the receiver geometry which is entirely how they avoid the biasing issues that plague PCF.
Well, when I was referring to "pretending", I still had true soft shadows in mind, which means thinking not just about the receiver geometry but about the source geometry. "Pretending" was referring to pretending that thinking only about receiver geometry is enough.

Which is not to say that shadowmap filtering is a lost cause or anything -- I wouldn't have leapt into Taylor Series-land or variance-of-source stuff if I hadn't had some interest in the matter. Just that it'll never totally be enough. That and we all need some Belgian beer once in a while :D.
 
That's definitely true, but filtering is an entirely orthogonal problem to finding a good projection. Both are 100% necessary though. Even though ray traced shadows avoid the projection problem, they still have to be filtered properly, which is difficult to do efficiently.
I don't think proper is a good word for it ... my idea of proper is no filtering and using supersampling to clear up the mess.
With shadow maps you can sample the filter region. That's all I'm saying, and I believe it trivially counts as an advantage of shadow mapping over ray traced shadows.
AFAICS anything you can do with samples from a shadow buffer you can do better with samples which you can pick the exact location for in image space, you don't even really need to sample the shadow rays at the same rate per pixel as the primary rays.

"Real PCF" with all the shadow buffer samples in the pixels region is the exact same thing as supersampling in raytracing with a really poorly picked set of shadow rays. The advantage is speed.
 
Well, when I was referring to "pretending", I still had true soft shadows in mind, which means thinking not just about the receiver geometry but about the source geometry. "Pretending" was referring to pretending that thinking only about receiver geometry is enough.

Which is not to say that shadowmap filtering is a lost cause or anything -- I wouldn't have leapt into Taylor Series-land or variance-of-source stuff if I hadn't had some interest in the matter. Just that it'll never totally be enough. That and we all need some Belgian beer once in a while :D.
Oh, I see. When you were criticising shadow map filtering, you were talking about its applicability to realistic soft shadows. AndyTX and I were talking about eliminating aliasing. If you use one ray per pixel, you'll get aliased shadows.

Soft shadows from fairly big light source are indeed a tough problem. It may be that approximating them as many point lights (which is the raytracing solution) is the best way to handle it. It's tough to say whether raytracing can be faster than rasterization in the long term here, but I'll admit that there are culling/acceleration opportunities when using the light as the innermost loop.

The most promising stuff I've seen is from Assarson and Akenine-Moller, but that slows to a crawl with high geometric complexity (it's dependent on silhouettes, which scale with the square root of poly count). Of course drawing 100 shadow maps scales even worse. Raytracing may scale the best here, but the crossover would involve a lot of rays per pixel and is thus a looong time away from realtime.
 
I don't think proper is a good word for it ... my idea of proper is no filtering and using supersampling to clear up the mess.
Well you can always make that argument, but we tend to avoid super-sampling the frame-buffer due to its rather significant performance implications. It should also be noted that we can never super-sample enough (there are plenty of infinite frequencies in the scene), although there may be a practical limit.

So even though in a "pure and holy" world we wouldn't need filtering, even ray tracers tend to use mipmapping/SAT combined with ray differentials to get any reasonable amount of quality in finite time :).

AFAICS anything you can do with samples from a shadow buffer you can do better with samples which you can pick the exact location for in image space, you don't even really need to sample the shadow rays at the same rate per pixel as the primary rays.
That's certainly true for PCF, as you just pick rays that originate on a plane parallel to the light projection plane and shoot off towards the light.

For something like VSM with pre-filtering (mipmapping) it's a little less clear what to do "equivalently" for ray-tracing. In particular you have to take the receiver geometry into account, which isn't something that's easy to do without super-sampling primary rays. The advantage of VSMs is that they store a rough representation of the receiver distribution in the shadow map itself. That's not easy to attain without shooting rays from the light towards the receiver, which defeats any advantages of ray-traced shadows.

And I agree with Mintmaster in that for robust soft shadows, ray-tracing may well be the best solution. However for *filtering*, I think shadow maps currently have a huge lead, in several areas (not just performance).
 
And I agree with Mintmaster in that for robust soft shadows, ray-tracing may well be the best solution. However for *filtering*, I think shadow maps currently have a huge lead, in several areas (not just performance).
It seems like the only time you and I conclude that raytracing's the answer is when a characteristic of lighting needs to be robust and requires tens or hundreds of rays per pixel to look good.

I think the demise of rasterization is not quite as imminent as Intel would like us to believe. 10 billion rays per second is a little more than a few years off...
 
What about development costs?

I see it's been a week since anybody posted anything in this thread, but maybe some of you are still monitoring it. Hi, btw. This is my first post here.

I read through most of the previous posts including the original article (took a while), and I see a lot of arguments about whether real-time raytracing is going to take over the industry. The consensus is that this isn't going to happen anytime soon: the coherency of rasterization is hard to beat until you get to secondary rays, but those can be faked convincingly. I agree with all that.

But here's something I haven't seen discussed in regard to faking secondary rays: how much artist/programmer time does wrangling the various fakes eat up? Secondary rays are expensive performance-wise, but they don't change the design of the raytracer -- secondary rays are intersected against the scene just the same as primary rays. Does that elegance translate into tangible benefits for developers of games and game engines? Consider the setup times for "real" 1-bounce indirect diffuse lighting in a raytracer vs. faking it with lots of local lights, shadow maps, and approximate occlusion. The physically based approach has much lower performance, but it's easy for an artist to set up the scene by placing a couple of area lights. (I know that no current interactive raytracer can do 1-bounce diffuse without running on a big machine; this is hypothetical.)
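
To make the "lots of local lights" fake concrete, here's a rough instant-radiosity-flavored sketch of the idea: trace some rays from the primary light, drop a small virtual point light wherever one lands, and let the ordinary direct-lighting path (shadow maps and all) shade with them. Scene, Ray, Hit, Vec3, Light and the API calls are hypothetical placeholders, not a real engine:

```cpp
#include <vector>

// One-bounce indirect lighting faked with virtual point lights (VPLs): each VPL
// then gets shaded/shadowed like an ordinary light, e.g. updating only a handful
// of their shadow maps per frame and amortising the rest over time.
struct VirtualPointLight { Vec3 position; Vec3 normal; Vec3 flux; };

std::vector<VirtualPointLight> generateVpls(const Scene& scene,
                                            const Light& primaryLight,
                                            int numVpls)
{
    std::vector<VirtualPointLight> vpls;
    vpls.reserve(numVpls);
    for (int i = 0; i < numVpls; ++i) {
        Ray ray = primaryLight.sampleRay(i, numVpls);   // ray leaving the light
        Hit hit;
        if (!scene.intersect(ray, hit))
            continue;
        // The energy arriving at this surface becomes the flux of a new point light.
        Vec3 flux = primaryLight.power() * hit.albedo * (1.0f / float(numVpls));
        vpls.push_back({hit.position, hit.normal, flux});
    }
    return vpls;
}
```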

Faking it gives much higher performance, but an artist has to place some local bounce lights, and maybe a programmer has to tell the graphics engine to do something special. That increases development costs, and higher development costs favor large studios. Anybody have thoughts on that? Would raytracing actually lower development costs?
 
That is a very dubious argument I've seen Intel mention several times. It seems rather compelling at first sight, but it rightfully loses its charm when reformulated properly: "3D rendering software with lower image quality per unit of performance tends to be less expensive to develop."

Well, duh. Getting a PS3 game to look as good overall as a PS2 game, for example, is much less expensive than it was for the original PS2 game, as the engine can be less optimized and the art can be more about sheer brute force.

This isn't exactly the same thing, and what we should really be looking at here is the "maximum image quality attainable for a given piece of hardware for a given budget and design document". However, it should be easy to see that the two arguments are highly similar and the same principles apply in both cases.

Raytracing promises some different trade-offs than that of today's cost-cutting measures, but that doesn't mean those trade-offs are actually much better. In fact, they aren't, IMO - but that's slightly subjective and we already had that debate indirectly in this very thread, so let's not reiterate it.

P.S.: Welcome to the forum! :)
 
GI is actually a LOT harder to art direct because you can't easily tell light not to go here or look more saturated there. Even if it's rendered in (near) real time which can significantly shorten the iteration overhead, it is somewhat less intuitive to work with and can even be a lot more complicated, given how the actual lighting implementations will probably be quite bare-bone because of performance requirements.
 
GI is actually a LOT harder to art direct because you can't easily tell light not to go here or look more saturated there.
I don't know that I'd say that, firstly because it's not anything new to artists and art directors. It's really quite common that lighting artists already light scenes with GI in mind (just not dynamic GI). And secondly because the problems that lighting artists have are typically not ones of keeping a light from reaching a certain area, but ones of making sure a light actually does reach an area. I can't recall the last time I've heard anybody complain that a light reaches too large an area. It tends to be -- "we need more fill lights to cover this area," "we can't get enough shadow-casting lights in here." Similarly, localizing saturation shouldn't really be any different for raytracing-based GI, since there is no rule that direct lighting and indirect lighting samples must follow the same rules (assuming "correctness" is not a goal here).
 
Fill lights are almost as necessary with GI as without, and shadow-casting lights are obviously not part of GI at all, as they're direct illumination.
Ever been to a movie shoot? A lot of extra lights, bounce cards and trickery is always required to create mood, emphasize parts of the scene, separate actors from backgrounds and so on - even though mother nature gives us high quality GI.

So even if we have all the realtime raytracing and GI that we can use, we're still going to have to put in extra lights, create shadows and so on -- make it functional, in short. This is where GI becomes a bit more problematic. For example, you do get bounced light automatically, but its strength or color or decay rate isn't what you'd really need. It's quite counterintuitive to then go and try to adjust a main light source, affecting the entire scene, or to try to do something with the surfaces that bounce the light, again possibly a destructive step. Also, our own eyes have an automatic adjustment of exposure, so we can easily deal with lighting conditions that, when realistically recreated, would create overbright or underlit areas in an in-game setting. With current lighting systems it's a lot easier to adjust each component, be it the sunlight, an ambient cube or anything else, compared to trying to tweak a GI solution.

I could go on and list other examples but I hope I've already made my point clear. GI will present a lot of wins - but there are many new problems and challenges as well, and people expecting a simple and easy solution are in for a lot of disappointment.
 