Carmack on Ray Tracing & Rasterization

1280x544? I'm guessing you guys are doing cutscenes for a 360 title?

No, it's a trailer; it's very rare for a game to feature CG cutscenes. Halo 4 was really unique in that.

The current trend with Arnold is to try and render as much as possible in a single pass. Splitting passes out into a lot of elements is the old style, done mostly for memory concerns; these days an objectid layer will take care of all the old-style pass info (for color correction).
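For what it's worth, a minimal sketch of that idea, assuming the beauty render and the objectid AOV have been loaded as numpy arrays (the function and names here are hypothetical, not any studio's pipeline code): grading a single object through an ID matte instead of rendering it out as a separate pass.

```python
import numpy as np

def grade_object(beauty, object_ids, target_id, gain):
    """Apply a simple color-correct (per-channel gain) to one object in
    the beauty render, using an integer object-id AOV as the matte.

    beauty     : float32 array of shape (H, W, 3), the single-pass render
    object_ids : int32 array of shape (H, W), the objectid layer
    target_id  : id of the object to grade
    gain       : per-channel multiplier, e.g. (1.1, 1.0, 0.9)
    """
    matte = (object_ids == target_id)[..., None]        # (H, W, 1) boolean
    graded = beauty * np.asarray(gain, dtype=np.float32)
    # Composite: graded pixels where the matte is set, original elsewhere.
    return np.where(matte, graded, beauty)
```

In practice you'd want a filtered, coverage-aware ID matte rather than a hard integer comparison to avoid aliased edges, but the principle is the same.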

I've long lost track of what our compers do, but I do know that there's some quite high-end stuff going on. One of my pals has been on and off working here, at Framestore, MPC, and now he's back at Weta, where he has been at least twice before, and that's his opinion of our approach ;)

So there's probably a reason why we work like this.
Same for the way jobs are distributed; we actually have a lot of custom code, even in the scheduler and dispatcher. It's also all hooked up into the production system: you can start any kind of render or even sim job without touching Maya or Nuke, and even do some level of adjustment to the layers and such. The pipeline has been completely rewritten once since I've been here, and has kept evolving constantly since that rewrite.
 
The problem that Laa-Yosh and I run into is trying to create a shadow map that will encompass the whole scene. You will *not* be able to compute a high enough shadow resolution in a reasonable amount of time or memory footprint. Also, shadow maps are pretty time-consuming to compute when hair or semi-transparent geometry is involved, where you need to create deep shadow maps.

I remember I did some really crazy trickery there, like multiple shadow cameras with different FOVs, comping together the output of several shadow cameras, and even going as far as re-using shadows for completely different lights that had a totally different position and angle ;)
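A rough sketch of the comping end of that trick, assuming each shadow camera's output has already been rendered to its own occlusion buffer (the function and buffer layout here are hypothetical): the buffers just get merged per pixel, e.g. by keeping the darkest value.

```python
import numpy as np

def combine_shadow_buffers(shadow_buffers):
    """Combine occlusion buffers from several shadow cameras into one.

    shadow_buffers : list of float32 arrays of shape (H, W), where
                     1.0 = fully lit and 0.0 = fully shadowed.
    Taking the per-pixel minimum keeps the strongest occlusion seen by
    any camera, which is the usual way to merge overlapping coverage.
    """
    return np.minimum.reduce(shadow_buffers)
```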

This was the movie:
http://www.youtube.com/watch?v=iUKsb03Rnf0

Back then we were a very small shop, so once I was done with the modeling I moved on to other tasks. Now we're like 10x bigger and I only work on characters ;)
 
No one would ever only cast 1 shadow ray in ray-tracing. That's just asking for trouble.
Indeed, but many shadow rays per pixel puts it out of the realm of reasonable for real time. There's the rub :)

The problem that Laa-Yosh and I run into is trying to create a shadow map that will encompass the whole scene.
Right, but this is just *pure* legacy, in that the tools you're using have not been updated to use any reasonably modern shadow mapping implementation (i.e. automatic cascades, sample distribution analysis, etc.). There should be no need to play with shadow maps manually, ever, these days. If I can consistently achieve sub-pixel shadow map resolution everywhere in a 1080p framebuffer with 4 automatically placed 2k shadow maps, in real time at 60Hz, there's really no excuse in offline except for dated implementations that haven't been touched in a decade.
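For reference, the "automatic cascades" bit boils down to the practical split scheme from parallel-split/cascaded shadow maps. A minimal sketch of the split computation, using the textbook formulation rather than any particular engine's code (the lambda parameter blends the logarithmic and uniform split distributions):

```python
def cascade_splits(near, far, num_cascades, lam=0.75):
    """Compute view-space split distances for cascaded shadow maps.

    Blends the logarithmic split scheme (better resolution distribution
    near the camera) with the uniform one (more stable), per the usual
    practical-split formulation:
        split_i = lam * log_i + (1 - lam) * uniform_i
    """
    splits = []
    for i in range(1, num_cascades + 1):
        f = i / num_cascades
        log_split = near * (far / near) ** f      # logarithmic scheme
        uni_split = near + (far - near) * f       # uniform scheme
        splits.append(lam * log_split + (1.0 - lam) * uni_split)
    return splits

# Example: 4 cascades over a 0.1..1000 unit view frustum.
print(cascade_splits(0.1, 1000.0, 4))
```

Each cascade's shadow camera is then fit to its slice of the frustum automatically, which is what removes the manual placement work.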

Also, shadow maps are pretty time-consuming to compute when hair or semi-transparent geometry is involved, where you need to create deep shadow maps.
Sure but again, deep shadow maps are borderline legacy at this stage compared to newer stuff like Adaptive Volumetric Shadow Maps. Too much stuff in offline is just done way too brute force, which is sort of ironic since in offline you have a lot more time to get clever, but it doesn't really closely track the research and there's a lot of built-up experience around one way of doing something or another.

Now smoke and hair and volumetric stuff is always going to cost some time - even in a ray tracer. But it can be a lot better than brute force deep shadow maps.

Again, I'm not claiming ray tracing isn't useful - it certainly is. And it's definitely more useful in offline where you're happily willing to sacrifice some computation time for masses of production/art time, but it's less of a clear win in real-time at the moment. It may get to the same stage where art time - not performance - is the bottleneck, but the rasterization-based techniques are a moving target too. As I noted, shadow maps have come a long way since the 1980's too, although seemingly not in offline renderers...
 
There's no need for shadow maps to get better. On the occasions when they're used, they work perfectly well, but their use is incredibly limited these days. I haven't used one in years.

Regarding the memory question, rendering actually doesn't use that much. Most of the memory is taken up by the scene itself: all of the meshes and textures and so forth. On my setup (LightWave's native renderer), it's all about pure brute force. The only thing that gets a workout is my i7. I've got 16GB of RAM in my PC here, and I rarely use even half of it, even in scenes with hundreds of objects and millions of polys.

I do occasionally render out passes like I used to, but only in cases where I need the extra levels of control. These days, the low-res renders can be done quickly enough, even with bounced light, that I can get a pretty good idea right in LW of what I need to tweak, and then just crank up the settings for the high-res render. Stereoscopic rendering is actually a pain in the rear right now because it doesn't work well with post-processing motion blur, which means I'm having to do more motion blur in the renderer, which, as Laa-Yosh mentioned, cranks up the render times to eliminate noise. But, on the flip side, the blur is also much more accurate.
 
Right, but this is just *pure* legacy, in that the tools you're using have not been updated to use any reasonably modern shadow mapping implementation (i.e. automatic cascades, sample distribution analysis, etc.). There should be no need to play with shadow maps manually, ever, these days.

I disagree. It's not the tools that are giving us the limitations, it's the renderer. We have automatic cascades in our applications at scene export, but sometimes we don't WANT that because of the strict bounding rules. You'd still want more resolution for certain objects than others. You could use deep shadow maps, but their accuracy goes out the window with a non-linear projection the further away your light is from the object, because the renderer assumes that the further away it is, the less important it is.

If I can consistently achieve sub-pixel shadow map resolution everywhere in a 1080p framebuffer with 4 automatically placed 2k shadow maps, in real time at 60Hz, there's really no excuse in offline except for dated implementations that haven't been touched in a decade.

2k shadow maps won't eliminate aliasing in volumes or assets such as feather/fur (which very few game studios even try implementing). Your solution is great for non-volume objects that don't require 3D deep shadow maps.

Too much stuff in offline is just done way too brute force, which is sort of ironic since in offline you have a lot more time to get clever, but it doesn't really closely track the research and there's a lot of built-up experience around one way of doing something or another.

That is true. But brute force is really what everyone wants. :LOL: Otherwise, we'd still be using fixed-function vertex buffers from DX9 and not DirectCompute. The more you want to approach realism, the more you will bias towards brute force programming IMO.


Again, I'm not claiming ray tracing isn't useful - it certainly is. And it's definitely more useful in offline where you're happily willing to sacrifice some computation time for masses of production/art time, but it's less of a clear win in real-time at the moment. It may get to the same stage where art time - not performance - is the bottleneck, but the rasterization-based techniques are a moving target too. As I noted, shadow maps have come a long way since the 1980's too, although seemingly not in offline renderers...

How would you implement area lights in real-time? And without creating a light rig that spawns several point lights to create your shape, only to throw it into a deferred render target? How about transparent shadows from a light refracting in something like ice? I have yet to see multiple shadow intensities that produce different soft shadows based on the light falloff, penumbra, and umbra settings. Every shadow overlaps other shadows at the exact same intensity no matter how far the light is from the object, instead of appearing darker when the light is close and fading out softly with distance. Games use Photo 1; RT uses Photo 2.

[Image: soft_shadows.jpg]


I agree though that while it seems the offline render world is in the stone age, we really have time to brute force things. That's why they look so good. :LOL:
 
You'd still want more resolution for certain objects than others. You could use deep shadow maps, but their accuracy goes out the window with a non-linear projection the further away your light is from the object, because the renderer assumes that the further away it is, the less important it is.
When I say "tools" I mean the entire offline rendering pipeline. The whole "want more resolution for certain objects" shouldn't even be relevant because modern shadow mapping techniques can pretty easily attain subpixel resolution, that's my point. If the ones in your renderer/tools don't attain that, there's a problem.

2k shadow maps won't eliminate aliasing in volumes or assets such as feather/fur (which very few game studios even try implementing).
That's the geometry itself aliasing, not the shadow map projection. "Should be using LOD" is the high level answer, but in practice you can special-case these situations and brute force them separately in offline, since they really are not the same as the rest of the rendering workload.

The more you want to approach realism, the more you will bias towards brute force programming IMO.
When I say "brute force" here, I mean as an alternative to a better algorithm that produces the same results, not as an alternative to approximations. Certainly I agree that there's less reason to make approximations offline than real-time, and indeed sometimes real-time research does migrate into offline, but I'd like to see bidirectional use more often than happens currently. It's possible the economic constraints in the two spaces are just too different though.

How would you implement area lights in real-time?
You need multi-layered shadow maps at least of course. But let's not pretend area shadows are going to be cheap enough for real-time in ray tracers either! With area lights it's definitely a toss-up over which is ultimately going to be better if/when they come to real-time rendering commonly, but we're quite a ways off there.

How about transparent shadows from a light refracting in something like ice?
That's caustics and there are rasterization-based ways to do it (see Chris Wyman's work), but indeed you end up with fairly incoherent rays depending on the object. Same situation as above really... no one is saying rasterization can solve all the same things that ray tracing can, just that conventional hard shadows it does just fine with. Thus ray traced shadows only really get interesting when you're using a *lot* of rays... i.e. still not practical for a long time yet.

I have yet to see multiple shadow intensities that produce different soft shadows based on the light falloff, penumbra, and umbra settings.
There are a few games that use filter-based plausible approximations, but fundamentally you don't have enough data to do it artifact-free in a single-layer shadow map of course. Same problem as depth of field, with an indirection thrown in to make it even more fun.

I agree though that while it seems the offline render world is in the stone age, we really have time to brute force things. That's why they look so good. :LOL:
No doubt, and I'm not criticising the results or the method. I'm just explaining why your constraints and conclusions are not necessarily right for real-time.
 
That's the geometry itself aliasing, not the shadow map projection. "Should be using LOD" is the high level answer, but in practice you can special-case these situations and brute force them separately in offline, since they really are not the same as the rest of the rendering workload.

Well, it's relevant because we want to use only one light source, but it must cast shadows for everything in the scene no matter what the material is made of. If our cone is very wide, then the deep shadow map might miss a lot of information when stepping through a small volume trail that only takes up 5% of the shadow map's pixel area. We ended up having to rig a light specifically for that smoke trail, with the light really close to it, to get the sampling density we needed when stepping through that volume.



When I say "brute force" here, I mean as an alternative to a better algorithm that produces the same results, not as an alternative to approximations.

Gotcha!

You need multi-layered shadow maps at least of course. But let's not pretend area shadows are going to be cheap enough for real-time in ray tracers either! With area lights it's definitely a toss-up over which is ultimately going to be better if/when they come to real-time rendering commonly, but we're quite a ways off there.

Not at all! I agree.

That's caustics and there are rasterization-based ways to do it (see Chris Wyman's work), but indeed you end up with fairly incoherent rays depending on the object.

Yeah, I didn't mean to misspeak. I wasn't talking about caustics per se, more the soft shadow that happens from an object casting a shadow even though it's semi-transparent.

[Image: wHwbJ.jpg]


There are a few games that use filter-based plausible approximations, but fundamentally you don't have enough data to do it artifact-free in a single-layer shadow map of course. Same problem as depth of field, with an indirection thrown in to make it even more fun.

Out of curiosity, which games? I'd like to take a look at them.

Lastly, take a look at this vid in Houdini. How much shadow map resolution do you think would be needed to capture that kind of detail in that volume?
 
I want this in realtime dagnamit!!

[Image: title.jpg]
Well, that's the thing... you can see that in real time. The difference is that what you'll see is just an approximation or simulation of actual reflections and bounced light and soft shadows and the like.

For a simple scene like that, visually, you wouldn't really see much difference unless you really knew what you were looking for. That's how devs like Crytek can do what they do.. real time reflections and area lighting look more or less the same as what we do in offline renders, but the execution is very different. Smoke and mirrors, so to speak. But you can do some pretty impressive stuff with smoke and mirrors. I'm constantly amazed by seeing things in real time games that take a thousand times longer to render offline and look more or less the same.
 
If our cone is very wide, then the deep shadow map might miss a lot of information when stepping through a small volume trail that only takes up 5% of the shadow map's pixel area.
I haven't tried it, but I really do think something like Adaptive Volumetric Shadow Maps would perform a whole lot better in cases like that. The point is that the actual attenuation function in light-space Z is not really that complex, it's just high-frequency in a few places. Thus you want an adaptively compressed visibility function (à la AVSM), *not* a sampled or ray-marched one.
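To make that concrete, here's a hand-wavy sketch of the compression idea, not the actual AVSM implementation: keep a fixed node budget for the piecewise-linear transmittance curve and greedily drop whichever interior node changes the area under the curve the least.

```python
def compress_visibility(nodes, max_nodes):
    """Greedily simplify a piecewise-linear transmittance curve.

    nodes : list of (depth, transmittance) pairs sorted by depth, as
            accumulated while marching/splatting occluders along the ray.
    Repeatedly removes the interior node whose removal perturbs the
    area under the curve the least -- the fixed-memory, adaptively
    compressed visibility function that AVSM is built around.
    """
    nodes = list(nodes)
    while len(nodes) > max_nodes and len(nodes) > 2:
        best_i, best_err = None, float("inf")
        for i in range(1, len(nodes) - 1):
            (z0, t0), (z1, t1), (z2, t2) = nodes[i - 1], nodes[i], nodes[i + 1]
            # Area of the triangle formed by the node and its neighbours:
            # how much the curve changes if this node is dropped.
            err = 0.5 * abs((z1 - z0) * (t2 - t0) - (z2 - z0) * (t1 - t0))
            if err < best_err:
                best_i, best_err = i, err
        del nodes[best_i]
    return nodes
```

The point is that a handful of well-placed nodes can represent the same attenuation that deep shadow maps spend many uniform samples on.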

Yeah, I didn't mean to misspeak. I wasn't talking about caustics per se, more the soft shadow that happens from an object casting a shadow even though it's semi-transparent.
That's certainly doable with AVSM too. Sounds like mostly what you're having issues with is deep shadow maps...

Out of curiosity, which games? I'd like to take a look at them.
The only one I remember off the top of my head was Hellgate: London for their characters, but that studio went out of business :S It's all stuff based off of "Percentage-Closer Soft Shadows", so a Google search might turn up some more.
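For reference, the core of PCSS is a two-step estimate: find the average depth of the samples that actually block the receiver, then size the PCF filter with a similar-triangles penumbra estimate. A minimal sketch for one shaded point (the function and its inputs are illustrative, not from any shipping implementation):

```python
import numpy as np

def pcss_penumbra_width(receiver_depth, blocker_depths, light_size):
    """Estimate a PCSS filter radius for one shaded point.

    Step 1: average the depths of the shadow-map samples that block
    the receiver. Step 2: the similar-triangles penumbra estimate,
        w_penumbra = (d_receiver - d_blocker) / d_blocker * light_size,
    which then drives the PCF kernel radius.
    """
    blockers = blocker_depths[blocker_depths < receiver_depth]
    if blockers.size == 0:
        return 0.0                      # fully lit, no filtering needed
    avg_blocker = float(blockers.mean())
    return (receiver_depth - avg_blocker) / avg_blocker * light_size

# Example: receiver at depth 10, four shadow-map samples, light size 2.
samples = np.array([4.0, 5.0, 12.0, 11.0])   # two samples block, two don't
print(pcss_penumbra_width(10.0, samples, 2.0))
```

This is exactly the "plausible approximation" trade-off: the penumbra widens with blocker-receiver distance as you'd expect, but it's derived from a single-layer shadow map, so it can't be artifact-free.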

Lastly, take a look at this vid in Houdini. How much shadow map resolution do you think would be needed to capture that kind of detail in that volume?
See I don't think that sort of detail would actually require a massive amount of information to store... again, you just want to represent a compressed attenuation function rather than sample it, but this is of course all hand-waving.

With respect to the hair picture, if you neglect the GI components, NVIDIA had a hair demo a few years back that actually produced a remarkably convincing approximation of that sort of result...

Just to summarize my position again... ray tracing is obviously necessary for incoherent rays, but I'm just not convinced that most shadow rays are actually incoherent enough to *need* it ultimately if performance efficiency is king (like in real-time).
 
The only one I remember off the top of my head was Hellgate: London for their characters, but that studio went out of business :S It's all stuff based off of "Percentage-Closer Soft Shadows", so a Google search might turn up some more.

I'll look it up.

With respect to the hair picture, if you neglect the GI components, NVIDIA had a hair demo a few years back that actually produced a remarkably convincing approximation of that sort of result...

But that's just it. They are always in demos and never in games. That indicates that putting that functionality in games and running at a decent clip is still impractical.

-M
 
Out of curiosity, which games? I'd like to take a look at them.

I believe the last Stalker game also experimented with compute-based shadow generation that tried to simulate how shadows soften the farther they are from the object casting them. I'm not sure if it's exactly the same as what the two of you were discussing, but I believe it is.

There's at least one or two other games, but I can't think of what they were at the moment.

Regards,
SB
 
But what about Brigade Engine? It's doing ray tracing in real time. The only downside is the lighting, which is NOT complete: it still doesn't have indirect shadows.
 