Carmack to use shadow maps in the next game?

DiGuru said:
I don't know much about it, so I am probably about to say something quite stupid. So I would appreciate it if you could tell me what this method is called and what the largest problems with it are.

Looking at shadows and such, why not just render a scene from the viewpoint of each light source, only taking the transparency layer into account and rendering the z-value only? That way, you get a z-buffer full of intensity values that you can scale down and save as a texture.
I think you could do this, but you'd need one shadow map for the scene as a whole, and one for each transparent polygon in the scene for this to be robust. It'd be a fair bit better to do some sort of projected texturing instead.
 
Chalnoth said:
DiGuru said:
I don't know much about it, so I am probably about to say something quite stupid. So I would appreciate it if you could tell me what this method is called and what the largest problems with it are.

Looking at shadows and such, why not just render a scene from the viewpoint of each light source, only taking the transparency layer into account and rendering the z-value only? That way, you get a z-buffer full of intensity values that you can scale down and save as a texture.
I think you could do this, but you'd need one shadow map for the scene as a whole, and one for each transparent polygon in the scene for this to be robust. It'd be a fair bit better to do some sort of projected texturing instead.

Thanks!

And I figured it wouldn't work very well (if at all) for light sources that are omnidirectional and in the field of view, although it might work if you keep the distance of the illumination short and render it from multiple, very wide angles. But that sounds like too many render passes and textures.

:D

I really have to do some 3D programming real soon, so that I understand it much better.


Edit: why would you need one for each transparent polygon? Can't you skin them with the textures that contain that information? Or would that require too much work per pass?
 
Well, if I understand the way you were describing the technique adequately, you'd have problems whenever you have one transparent polygon in front of another. Of course, if you can guarantee that this won't happen, you won't have this problem. Otherwise...

Regardless, I think projective texture mapping would be more useful, as it's really designed to handle this kind of scenario (a light shining through a transparent surface). With projective texture mapping you can represent a very complex transparent object, and you can simply use the normal shadow map to produce the shadows (though you'd have to not render any transparent polygons into the shadow map for this to work properly).
 
Chalnoth said:
Well, if I understand the way you were describing the technique adequately, you'd have problems whenever you have one transparent polygon in front of another. Of course, if you can guarantee that this won't happen, you won't have this problem. Otherwise...

Regardless, I think projective texture mapping would be more useful, as it's really designed to handle this kind of scenario (a light shining through a transparent surface). With projective texture mapping you can represent a very complex transparent object, and you can simply use the normal shadow map to produce the shadows (though you'd have to not render any transparent polygons into the shadow map for this to work properly).

How does that projective texture mapping work?

Isn't the biggest problem with shadow maps that you have to make them up front, so they are really only useful for static scenery? How would you make mobile objects cast shadows if you use shadow maps?
 
Btw, how does Doom 3, or another engine that can do that, calculate the illumination of omnidirectional light sources in the field of view? Calculating 6 wide-angle volumes? Or with a shader that compares the distance and normal for each pixel?


Edit: I suppose they calculate volumes as well, otherwise your shadows wouldn't be correct anymore. But in that case I really wonder how much worse the method I described would perform. And it would calculate the illumination maps in real time, if fast enough.

Edit2: And you could create global illumination by a combination of the basic intensity of the textures and one or more point sources that are a long distance away, I think.
 
DiGuru said:
How does that projective texture mapping work?
To tell you the truth, I'm not completely sure of all the details. I know what it looks like, and I'm sure that the way it's applied is to just use relatively simple math ops that would calculate the appropriate texture coordinate based upon the distance and direction from the projector.

nVidia has a couple of demos if you want to look into it in a bit more detail.

Isn't the biggest problem with shadow maps that you have to make them up front, so they are really only useful for static scenery? How would you make mobile objects cast shadows if you use shadow maps?
Nope, not at all. Generating the shadow map requires rendering a depth buffer from the point of view of each light source in the scene. You can obviously do this beforehand, but you can also do it in realtime.
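
In other words, the per-frame flow is just something like this (a rough sketch; every function name here is a made-up placeholder, not a real API):

Code:
// Each frame: first the shadow maps, then the normal pass.
for (size_t i = 0; i < lights.size(); ++i)
{
    // Depth-only render from the light's point of view. Because this
    // happens every frame, moving objects cast correct shadows too.
    setRenderTarget(shadowMaps[i]);
    renderSceneDepthOnly(lights[i].viewProjMatrix());
}
// Then render from the camera; per pixel, transform into the light's
// space and compare depth against the stored map: farther than the
// stored depth means the pixel is in shadow.
renderSceneFromCamera(shadowMaps);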
 
Chalnoth said:
DiGuru said:
Isn't the biggest problem with shadow maps that you have to make them up front, so they are really only useful for static scenery? How would you make mobile objects cast shadows if you use shadow maps?
Nope, not at all. Generating the shadow map requires rendering a depth buffer from the point of view of each light source in the scene. You can obviously do this beforehand, but you can also do it in realtime.

So, is that what I was describing? :oops:

From what I read about them, I had a different idea of shadow maps, more like pre-computed 2D maps, horizontal in the field of view, that calculate the vertical (negative) illumination modifier. I'm confused. Time for some more research.
 
DiGuru said:
Btw, how does Doom 3, or another engine that can do that, calculate the illumination of omnidirectional light sources in the field of view? Calculating 6 wide-angle volumes? Or with a shader that compares the distance and normal for each pixel?
Shadow volumes are relatively independent of the type of light used. You simply take each silhouette edge of a model (an edge shared by a light-facing triangle and a light-averted one) and extrude it to infinity (or as far as you need it to go) directly away from the light source. This gives geometry that acts as the envelope of the shadow. Then, to calculate whether something is inside or outside of a shadow, you simply count how many times you enter and exit shadow volumes.
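
In code, the extrusion step might look something like this (just a sketch; Edge, facesLight, emitQuad and EXTRUDE_DISTANCE are made-up names, and the mesh is assumed to store which two triangles share each edge):

Code:
const float EXTRUDE_DISTANCE = 1000.0f; // 'far enough'; or extrude to infinity

for (const Edge& e : mesh.edges)
{
    // A silhouette edge separates a light-facing triangle from a
    // light-averted one; only those edges get extruded.
    if (facesLight(e.tri0, lightPos) == facesLight(e.tri1, lightPos))
        continue;

    D3DXVECTOR3 dirA = e.a - lightPos, dirB = e.b - lightPos;
    D3DXVec3Normalize(&dirA, &dirA);
    D3DXVec3Normalize(&dirB, &dirB);
    // Push both endpoints directly away from the light.
    D3DXVECTOR3 aFar = e.a + dirA * EXTRUDE_DISTANCE;
    D3DXVECTOR3 bFar = e.b + dirB * EXTRUDE_DISTANCE;
    emitQuad(e.a, e.b, bFar, aFar); // one face of the shadow envelope
}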
 
Chalnoth said:
I don't quite get this argument. Rendering a cubemap is no more geometry limited than rendering a single shadow map. You just have to render more maps.

Exactly. More maps means rendering the scene more often, which means processing the geometry more often. So if you render a high-poly scene, which is vertex-limited, it will be slower with cubemaps, because you have to process the vertices more often.

Chalnoth said:
And I claim that these problems will have to be dealt with, in some fashion, for shadows to become ubiquitous in future games, as they are bound to become. It remains that the performance benefits of shadow maps over shadow volumes are too great to deny.

As I said before, the 'raytracing' argument.
Once the problems are solved, shadowmaps will be nice. But the problems aren't solved, so let's not write off shadowvolumes just yet.

Chalnoth said:
Except that it does it with low geometry counts and a limited type of game environment.

Yes, but as I have repeated many times before, the low geometry count is mostly due to bad design/implementation of its shadowvolume algorithm. It would scale to much higher polycounts if more work were offloaded to the GPU.
As for the limited type of game environment... As long as the shadowmap problems aren't solved, there will be limits on the environments in which shadowmaps can be applied as well. And since these limits don't overlap, don't write off shadowvolumes just yet.
 
Chalnoth said:
DiGuru said:
Btw, how does Doom 3, or another engine that can do that, calculate the illumination of omnidirectional light sources in the field of view? Calculating 6 wide-angle volumes? Or with a shader that compares the distance and normal for each pixel?
Shadow volumes are relatively independent of the type of light used. You simply take each silhouette edge of a model (an edge shared by a light-facing triangle and a light-averted one) and extrude it to infinity (or as far as you need it to go) directly away from the light source. This gives geometry that acts as the envelope of the shadow. Then, to calculate whether something is inside or outside of a shadow, you simply count how many times you enter and exit shadow volumes.

So all lights are treated equal. Ok.

Wouldn't it be a positive point of the method I sketched that it doesn't require new geometry?

But anyway, how are shadow maps generated differently from what I was describing? If they are at all, that is.
 
DiGuru said:
How does that projective texture mapping work?

For each vertex (or pixel, if you like) that you render, you take its coordinates in worldspace (in practice, with fixedfunction projective texturing, you would start from cameraspace and multiply by the inverse camera matrix, which effectively brings you back to worldspace). Then you apply the camera and projection matrix of the projector (in this case the lightsource; basically the matrices with which the map was originally rendered), and the resulting projected 2d coordinate is the pixel in 'screenspace' of the projected texturemap that corresponds to that vertex (or pixel) on screen.
So basically that coordinate is the (perspective) projection of the vertex (or pixel) on the viewplane of that projector.
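
Spelled out with D3DX types (worldPos, projectorView and projectorProj assumed given; this is what the fixedfunction setup does for you):

Code:
// Project a world-space position into the projector's 'screenspace'.
D3DXMATRIX viewProj = projectorView * projectorProj;
D3DXVECTOR4 clip;
D3DXVec3Transform(&clip, &worldPos, &viewProj); // treats worldPos as (x,y,z,1)
// Perspective divide: projector screenspace in [-1, 1].
float u = clip.x / clip.w;
float v = clip.y / clip.w;
// Remap to [0, 1] texture coordinates (v flipped, per D3D convention).
u = 0.5f * u + 0.5f;
v = -0.5f * v + 0.5f;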
 
Scali said:
How does that projective texture mapping work?

For each vertex (or pixel, if you like) that you render, you take its coordinates in worldspace (in practice, with fixedfunction projective texturing, you would start from cameraspace and multiply by the inverse camera matrix, which effectively brings you back to worldspace). Then you apply the camera and projection matrix of the projector (in this case the lightsource; basically the matrices with which the map was originally rendered), and the resulting projected 2d coordinate is the pixel in 'screenspace' of the projected texturemap that corresponds to that vertex (or pixel) on screen.
So basically that coordinate is the intersection of the view plane of the projector and the vector from the vertex (or pixel) to the position of the projector.

Ah, so that is like I was describing! Only I would turn the projection into an intensity texture, shaped like a heightmap as seen from the projector.

I'll have to read what you wrote a few more times to see how that would differ.

Edit: What would that projection matrix look like and how would you render it?

Edit2: In both cases you have to calculate the real world coordinates, but you could store them in the map, I guess.
 
Btw, when you use the z-values as seen from the POV of the projector, you get the intensity for free, but you would have to check whether your pixel is aligned to that map (its normal, I think), and you would have to look up the displacement of the texel and calculate the real-world coordinates to see if they match up. And if they do, you could use texture filtering to get a nice, soft border.
 
DiGuru said:
What would that projection matrix look like and how would you render it?

The projection matrix is the same type of projection matrix (aka perspective matrix) that you would use for normal rendering. It defines the field-of-view for the projector, like it normally defines the fov for the camera.
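
For example, with the D3DX helpers (the values are just placeholders for whatever cone the light needs):

Code:
// Build the projector's matrices exactly like camera matrices.
D3DXMATRIX projectorView, projectorProj;
D3DXMatrixLookAtLH(&projectorView, &lightPos, &lightTarget, &upVector);
D3DXMatrixPerspectiveFovLH(&projectorProj,
                           D3DXToRadian(60.0f), // projector fov
                           1.0f,                // aspect (square map)
                           1.0f,                // near plane
                           100.0f);             // far plane / light range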

DiGuru said:
In both cases you have to calculate the real world coordinates, but you could store them in the map, I guess.

If you do it per-vertex, you can have the fixedfunction texcoord generator take care of that, as I said before. Then you can just store camera^-1 * projectorviewmatrix * projectorperspectivematrix as a texture matrix, and enable projection of texcoords.
You can also write an equivalent vertexshader, of course.
This would still enable you to calculate the z per-pixel via a heightmap if you are bumpmapping.
You could also do it completely per-pixel, but there should generally be no need to, and it requires quite a lot of precision and processing power per-pixel. So it would only be suitable for DX9 cards.
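
In Direct3D 9 terms, that fixedfunction setup could look roughly like this (a sketch; device, cameraView, projectorView and projectorProj are assumed from the context above):

Code:
// Texture matrix: camera^-1 * projector view * projector projection.
D3DXMATRIX invCamera, texMat;
D3DXMatrixInverse(&invCamera, NULL, &cameraView);
texMat = invCamera * projectorView * projectorProj;
device->SetTransform(D3DTS_TEXTURE0, &texMat);
// Feed the camera-space vertex position in as the texcoord source...
device->SetTextureStageState(0, D3DTSS_TEXCOORDINDEX, D3DTSS_TCI_CAMERASPACEPOSITION);
// ...and let the pipeline apply the matrix plus the projective divide.
device->SetTextureStageState(0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_COUNT4 | D3DTTFF_PROJECTED);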
 
Scali said:
What would that projection matrix look like and how would you render it?

The projection matrix is the same type of projection matrix (aka perspective matrix) that you would use for normal rendering. It defines the field-of-view for the projector, like it normally defines the fov for the camera.

In both cases you have to calculate the real world coordinates, but you could store them in the map, I guess.

If you do it per-vertex, you can have the fixedfunction texcoord generator take care of that, as I said before. Then you can just store camera^-1 * projectorviewmatrix * projectorperspectivematrix as a texture matrix, and enable projection of texcoords.
You can also write an equivalent vertexshader, of course.
This would still enable you to calculate the z per-pixel via a heightmap if you are bumpmapping.
You could also do it completely per-pixel, but there should generally be no need to, and it requires quite a lot of precision and processing power per-pixel. So it would only be suitable for DX9 cards.

Thanks. I think I got it. The main difference between projected textures and the method I proposed would be that in my case you use the values of the z-buffer as seen from the projector as the intensity values for the texture. If you use a low cut-off value for the intensity and the regular texture(s) for the base color, you have a cross between projected textures, shadow maps and shadow volumes or stencil shadows, I think.

The only two bad cases I can see (insofar as I understand it) are mobile light sources that come very close to geometry, and the softening of geometry edges by texture filtering. That last effect will probably be what you want most of the time anyway, but you could check the difference in the texture "height" (intensity) to make it look as designed in all cases.

:D
 
Scali said:
Chalnoth said:
I don't quite get this argument. Rendering a cubemap is no more geometry limited than rendering a single shadow map. You just have to render more maps.
Exactly. More maps means rendering the scene more often, which means processing the geometry more often. So if you render a high-poly scene, which is vertex-limited, it will be slower with cubemaps, because you have to process the vertices more often.
Two maps are no different than one or ten in their performance implications. I really don't see how this is a particularly bad case for shadow maps. In fact, I would tend to think that shadow volumes would be a bit worse in terms of performance implications for multiple lights.
 
Scali said:
Yes, but as I repeated many times before, the low geometry count is mostly due to bad design/implementation of its shadowvolume algorithm. It would scale to much higher polycounts if more work would be offloaded to the GPU.
Or not...
Moving silhouette extraction to the GPU is a bad idea in many cases. It's not a magic bullet; in our real profiled case, we are vertex-bound in some scenes, and moving silhouette extraction back to the CPU is on the cards (CPU functions scale well for next-gen consoles).

Just because it works for your data set, don't assume that it's a rule that applies to mine or Carmack's engine.

If Carmack had done more shadow volume stuff on the GPU, the game would be much worse for me personally (I play it on a 3.0 GHz CPU with a relatively crappy GPU; it's completely GPU-bound now with CPU silhouette extraction).

So I trust Carmack made the right choice, for his engine and game. Works in my favor at least.
 
Chalnoth said:
Two maps are no different than one or ten in their performance implications. I really don't see how this is a particularly bad case for shadow maps. In fact, I would tend to think that shadow volumes would be a bit worse in terms of performance implications for multiple lights.

If we assume vertex-limited scenes, obviously the method that requires the fewest passes will be the fastest.
So then two maps will be twice as slow as one map, and ten maps will be ten times as slow as one map.
If you need 6 maps per light, you render the scene 6 times for the shadowmaps, while you would render it once for the shadowvolumes. You do the math.
Really, it is not that hard to understand.
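
To put some made-up numbers on it: with a 1-million-vertex scene and two point lights that each need a cubemap, the shadowmap passes alone cost 2 x 6 x 1M = 12M vertex transforms per frame, before the scene has even been rendered once from the camera.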
 