Carmack to use shadow maps in the next game?

As long as hardware doesn't do adaptive shadow maps (and I don't see it doing them in the next generation) I don't see shadow volumes dying out; even if you use something like Light Space Perspective Shadow Maps, you are going to run into aliasing or waste a lot of samples. Shadow silhouette maps do seem like a nice approach to lessen the problem, though.

As far as soft shadows are concerned, I think penumbra wedges make sense for the next gen. They use the same approach as volumetric fogging/lighting through multiple render targets, which will probably be used in next-gen engines anyway (at least Mr. Sweeney used that effect as one of his reasons for praising DX9, so I assume he will end up using it). Penumbra wedges work with shadow volumes, though.

For shadow maps the smoothie approach looks nice and efficient, but it is patented (by Crytek, although the guy who "created" smoothies only learned of the patent after he finished his reinvention... showing once again how silly it is to pretend that a patent which only gets published years after application fosters innovation).
 
Xmas said:
Scali said:
Well, to me it sounds like it's the same as regular shadowmapping, except that you want to use the z-value as light intensity. Normally the z-value is only used to determine whether a pixel is the closest one to the light or not (if it isn't closest, there must be one in front, so this one is in shadow).
But I suppose your idea behind this is that the intensity can be filtered now, which would give soft shadows. To be honest, I'm not sure how well that would work. I suppose you'd have to implement it and see.
Actually, I don't see how this method would work at all, since you never see a shadow from the POV of the light source. So of what use would an "intensity value" be?

Well, you start off with really dark materials (in shadow), which you illuminate with the intensity values of all the light sources that reach them. It is the opposite of making shadows.
 
Actually, I don't see how this method would work at all, since you never see a shadow from the POV of the light source. So of what use would an "intensity value" be?

I interpreted the intensity as the attenuation factor for the light, which of course is a function of the distance between light source and surface.
If you filter that, I suppose you end up with soft edges, but I'd have to see an implementation to see how well that idea works.
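
Something like this is what I have in mind, as a minimal sketch in C++; the 1/(k0 + k1*d + k2*d^2) falloff form and the coefficients are just assumptions on my part, not anything established in this thread:

Code:
// Intensity as an attenuation factor: a function of the distance
// between light source and surface. The falloff form and constants
// are illustrative only.
float attenuation(float distance)
{
    const float k0 = 1.0f;   // constant term (keeps intensity finite at d = 0)
    const float k1 = 0.1f;   // linear falloff
    const float k2 = 0.01f;  // quadratic falloff
    return 1.0f / (k0 + k1 * distance + k2 * distance * distance);
}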
 
Scali said:
Actually, I don't see how this method would work at all, since you never see a shadow from the POV of the light source. So of what use would an "intensity value" be?

I interpreted the intensity as the attenuation factor for the light, which of course is a function of the distance between light source and surface.
If you filter that, I suppose you end up with soft edges, but I'd have to see an implementation to see how well that idea works.

Yea, me too! :D

Unlike stencil shadows, these shadows would not only have soft edges, they wouldn't be totally black either. They should look just like real shadows. And the other illumination should be correct as well. The only things not taken into account are global scattering (which you could simulate via the base intensity of your textures) and reflections (which for mirrors you might do with more passes).

If you add that Carmack said in the video that he wanted to make soft shadows by using multiple lights close together, or multiple samples, 16 or more in all, I don't think the overhead would be very bad.

Btw. What engine did you make?
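
To make the overhead concrete, here is a rough C++ sketch of the multi-sample idea; inShadow is a hypothetical placeholder for whatever hard shadow test (volumes or maps) the engine already has, and the uniform jitter is a simplification:

Code:
#include <cstdlib>
#include <functional>

// Average N hard shadow tests against jittered light positions.
// Returns 0 for full shadow, 1 for fully lit; values in between
// give the penumbra.
float softShadowFactor(const float point[3], const float lightPos[3],
                       float lightRadius, int samples,
                       const std::function<bool(const float*, const float*)>& inShadow)
{
    float lit = 0.0f;
    for (int i = 0; i < samples; ++i) {
        float jittered[3];
        for (int axis = 0; axis < 3; ++axis) {
            // uniform jitter in [-radius, +radius]; a real implementation
            // would use a better sampling pattern
            float r = std::rand() / (float)RAND_MAX * 2.0f - 1.0f;
            jittered[axis] = lightPos[axis] + r * lightRadius;
        }
        if (!inShadow(point, jittered))
            lit += 1.0f;
    }
    return lit / samples;
}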
 
DiGuru said:
Well, you start off with really dark materials (in shadow), which you illuminate with the intensity values of all the light sources that reach them. It is the opposite of making shadows.
Shadow maps and shadow volumes work additively, too.
And how do you determine whether the pixel you're just rendering is affected by the light or not?


Scali said:
I interpreted the intensity as the attenuation factor for the light, which of course is a function of the distance between light source and surface.
If you filter that, I suppose you end up with soft edges, but I'd have to see an implementation to see how well that idea works.
You can do attenuation per pixel, which is better IMO. But how does attenuation give you shadows from occluders?
 
Xmas said:
DiGuru said:
Well, you start off with really dark materials (in shadow), which you illuminate with the intensity values of all the light sources that reach them. It is the opposite of making shadows.
Shadow maps and shadow volumes work additively, too.
And how do you determine whether the pixel you're just rendering is affected by the light or not?

Shadow volumes and stencil shadows are subtractive, AFAIK. And they are just pitch black because that is the easiest. There is just a binary value: shadow yes/no. So first you have to do the lighting, and afterwards you make the shadows. Why do things twice? Just illuminating things would be enough.

Scali said:
I interpreted the intensity as the attenuation factor for the light, which of course is a function of the distance between light source and surface.
If you filter that, I suppose you end up with soft edges, but I'd have to see an implementation to see how well that idea works.
You can do attenuation per pixel, which is better IMO. But how does attenuation give you shadows from occluders?

You don't cast or otherwise make shadows at all. You only make things brighter.
 
MfA said:
As long as hardware doesn't do adaptive shadow maps (and I don't see it doing them in the next generation) I don't see shadow volumes dying out; even if you use something like Light Space Perspective Shadow Maps, you are going to run into aliasing or waste a lot of samples. Shadow silhouette maps do seem like a nice approach to lessen the problem, though.

I don't think the aliasing problem is much of a problem going forward. For hard shadows it's a problem, but for soft shadows it sort of goes away. I have been playing around with a soft shadow technique at work that first renders a hard shadow and then, in another pass, blurs the shadow appropriately. I've been getting pretty good results with it.
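
In spirit it is something like the sketch below (C++, with a plain fixed-radius box filter standing in for the actual blur, which should really vary with occluder distance; buffer layout and kernel are illustrative):

Code:
#include <algorithm>
#include <vector>

// Pass 1 (not shown) renders a hard 0/1 shadow mask into `mask`.
// Pass 2 blurs that mask; the lighting then uses the blurred value.
std::vector<float> blurShadowMask(const std::vector<float>& mask,
                                  int width, int height, int radius)
{
    std::vector<float> out(mask.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int sx = std::clamp(x + dx, 0, width - 1);   // clamp at edges
                    int sy = std::clamp(y + dy, 0, height - 1);
                    sum += mask[sy * width + sx];
                    ++count;
                }
            out[y * width + x] = sum / count;   // soft value in [0,1]
        }
    return out;
}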
 
DiGuru said:
Shadow volumes and stencil shadows are subtractive, AFAIK. And they are just pitch black because that is the easiest. There is just a binary value: shadow yes/no. So first you have to do the lighting, and afterwards you make the shadows. Why do things twice? Just illuminating things would be enough.

No, there's no reason to do either shadow volumes or shadow maps subtractively. You can do it that way if you like, but it's just as easy to do it additively.
 
Humus said:
DiGuru said:
Shadow volumes and stencil shadows are subtractive, AFAIK. And they are just pitch black because that is the easiest. There is just a binary value: shadow yes/no. So first you have to do the lighting, and afterwards you make the shadows. Why do things twice? Just illuminating things would be enough.

No, there's no reason to do either shadow volumes or shadow maps subtractively. You can do it that way if you like, but it's just as easy to do it additively.

Ah, you mean that only the shadows that are cast by all light sources are used? So that if you have 5 lights, you have to cross a shadow boundary 5 times to have a real shadow? I didn't get it at first. My mistake.
 
Humus said:
I don't think the aliasing problem is much of a problem going forward. For hard shadows it's a problem, but for soft shadows it sort of goes away. I have been playing around with a soft shadow technique at work that first renders a hard shadow and then, in another pass, blurs the shadow appropriately. I've been getting pretty good results with it.

Obviously, if the aliasing gets bad enough (and a priori it is almost impossible to predict if and when that is going to occur), no amount of blurring will help anyway. Also, what if you want hard-edged, non-aliased shadows? (Well, the shadow silhouette map can help there... but I still like shadow volumes better.)
 
DiGuru said:
Shadow volumes and stencil shadows are subtractive, AFAIK. And they are just pitch black because that is the easiest. There is just a binary value: shadow yes/no. So first you have to do the lighting, and afterwards you make the shadows. Why do things twice? Just illuminating things would be enough.
No, the lighting is only done for those pixels that are not in shadow when you use shadow volumes.

Shadow mapping gives you the result of a depth comparison in the pixel shader, so you can't avoid the lighting work without dynamic branching or equivalent solutions.

You don't cast or otherwise make shadows at all. You only make things brighter.
But how do you decide which things to make brighter?


DiGuru said:
Ah, you mean that only the shadows that are cast by all light sources are used? So that if you have 5 lights, you have to cross a shadow boundary 5 times to have a real shadow? I didn't get it at first. My mistake.
No, additive means that you start with black and add light to the scene, not start with a lit scene and darken areas in shadow.
This isn't related to intersecting volumes in any way.
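
As a minimal C++ sketch of what "additive" means here (Light, isLit and lightContribution are hypothetical stand-ins for the engine's own types, shadow test and shading):

Code:
#include <vector>

struct Color { float r, g, b; };
struct Light { float pos[3]; Color color; };

// Hypothetical stubs; a real engine supplies its own shadow test
// and per-light shading here.
bool  isLit(int, int, const Light&)                 { return true; }
Color lightContribution(int, int, const Light& l)   { return l.color; }

// Additive: start with black and add each light's contribution.
// Pixels in shadow with respect to a light simply get nothing added
// for that light; nothing is ever darkened.
Color shadePixel(int x, int y, const std::vector<Light>& lights)
{
    Color result = {0.0f, 0.0f, 0.0f};   // start with black
    for (const Light& light : lights) {
        if (!isLit(x, y, light))         // shadow test for this light only
            continue;
        Color c = lightContribution(x, y, light);
        result.r += c.r; result.g += c.g; result.b += c.b;
    }
    return result;
}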
 
Xmas said:
DiGuru said:
Shadow volumes and stencil shadows are subtractive, AFAIK. And they are just pitch black because that is the easiest. There is just a binary value: shadow yes/no. So first you have to do the lighting, and afterwards you make the shadows. Why do things twice? Just illuminating things would be enough.
No, the lighting is only done for those pixels that are not in shadow when you use shadow volumes.

So the shadows are calculated first. And after the shadows, you run a shader for each light source and compare the normal and distance to that source?

Shadow mapping gives you the result of a depth comparison in the pixel shader, so you can't avoid the lighting work without dynamic branching or equivalent solutions.

You don't cast or otherwise make shadows at all. You only make things brighter.
But how do you decide which things to make brighter?

The idea is that you render the scene once for each light source (or even up to 6 times, for omnidirectional light sources in the player's field of view). If possible, you do that at a smaller resolution than the final rendering. And after each rendering, you store the z-buffer as a texture containing the intensity values of that light source.

When rendering the final pass, for each pixel you compare the normal against the various intensity maps. If you have a map pointing in the right direction, you look at the coordinates of the texel to see if it fits. If it does, you use the intensity value in the map (the distance to the light source) to compute the new brightness of the pixel. And you can use texture filtering to make it nice and smooth.

DiGuru said:
Ah, you mean that only the shadows that are cast by all light sources are used? So that if you have 5 lights, you have to cross a shadow boundary 5 times to have a real shadow? I didn't get it at first. My mistake.
No, additive means that you start with black and add light to the scene, not start with a lit scene and darken areas in shadow.
This isn't related to intersecting volumes in any way.

Thanks for explaining. I was thinking in volumes, not in brightness. So both are just about the same, but instead of shadow volumes you use intensity maps.

Edit: the render passes for each light source are obviously z-only.
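
In sketch form, the final-pass lookup I have in mind would be something like this (C++; the row-major matrix convention, the depth bias and the linear 1 - z falloff are all assumptions for illustration):

Code:
#include <vector>

// One "intensity map" per light: the stored z-only pass plus the
// light's view-projection matrix (row-major 4x4 here).
struct LightMap {
    std::vector<float> depth;   // size*size depth values in [0,1]
    int size;
    float viewProj[16];
};

// Returns 0 if the pixel is not lit by this light, otherwise an
// intensity derived from the stored distance.
float sampleIntensity(const LightMap& map, const float worldPos[3])
{
    float clip[4];
    for (int i = 0; i < 4; ++i)
        clip[i] = map.viewProj[i*4+0] * worldPos[0]
                + map.viewProj[i*4+1] * worldPos[1]
                + map.viewProj[i*4+2] * worldPos[2]
                + map.viewProj[i*4+3];
    if (clip[3] <= 0.0f)
        return 0.0f;                               // behind the light
    float x = clip[0] / clip[3] * 0.5f + 0.5f;     // map coordinates in [0,1]
    float y = clip[1] / clip[3] * 0.5f + 0.5f;
    float z = clip[2] / clip[3];                   // this pixel's depth from the light
    if (x < 0.0f || x >= 1.0f || y < 0.0f || y >= 1.0f)
        return 0.0f;                               // outside the map
    float stored = map.depth[(int)(y * map.size) * map.size + (int)(x * map.size)];
    const float bias = 0.005f;                     // fudge against self-shadowing
    if (z - bias > stored)
        return 0.0f;                               // something nearer: occluded
    return 1.0f - z;                               // nearer to the light = brighter
}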
 
DiGuru said:
So the shadows are calculated first. And after the shadows, you run a shader for each light source and compare the normal and distance to that source?
Basically, yes. The stencil shadow volume pass gives you a stencil value of 0 for lit pixels and higher values for pixels in shadow. Then you render the whole scene with a shader for whatever lighting you want, and the stencil test discards all pixels in shadow before they reach the shader.

The idea is that you render the scene once for each light source (or even up to 6 times, for omnidirectional light sources in the player's field of view). If possible, you do that at a smaller resolution than the final rendering. And after each rendering, you store the z-buffer as a texture containing the intensity values of that light source.
So you actually store a depth texture?

When rendering the final pass, for each pixel you compare the normal against the various intensity maps. If you have a map pointing in the right direction, you look at the coordinates of the texel to see if it fits.
How do you determine if "it fits"?

If it does, you use the intensity value in the map (the distance to the light source) to compute the new brightness of the pixel. And you can use texture filtering to make it nice and smooth.
Z is the distance to a plane, not to a point. And you can't filter depth values in a meaningful way here.
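
For what it's worth, a common way around the plane-versus-point part is not to reuse the raw z-buffer at all, but to write the true distance to the light into the map yourself, as cube shadow map implementations typically do. Just the math, as a C++ sketch:

Code:
#include <cmath>

// Store this per texel instead of the raw z value: the Euclidean
// distance to the light, which is the same quantity the final pass
// compares against.
float radialDistance(const float worldPos[3], const float lightPos[3])
{
    float dx = worldPos[0] - lightPos[0];
    float dy = worldPos[1] - lightPos[1];
    float dz = worldPos[2] - lightPos[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);  // distance to a point, not a plane
}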
 
Xmas said:
DiGuru said:
The idea is that you render the scene once for each light source (or even up to 6 times, for omnidirectional light sources in the player's field of view). If possible, you do that at a smaller resolution than the final rendering. And after each rendering, you store the z-buffer as a texture containing the intensity values of that light source.
So you actually store a depth texture?

Yes, the z-buffer has the information you want: where the rays hit the geometry and how far from the light source they were when they hit, thereby determining the intensity.

When rendering the final pass, for each pixel you compare the normal against the various intensity maps. If you have a map pointing in the right direction, you look at the coordinates of the texel to see if it fits.
How do you determine if "it fits"?

By calculating if they occupy the same spot. The intensity value is the distance from the light source; Scali explained that in a previous post better than I can. We know the pixel is visible if there is a texel in the same general location, and that is your intensity to add. There is a slight error, due to the pixelation from rendering done from two different points of view and from the lower resolution.

If it does, you use the intensity value in the map (the distance to the light source) to compute the new brightness of the pixel. And you can use texture filtering to make it nice and smooth.
Z is the distance to a plane, not to a point. And you can't filter depth values in a meaningful way here.

You store the z-buffer as a texture with intensity values. Why would you not be able to filter those?

If Z is not the distance to the point of view but to the plane containing the point of view, you would get a slight error. But would that be noticeable?

Anyway, with pixelation, different resolutions and that Z error, it would be an approximation. That makes it a soft illumination, with soft shadows. And if you want one light source to be exact, you could use a higher resolution, up to the final one if you have enough memory.


Edit: As far as I understand it, the use of the map itself is much like shadow maps and exactly like projected textures. So that won't be a problem.
 
DiGuru said:
By calculating if they occupy the same spot.
Ah well, this is called shadow mapping. :D

You store the z-buffer as a texture with intensity values. Why would you not be able to filter those?
You would only filter depth (or intensity) of pixels that are visible from the light source. How would you get soft shadows that way?

If Z is not the distance to the point of view but to the plane containing the point of view, you would get a slight error. But would that be noticeable?
A maximum error of 73% for a 90° FOV (which is what you would use to construct a cube map) isn't a "slight error". It would be quite visible.

Edit: As far as I understand it, the use of the map itself is much like shadow maps and exactly like projected textures. So that won't be a problem.
It is shadow mapping. And there are several unsolved problems with that method.
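
For reference, the 73% comes from the geometry of a 90° cube-map face: the worst case is the face corner, where (taking the face axis as z) the hit point has x = y = z, so

\[
\frac{r}{z} = \frac{\sqrt{x^2 + y^2 + z^2}}{z} = \sqrt{3} \approx 1.73,
\qquad \text{i.e. an error of } \sqrt{3} - 1 \approx 73\%.
\]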
 
Xmas said:
Ah well, this is called shadow mapping. :D

Yes, with z-buffer intensity maps. :D

You would only filter depth (or intensity) of pixels that are visible from the light source. How would you get soft shadows that way?

When you start out with a very dark scene and you turn on the lights, the things that are not illuminated are said to be in shadow. Ergo: it depends on your definition of shadows, as the absence of light or as spots that are made darker.

The idea is that you use those illumination maps only to make dark things brighter.

A maximum error of 73% for a 90° FOV (which is what you would use to construct a cube map) isn't a "slight error". It would be quite visible.

Yes, 73% sounds pretty bad. That might require correcting the value based on the distance from the center of the map. We would have to see, I guess.

It is shadow mapping. And there are several unsolved problems with that method.

Ok. What are the largest unsolved problems?
 
DiGuru said:
Unlike stencil shadows, these shadows would not only have soft edges, they wouldn't be totally black either.
Stencil shadow volumes and shadow maps only label which pixels are or are not in shadow. How the in-shadow pixels are treated is up to the developer.

For example, with stencil shadow volumes, the shadow won't be completely black if you either have some ambient lighting or another light shining from a different direction.
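
In outline, the per-light flow looks something like this (classic OpenGL in C++, as a sketch; drawSceneAmbient, drawShadowVolumes and drawSceneLit are hypothetical helpers, and the stencil increment/decrement setup inside drawShadowVolumes is omitted):

Code:
#include <GL/gl.h>
#include <vector>

struct Light { float pos[3]; };

// Hypothetical helpers standing in for the engine's own draw calls.
void drawSceneAmbient();
void drawShadowVolumes(const Light&);
void drawSceneLit(const Light&);

void renderFrame(const std::vector<Light>& lights)
{
    // 1. Ambient/depth pre-pass: fills the z-buffer and provides the
    //    ambient term, which is why the shadows are not pitch black.
    drawSceneAmbient();

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);          // additive: each light adds on top

    for (const Light& light : lights) {
        // 2. Shadow volume pass: stencil only, no color or depth writes.
        glClear(GL_STENCIL_BUFFER_BIT);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glEnable(GL_STENCIL_TEST);
        drawShadowVolumes(light);         // inc/dec stencil on front/back faces

        // 3. Lighting pass: stencil == 0 means "not in shadow", so the
        //    stencil test discards shadowed pixels before shading.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glStencilFunc(GL_EQUAL, 0, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glDepthFunc(GL_EQUAL);            // re-shade only the visible surfaces
        drawSceneLit(light);
        glDepthFunc(GL_LESS);
        glDisable(GL_STENCIL_TEST);
    }
}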
 
DiGuru, what you're describing is *exactly* shadow mapping. It isn't even a variant; the only difference is that you determine the light source attenuation directly from the shadow map, which isn't strictly necessary.

The problems with shadow mapping are related to aliasing. You get depth quantization aliasing, since a considerable volume of space (when viewed from the camera) can occupy a single depth value in the shadow map, and you get spatial resolution aliasing, because a small part of the shadow map may cover a large part of the view.
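
The usual band-aids, for what they are worth: a small depth bias against the quantization problem, and percentage-closer filtering against the resolution problem (compare first, then filter the binary results, which is also why straight filtering of depth values is meaningless). A minimal C++ sketch, with an arbitrary 2x2 kernel and bias value:

Code:
#include <algorithm>
#include <vector>

// x, y are the pixel's shadow map coordinates in [0,1]; z is its
// depth from the light, in the same units as the stored map.
float pcfShadow(const std::vector<float>& depthMap, int size,
                float x, float y, float z)
{
    const float bias = 0.005f;            // arbitrary; tuned per scene in practice
    float lit = 0.0f;
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx) {
            int sx = std::clamp((int)(x * size) + dx, 0, size - 1);
            int sy = std::clamp((int)(y * size) + dy, 0, size - 1);
            // the depth comparison happens per sample; only the 0/1
            // results are averaged (this is what makes PCF meaningful)
            lit += (z - bias <= depthMap[sy * size + sx]) ? 1.0f : 0.0f;
        }
    return lit / 4.0f;                    // fraction of samples that pass
}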
 
Thanks for clearing that up, GameCat. So it suffers from the same problems as all other things that use a 2D projection or index to map a 3D space.

It seemed like the most "natural" solution, but it would have surprised me greatly if I had been the first to propose it. All in all, I think I know a lot more about shadows and illumination now.
 