The Softness Of Shadows

Reverend said:
Hmm...

With performance being THE limiting factor, would you guys agree that shadow buffers, with separate and discrete X-sampling for shadow blur/softness in separate scenes, are THE feature for now and the foreseeable future? It's relatively cheap right now (and will become cheaper). Yes, we'd still need to do some hacks, if we use it "globally", in specific instances... but is it really THE shadow technique to use, taking into account the reason I started this thread?
Well, this has been discussed ad nauseam in the forums here. It appears that the main problem with shadow buffers is that it's very hard to pick a resolution that looks good in any given scene. Make them too high in resolution and performance plummets; too low and shadow edges start to look very blocky. What makes all this much worse is that the proper resolution for the shadows depends heavily upon view angles and other such things that cannot be known before rendering.

Personally, I'm really hoping that we see an implementation of an irregular z-buffer sometime soon. An irregular z-buffer is, in essence, a shadow buffer that, instead of being a square texture rendered from the point of view of the light, has one pixel for each pixel on the screen but transformed to be rendered from the position of the light. This makes it so that the end result of an irregular z-buffer should produce the same results as stencil shadow volumes, but has potentially much higher performance (though the irregular nature removes the use of any sort of ordered grid in rendering, which, in turn, means that the hardware must do more work in finding which pixels fall within a given triangle to be rendered).
 
An irregular z-buffer would be nice, but considering it currently requires you to store the sample points in some kind of tree and traverse that to determine which samples a triangle hits when rasterising, I doubt it will be implemented efficiently in hardware anytime soon.

Just increasing shadow map resolution and taking multiple samples to smooth shadow edges will look good with less effort (it works on current high-end hw), and it is the kind of brute-force solution that scales well with GPUs. My guess is that will be the most common solution for the coming couple of years.
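That multi-sample smoothing is essentially percentage-closer filtering: average several depth *comparisons* (not depths) around the projected point. A minimal sketch, with assumed names and a plain 2D list standing in for the depth texture:

```python
# Sketch: percentage-closer filtering (PCF) over a shadow map.
# shadow_map holds depths as seen from the light; averaging the
# results of several depth comparisons softens shadow edges.

def pcf_shadow(shadow_map, u, v, receiver_depth, kernel=1, bias=0.001):
    h = len(shadow_map)
    w = len(shadow_map[0])
    lit = 0
    total = 0
    for dy in range(-kernel, kernel + 1):
        for dx in range(-kernel, kernel + 1):
            # Clamp taps at the map border.
            x = min(max(u + dx, 0), w - 1)
            y = min(max(v + dy, 0), h - 1)
            total += 1
            # The receiver is lit at this tap if nothing in the map
            # is closer to the light (minus a small bias vs. acne).
            if receiver_depth - bias <= shadow_map[y][x]:
                lit += 1
    return lit / total  # 1.0 = fully lit, 0.0 = fully in shadow
```

Texels straddling a shadow edge return fractional values, which is exactly the "smoothed edge" effect; larger kernels (more samples) give softer penumbras, which is why this scales so directly with GPU brute force.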
 
What about Adaptive Shadow Maps?
(They're hierarchical, variable-resolution buffers.)
Are they amenable to HW implementation?
 
Not really, sadly. They might be in the long term (as may irregular z-buffers) but in the near future they won't work well with graphics hardware.
 
GameCat said:
An irregular z-buffer would be nice, but considering it currently requires you to store the sample points in some kind of tree and traverse that to determine which samples a triangle hits when rasterising, I doubt it will be implemented efficiently in hardware anytime soon.
Well, the benefit of this is that you can do this in parallel with other rendering operations. So, provided the hardware can traverse this tree as fast as it can do the transformation and depth checking, there won't be any performance problems.

Just increasing shadow map resolution and taking multiple samples to smooth shadow edges will look good with less effort (it works on current high-end hw), and it is the kind of brute-force solution that scales well with GPUs. My guess is that will be the most common solution for the coming couple of years.
Sure, but what about the case where you have a surface that is at an oblique angle to the light source, but is nearly face-on to the viewer? You'd need a fantastically high resolution map to make that situation look good.

My claim is that with an irregular z-buffer, you can make a scene look better than one rendered via a normal shadow buffer with a buffer that is many times smaller (thus saving depth checks).
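The oblique-surface case can be put as a back-of-envelope estimate (a hypothetical 1D model, not anyone's actual numbers): for a receiver that is face-on to the viewer, the shadow-map texel density needed to match screen resolution grows roughly as 1/cos of the angle between the surface normal and the light direction.

```python
import math

# Sketch: projective aliasing, 1D back-of-envelope. A patch seen
# head-on by the viewer projects onto the light's image plane
# foreshortened by cos(theta), so each shadow-map texel covers
# ~1/cos(theta) more surface -- and needs that much more resolution
# to keep one texel per screen pixel.

def texels_needed_per_screen_pixel(angle_to_light_deg):
    theta = math.radians(angle_to_light_deg)
    return 1.0 / math.cos(theta)
```

At grazing angles the factor explodes (tens to hundreds of times the base resolution), which is the "fantastically high resolution" problem — and a screen-space irregular buffer sidesteps it entirely, since its sample density tracks the viewer, not the light.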
 
Looks like there really is no good solution to this (well known) problem I brought up.

Anyway :

John Carmack said:
Reverend said:
When I make my own synthetic demos/benchmarks, it's pretty easy for me to set the "softness" of shadows because the scenes usually have one type of lighting. That is to say, the "softness" is consistent everywhere.

For a game, with its many scenes and environments with different types of lighting (sun, moon, a single lightbulb, fluorescents, etc. etc), how do you determine the "softness" of shadows? I mean, hard-edge stencils actually work pretty accurately under certain specific lighting conditions (and distance-to-light, as well as distance-of-shadow-from-eye), while we all know they're just completely unrealistic in other conditions.
In almost all cases, I am taking the Pixar approach -- it is up to the designer to decide how the scene is going to look. Even if we had infinite processing power to do everything accurately, we would still be adding back in all the hacks to let the "director" control the scene lighting.

John Carmack

John Carmack said:
Reverend said:
Wouldn't shadow buffers be the best solution for now and the foreseeable future? For different lighting conditions, we just apply different sampling to effect different degrees of blur. Plus, it's relatively cheap (given the current and next-gen 3D hardware).
Yes, I expect shadow buffers to be the primary solution for quite some time, but I still don't have final enough data to tell if we are going to be using them for the next product. I'm still waiting on a properly optimized driver to gather statistics with.

John Carmack

Don't want to sidetrack this thread but I get the feeling John prefers shadow buffers to stencils for the next one but he just isn't satisfied with shadow buffer performance (perhaps he really wants high X-number of samples for The Softness of Shadows) in the ICDs (NV and ATI). I think I posted something along these lines in one of my RATP threads in the GD forum.
 
Chalnoth said:
GameCat said:
An irregular z-buffer would be nice, but considering it currently requires you to store the sample points in some kind of tree and traverse that to determine which samples a triangle hits when rasterising, I doubt it will be implemented efficiently in hardware anytime soon.
Well, the benefit of this is that you can do this in parallel with other rendering operations. So, provided the hardware can traverse this tree as fast as it can do the transformation and depth checking, there won't be any performance problems.

The problem is the "traversing the tree" part. It's just not the kind of data structure that fits well into that part of the graphics pipeline. The whole approach requires either a major reconfiguration of the way today's hardware is set up, or insane fragment programs.

Just increasing shadow map resolution and taking multiple samples to smooth shadow edges will look good with less effort (it works on current high-end hw), and it is the kind of brute-force solution that scales well with GPUs. My guess is that will be the most common solution for the coming couple of years.
Sure, but what about the case where you have a surface that is at an oblique angle to the light source, but is nearly face-on to the viewer? You'd need a fantastically high resolution map to make that situation look good.

My claim is that with an irregular z-buffer, you can make a scene look better than one rendered via a normal shadow buffer with a buffer that is many times smaller (thus saving depth checks).

Of course it will look much better; in fact, with a screen-sized irregular buffer it will be perfect. My point was just that with current or near-future hardware, it will be completely infeasible. Unless someone manages to implement irregular buffer rasterisation efficiently in hw, of course :D

Besides, handling soft shadows with an irregular buffer seems non-trivial, since the samples aren't adjacent in light space.
 
GameCat said:
The problem is the "traversing the tree" part. It's just not the kind of data structure that fits well into that part of the graphics pipeline. The whole approach requires either a major reconfiguration of the way today's hardware is set up, or insane fragment programs.
Right. It requires special hardware, but I doubt it would require all that much hardware. What you need to implement the system is basically hardware that can test very quickly whether a particular pixel is within a triangle or not. For this to lead to efficient rendering, you'd need to be able to do this many times per clock per pixel pipeline.

The only question that then needs to be considered, when asking whether or not this should be implemented, is the transistor cost of such a design (side comment: some of the processing could be done in the vertex shader to reduce the per-pixel processing).
 
Hmm - the pixel-in-triangle test is just about a ray-triangle test. Adding HW support for traversal/construction of a kd-tree gets you almost all the way to a HW ray-tracer...
 
psurge said:
Hmm - the pixel-in-triangle test is just about a ray-triangle test. Adding HW support for traversal/construction of a kd-tree gets you almost all the way to a HW ray-tracer...
Which would seem to me to be even more of a reason to implement the technique in hardware.
 
psurge said:
Hmm - the pixel-in-triangle test is just about a ray-triangle test.
I would say that the former is considerably simpler, especially when you take into account the fact that many tests will either be done in parallel or done very cheaply in an incremental fashion.
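The "incremental fashion" point can be made concrete: a rasterizer's edge function is affine, E(x, y) = A·x + B·y + C, so stepping between adjacent grid pixels changes it by a constant — one add per step. A small sketch (hypothetical names):

```python
# Sketch: incremental edge-function evaluation. Because E is affine,
# moving one pixel right adds A and one pixel down adds B, so a
# regular-grid rasterizer amortizes the pixel-in-triangle test to a
# single add per pixel per edge.

def edge_setup(ax, ay, bx, by):
    # Coefficients of E(x, y) = A*x + B*y + C for the edge a -> b;
    # E is positive on one side of the edge, negative on the other.
    A = by - ay
    B = ax - bx
    C = ay * bx - ax * by
    return A, B, C

def edge_eval(A, B, C, x, y):
    return A * x + B * y + C

A, B, C = edge_setup(0.0, 0.0, 4.0, 2.0)
e = edge_eval(A, B, C, 1.0, 1.0)
# Stepping one pixel right/down is just one add:
assert edge_eval(A, B, C, 2.0, 1.0) == e + A
assert edge_eval(A, B, C, 1.0, 2.0) == e + B
```

This is also exactly what an irregular z-buffer gives up: its samples don't lie on a grid, so each one needs a full evaluation (or a tree traversal to find which samples a triangle covers), which is the hardware cost GameCat points to.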
 
Reverend,

So he may not be using shadow-buffers? What possible solution would be able to do Pixar quality shadows other than shadow buffers? Also, if he says that they'll be the primary solution for a while, why in the world would he not use them?

Strange!
 
For a tiler, adaptive shadow mapping comes relatively naturally. If you use the driver to compute the local sampling densities from the view-space Z-buffer, then only the binning engine would have to be changed... in the case of PowerVR, using arbitrary sampling points in the tile would probably even be feasible (which would allow you to implement exact intersections for all the shadow rays, à la the irregular Z-buffer). Since intersections are computed in parallel anyway, I doubt the regular sampling pattern really saves many transistors.
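Purely as an illustration of "local sampling densities from the view-space Z-buffer" (a hypothetical sketch with made-up names, not MfA's actual scheme): a screen pixel's world-space footprint grows roughly linearly with view depth under perspective, so a tile's depths can pick a mip-style shadow resolution level.

```python
import math

# Sketch (hypothetical): choose a per-tile shadow sampling level from
# the view-space depths in that tile. Nearer pixels have smaller
# world-space footprints and need denser shadow sampling, so the
# tile's minimum depth is the conservative (worst) case.

def shadow_lod_for_tile(tile_depths, z_near=1.0):
    zmin = min(tile_depths)
    # Level 0 = full density; each level halves the sampling density.
    return max(0, int(math.log2(max(zmin, z_near) / z_near)))
```

Here only the density decision changes per tile, which matches the point that just the binning engine would need to know about it.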
 
This two-year-old paper describes a technique where the "corners" of the shadow map's texels are offset from their original positions, warping the previously regular grid of the map into a pattern that better resembles the occluding object's projected silhouette.

Silhouette Shadow Mapping

edit: repaired tag
 
Can the blurring/sophisticated filtering etc... be done to stencil volumes? Basically, with enough processing, could one get stencils to look as good as "X"sample shadow-buffers? What about the performance?
 
MfA said:
Since intersections are computed in parallel anyway, I doubt the regular sampling pattern really saves many transistors.
Doing samples in parallel does not mean that all calculations need be totally independent.
 
XxStratoMasterXx said:
Can the blurring/sophisticated filtering etc... be done to stencil volumes? Basically, with enough processing, could one get stencils to look as good as "X"sample shadow-buffers? What about the performance?

Have you seen those soft-shadow screenshots of Doom 3?
 