Why shadows are horrible...

K.I.L.E.R

Yes, I am looking to implement a general-purpose shadowing algorithm that's good at everything.
The only problem is that I cannot find such an algorithm.

The biggest problem is that every method I've read through has problems; even Carmack's reverse is horribly flawed.

Advice?
 
It's interesting you mention that hybrid algorithm; it's been a while since I looked at it. I wonder how good it would be nowadays, with NVIDIA's reject rate of 256 pixels/clock (or rather, 1 triangle/clock for a 16x16 tile, but heh ;))

On a related topic, I'm blown away by Call of Juarez's shadows. They take the crown in terms of quality imo, and by a good order of magnitude. The most impressive part is that it seems to be a mere 2048x2048 shadowmap. I wonder what algorithm they're using; I couldn't even notice any obvious corner cases, and shadow quality felt nearly constant! Good stuff.


Uttar
EDIT: I'm personally more of a fan of variance shadow mapping though, fwiw...
 
Yeah, I have to agree, CoJ has a very nice shadow system.

Well, as VRAM goes up, although I'm a fan of volumetric shadows, I really think shadow maps will become predominantly used, especially with complex scenes. Or perhaps some kind of 3D texturing for global illumination simulation.
 
Yes, I am looking to implement a general-purpose shadowing algorithm that's good at everything.
The only problem is that I cannot find such an algorithm.

The biggest problem is that every method I've read through has problems; even Carmack's reverse is horribly flawed.

Advice?

Best of luck with that.
AFAIK it doesn't exist, assuming performance is one of the criteria.
 
So a standard, run-of-the-mill, maximum-texture-resolution shadow map scaled down to frame-buffer size?

I figure that would be best IMO, especially considering the patent issue.

Thanks guys.
 
Cascaded variance shadowmapping would be best if you want to target DX10+ hardware, yes :) IMO, at least.
If you're targeting DX9 hardware, then various other shadow mapping techniques might have to be considered, including PSMs, LiSPSM and TSM. Or just plain cascaded shadow maps. Or FP16-based variance mapping and, I'd imagine, a lot of headaches.
By far the easiest solution to implement remains plain shadowmapping though, obviously.
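(If it helps, split placement for cascades is often done with the "practical split scheme" from the parallel-split shadow map literature, blending a logarithmic and a uniform distribution. A rough C sketch, with illustrative names and a tweakable lambda, not anyone's actual implementation:)

```c
#include <math.h>

/* Practical split scheme for cascaded shadow maps: blend logarithmic
 * and uniform split-plane distributions between the near and far
 * planes. lambda = 0 gives uniform splits, lambda = 1 is fully
 * logarithmic; something around 0.5 is a common starting point. */
float cascade_split(int i, int num_cascades, float znear, float zfar,
                    float lambda)
{
    float t       = (float)i / (float)num_cascades;
    float uniform = znear + (zfar - znear) * t;
    float logar   = znear * powf(zfar / znear, t);
    return lambda * logar + (1.0f - lambda) * uniform;
}
```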


Uttar
 
I've looked at many implementations of shadow maps; while some look good, nothing comes for free.
I am targeting my own hardware, which limits how much I can do.
There was an nVidia paper I was reading last night/this morning that discussed shadow maps and their performance with regard to scene complexity.

For me, volumetric shadows aren't an issue; however, if I was stupid enough to do what I did with my last assignment and load a 512MB 3D model for rendering, that would be another story.
I'm obviously going to be gunning for something fast that still looks good.
 
I'm pretty sure variance shadow maps will take over. They rule.

(FP16 is woefully inadequate though. Fixed point 16-bit is much better, but still not good enough to be practical. Thankfully, Xenos has 32-bit fixed point filtering, and G80 has 32-bit FP filtering.)
 
While I like variance shadow maps as well as logarithmic shadow maps (more formally described here, though in German), I still find the Achilles' heel of any shadowmapping algorithm to be omnidirectional lights. While I find shadow volumes to have limitations that are yet worse overall (and, in general, not acceptable because of it), it's still no fun to have to use omnidirectional lights for that purpose. It also gets really iffy when you start playing around with non-point light sources like area lights or "line-segment" lights, where a decent setup for shadowmaps isn't so straightforward.

Though the biggest mess for me with VSMs is just the numerical instability and having to deal with texture format compatibilities and how the various drivers essentially "cast" the formats to something unexpected.
 
I'm pretty sure variance shadow maps will take over. They rule.
Variance shadow maps rule? Yes, they do; it's an amazingly good and simple idea! But they will not take over too soon, IMHO... a bit too artifact-prone.
 
Variance shadow maps rule? Yes, they do; it's an amazingly good and simple idea! But they will not take over too soon, IMHO... a bit too artifact-prone.
I was experimenting with them in 2000. [Hmm... perhaps I was the first, although I was using the standard deviation (i.e. sqrt(variance)).]

I tended to find that, because statistics assumes a bell-shaped curve, a lot of transitions in the shadow map simply didn't match that model very well. <shrug>
 
I was experimenting with them in 2000. [Hmm... perhaps I was the first, although I was using the standard deviation (i.e. sqrt(variance)).]
prior art! :)
I tended to find that, because statistics assumes a bell-shaped curve, a lot of transitions in the shadow map simply didn't match that model very well. <shrug>
Well... maybe there are better ways... :cool:
 
but they will not take over too soon, IMHO... a bit too artifact-prone.
Numeric stability problems are gone on the G80 with fp32 (and maybe even better with fx32 - haven't got DX10 up and running yet). I suspect R600 will be able to do the same, or something similar. Note that hardware filtering isn't even required to get really nice results, but more on that in the near future ;)

You can also reduce/eliminate light bleeding by just lopping off the tail of the distribution (Mint suggested this at one point I think). Of course no matter how much one cuts off, a degenerate case can be constructed. Still this solution has proven to be very effective in my testing, and it's an artist-editable one-liner.
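As a rough C-style sketch of that one-liner (my naming; `amount` is the artist-editable knob, and a real version would live in a shader):

```c
/* Clamp x to [lo, hi], rescaled to [0, 1] -- a "linstep". */
float linstep(float lo, float hi, float x)
{
    float t = (x - lo) / (hi - lo);
    return t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
}

/* Lop off the tail of the distribution: treat any Chebyshev bound
 * below `amount` as fully shadowed and rescale the rest, trading a
 * little over-darkening for much less light bleeding. */
float reduce_light_bleeding(float p_max, float amount)
{
    return linstep(amount, 1.0f, p_max);
}
```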

Furthermore anything short of brute-force PCF of the entire filter region (which is prohibitively expensive) will have *some* incorrect case... i.e. there is no silver bullet to visibility.

I'm becoming pretty certain that VSM is the best you can do with two pieces of data. It's certainly the best upper bound that one can get with that amount of information, but I think one would be hard pressed to find a more suitable approximation even. In particular by modifying the falloff function as mentioned above, light bleeding can be eliminated at the cost of some over-darkening. The key however is that the shadow edge will still be (projectively) anti-aliased even if the whole of the distribution function is removed (i.e. converted to a step function).

Anyways getting back on topic: shadows and visibility are just very hard problems, particularly when multiple overlapping occluders are involved. Almost all shadow implementations have similar problems due to this complexity.

I tended to find that, because statistics assumes a bell-shaped curve, a lot of transitions in the shadow map simply didn't match that model very well. <shrug>
I'm not sure what you mean - there's nothing inherent about using statistics that assumes "bell-shaped" distributions. In particular, Chebyshev's Inequality is an upper bound for *all* distributions with a given mean and variance, which is precisely why it is useful.
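For reference, here's roughly what the bound looks like in use (a C sketch with my naming; `mean` and `mean_sq` are the two filtered moments fetched from the shadow map, `t` is the receiver depth):

```c
/* One-sided Chebyshev inequality: given the mean and variance of the
 * occluder depth distribution over the filter region, bound the
 * fraction of that region with depth >= t, i.e. the lit fraction. */
float chebyshev_upper_bound(float mean, float mean_sq, float t)
{
    if (t <= mean)
        return 1.0f;              /* receiver in front of the mean: lit */
    float variance = mean_sq - mean * mean;
    if (variance < 1e-5f)
        variance = 1e-5f;         /* clamp for numeric stability */
    float d = t - mean;
    return variance / (variance + d * d);
}
```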

PS: Simon F, I'd love to see some of those pretty pictures if you're willing to share :)
 
me said:
I tended to find that, because statistics assumes a bell-shaped curve, a lot of transitions in the shadow map simply didn't match that model very well. <shrug>

I'm not sure what you mean - there's nothing inherent about using statistics that assumes "bell-shaped" distributions.
I'm afraid I didn't do statistics at Uni (Analysis, Combinatorics, etc. yes; stats, no), so my stats knowledge is really only at secondary-school level, but AFAICS you have to assume some sort of model for your distribution of Z within the pixel, and the Gaussian seemed the most straightforward. I was attempting to perform MIP mapping directly using the mean and std. deviation, and this appeared to make it simple to downfilter pixels as the lower-resolution MIP maps were generated.
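Roughly what I mean, as a C sketch (illustrative names, not the original code): the mean and the second raw moment both average linearly, so four texels collapse into one quite naturally.

```c
#include <math.h>

/* Downfilter four (mean, std dev) texels into one lower-mip texel. */
void downfilter_2x2(const float mu[4], const float sd[4],
                    float *mu_out, float *sd_out)
{
    float m = 0.0f, m2 = 0.0f;
    for (int i = 0; i < 4; ++i) {
        m  += mu[i];
        m2 += sd[i] * sd[i] + mu[i] * mu[i]; /* E[z^2] = var + mean^2 */
    }
    m  *= 0.25f;
    m2 *= 0.25f;
    *mu_out = m;
    *sd_out = sqrtf(fmaxf(0.0f, m2 - m * m)); /* var = E[z^2] - mean^2 */
}
```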

I then just used the usual properties, e.g. that ~68% of samples are within +/- 1 std dev of the mean, ~95% within 2, and ~99.7% within +/- 3 std devs.

In particular, Chebyshev's Inequality is an upper bound for *all* distributions with a given mean and variance, which is precisely why it is useful.
Thanks, I'll look that up when I get a chance.
PS: Simon F, I'd love to see some of those pretty pictures if you're willing to share :)
Well, "beauty is in the eye of the beholder" and I did say "semi-pretty". The scene was just a simple face model etc casting shadows - it was just that I used just false-colour images to show the per-pixel range of the standard deviation and/or coverage amounts .
 
Yes, I am looking to implement a general-purpose shadowing algorithm that's good at everything.

Well, you are simply facing an unsolved problem with no general-purpose solution :)
Whatever you do, you can always create scenarios where your algorithm breaks, and to have perfectly soft, nicely filtered shadows you need very high resolution (and I mean very high) and huge filter kernels.

Then you have things like variance shadow mapping, which is very clever and simple in my opinion, but it still suffers artifacts, it's slower than "other" algorithms given certain conditions, and doesn't solve temporal aliasing. But it's still very worth a try to see if it works ok in your case.

At the end of the day, I can suggest understanding exactly how your scene is going to be configured most of the time (type, size and number of objects, camera angles and so on) and what kind of look you want to achieve, and then writing the simplest possible algorithm that works well most of the time. In all the situations where it breaks, you can always run to the artists and tell them to work around it if it's feasible. If it's not feasible, you tell them to work around it anyway.

I'd say that cascaded shadowmaps with a variance flavour or other kinds of PCF filters tend to work pretty nicely, and I would personally avoid perspective-like shadow mapping.

Good luck. Working on shadows is great fun.
 
Well, you are simply facing an unsolved problem with no general-purpose solution :)
Agreed - this can't be stressed enough! In the case of filtering, nothing short of PCF can give you the so-called "exact" result in every circumstance because of the step-function nature of visibility. Thus anything faster will be an approximation with various trade-offs.
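To be explicit about what the brute-force reference is (a C sketch with my naming; a real version lives in a shader, and the bias handling is hand-waved):

```c
/* Brute-force PCF: do the binary depth comparison per texel over the
 * whole filter region and average the results. Exact, but the cost
 * grows with the square of the kernel radius. */
float pcf(const float *shadow_map, int width, int height,
          int x, int y, float receiver_z, float bias, int radius)
{
    float lit = 0.0f;
    int   n   = 0;
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int sx = x + dx, sy = y + dy;
            if (sx < 0 || sx >= width || sy < 0 || sy >= height)
                continue;
            if (shadow_map[sy * width + sx] >= receiver_z - bias)
                lit += 1.0f;
            ++n;
        }
    }
    return n > 0 ? lit / (float)n : 1.0f;
}
```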

and doesn't solve temporal aliasing
Not entirely sure what you mean here... "temporal aliasing" is usually used to describe the discrete nature of displayed frames (i.e. motion blur is a form of temporal anti-aliasing). I'm not sure how this applies specifically to shadows. If you mean that they shear and jitter when lights/objects move, that's actually just the normal spatial aliasing problem. If properly filtered shadows are attained, there will be no flicker and swimming along the edges.

Good luck. Working on shadows is great fun.
Hehe, it is for a while, but then you get sick of it ;) It's worth noting that shadows really are just a general problem of visibility and that actually comes up in a *ton* of other areas of graphics. Incidentally very similar problems come up in non-graphics-related work as well.
 
Not entirely sure what you mean here... "temporal aliasing" is usually used to describe the discrete nature of displayed frames (i.e. motion blur is a form of temporal anti-aliasing). I'm not sure how this applies specifically to shadows. If you mean that they shear and jitter when lights/objects move, that's actually just the normal spatial aliasing problem. If properly filtered shadows are attained, there will be no flicker and swimming along the edges.

My bad, an improper definition on my part. Spatial aliasing is much more correct.
I was actually referring to flickering and swimming along the edges that can't be solved unless the resolution is high 'enough' and the filter is wide 'enough'.
 
I was actually referring to flickering and swimming along the edges that can't be solved unless the resolution is high 'enough' and the filter is wide 'enough'.
Ah yes, agreed. Actually, some of the latest work that I've done with VSMs completely eliminates this swimming, etc., since filter widths can be arbitrarily wide. Of course, maintaining shadow detail while having large filter widths requires huge resolutions (as you note).
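The reason arbitrary widths are cheap is that the two moments can be prefiltered like any other texture data, e.g. one pass of a separable box blur over an interleaved (z, z^2) map (a C sketch; the layout and names are mine):

```c
/* One horizontal pass of a separable box blur over interleaved VSM
 * moments (z, z^2). Because the moments filter linearly, blurring them
 * directly widens the effective shadow filter with no extra work at
 * lookup time; a matching vertical pass would follow. */
void blur_moments_h(const float *src, float *dst,
                    int width, int height, int radius)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float m1 = 0.0f, m2 = 0.0f;
            int   n  = 0;
            for (int dx = -radius; dx <= radius; ++dx) {
                int sx = x + dx;
                if (sx < 0 || sx >= width)
                    continue;
                m1 += src[2 * (y * width + sx) + 0];
                m2 += src[2 * (y * width + sx) + 1];
                ++n;
            }
            dst[2 * (y * width + x) + 0] = m1 / (float)n;
            dst[2 * (y * width + x) + 1] = m2 / (float)n;
        }
    }
}
```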
 