Shadow Volumes + Fragment Shaders = soft shadows?

Richard

First let me say that I'm still learning the ropes when it comes to shaders, but I had an idea and was wondering if someone could tell me whether it's at least possible.

If we pass the fragments composing the shadow to a simple "blur" fragment program, will that get us soft shadows? My idea for the shader would be:

Read the pixel on the left; if its brightness > the current pixel's, then raise the current pixel's brightness by left_pixel.brightness / 2 (or whatever).

Of course you'd want to implement a radial "blur" shader, but my question is: can we pass the shadow fragments through a fragment program?
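Just to make the idea concrete, here's a toy sketch of that exact rule in plain Python (one row of a shadow mask instead of a real fragment program; all names are made up):

```python
def soften_shadow_row(row):
    """One 'blur' pass over a row of a shadow mask (0 = shadowed, 1 = lit).

    Rule from the post: if the left neighbour is brighter than the current
    pixel, raise the current pixel by half the neighbour's brightness.
    A real shader would read neighbours from a texture and use a radial kernel.
    """
    out = list(row)
    for x in range(1, len(row)):
        left = row[x - 1]                  # read from the unmodified input
        if left > row[x]:
            out[x] = min(row[x] + left * 0.5, 1.0)
    return out

# A lit pixel bleeds half its brightness into the shadowed pixel to its right.
print(soften_shadow_row([1.0, 0.0, 0.0]))  # [1.0, 0.5, 0.0]
```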
 
That would certainly give you blurry shadows.

Correct soft shadows require a gradient based on the distance from the occluder.
 
Silent Hill 3 (on PS2 and PC, I think) does this to get soft shadows. So it's certainly doable, and you can see for yourself how it looks in an actual game. You can get problems with the blurred shadows leaking through objects, though.
 
My understanding is pixel shaders can only read textures and their current pixel, so blur effects are done multipass.

Blurring default DX shadows would make all shadows equally soft, which is still a kind of hard shadow; you can adjust the alpha of all shadows without blurring. What soft shadows try to do is shade areas to different levels based on how much light reaches them. Current shadow methods use a z test, which is pass or fail only, so you can see how this complicates things.

Most soft shadow methods store some data about shadows in a texture and use that in a pixel shader. The problem is that texture memory is very limited, and the texture resolution needed for good soft shadows can be very high. Demos that want to show off soft shadows can spend all their memory on this one effect, but games need that space for many other things. As GPUs gain memory and speed, either more complex math operations in a shader will allow for better soft shadows, or there will be more texture memory to spend.
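The pass/fail point is easy to see in a toy sketch (plain Python; the 3x3 averaging shown is percentage-closer filtering, which is one standard way to get graded levels out of a binary test — not something from the post itself):

```python
def hard_shadow_test(shadow_map, x, y, depth, bias=0.001):
    """Classic shadow-map z test: pass or fail, nothing in between."""
    return 1.0 if depth <= shadow_map[y][x] + bias else 0.0

def pcf_shadow(shadow_map, x, y, depth, bias=0.001):
    """Average the binary test over a 3x3 neighbourhood (percentage-closer
    filtering) to get a graded value in [0, 1] instead of pass/fail."""
    total, count = 0.0, 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            sx = min(max(x + dx, 0), len(shadow_map[0]) - 1)  # clamp to edges
            sy = min(max(y + dy, 0), len(shadow_map) - 1)
            total += hard_shadow_test(shadow_map, sx, sy, depth, bias)
            count += 1
    return total / count

# A depth discontinuity in the map: the binary test says 0.0 flat out,
# while the filtered version lands partway between shadowed and lit.
sm = [[0.5, 0.5, 0.9],
      [0.5, 0.5, 0.9],
      [0.5, 0.5, 0.9]]
print(hard_shadow_test(sm, 1, 1, 0.7))  # 0.0
print(pcf_shadow(sm, 1, 1, 0.7))        # 0.333...
```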

I think the best way to think about it is that we're trying to get softer shadows :D as current techniques are making progress.

Blurring can be used to take a small-resolution texture and remove the pixelation when it's upscaled, which can be useful in some current methods.
 
I suppose you could implement a screen-space effect similar to the circle-of-confusion stuff used for depth of field. If you use a texture to store the distance to the light source for every pixel in the scene, you can use that to determine the amount of blur required in that area.
Check out Humus' SoftShadows 2 demo, which sort of does that. It uses shadow maps, but the general idea can be adapted to any kind of shadowing method, I suppose.
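The variable-radius idea can be sketched in a few lines (plain Python, collapsed to 1D rows for brevity; all names hypothetical — in the scheme above, the per-pixel radius would come from the distance-to-light texture, just like the circle of confusion drives blur size in depth of field):

```python
def variable_blur_row(shadow_row, radius_row):
    """Screen-space blur whose kernel width varies per pixel.

    radius_row holds a per-pixel blur radius in pixels; a bigger stored
    distance to the light source would mean a bigger radius here.
    """
    out = []
    n = len(shadow_row)
    for x in range(n):
        r = int(radius_row[x])
        lo, hi = max(0, x - r), min(n - 1, x + r)  # clamp kernel to the row
        taps = shadow_row[lo:hi + 1]
        out.append(sum(taps) / len(taps))
    return out

# Radius 0 leaves the hard edge alone; radius 1 feathers it.
print(variable_blur_row([0.0, 0.0, 1.0, 1.0], [0, 0, 0, 0]))
print(variable_blur_row([0.0, 0.0, 1.0, 1.0], [1, 1, 1, 1]))
```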
 
Silent Hill 3 (on PS2 and PC, I think) does this to get soft shadows.
Actually, by now the majority of PS2 games that use volumes do this to soften them (to various extents); the first time it was done was way back in ICO.
And yeah, it's far from perfect, but it can be tuned to look good most of the time, and it's generally better than sticking with sharp shadows.
The halos are actually less of an issue than they might seem at first, though. It generally looks better in scenarios with limited/fixed camera distance (the filter being in screen space, the kernel size doesn't change with distance, so distant/small stuff tends to get overfiltered).
 
ERP said:
I believe that penumbra wedges are patented..
Really?

I just did a quick search for patents by Jacob Strom and Thomas Akenine-Moller and couldn't see anything mentioning shadows in the abstract. Mind you, it may exist and not have become public yet, but then they surely would have had to file the application before submitting the paper to SIGGRAPH 2003, which would mean before January '03. It's been over 18 months, so it should now be public. <shrug>

Also, the optimised version that was presented at Graphics Hardware '03 was a joint effort between them and Microsoft, so patenting would be a complicated issue.

Simon
 
Simon F said:
I just did a quick search for patents by Jacob Strom and Thomas Akenine-Moller and couldn't see anything mentioning shadows in the abstract. [...]

I could be wrong; I have been reading a lot of shadow papers recently. IIRC, either penumbra wedges were patented, or the papers at least imply that it's pretty much covered by the smoothies patent but was developed independently.
 
I think the smoothies idea is the one that is being patented.
http://graphics.csail.mit.edu/~ericchan/papers/smoothie/smoothie.pdf

This method is pretty neat, and circumvents a lot of the problems I had with my own techniques because it uses silhouette geometry. However, it will have the same resolution problems as standard shadow maps (though ameliorated due to the softness), and it only does an "outer" penumbra, which IMO is a bit inadequate. Whatever was a hard shadow with ordinary shadow maps will also be completely shaded in this algorithm, i.e. the umbra doesn't shrink.

"Penumbra maps" is the method that's similar to smoothies, AFAIK. Penumbra wedges are quite different, and are shadow volume based.
 
flick556 said:
My understanding is pixel shaders can only read textures and their current pixel, so blur effects are done multipass.
(emphasis mine)
That's not the whole truth. "Common" shader hardware does not allow you to get framebuffer data as an input to your program. Not even at the current fragment position. This is only available at the blending stage, which is fixed function.

There is one exception, I believe, and it's the 3DLabs P20. Launch material suggests that it has programmable blending. My own interpretation is that you still can't get the framebuffer colour into the fragment shader itself. But it is (and always has been) available in the blending stage, and if that's programmable, the problem is solved.
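The multipass workaround flick556 mentions looks roughly like this (a plain-Python sketch of the data flow only, not real graphics API code): each "pass" may sample the previous pass's output freely as a texture, but can only write its own output pixel.

```python
def blur_pass(src, kernel):
    """One fullscreen pass: the shader samples 'src' (a texture bound as
    input) at arbitrary offsets, but writes only its own output pixel."""
    n = len(src)
    half = len(kernel) // 2
    out = []
    for x in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            sx = min(max(x + k - half, 0), n - 1)  # clamp-to-edge addressing
            acc += src[sx] * w
        out.append(acc)
    return out

def two_pass_blur(row, kernel):
    """Multipass: pass 1 renders into a texture, pass 2 reads that texture.
    (Collapsed to 1D rows here for brevity.)"""
    intermediate = blur_pass(row, kernel)    # render-to-texture
    return blur_pass(intermediate, kernel)   # second fullscreen pass

# A single bright pixel spreads wider with each pass; energy is preserved.
print(blur_pass([0.0, 0.0, 1.0, 0.0, 0.0], [0.25, 0.5, 0.25]))
```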
 
zeckensack said:
That's not the whole truth. "Common" shader hardware does not allow you to get framebuffer data as an input to your program. [...]

I was a little unclear; I did not mean to say they can read previously written pixels that affect their current pixel. I simply meant that you know what "your" pixel is/could be because you are currently writing it.

In other words, I'm assuming the default behaviour of DX or OGL with no blending.
 
I think penumbra wedges are pretty cool; they use basically the same trick as volumetric fogging, doing arithmetic on depth values in destination alpha.
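That destination-alpha depth arithmetic can be caricatured like so (plain Python standing in for the blend stage; the function names and the exponential fog mapping are made up for illustration):

```python
import math

def accumulate_volume_depth(alpha, depths, sign):
    """One rasterisation pass: add (back faces, sign=+1) or subtract
    (front faces, sign=-1) per-pixel depth into destination alpha."""
    return [a + sign * d for a, d in zip(alpha, depths)]

def fog_amount(thickness, density=1.0):
    """Map the accumulated thickness to a fog/shadow factor in [0, 1]."""
    return [1.0 - math.exp(-density * t) for t in thickness]

# After both passes, alpha holds the distance travelled inside the volume.
alpha = [0.0, 0.0, 0.0]
alpha = accumulate_volume_depth(alpha, [0.8, 0.5, 0.5], +1)  # back faces
alpha = accumulate_volume_depth(alpha, [0.3, 0.5, 0.2], -1)  # front faces
print(alpha)             # per-pixel thickness: [0.5, 0.0, 0.3]
print(fog_amount(alpha))  # middle pixel never entered the volume
```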
 
Personally, I think generalised modifier volumes always had the best potential for really accurate shadowing.

The Neon 250 only had 1-bit volumes; extend that to 8 bits and allow more flexibility and you'd really be onto a winner, provided you got developer support of course.
 
Without the ability to do arithmetic with depth values, they wouldn't have given you cheap-ish soft shadows the way penumbra wedges do (PVR has a patent on doing such arithmetic, though... so I guess in a way penumbra wedges are patented, Simon :).
 
I'm sure there is a way to calculate the intersection of a modifier volume with a surface, plus that intersection's distance and angle from the light-source end of the modifier volume, and then interpolate between the two alternative sets of parameters you are issuing.
 
I don't understand what you mean...

There would only be extra blending involved in this operation where the volume is part-way between the two different object settings (texture map, light map, etc.).

Thinking about it, though, this may cause complications with per-pixel lighting. But then I expect you'd be able to apply a parameter in the pixel shader to the occluded light source according to the modifier volume's 'density' at that point.


Exactly what are penumbra wedges? A generalised modifier volume's effect is calculated before rendering: objects that lie within it take on an alternative set of values for everything/anything, like textures, transparency, etc., and of course light intensity. These alternative settings are predetermined by the application. I'm sure you know what they are anyway ;)

I was really annoyed they were dropped from the KYRO; such a great feature. I'd love to see Doom 3 running on a PowerVR card using generalised modifier volumes instead of the stencil buffer for shadowing. Can you say shadow volume rendering for no fill-rate cost? In fact, the only extra work you have to do is transform the volumes, and you have to do that with stencil shadows anyway — and that's not what makes them slow...

(Hey Simon F.... feel free to correct my almost certain mistakes :rolleyes: )
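For what it's worth, the modifier-volume idea described above can be caricatured in a few lines (plain Python, with a hypothetical convex-volume plane test; the parameter dicts are made-up stand-ins for the application-supplied settings):

```python
def point_in_volume(volume, p):
    """Hypothetical convex-volume test: inside iff behind every plane.
    Each plane is (a, b, c, d) with inside meaning ax + by + cz + d <= 0."""
    return all(a * p[0] + b * p[1] + c * p[2] + d <= 0
               for a, b, c, d in volume)

def shade(pixel_pos, params, alt_params, volume):
    """Generalised modifier volume: pixels inside the volume take on the
    alternative parameter set (texture, transparency, light intensity...)."""
    return alt_params if point_in_volume(volume, pixel_pos) else params

# Unit cube as six planes; a point in shadow gets the dimmed parameter set.
cube = [(1, 0, 0, -1), (-1, 0, 0, 0),
        (0, 1, 0, -1), (0, -1, 0, 0),
        (0, 0, 1, -1), (0, 0, -1, 0)]
print(shade((0.5, 0.5, 0.5), {"light": 1.0}, {"light": 0.2}, cube))
print(shade((2.0, 0.5, 0.5), {"light": 1.0}, {"light": 0.2}, cube))
```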
 