Screenshots were captured in real time, without any pre- or post-processing; I have no time for photoshopping them.
Is it possible that with your light, shadow caster and shadow receiver positions, penumbra _should_ look how it looks? Try e.g. moving light closer to shadow caster, it should make penumbra wider.
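The geometry behind that suggestion can be sketched with similar triangles; this is a minimal illustration (the function name and distances are hypothetical, not from any of the posted code):

```python
def penumbra_width(light_size, d_caster, d_receiver):
    """Approximate penumbra width for an area light of diameter `light_size`,
    with the shadow caster at distance d_caster from the light and the
    receiver at d_receiver (d_receiver > d_caster). Similar triangles give
    w = light_size * (d_receiver - d_caster) / d_caster."""
    return light_size * (d_receiver - d_caster) / d_caster

# Moving the light closer to the caster (smaller d_caster) widens the penumbra:
print(penumbra_width(0.5, 4.0, 5.0))  # light far from caster  -> 0.125 (narrow)
print(penumbra_width(0.5, 1.0, 2.0))  # light close to caster  -> 0.5   (wide)
```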
Sorry if I sounded like I was accusing you of faking the shots; I didn't mean that... With problems being reported here and there, I thought the shaders were not running as they were supposed to.
Now I see: the light was far away, and the objects sit slightly above the ground, which deceived me somewhat. But the fact that the shadow never gets sharp enough is also a visual problem: the minimum blur amount is too high. For example, when an object is very close to the wall and the light is at the other end of the room, the shadow is still blurred. I guess if you fix this problem, it's going to be great.
It looks to me like it's just the obvious approach to soft shadows: supersampling the light source area. IIRC Doom3's "soft shadows" mode did something like this.
This is actually the only "physically correct" method I know of for doing soft shadows. All other single-shadow-map methods (PCF, etc.) are approximations that look "plausible", but only while the light source is comparably small. The problem is that they do not take into account that different parts of the light will "see" (and thus cast light onto) different parts of the scene.
That said, supersampling the light source area has the same problem as other supersampling methods: it's slow, and in the case of shadows it requires a ton of samples to eliminate banding (as noted on the web page). How many samples are being used in these images?
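For reference, the supersampling approach can be sketched in a toy 2D setting (this is my own illustration, not anyone's actual shader; the occluder is an opaque disk and all names are hypothetical):

```python
import math
import random

def seg_point_dist(ax, ay, bx, by, px, py):
    """Distance from point (px, py) to segment (ax, ay)-(bx, by)."""
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def soft_shadow(px, py, lx, ly, light_radius, occ_x, occ_y, occ_r, samples=256):
    """Fraction of the area light visible from (px, py): average a binary
    shadow test over jittered sample points on the light disk."""
    rng = random.Random(0)  # fixed seed for repeatable results
    lit = 0
    for _ in range(samples):
        # uniform random sample on the light disk
        a = rng.uniform(0.0, 2.0 * math.pi)
        r = light_radius * math.sqrt(rng.random())
        sx, sy = lx + r * math.cos(a), ly + r * math.sin(a)
        # the sample is visible if the ray to it misses the occluder disk
        if seg_point_dist(px, py, sx, sy, occ_x, occ_y) > occ_r:
            lit += 1
    return lit / samples

# Point directly behind the occluder is in full umbra; a point off to the
# side sees the whole light:
print(soft_shadow(0, 0, 0, 10, 0.5, 0, 5, 1.0))  # ~0.0
print(soft_shadow(3, 0, 0, 10, 0.5, 0, 5, 1.0))  # ~1.0
```

Banding shows up exactly here: with too few samples, `soft_shadow` quantizes to a handful of discrete levels across the penumbra.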
> It looks to me like it's just the obvious approach to soft shadows: supersampling the light source area. IIRC Doom3's "soft shadows" mode did something like this
It would be faster to directly render a shadow at higher resolution and then convolve in light space against the light shape, if one can afford the extra memory.
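A 1D illustration of that idea (my own sketch, with hypothetical names): render a hard shadow mask at high resolution, then convolve it with a kernel representing the light's shape (here a simple box) to fake the penumbra.

```python
def blur_shadow_mask(mask, kernel_width):
    """Box-filter a binary shadow mask (1 = lit, 0 = shadowed).
    kernel_width should be odd; edges are handled by shrinking the window."""
    half = kernel_width // 2
    out = []
    for i in range(len(mask)):
        lo, hi = max(0, i - half), min(len(mask), i + half + 1)
        window = mask[lo:hi]
        out.append(sum(window) / len(window))
    return out

hard = [1] * 8 + [0] * 8          # sharp shadow edge at index 8
soft = blur_shadow_mask(hard, 5)  # edge spread over ~5 texels
```

The fixed kernel width is exactly the limitation discussed below: it blurs the edge uniformly, regardless of where the occluder sits relative to the light and receiver.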
> It would be faster to directly render a shadow at higher resolution and then convolve in lightspace against the light shape if one can afford the extra memory.
That still wouldn't capture the effect if I'm understanding you correctly - the point is that a single point projection cannot represent the visibility from every point on the light surface. Consider standing in a room and looking through a doorway. As you move from side to side you will see different things through the door - this cannot be captured by rendering a high resolution image from one point of view.
I agree that it's very slow, however, and rarely a justified cost for real-time work except for extremely large lights. It should also be noted that the exact same computation/effect can be achieved by just placing multiple shadowed light sources.
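The multiple-light-sources equivalence is easy to see numerically; here is a toy 1D sketch (hypothetical names and geometry, not from the thread): k shadowed point lights spread over the light's extent, each carrying 1/k of the intensity, reproduce the area-light supersampling result.

```python
def visibility(px, light_samples, occ_half_width=1.0, occ_y_frac=0.5):
    """Fraction of the shadowed point lights that see the point (px, 0).
    The occluder is a segment |x| <= occ_half_width, sitting occ_y_frac of
    the way up between the receiver plane and the light plane."""
    lit = 0
    for sx in light_samples:
        # the segment from (px, 0) to the light sample crosses the
        # occluder plane at x = px + (sx - px) * occ_y_frac
        x_at_occluder = px + (sx - px) * occ_y_frac
        if abs(x_at_occluder) > occ_half_width:
            lit += 1
    return lit / len(light_samples)

k = 9
samples = [-0.5 + i / (k - 1) for i in range(k)]  # lights spread over the area
print(visibility(0.0, samples))  # full umbra behind the occluder
print(visibility(5.0, samples))  # fully lit off to the side
print(visibility(1.8, samples))  # penumbra: only some lights are visible
```

Each point light's shadow contribution is a hard 0/1 test; summing them with weight 1/k is exactly the Monte Carlo estimate that supersampling the area light computes.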