The Softness Of Shadows

Simon F said:
Doing samples in parallel does not mean that all calculations need be totally independent.
I still don't think it would be a big deal for the potential benefits: irregular Z-buffer and raytracing acceleration. If any of your hardware designer friends did some number crunching on this I would love to hear the results ... but I doubt you would say so even if they had :) So I'll stick with intuition.
 
Alstrong said:
XxStratoMasterXx said:
Can the blurring/sophisticated filtering etc. be done on stencil volumes? Basically, with enough processing, could one get stencils to look as good as "x"-sample shadow buffers? What about the performance?

Have you seen those soft-shadow screenshots of Doom 3?

EDIT:

How costly would it be to do that kind of sophisticated jittering etc.. in real-time?

If shadow-buffer performance is an issue, wouldn't doing stencils for soft shadows be even more costly?
 
Hyp-X said:
An infinitely small light source would cast hard shadows, whereas a camera with an infinitely small optic would produce a sharp image for all distances.

Sorry, but that just ain't true. A smaller light source would create harder shadows, but there are loads of other factors, like diffraction and radiosity off the surface the shadow is being cast on.
 
I'm not so sure about that, Dave. Those other effects may make the shadow not entirely black (assuming just the one light source), but I'm sure you'd still see a hard edge to the primary shadow.

Anyway, the other part sounds completely wrong:
whereas a camera with an infinitely small optic would produce a sharp image for all distances.
I'm not entirely sure what you mean by this, Hyp-X, but the smaller the optics of a camera, the more quantum effects come into play. Large telescopes (particularly those that view low frequency radiation) have to deal with this problem, since many objects are so incredibly far away.
 
nelg said:
Small optics implies a small aperture, hence a proportionately large (deep) depth of field.
...and, consequently, dispersion due to the wave nature of light. As I said, this is a large problem with telescopes, particularly low-frequency ones (radio telescopes in particular are made gigantic for this very reason).
 
Within the confines of this discussion it is a non-issue. As you said yourself, "the smaller the optics of a camera, the more quantum effects come into play"; the scale here is not small enough for that to matter. IMHO.
 
nelg said:
Within the confines of this discussion it is a non-issue. As you said yourself, "the smaller the optics of a camera, the more quantum effects come into play"; the scale here is not small enough for that to matter. IMHO.
Depends on how small the optics are. Much below a millimeter and it'll start to become significant. Oh, and by the way, this is also a very significant issue for satellites that take images of the Earth (so, for example, you should be very skeptical of any purported claim that spy satellites can detect detail smaller than about a centimeter in size... that is, they may be able to read license plates or the headline of a newspaper, but nothing smaller).
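The diffraction argument behind those numbers can be sanity-checked with the Rayleigh criterion, θ ≈ 1.22λ/D: the angular resolution of a circular aperture times the distance to the target gives the smallest resolvable feature. A minimal Python sketch, using illustrative figures (the 2.4 m aperture and 250 km altitude are assumptions for the example, not real satellite specs):

```python
def diffraction_limited_ground_resolution(aperture_m, wavelength_m, altitude_m):
    """Smallest resolvable ground feature, via the Rayleigh criterion.

    theta ~ 1.22 * lambda / D is the angular resolution of a circular
    aperture; multiplying by the range gives the linear resolution there.
    """
    theta = 1.22 * wavelength_m / aperture_m  # radians
    return theta * altitude_m                 # meters

# Illustrative numbers: 2.4 m mirror, green light (550 nm), 250 km orbit.
res = diffraction_limited_ground_resolution(2.4, 550e-9, 250e3)
print(f"{res * 100:.1f} cm")  # → "7.0 cm"
```

Even with Hubble-class optics, wave optics alone caps the resolution at a few centimeters, which is why the skepticism above is warranted.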
 
XxStratoMasterXx said:
How costly would it be to do that kind of sophisticated jittering etc.. in real-time? If shadow-buffer performance is an issue, wouldn't doing stencils for soft shadows be even more costly?

As far as I can tell, the process for rendering the soft-shadow screenshots in Doom 3 was to jitter the position of each light source by a random value within a reasonably small sphere "x" many times, rerender the scene for each light position, and finally accumulate each jittered light's contribution into an accumulation buffer. It goes without saying that this is not a reasonable method for real-time soft shadowing in the foreseeable future. (But then again, in the '70s it was dogma among graphics professionals that Z-buffering would never be useful as a means of surface visibility determination.)

It's also, IIRC, the way the screenshots with ultra-high levels of AA were produced: jitter the viewpoint 32 times and accumulate each view of the scene into the accumulation buffer. This is the old-school way (actually, one of the ways) to reduce aliasing in OpenGL.
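The accumulate-and-average loop described above can be sketched in Python. Here `render_with_light` is a hypothetical stand-in for the actual hard-shadow renderer, and the jitter uses a cube rather than a true sphere for brevity:

```python
import random

def render_with_light(light_pos):
    # Stand-in for a real renderer: returns a small "frame" whose pixel
    # values depend only on the light position. In the Doom 3 offline
    # shots this would be a full hard-shadowed render of the scene.
    x, y, z = light_pos
    return [[x + y + z for _ in range(4)] for _ in range(4)]

def jittered_soft_shadow(base_light, radius, samples, seed=0):
    """Average many hard-shadow renders with the light jittered inside a
    small region -- the accumulation-buffer approach described above."""
    rng = random.Random(seed)
    accum = [[0.0] * 4 for _ in range(4)]
    for _ in range(samples):
        # Jitter each light coordinate within +/- radius.
        jittered = tuple(c + rng.uniform(-radius, radius) for c in base_light)
        frame = render_with_light(jittered)
        for i in range(4):
            for j in range(4):
                accum[i][j] += frame[i][j]
    # Divide by the sample count, as glAccum(GL_RETURN, 1.0 / n) would.
    return [[v / samples for v in row] for row in accum]

img = jittered_soft_shadow((10.0, 20.0, 5.0), radius=0.5, samples=32)
```

The cost is exactly what the posts above complain about: the whole scene is rendered once per sample, so 32 jittered lights means 32 full passes.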
 
XxStratoMasterXx said:
How costly would it be to do that kind of sophisticated jittering etc.. in real-time?

If shadow-buffer performance is an issue, wouldn't doing stencils for soft shadows be even more costly?

very costly. :oops:

Those screenshots weren't exactly taken in real-time. ;)
 
Alstrong said:
XxStratoMasterXx said:
How costly would it be to do that kind of sophisticated jittering etc.. in real-time?

If shadow-buffer performance is an issue, wouldn't doing stencils for soft shadows be even more costly?

very costly. :oops:

Those screenshots weren't exactly taken in real-time. ;)

I know, I made some screens! But seriously, it's lifelike shadowing when you take screens like that!

But couldn't you do the randomized dithering and jittering through fragment program processing on the stencil shadows?
 
Heh, not hardly. It's still only a first approximation to shadowed rendering; no scattering is calculated.
 
akira888 said:
But then again in the 70's it was dogma among graphics professionals that Z-buffering would never be useful as a means of surface visibility determination.
Well, in fairness, back then memory for a Z-Buffer would have cost about the same as a house. :)


Anyway, sorry to drag this off topic but I recently bought the first 3 series of "Babylon 5" and each time I read the subject "The Softness of Shadows" I keep wondering which disc that episode is on :?
 
Simon F said:
Anyway, sorry to drag this off topic but I recently bought the first 3 series of "Babylon 5" and each time I read the subject "The Softness of Shadows" I keep wondering which disc that episode is on :?

Lol...that'll be the second theatrical movie after TMoS. :mrgreen:
 