I was reading through this document on the feasibility of ray tracing algorithms on today's programmable GPUs, and one of the points struck me the most. It had to do with applying the ray tracing method to shadow determination.

Sounds like a pretty damn good idea to me. From my understanding it would be extremely fast to cast a ray from the eye through each pixel, and then from that point to each light source, to determine which parts of the screen are in shadow (no need for reflection/refraction rays). That seems a lot simpler, more accurate, and less gimmicky than the other approaches, so why isn't it being used?
We simulate a hybrid system that uses the standard graphics pipeline to perform hidden surface calculation in the first pass, and then uses a ray tracing algorithm to evaluate shadows. Shadow casting is useful as a replacement for shadow maps and shadow volumes. Shadow volumes can be extremely expensive to compute, while for shadow maps it tends to be difficult to set the proper resolution. A shadow caster can be viewed as a deferred shading pass [Molnar et al. 1992]. The shadow caster pass generates shadow rays for each light source and adds that light's contribution to the final image only if no blockers are found.
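The core of that shadow-caster pass can be sketched as follows. This is a minimal illustration, not the paper's actual GPU implementation: the scene is reduced to a list of blocker spheres, and the function names (`ray_hits_sphere`, `in_shadow`) are hypothetical. The idea is exactly the one described above: from each visible surface point, cast a ray toward each light, and count the light's contribution only if nothing intersects that ray.

```python
import math

def ray_hits_sphere(origin, direction, center, radius, t_max):
    """Return True if the ray origin + t*direction (epsilon < t < t_max)
    intersects the sphere. direction must be unit length."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False                      # ray misses the sphere entirely
    sq = math.sqrt(disc)
    for t in (-b - sq, -b + sq):
        if 1e-4 < t < t_max:              # epsilon avoids self-shadowing
            return True
    return False

def in_shadow(point, light_pos, blockers):
    """Shadow ray test: cast a ray from the visible surface point toward
    the light; the light contributes only if no blocker is hit before it."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = [v / dist for v in to_light]
    return any(ray_hits_sphere(point, direction, c, r, dist)
               for c, r in blockers)
```

In the hybrid system described in the excerpt, the "visible surface point" for each pixel comes from the rasterized first pass, so the shadow rays are the only rays that need tracing, which is why no reflection/refraction rays are involved.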