Raytracing for shadows on today's hardware

trinibwoy

Was reading through this document on the feasibility of raytracing algorithms on today's programmable GPUs, and one of the points really struck me. It had to do with applying the raytracing method to shadow determination.

We simulate a hybrid system that uses the standard graphics pipeline to perform hidden surface calculation in the first pass, and then uses ray tracing algorithm to evaluate shadows. Shadow casting is useful as a replacement for shadow maps and shadow volumes. Shadow volumes can be extremely expensive to compute, while for shadow maps, it tends to be difficult to set the proper resolution. A shadow caster can be viewed as a deferred shading pass [Molnar et al. 1992]. The shadow caster pass generates shadow rays for each light source and adds that light's contribution to the final image only if no blockers are found.
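The shadow-caster pass the quote describes could be sketched roughly like this — a hedged toy in plain Python, not the paper's actual GPU implementation. It assumes the first rasterization pass already produced a world-space hit point per pixel, and uses spheres as occluders purely to keep the intersection test short; all names are my own.

```python
# Toy sketch of a shadow-caster pass: per shaded point, cast one shadow ray
# per light and add that light's contribution only if no blocker is found.

def ray_hits_sphere(origin, direction, center, radius, t_max):
    """True if the ray segment (0, t_max) intersects the sphere.
    direction is assumed normalized, so the quadratic's a == 1."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - disc ** 0.5) / 2.0
    return 1e-4 < t < t_max  # small epsilon avoids self-shadowing

def shade(hit_point, lights, occluders):
    """Accumulate each light's contribution only if its shadow ray is clear."""
    total = 0.0
    for light_pos, intensity in lights:
        d = [light_pos[i] - hit_point[i] for i in range(3)]
        dist = sum(x * x for x in d) ** 0.5
        direction = [x / dist for x in d]
        blocked = any(ray_hits_sphere(hit_point, direction, c, r, dist)
                      for c, r in occluders)
        if not blocked:
            total += intensity
    return total
```

So a point lit by a single light goes fully dark as soon as any occluder sits between it and that light — exactly the binary "blocker found / no blocker" test the quote describes.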

Sounds like a pretty damn good idea to me. From my understanding it would be extremely fast to cast a ray from the eye through each pixel, then from that hit point to each light source, to determine what parts of the screen are in shadow (no need for reflection/refraction rays). Seems to me a lot simpler, more accurate and less gimmicky than the other approaches, so why isn't it being used?
 
Isn't this the one that assumes static scenes stored in 3D textures? If so, there's your reason for why no game uses it.
 
Inane_Dork said:
Isn't this the one that assumes static scenes stored in 3D textures? If so, there's your reason for why no game uses it.

Nope. This is just an additional pass using a raytracing technique for shadow calculation. From what I understand all that is needed is geometry and transparency information for each object.
 
Looks like they were using a GeForce3 in the test, and I did notice voxels were mentioned at one point, so I wouldn't be surprised if it did use a 3D texture for the scene...but I didn't read it carefully enough to know for sure.
 
Chalnoth said:
Looks like they were using a GeForce3 in the test, and I did notice voxels were mentioned at one point, so I wouldn't be surprised if it did use a 3D texture for the scene...but I didn't read it carefully enough to know for sure.

Saw that too, but since their algorithms were based on future hardware capabilities (today's hardware) I didn't pay too much attention to the GeForce3 reference.

Regardless, what are the barriers to using this sort of shadow determination on today's hardware? Off the top of my head, implementation-wise one would need a pixel shader to:

1. Find object_intersection of eye-cast ray to nearest object in view
2. Find light_intersection of ray cast from object_intersection to light source
3. If (2) is obstructed by object then (1) is in shadow, else it's not
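The three steps above can be sketched in a few lines — a hedged toy in plain Python with spheres as the only primitive, and names like `in_shadow` being my own rather than anything from the paper:

```python
# Steps 1-3 as one function: nearest eye-ray hit, then a shadow ray from
# that hit point toward the light, then an occlusion test against the light.

def sphere_t(origin, direction, center, radius):
    """Smallest positive ray parameter t hitting the sphere, or None.
    direction is assumed normalized."""
    o = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(o[i] * direction[i] for i in range(3))
    c = sum(x * x for x in o) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - disc ** 0.5) / 2.0
    return t if t > 1e-4 else None  # epsilon avoids self-intersection

def in_shadow(eye, direction, light, spheres):
    # Step 1: nearest intersection along the eye ray.
    hits = [(t, s) for s in spheres
            if (t := sphere_t(eye, direction, *s)) is not None]
    if not hits:
        return False
    t, _ = min(hits)
    p = [eye[i] + t * direction[i] for i in range(3)]
    # Step 2: shadow ray from the hit point toward the light source.
    d = [light[i] - p[i] for i in range(3)]
    dist = sum(x * x for x in d) ** 0.5
    sd = [x / dist for x in d]
    # Step 3: any blocker closer than the light puts the point in shadow.
    return any((t2 := sphere_t(p, sd, *s)) is not None and t2 < dist
               for s in spheres)
```

The catch, as the replies below make clear, is the `for s in spheres` loops: testing every primitive per pixel is exactly the brute-force traversal that real hardware can't afford.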

Is today's hardware configuration not amenable to the object intersection calculation or something?
 
The problem with this is that efficient raytracing requires fairly random access to scene data structures.

Hardware just isn't optimised to do this.

The intersection tests are potentially fast; the walking of the data structures (today at least) isn't.
 
trinibwoy said:
Saw that too, but since their algorithms were based on future hardware capabilities (today's hardware) I didn't pay too much attention to the GeForce3 reference.
Except that stuff done for future hardware is always speculative. Something done on a GeForce 6800 would be much more meaningful.

Regardless, what are the barriers to using this sort of shadow determination on today's hardware? Off the top of my head, implementation-wise one would need a pixel shader to:

1. Find object_intersection of eye-cast ray to nearest object in view
2. Find light_intersection of ray cast from object_intersection to light source
3. If (2) is obstructed by object then (1) is in shadow, else it's not
Well, an object intersection calculation may be really tough. Consider a naive implementation: you would render the entire scene from the view of the pixel being rendered into a one-pixel depth buffer. That would be prohibitively expensive in terms of geometry throughput. Even though you could potentially do it in world space, and thus not do any geometry transforms, it would take quite a long time to physically check the intersection for each triangle.

So, you could consider a scheme whereby you use bounding boxes and the like to eliminate many triangles from consideration. But you're still going to have a lot of triangles to check intersections for.

What you're going to need to do, then, is somehow whittle down the geometry that is useful to consider for intersections. The only way I know how to do this efficiently enough would be to do some sort of pre-processing on the scene, like, I believe, BSP trees do. The problem with this is obviously that BSP trees (or any intersection pre-processing) won't handle dynamic geometry.
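The pruning idea above can be illustrated with a toy two-level hierarchy — not an actual BSP/BVH implementation, just a hedged sketch of how one box test can reject a whole group of primitives before any exact intersection tests run:

```python
# Group primitives under axis-aligned bounding boxes so a single slab test
# can reject an entire group at once.

def ray_hits_aabb(origin, inv_dir, lo, hi):
    """Standard slab test; inv_dir is 1/direction per axis (no zero
    components assumed here, to keep the toy short)."""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        t1 = (lo[i] - origin[i]) * inv_dir[i]
        t2 = (hi[i] - origin[i]) * inv_dir[i]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def candidates(origin, direction, groups):
    """Return only the primitives whose group box the ray can touch."""
    inv = [1.0 / d for d in direction]
    out = []
    for lo, hi, prims in groups:
        if ray_hits_aabb(origin, inv, lo, hi):
            out.extend(prims)  # only these need exact intersection tests
    return out
```

The catch is exactly the one raised in this thread: walking such a structure means data-dependent, scattered memory reads per ray, and the structure itself has to be rebuilt or refitted whenever geometry moves.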

Now, go back and remember that voxels were mentioned in the paper. If it was true that they were using voxel rendering for the shadows, then that may explain how they were able to consider performance to be acceptable. From what I understand, voxel rendering is inherently based on a raycasting approach, and thus would have excellent performance. I'm just not sure it would be useful in a modern game (though there have been a few games in the past, before 3D acceleration, that used voxel-based rendering).
 
ERP said:
The problem with this is that efficient raytracing requires fairly random access to scene data structures. Hardware just isn't optimised to do this. The intersection tests are potentially fast; the walking of the data structures (today at least) isn't.

Chalnoth said:
Well, an object intersection calculation may be really tough. Consider a naive implementation: you would render the entire scene from the view of the pixel being rendered into a one-pixel depth buffer. That would be prohibitively expensive in terms of geometry throughput. Even though you could potentially do it in world space, and thus not do any geometry transforms, it would take quite a long time to physically check the intersection for each triangle.

Much thanks for the responses. So I guess the biggest problem is 'finding the object' in world space. Wonder how future hardware will address that.
 
trinibwoy said:
From my understanding it would be extremely fast to cast a ray from the eye through each pixel, then from that hit point to each light source, to determine what parts of the screen are in shadow (no need for reflection/refraction rays). Seems to me a lot simpler, more accurate and less gimmicky than the other approaches, so why isn't it being used?
Hum... That looks a lot to me like a good definition of shadowmaps. Hardware support for shadowmaps exists in at least one major IHV's chips (if that's still in there). Surely it's used by at least some games? (if someone knows of a list :)...)
 
Remi said:
Hum... That looks a lot to me like a good definition of shadowmaps. Hardware support for shadowmaps exists in at least one major IHV's chips (if that's still in there). Surely it's used by at least some games? (if someone knows of a list :)...)

Nah, I think shadow mapping just renders the scene to a depth texture the regular way, from the perspective of the light source. Then, when rendering from the viewer's perspective, it projects each point into that texture and compares depths.
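The two-pass idea in that description can be sketched with a toy one-dimensional "screen" — a hedged illustration in plain Python, not how any real driver implements it:

```python
# Pass 1: a depth-only render from the light keeps the nearest depth per
# texel. Pass 2: each shaded point is lit only if nothing nearer to the
# light landed in its texel.

def build_shadow_map(points_in_light_space, size):
    """Pass 1: (texel, depth) samples collapse to the nearest depth."""
    depth = [float("inf")] * size
    for texel, z in points_in_light_space:
        depth[texel] = min(depth[texel], z)
    return depth

def lit(point, shadow_map, bias=1e-3):
    """Pass 2: compare this point's light-space depth against the map.
    The small bias avoids shadow acne from depth quantisation."""
    texel, z = point
    return z <= shadow_map[texel] + bias
```

The resolution problem mentioned at the top of the thread shows up here too: every point that maps to the same texel shares one stored depth, so too few texels means blocky, inaccurate shadow edges.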
 
With the irregular Z-buffer approach, raytracing shadow rays and the shadowmap approach are 100% equivalent as far as results are concerned. It's just that for hard shadows you don't need raytracing (it won't be more efficient than using an irregular Z-buffer).
 
This sounds like it could possibly work on a PowerVR processor;P

It would require deferred shading of the scene, since these operations sound like they need the whole scene captured first.

That might be difficult on an IMR.
 