ATI/nVidia Realtime Ray Tracing...when???

haider

Newcomer
Hi Guys,

Just a small question: I've been ogling ray-traced graphics since the days of my Amiga 500. I was wondering: how far (in terms of both time and computational power) are we from having realtime ray tracing on consumer/enthusiast graphics cards?

Thanks
Haider
 
It's still going to be a few years, probably 4-5, before we see abundant use of raytracing. But it will be used to a smaller degree in the next few years :smile:
 
Depends on what you want to do. There are several realtime raytracers on GPUs; we have one here at Stanford that was shown in the ATI booth at SIGGRAPH. The main issue is efficiency compared to other designs, like the Saarland RPU work, which is pretty neat.

The rough estimate from the last raytracing talk I heard at SIGGRAPH was that we need to be on the order of 500Mray/s to be interesting. The best raytracers currently published use Cell and hit ~100Mray/s for primary rays. When you start talking about global illumination and dynamic scenes, the requirements go up. However, few people do full shading of scenes the way modern game engines do, so those numbers generally assume ultra-simple shading. I still think we are a good couple of generations of processor design away from really being able to think about fully raytraced games. But I'd expect to continue to see hybrid designs come out and blur the line, using rasterization for primary hits and raytracing for secondary effects. For example, POM (parallax occlusion mapping) already does localized raytracing.
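Since POM came up as the example, here's a rough sketch of what that "localized raytracing" amounts to: a short, fixed-step ray march through a heightfield in tangent space, done per pixel. Everything here (the procedural heightAt(), the step count, the function names) is illustrative rather than taken from any particular implementation.

[code]
// Illustrative only: parallax occlusion mapping as a tiny, localized ray trace.
// A tangent-space view ray is marched through a heightfield until it dips
// below the stored height. heightAt() is a stand-in for a height-map fetch.
#include <cmath>

struct Vec2 { float x, y; };

float heightAt(const Vec2& uv)
{
    // Placeholder procedural heightfield (assumption), values in [0, 1].
    return 0.5f + 0.5f * std::sin(uv.x * 20.0f) * std::cos(uv.y * 20.0f);
}

// Returns the texture coordinate where the view ray intersects the heightfield,
// using the simplest fixed-step linear-search variant of POM.
Vec2 parallaxOcclusionOffset(Vec2 uv, Vec2 viewDirTS, float heightScale)
{
    const int   numSteps = 32;                // march resolution
    const float stepSize = 1.0f / numSteps;   // step in layer depth
    const Vec2  delta    = { viewDirTS.x * heightScale * stepSize,
                             viewDirTS.y * heightScale * stepSize };

    float rayDepth     = 0.0f;                 // how deep the ray has sunk so far
    float surfaceDepth = 1.0f - heightAt(uv);  // depth of the surface below the top

    // Walk the ray until it passes below the heightfield
    // (bounded to roughly numSteps iterations).
    while (rayDepth < surfaceDepth) {
        uv.x -= delta.x;
        uv.y -= delta.y;
        rayDepth     += stepSize;
        surfaceDepth  = 1.0f - heightAt(uv);
    }
    return uv;  // shading then happens at the intersected texel
}
[/code]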
 
mhouston, will you be updating us with your findings on the G80 and R600 once you have migrated the code to run on those chips?
 
A few places have demoed near-real-time raytracers on GPUs and the Cell lately.

RTT (a large visualization company) has integrated a real-time GPU raytracer into their product.

The same raytracer running on the Cell was demoed at SIGGRAPH this year doing ~6-10fps for a 1024x640 screen with 12 bounces (reflection/refraction) and arbitrary lighting/shading on each bounce. This has probably also improved in speed since then (it was an initial version).

Thus we're getting close, but probably not mainstream this generation. That said, G80 will hopefully bring the threading granularity down even further, which has been one of the major bottlenecks for GPU raytracing lately.

I wouldn't be surprised if we start to see raytracing of secondary rays showing up in more mainstream applications in the next few years, though. Dynamic scenes will continue to pose a problem, however, as most raytracers rely on a very deep bounding volume hierarchy to get their efficiency. Still, a two-level BVH can be used for larger moving objects, and things like bones can often be tightly bounded in space (which is probably good enough).
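To make the two-level idea concrete, here's a minimal sketch under the assumption that each object carries a static BVH built once over its own triangles, while a tiny top level over per-object world bounds is refit every frame. All type and field names are made up for illustration, not taken from any published raytracer.

[code]
// Illustrative only: the "two-level BVH" idea for dynamic scenes. Each object
// keeps a bottom-level BVH built once over its local-space triangles; a small
// top level over per-object world bounds is refit every frame, so moving an
// object never forces a rebuild of its triangle BVH.
#include <cfloat>
#include <vector>
#include <algorithm>

struct AABB {
    float lo[3] = {  FLT_MAX,  FLT_MAX,  FLT_MAX };
    float hi[3] = { -FLT_MAX, -FLT_MAX, -FLT_MAX };
    void expand(const AABB& b) {
        for (int i = 0; i < 3; ++i) {
            lo[i] = std::min(lo[i], b.lo[i]);
            hi[i] = std::max(hi[i], b.hi[i]);
        }
    }
};

struct BottomLevelBVH {      // built once over an object's own triangles
    // nodes, triangle indices, ... (omitted)
};

struct ObjectInstance {
    const BottomLevelBVH* geometryBVH;   // static, shared between instances
    float                 transform[16]; // updated when the object moves
    AABB                  worldBounds;   // recomputed from the transform
};

struct TopLevelBVH {
    // For a handful of moving objects a flat list (or a very shallow tree)
    // over worldBounds is enough; refitting it per frame is cheap.
    std::vector<ObjectInstance*> instances;
    AABB sceneBounds;

    void refit() {
        sceneBounds = AABB{};
        for (const ObjectInstance* inst : instances)
            sceneBounds.expand(inst->worldBounds);
    }
};
[/code]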
 
Thus we're getting close, but probably not mainstream this generation. That said, G80 will hopefully bring the threading granularity down even further, which has been one of the major bottlenecks for GPU raytracing lately.
Where did you hear that threading granularity is one of the major bottlenecks? I hadn't noticed any papers saying that.
 
Where did you hear that threading granularity is one of the major bottlenecks? I hadn't noticed any papers saying that.
Personal experience, actually, with the aforementioned raytracer. Admittedly it will be somewhat dependent on the BVH and intersector structures used, but it's totally possible to keep shifting the bottleneck until it lands on branching. Note that it is less of an issue on current ATI hardware than on current NVIDIA hardware, for obvious reasons.

Note that the most recent Cell raytracing paper only considers primary and shadow rays (read: coherent) and thus does not have this issue. Unfortunately, raytracing only these rays has very little benefit, if any, over rasterization.
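For anyone wondering what the granularity/coherence issue looks like, here's a toy illustration (not code from the raytracer discussed above, and all names are made up): on wide SIMD-style hardware a batch of rays effectively visits every BVH node that any ray in the batch needs, with the non-participating rays masked off. Coherent primary/shadow rays mostly agree on which nodes to visit, so lane utilization stays high; incoherent secondary rays after a bounce or two do not.

[code]
// Illustrative only: why incoherent (bounced) rays hurt on wide SIMD hardware.
// The whole packet descends into a node if *any* ray in it wants to; rays that
// don't are wasted lanes, so divergent rays drag utilization down.
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

struct Ray  { float o[3], d[3]; };
struct AABB { float lo[3], hi[3]; };

// Standard ray/box slab test.
bool rayHitsBox(const Ray& r, const AABB& b)
{
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / r.d[i];
        float t0  = (b.lo[i] - r.o[i]) * inv;
        float t1  = (b.hi[i] - r.o[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// The packet visits the node if any ray hits it; activeLanes reports how many
// rays actually wanted to (utilization = activeLanes / packet size).
bool packetVisitsNode(const std::vector<Ray>& packet, const AABB& nodeBounds,
                      std::size_t& activeLanes)
{
    activeLanes = 0;
    for (const Ray& r : packet)
        if (rayHitsBox(r, nodeBounds))
            ++activeLanes;
    return activeLanes > 0;
}
[/code]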
 