Realtime raytracing chipset?

I saw this blurb over @ /. and thought it warranted a thread: http://science.slashdot.org/article.pl?sid=05/03/15/0124218

Looks like they already have a prototype up and running... So we have a physics chipset coming out, and perhaps a raytracing chip to boot? :oops:
This seems like a good marriage of two chipsets that should be on the same card IMO.

Check out the gallery: http://www.saarcor.de/gallery.html

-edit- link to some games that offer raytracing via the game engine:

http://graphics.cs.uni-sb.de/RTGames/
 
This isn't the first time the SaarCOR raytracer FPGA has been demonstrated - I got to see it in action at the Eurographics Graphics Hardware conference half a year or so ago, and it has apparently been around for quite a bit longer than that too. IIRC, its fillrate at 90 MHz was about 4-5 MPixels/s (no AA, hard shadows only) with the demos they showed.
 
This is from the Quake 3 raytraced project:

It runs faster with more computers (about 20 fps at a combined 36 GHz, in 512x512 with 4xFSAA).

It would be good for things like animated movies and 3D rendering, but it has a long way to go before it goes into games.
 
This quote sums up better where they're at:

"Video and images are rendered in realtime on the SaarCOR Prototype FPGA implementation running at 90 MHz.
This small prototype with only one rendering pipeline achieves already realtime frame rates of 15 to 60 fps in 512x384 and 32 bit colour depth and between 5 to 15 fps at 1024x768 in our benchmark scenes as presented on this page. Thus the prototype with on 90 MHz already achieves the performance of the highly optimized OpenRT software ray tracer on a (virtual) Pentium-4 with 8 to 12 GHz"

Not quite ready IMO for the gaming world, but this is the first prototype. Perhaps with more pipelines, optimizations, & a higher core speed things might change significantly...
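
Quick back-of-the-envelope check on those numbers (my own arithmetic, not from the SaarCOR page): at those frame rates the single pipeline is only finishing a small fraction of a pixel per clock, so there is obvious headroom for more pipelines and a higher clock.

Code:
// Rough throughput estimate for the single-pipeline 90 MHz prototype.
// The clock and frame-rate figures come from the quote above; the rest is my arithmetic.
#include <cstdio>

int main() {
    const double clock_hz = 90e6;              // 90 MHz FPGA clock
    const double pixels   = 512.0 * 384.0;     // ~197k pixels per frame
    const double fps_low  = 15.0, fps_high = 60.0;

    // Pixels completed per clock cycle at the low and high end of the quoted range.
    printf("%.3f to %.3f pixels/clock\n",
           pixels * fps_low  / clock_hz,        // ~0.03
           pixels * fps_high / clock_hz);       // ~0.13
    return 0;
}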
 
MasterBaiter said:
This quote sums up better where they're at:

"Video and images are rendered in realtime on the SaarCOR prototype FPGA implementation running at 90 MHz. This small prototype, with only one rendering pipeline, already achieves realtime frame rates of 15 to 60 fps at 512x384 with 32-bit colour depth, and between 5 and 15 fps at 1024x768, in our benchmark scenes as presented on this page. Thus the prototype at only 90 MHz already achieves the performance of the highly optimized OpenRT software ray tracer on a (virtual) Pentium-4 with 8 to 12 GHz."

Not quite ready IMO for the gaming world, but this is the first prototype. Perhaps with more pipelines, optimizations, & a higher core speed things might change significantly...

Well this is an FPGA prototype.

If they moved it onto a production process, they could put in multiple pipelines and run at a much higher core speed.

I still don't see the quality advantages though.
 
The good thing about the chip is that it uses as little as 100-200 MB/s of external memory bandwidth, so you could easily scale the performance by adding more chips, or just adding more ray-tracing pipelines to the one chip.
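
To put that bandwidth figure in perspective, here's a quick sketch; the 6.4 GB/s bus is my own assumption for a high-end graphics card of that era, not a number from the SaarCOR papers.

Code:
// How many SaarCOR-style pipelines could one card's memory bus feed, bandwidth-wise?
#include <cstdio>

int main() {
    const double bw_per_pipeline = 200e6;   // 200 MB/s worst case, per the post above
    const double card_bandwidth  = 6.4e9;   // ~6.4 GB/s: my assumed mid-2000s card bus

    printf("roughly %.0f pipelines per card\n",
           card_bandwidth / bw_per_pipeline);   // ~32
    return 0;
}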

I have no doubt we're very close to having hardware powerful enough to do realtime raytracing. I'm just not so optimistic about the rendering quality these first raytracing "demos" will display compared to today's top-of-the-line raster engines with complex shaders and all the other hacks. Now if they were talking about photon mapping... *drools*
 
I've seen this before and read all their papers, and can't really say that I find it that interesting. First off, I think people have this confused correlation between raytracing and soft shadows, diffuse interreflection, caustics and the like. Just because the card produces pixels without rasterization doesn't mean that any of the effects we associate with pretty offline raytraced images are feasible, or even possible. Getting decent GI takes much, much more work than what they are doing here.

There isn't really anything in those images that you can't currently accomplish with modern graphics hardware besides the reflections between objects. Hard shadows? Check. Crappy, low-sampled "soft" shadows? Check. Sure, the reflections are nice, but honestly, do we really need that so badly? How often in games or in real life do you see massive numbers of highly specular objects in proximity to each other? The biggest problem is still transparency, but there are ways to achieve the right result.

You could add back the environment mapping and lightmaps to improve the visual quality, but the aliasing is simply unacceptable for modern real-time graphics. Also, you would have to add a lot more circuitry to handle anything near the complexity of current shaders.

I think people continue to overlook the difference in how z-buffered scanline rasterization and raytracing operate. In their naive form, both would be linear in runtime with the number of objects to be rendered. Raytracing makes the claim that it can operate in logarithmic time with spatial acceleration, while rasterization would be linear. While this may be true if you tried to render your scene in a single Draw() call, rasterization can benefit greatly from spatial subdivision as well, and can use recent hardware advances like top-of-the-pipe Z reject, where you render your scene once very quickly as a first step and then only compute shading once per pixel.
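
For reference, that "lay down depth first, then shade once per pixel" idea is just a depth pre-pass. A minimal OpenGL sketch, assuming drawSceneGeometryOnly() and drawSceneWithShading() are the application's own scene-submission functions:

Code:
#include <GL/gl.h>

// Hypothetical scene-submission hooks supplied by the application.
void drawSceneGeometryOnly();
void drawSceneWithShading();

void renderWithDepthPrepass() {
    // Pass 1: depth only - cheap vertex work, no colour writes, no expensive shaders.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawSceneGeometryOnly();

    // Pass 2: full shading. Early-Z rejects every fragment that isn't frontmost,
    // so the expensive shading runs roughly once per pixel.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);       // depth buffer is already correct
    glDepthFunc(GL_LEQUAL);      // accept only fragments matching the stored depth
    drawSceneWithShading();
}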

The biggest difference is in data coherency. When raytracing, the entire scene must be available, because without prior knowledge of the object's surface a reflected ray could go anywhere. Often adjacent rays lead to vastly different reflected rays, with very poor coherency of access. Rasterization differs in that each object is independent of the scene; you could theoretically page geometry on and off the GPU for each draw and have no effect. With computational power outpacing memory access the way it is, coherent access is needed to hide the latency of the memory system - you take the cache hit on the first read and get the next 7 for free, or whatever. Clever use of rasterization hardware can decompose many of the portions of simple raytracing that are not memory coherent into rasterization steps that are.
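
A toy illustration of the coherency point (my own sketch, nothing to do with SaarCOR's actual design): the tracer's inner loop can touch any object in the scene on any bounce, while a rasterizer streams each object through exactly once and never needs the rest of the scene.

Code:
#include <vector>

// Minimal placeholder types, just to show the access patterns.
struct Ray    { float ox, oy, oz, dx, dy, dz; };
struct Hit    { bool valid = false; Ray reflected{}; };
struct Object {
    // Stand-in for a real primitive test; always misses in this toy.
    Hit intersect(const Ray&) const { return {}; }
};

// Raytracing: every ray, and every bounce, may read ANY object, so the whole
// scene has to stay resident, and neighbouring pixels can hit wildly
// different parts of memory.
void tracePixel(Ray ray, const std::vector<Object>& scene) {
    for (int bounce = 0; bounce < 2; ++bounce) {
        Hit hit;
        for (const Object& obj : scene)      // scene-wide, potentially incoherent access
            if (Hit h = obj.intersect(ray); h.valid) hit = h;
        if (!hit.valid) return;
        ray = hit.reflected;                 // next bounce: anywhere in the scene
    }
}

// Rasterization: each object is processed independently, exactly once, so
// geometry can be streamed (or paged) through the pipeline in order.
void rasterize(const std::vector<Object>& scene) {
    for (const Object& obj : scene) {
        (void)obj;  // submit obj's triangles; nothing else in the scene is needed
    }
}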

I'm not saying that hardware raytracing won't be possible, but my guess is that it'll take at least another 10 years to be useful. And even when the computational power exists to support it, I feel that rasterization hardware would make much better use of it. For the majority of things, I feel that rasterization will be the way to go, and future GPUs will be general enough to let you implement raytracing etc. for the things that need it, while avoiding the cost for the things that don't.
 
I am imagining that all future-generation platforms
(beyond the upcoming next-gen ones) will have raytracing hardware built in.

ATI R800 or R1000

Xbox3

Nvidia NV80 or NV90

Playstation4

future Sega arcade board (and console? haha)

etc.

by the early part of the next decade, 5 to 7 years away.
 
Ray-tracing every pixel on-screen in real-time applications will happen *after* high-end non-real-time applications ray-trace every single pixel on-screen (routinely).

What I mean by that is ... movies these days are rendered without widespread use of ray-tracing (most of the pixels are rendered using rasterization techniques, ray-tracing is only used when the required effect can't be achieved by other means). Why would you choose to use a more expensive and less flexible (in terms of world geometry) technique in real-time applications when it is not generally required in high-end non-real-time applications?

What I mean by *that* is ... I think ray-tracing is overrated and that some people naively and incorrectly hold it up as a kind of Holy Grail.
 
I think the valid argument in favor of ray tracing is not that it is faster than rasterization (even theoretically) but that it allows a more straightforward way to model a physical environment. Just create the model, set the lights, and raytrace! No need to play with funny shadowing algorithms, the rays can do that. No need to fiddle with how to imitate translucency effects, the rays can do that! Well, that's the theory anyway.
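
As a concrete example of the "the rays can do that" point, hard shadows in a raytracer are literally one extra ray per light - no shadow maps or stencil volumes involved. A minimal sketch; lightDir(), occluded() and shadeDiffuse() are hypothetical helpers, not part of any real library:

Code:
#include <vector>

// Minimal placeholder types; a real tracer would have proper ones.
struct Vec3  { float x, y, z; };
struct Color { float r, g, b;
               Color& operator+=(const Color& c) { r += c.r; g += c.g; b += c.b; return *this; } };
struct Ray   { Vec3 origin, dir; };
struct Light { Vec3 pos; Color intensity; };
struct Scene { /* geometry plus acceleration structure */ };

// Hypothetical helpers (assumed, not from any real API):
Vec3  lightDir(const Light&, const Vec3& from);           // unit vector toward the light
bool  occluded(const Scene&, const Ray&, const Light&);   // does anything block the ray?
Color shadeDiffuse(const Light&, const Vec3& p, const Vec3& n);

Color shadePoint(const Vec3& p, const Vec3& n,
                 const std::vector<Light>& lights, const Scene& scene) {
    Color result{0.0f, 0.0f, 0.0f};
    for (const Light& light : lights) {
        Vec3 toLight = lightDir(light, p);
        // Offset the origin slightly along the normal to avoid self-intersection.
        Ray shadowRay{ { p.x + n.x * 1e-4f, p.y + n.y * 1e-4f, p.z + n.z * 1e-4f },
                       toLight };
        if (!occluded(scene, shadowRay, light))   // the entire shadow test is one ray cast
            result += shadeDiffuse(light, p, n);
    }
    return result;
}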

In practice, movies with special effects have always been made by compositing together multiple images, each of which was generated in the simplest and most efficient way for that particular effect. This was true in the days of optical effects, and so far as I'm aware, it's still true today. I don't see that changing -- it's just too flexible a technique. So I see ray tracing more as "one of the tools", rather than as something that can ever replace rasterization. So maybe future chips will have hardware support for ray tracing, but I believe that a ray-tracing-only chip is a research project (and useful as such), rather than being the future of graphics hardware.

I was at a SIGGRAPH panel session once where someone from an effects house was talking about how much better ray tracing is than depth buffering. I think that all his points about ease of modelling were valid. But when someone asked what method they had used for their most recent project (which had a tight schedule, as usual), he said they had used depth buffering...

Enjoy, Aranfell
 