D. Kirk and Prof. Slusallek discuss real-time raytracing

Waltz King

This is quite an interesting discussion, moderated by German gaming magazine "GameStar":
David Kirk, nVidia's well-known Chief Scientist, and Prof. Slusallek from Saarland University in Saarbrücken (Germany) discuss real-time raytracing vs. traditional rasterization on today's hardware.

http://www.gamestar.de/magazin/specials/hardware/17734/

The intro is in German, but the discussion itself is in English.
 
Given that a GeForce 6800 has 10-20x the floating-point performance of the Opteron system you describe, you are a poor programmer if you cannot get a ray tracer to run at least twice as fast on the GPU as on the CPU.

Maybe it's just the German way, but these guys seem kind of hostile.
 
SecretFire said:
Maybe it's just the German way, but these guys seem kind of hostile.

Mr. Kirk isn't from Germany, only the Professor is. And I don't think this statement is meant to be insulting.
But yeah, they don't like each other that much, do they? ;)
 
Neat article... now according to one of them they made a 90 MHz processor do ray tracing about as fast as an 8-12 GHz P4 would be expected to? Why couldn't something like that be integrated into next-gen GPUs, or perhaps even improved upon, so that next-gen graphics could make full use of ray tracing?
 
I thought it was an interesting discussion, but it seems to me that Professor Philipp Slusallek hadn't looked at raytracing on the 6800. I'd be more interested in such a discussion after he'd had a chance to test a raytracer on new hardware.
 
Chalnoth said:
I thought it was an interesting discussion, but it seems to me that Professor Philipp Slusallek hadn't looked at raytracing on the 6800. I'd be more interested in such a discussion after he'd had a chance to test a raytracer on new hardware.

How can he look at raytracing on hardware that doesn't exist for practical purposes? :?:
 
I think they were both feeling playful, in a sharp-elbows but no real animosity kind of way.

I found this interesting: "Too many PhD students have spent way too much of their time improving shadow tricks for rasterization -- and it still does not work correctly. The same is true for many other effects." and "With rasterization each different combination of effects would require its own complex programming both at the shader and the application level. You'd better not ask how much special-effects programming is involved in some of the great demos that you see showing off the latest graphics cards."

That's an interesting point in the sense that we are burning a lot of quality brain cycles on something that is inherently limited in how close it can ever get to reality. And, sort of like approaching the speed of light, the effort required ramps up the harder you push. That's a serious argument that if the focus shifts to something inherently more rewarding when done well, you are leveraging that R&D over a longer time frame going forward. Kind of like trying to improve the mule: you can make him bigger, stronger, faster, and able to work longer without rest... but he'll still never be a tractor no matter what you do, and a tractor is a much better solution.

Kirk seems to be saying that we don't have many dedicated transistors for rasterization anyway (comparatively), while the German guy is saying yeah, but you don't have any at all to accelerate ray tracing.
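
To make that combinatorial point concrete, here is a toy C++ sketch (the effect names are made up for illustration): every independent on/off effect roughly doubles the number of shader permutations a rasterization engine has to author and test, which is the cost Slusallek is pointing at.

Code:
// Toy illustration of the shader-permutation problem under rasterization.
// Each toggleable "effect" doubles the number of shader variants an engine
// must write, compile, and test. Effect names are hypothetical.
#include <cstdio>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> effects = {
        "shadow_map", "fog", "normal_map", "specular", "env_reflection", "skinning"
    };
    // Every subset of effects is a distinct shader permutation.
    unsigned long long variants = 1ULL << effects.size();
    std::printf("%zu toggleable effects -> %llu shader variants\n",
                effects.size(), variants);
    return 0;
}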
 
Well, one of the things about PC game hardware is that it needs to be backwards-compatible, so the rasterization hardware still needs to be there. Pure raytracing hardware of the kind Professor Slusallek seems to be pushing for just won't work in the PC space.

All that's really needed is for current hardware to be extended so that raytracing algorithms run efficiently. I'm still very interested in knowing how efficiently such algorithms could run on NV4x hardware. It may already be possible to write a decent raytracer on the NV4x, or we may have to wait for "DX next" hardware.
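
For what it's worth, the per-ray arithmetic itself is simple enough to map onto pixel shaders; here is a minimal C++ sketch of a ray-sphere hit test, just to show the kind of math involved (a sketch only: the hard part on a GPU is the scene traversal and branching around it, not this arithmetic).

Code:
// Minimal ray-sphere intersection: straight-line arithmetic of the kind
// that maps well onto programmable shaders. Not tied to any particular GPU.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Nearest positive hit distance along the ray, or -1 on a miss ('dir' must be normalized).
static float intersectSphere(const Vec3& origin, const Vec3& dir,
                             const Vec3& center, float radius) {
    Vec3 oc = sub(origin, center);
    float b = dot(oc, dir);                  // projection of the offset onto the ray
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;           // ray misses the sphere
    float t = -b - std::sqrt(disc);          // nearer of the two roots
    return (t > 0.0f) ? t : -1.0f;
}

int main() {
    Vec3 eye{0, 0, 0}, dir{0, 0, -1}, center{0, 0, -5};
    std::printf("hit at t = %f\n", intersectSphere(eye, dir, center, 1.0f));
    return 0;
}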
 
What I didn't get a sense for with either of those guys was how many transistors we'd be talking about for adding some useful dedicated hardware acceleration. It probably makes a pretty big difference if it is 10 million versus, say, 60 million, and how much acceleration you get for the cost (2x, 4x, etc).
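
Just to frame it, here's a back-of-envelope calculation with completely made-up numbers (the transistor budget, block size, speedup, and raytraced fraction below are all assumptions, not measurements):

Code:
// Back-of-envelope with hypothetical numbers: is it worth spending part of a
// fixed transistor budget on a dedicated ray-tracing block?
#include <cstdio>

int main() {
    const double budget     = 220e6;  // total transistor budget (order of an NV40)
    const double rtBlock    = 30e6;   // hypothetical dedicated RT block
    const double rtSpeedup  = 4.0;    // assumed speedup on the raytraced portion
    const double rtFraction = 0.5;    // assumed fraction of frame time spent ray tracing

    // Crude model: throughput is proportional to the transistors doing the work,
    // so frame time is proportional to work / transistors.
    double timeGeneralOnly = 1.0 / budget;                    // everything on general shaders
    double timeWithRtBlock =
        rtFraction / ((budget - rtBlock) * rtSpeedup) +       // raytraced part, accelerated
        (1.0 - rtFraction) / (budget - rtBlock);              // the rest on a smaller general pool

    std::printf("dedicated-RT design: %.2fx faster under these assumptions\n",
                timeGeneralOnly / timeWithRtBlock);
    return 0;
}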

I'd also wonder if developers are out there clamoring for it at all, as that seems to be a large part of what drives these kinds of decisions. If they aren't, then there are only three explanations: 1) They are so stuck in the current paradigm that they can't see outside their box. 2) Smart money says we aren't yet at a place where it is technologically feasible to produce usable performance (smart money is often wrong, of course, but that is a different discussion); "lots faster than before" is not necessarily "fast enough to use." 3) Kirk is right.
 
My main question is how much dedicated hardware would save in transistors vs. the shaders we already have.

The other problem with raytracing, for games, is that the performance cost isn't simple to predict, and so there could be significant concerns about performance in heavy-action sequences.
 
:?

Slusallek doesn't appear to be fully aware of the capabilities of a modern GPU. Kirk is right: when you're kicking the butt of today's CPUs, why add dedicated hardware? The trend over the last few years has been towards general stream processors, and ray tracing ain't the only game in town. Future graphics chips will be doing a lot more than just graphics.

We're likely to see PCs move towards CMP branchy scalar processors, with an auxiliary CMP in-order vector-oriented stream processor. At some point they will likely merge, and there'll be a huge fight between Nvidia/ATI and Intel. :)

Kirk should send him a 6800 with a copy of Gelato.
 
Chalnoth said:
My main question is how much dedicated hardware would save in transistors vs. the shaders we already have.

Now you lost me. You seem to be suggesting replacing current shaders. Aren't you the guy who just pointed out that during the transition we have to have *both* capabilities? I was essentially agreeing with you that we'd be looking at a "bolt-on" approach for several generations of transition.
 
Yeah, that's the point. You can always save transistors by having dedicated hardware, but you'd need to sacrifice transistors somewhere else to add that dedicated hardware.

Said another way, you could either have more available processing power, but have it not be quite as efficient with raytracing, or you could have less generalized processing power, but have it be very efficient for raytracing (i.e. more total processing power, but some of it is only available when raytracing is in use).

So, the question is, how much of a tradeoff would it be? Can GPUs get general enough, while maintaining good performance in traditional scenarios, to be efficient at raytracing without special hardware? Are we already there with the NV40? Or can you get very significant gains in raytracing performance from adding relatively few transistors?
 
I think the IHV should move towards RT cores ASAP. The rasterizer route is a slippery slope not to mention a hack. ;)

Of course, if they're making a lot of money pimping the existing infrastructure, they'd be stupid to move to a totally different architecture just for correctness' sake. Also, a lot of programmers and artists would be out of jobs if everything were correctly calculated using RT cores. The people who can think up band-aids as a substitute wouldn't have anything left to think up.
 
PC-Engine said:
I think the IHV should move towards RT cores ASAP. The rasterizer route is a slippery slope not to mention a hack. ;)
But it has one huge benefit: performance. To put it simply, it's easy to keep performance under control in a rasterizing environment (i.e. performance scales linearly in most variables). It's not so easy with raytracing (or other, more exotic offline rendering techniques), as performance doesn't necessarily scale linearly and will be highly scene-dependent.
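
A crude way to see that asymmetry (hypothetical scene numbers, and a deliberately naive ray tracer with no acceleration structure):

Code:
// Why rasterization cost is easier to predict: its work grows roughly linearly
// with triangles plus shaded pixels, while a naive ray tracer performs
// pixels x rays-per-pixel x objects intersection tests, and even with an
// acceleration structure the cost stays strongly scene- and view-dependent.
#include <cstdio>

int main() {
    const long long triangles  = 500000;           // hypothetical scene size
    const long long pixels     = 1024LL * 768LL;   // framebuffer
    const long long raysPerPix = 2;                // e.g. primary ray + one reflection ray

    long long rasterWork  = triangles + pixels;               // setup + shading, ~linear
    long long naiveRtWork = pixels * raysPerPix * triangles;  // brute-force intersection tests

    std::printf("rasterizer steps (rough):      %lld\n", rasterWork);
    std::printf("naive raytracer intersections: %lld\n", naiveRtWork);
    return 0;
}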
 