NV30 = Real time ray-tracing?

JonWoodruff

http://www.theregister.co.uk/content/54/25312.html

I suspect the Register has grossly misunderstood this paper. They point out that the paper seems to be describing a real solution, and it may well be, but I doubt it's NVidia's.

But this is worth talking about. How would real-time raytracing fit into the present scheme of things? How could it be a natural extension of the present rendering methods, as the article suggests?
 
Need to read it, but at first glance they mention "ray casting", which is what PowerVR does, and they mention tiling towards the end... I wonder why ;) But as said, I'd have to actually read it to comment more.

K-
 
He does not suggest NVIDIA's next architecture will use raytracing as its basic method of rendering ... he just extrapolates features of the architecture from comments in the paper.
 
MfA, they mention Imagine and it seems like you're right - the streaming model is good for 3d.

They do suggest separating streaming variables from caching variables in the paper, but apart from that...

I wonder how long it will take gfx companies to produce something similar to Imagine...
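To make the streaming-vs-caching distinction concrete, here is roughly how I picture it, just a toy C++ sketch with names I made up (not the paper's): ray records are streaming data that each kernel touches once and passes along, while the grid/scene is read-only data reused by every ray, so it belongs behind a cache.

#include <cstddef>
#include <vector>

// Hypothetical sketch: "streaming" data is touched once, in order,
// while "cached" data is read-only and reused by every ray.
struct Ray { float ox, oy, oz, dx, dy, dz; };
struct Hit { float t; int triangle; };

struct CachedScene {
    // Read-only and reused across all rays -> worth caching on-chip.
    std::vector<float> gridCells;
    std::vector<float> triangles;
};

// A "kernel" in the stream model: reads one input stream, writes one
// output stream, keeps no state between records.
void intersectKernel(const std::vector<Ray>& rays,
                     const CachedScene& scene,
                     std::vector<Hit>& hits)
{
    hits.resize(rays.size());
    for (std::size_t i = 0; i < rays.size(); ++i) {
        // ... traverse scene.gridCells, test against scene.triangles ...
        hits[i] = Hit{1e30f, -1};   // placeholder: "no hit found"
    }
}

The point being that the rays never need to be cached (they just stream through), and the scene never needs to be streamed (it sits still and gets read through the cache), which is roughly the separation I read them as suggesting.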
 
I think the Register's article is quite insightful BTW (or rather, I'm in agreement :).

Kristof, well spotted on the tiling ... of course their method extends to per-pixel rays not originating from a single point, which PowerVR unfortunately can't do. It's slightly late, and this is of course not my own realisation ... but I just can't miss out on the opportunity to point out that that feature (definable per-pixel rays) is another missed opportunity for PowerVR :) (Would it have been possible with low overhead?)
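For what it's worth, a toy example of what I mean by "definable per-pixel rays" (all names are mine): with a pinhole camera every ray shares one origin, which is all a pure visibility ray caster needs; if origin and direction can instead be looked up per pixel, say from buffers written by a previous pass, you get secondary rays, reflections and odd camera models almost for free.

#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

// Pinhole camera: every ray starts at the same eye point.
Ray pinholeRay(const Vec3& eye, const Vec3& pixelDir) {
    return Ray{eye, pixelDir};
}

// "Definable per-pixel rays": origin and direction are fetched per
// pixel, e.g. from buffers filled by an earlier pass, so a ray can
// start anywhere in the scene (reflection/refraction rays, etc.).
Ray perPixelRay(const std::vector<Vec3>& originBuf,
                const std::vector<Vec3>& dirBuf,
                int pixel) {
    return Ray{originBuf[pixel], dirBuf[pixel]};
}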

IMO tiling as a method to get spatial coherence is almost an unavoidable optimization if you are not restricted by cruddy APIs ... if everyone were still writing software engines today, the vastly increased ops/external-bandwidth ratio would have forced people to do so long ago to get maximum performance (hell, an example can be found today ... QSplat, or ask Eric Bron on Ace's Hardware ;). The realization in this paper is just an example of this.
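To illustrate, a throwaway sketch of the kind of screen-space binning a software engine would do to get that coherence (tile size and resolution are just numbers I picked):

#include <algorithm>
#include <vector>

const int TILE    = 32;           // tile size in pixels (arbitrary)
const int TILES_X = 1024 / TILE;  // assuming a 1024x768 framebuffer
const int TILES_Y = 768 / TILE;

struct Prim { int minX, minY, maxX, maxY; };  // screen-space bounds

// Bucket every primitive into the tiles its bounds overlap; each tile
// can then be rendered entirely on-chip, so external memory traffic is
// paid roughly once per tile instead of once per primitive per pixel.
void binPrimitives(const std::vector<Prim>& prims,
                   std::vector<std::vector<int> >& bins)
{
    bins.assign(TILES_X * TILES_Y, std::vector<int>());
    for (int i = 0; i < (int)prims.size(); ++i) {
        const Prim& p = prims[i];
        int tx0 = std::max(p.minX / TILE, 0);
        int tx1 = std::min(p.maxX / TILE, TILES_X - 1);
        int ty0 = std::max(p.minY / TILE, 0);
        int ty1 = std::min(p.maxY / TILE, TILES_Y - 1);
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * TILES_X + tx].push_back(i);
    }
}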

Psurge, of course Imagine is from Stanford too. If all you know is a hammer ... (that's not to say 3D rendering isn't a nail of course, just saying that while I agree with them ... they are not wholly unbiased, and I'm just me :).
 
hell, an example can be found today ... QSplat

To be fair, there are a few things that are true for QSplat but not for general-purpose rendering packages, and those let its tiled renderer realize some significant optimizations. And even with all of those assumptions made, the overhead for binning/sorting eventually breaks down as the splat size approaches 0 (I believe the original regression tests showed that Z-buffering was faster on average for images with an average splat size of < ~3 pixels). Of course, in some (most) cases the tiled renderer is between 6 and 15 times faster than the Z-buffer renderer, and no effort was made to do platform-specific optimizations (I'm sure using some prefetching/caching instructions could improve performance further).
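Just to put some (completely made-up) numbers on why the binning overhead stops paying off for tiny splats: binning costs roughly a fixed amount per splat, while the Z-buffer pays per covered pixel, so once a splat only covers a few pixels the fixed cost dominates. Something like:

#include <cstdio>

int main() {
    // Made-up constants, only the shape of the curves matters here.
    const double binCostPerSplat  = 10.0; // bucket/sort work, ~constant
    const double zbufCostPerPixel = 2.0;  // depth test + write
    const double tileCostPerPixel = 1.0;  // on-chip shading is cheaper

    for (double size = 1.0; size <= 8.0; size += 1.0) {
        double area  = size * size;                        // pixels per splat
        double tiled = binCostPerSplat + tileCostPerPixel * area;
        double zbuf  = zbufCostPerPixel * area;
        std::printf("splat %.0f px: tiled=%5.1f  zbuf=%5.1f\n",
                    size, tiled, zbuf);
    }
    return 0;
}

With those particular constants the crossover lands around a 3-pixel splat, which at least agrees with the regression figure mentioned above, but again, the numbers are invented.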
 
Interesting that two of the three people working on this project (Pat Hanrahan is the professor at Stanford, so he doesn't really count ;)) either work for NVIDIA or have worked for NVIDIA. William Mark currently works for NVIDIA (I think on Cg, as he also worked on the Stanford Real Time Shading Language) and Ian Buck worked on a next-generation nForce at NVIDIA. Not a big deal, but it is interesting.

http://graphics.stanford.edu/papers/rtongfx/

Also, two out of three people that worked on the Real Time Shading Language are also working for NVIDIA: William R. Mark and Svetoslav Tzvetkov :)
 