NocturnDragon
Jon Olick will talk about id's raycasting on sparse octree tech at siggraph.
Interesting thread:
http://ompf.org/forum/viewtopic.php?f=3&p=8319
"Currently using CUDA, but I plan on trying out Larabee."
Jon Olick will talk about id's raycasting on sparse octree tech at siggraph.
Well, it's more that as soon as rays diverge in the data structure traversal (i.e. at triangle edges), the SIMD stuff effectively gets scalarized to the point where you wind up with 1/16 throughput. Still not terrible (and still generally faster than a lot of similarly-priced CPUs!), but definitely noticeable. So ironically, you have trouble with scenes with lots of tiny triangles and high-frequency data structures... remind you of any very similar rendering algorithm that you know of?
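That 1/16 figure can be illustrated with a toy model (my own sketch, not anything from the thread): if a 16-wide SIMD packet must serially execute every distinct traversal path its lanes take, effective throughput is just one over the number of distinct paths.

```python
# Toy model of SIMD ray-packet divergence (illustrative only).
# Assumption: a packet serially executes each distinct traversal path
# taken by its lanes, with non-matching lanes masked off.

def effective_throughput(lane_paths):
    """lane_paths: one traversal-path ID per SIMD lane."""
    lanes = len(lane_paths)
    distinct = len(set(lane_paths))   # paths that must run serially
    return lanes / (lanes * distinct)  # i.e. 1 / distinct

coherent = [0] * 16            # all lanes agree -> full throughput
print(effective_throughput(coherent))    # 1.0
divergent = list(range(16))    # every lane differs -> worst case
print(effective_throughput(divergent))   # 0.0625 (the 1/16 above)
```

Real hardware is messier (masking, re-convergence, memory divergence), but this captures why tiny triangles hurt: more edges means more distinct paths per packet.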
No issues - certainly a lot of this stuff will come to the forefront in the next few years since we're definitely at the point where it's quite possible to implement an efficient ray tracer on consumer graphics hardware. Again, we're not quite at the point of generating the data structures there (ironically, the best data structure scatter/sort unit on GPUs is currently the rasterizer), but that will improve with time too.
Jon Olick will talk about id's raycasting on sparse octree tech at siggraph.
Not quite... different bounces are handled as different queries, so really the only interesting thing to look at is the distribution of ray queries that you need to render a given scene. If you're rendering primary rays, that distribution is given precisely with a simple projective transformation (a la rasterization), so it's possible to find much more efficient ways to evaluate the queries than a general-purpose spatial data structure like a kd-tree. In the general case of having *no* knowledge of the ray distribution in the scene, then a kd-tree is pretty close to optimal, but you're almost never in such a situation.

Since you can't tell in advance which rays are going to be more important for the render (assuming a simple, unbiased renderer), you have to (essentially) build a database that describes every possible (within memory constraints) "bounce" a ray can take until it's no longer visible.
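Going back to the primary-ray case for a moment: the claim that the distribution is "given precisely with a simple projective transformation" means the ray through each pixel falls out of a closed-form camera formula, with no spatial data structure involved. A minimal sketch (pinhole camera at the origin looking down -z; the function and parameters are my own illustration, not from the thread):

```python
import math

def primary_ray(px, py, width, height, fov_deg=90.0):
    # Direction of the primary ray through pixel (px, py) for a pinhole
    # camera at the origin looking down -z. Pure arithmetic: this is the
    # "simple projective transformation" -- no kd-tree traversal needed.
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2)
    x = (2 * (px + 0.5) / width - 1) * aspect * scale
    y = (1 - 2 * (py + 0.5) / height) * scale
    length = math.sqrt(x * x + y * y + 1)
    return (x / length, y / length, -1 / length)  # unit direction
```

The center pixel maps to straight down -z; every other pixel's ray is equally cheap, which is exactly why rasterization-style evaluation beats generic traversal for this query distribution.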
I'm not entirely sure what you mean here... rays ending up in the same place is almost never an interesting phenomenon (excepting maybe caustics or similar), and certainly doesn't help data structure traversal complexity at all. Complexity gains happen in ray tracing when you can find a group of rays that are all in close proximity and traveling in similar directions that you expect to all take similar spatial paths.

If you're lucky, more than one ray from different sources will land on the same point on a surface at the same angle, allowing you to reuse that particular ray "path." However there's no guarantee that as any ray makes more "bounces" it will coincide with a previous ray, thus creating a problem with trying to reduce the number of calculations needed.
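The coherence idea above (nearby origins plus similar directions, not shared endpoints) can be sketched as a toy binning step; real packet tracers are far more sophisticated, and the key function and thresholds here are hypothetical:

```python
# Hypothetical sketch: bin rays by origin cell and coarsely quantized
# direction, so rays expected to take similar traversal paths are
# grouped into one packet. Rays that merely *end* at the same point
# would not land in the same bin -- matching the point above.

def coherence_key(origin, direction, cell=1.0, dir_bits=2):
    cell_id = tuple(int(c // cell) for c in origin)       # nearby origins
    quantized = tuple(int((d + 1) * (2 ** dir_bits - 1) / 2)
                      for d in direction)                 # similar directions
    return cell_id + quantized

def group_rays(rays):
    groups = {}
    for origin, direction in rays:
        groups.setdefault(coherence_key(origin, direction), []).append(
            (origin, direction))
    return groups

rays = [((0.1, 0.1, 0.1), (0, 0, -1)),   # two coherent rays...
        ((0.2, 0.2, 0.2), (0, 0, -1)),   # ...share a packet
        ((5.0, 5.0, 5.0), (1, 0, 0))]    # incoherent ray: own packet
print({k: len(v) for k, v in group_rays(rays).items()})
```

After a bounce or two, secondary rays scatter across bins, which is exactly why the "no guarantee of coinciding with a previous ray" problem bites.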
Well, avoiding the topic of reprojection caching stuff for now, you can't generally avoid shooting rays again... the problem is that you don't know if a previous ray that hit some object hits some other object this frame unless you effectively shoot it again. Plus with lots of camera and object movement to be expected, it's hard to actually reuse previous frame data directly, although sometimes it can be used to accelerate or predict future computation.

The second issue involves having dynamic objects within a scene. You can skip recalculating rays that didn't intersect with an object before it moved, and you can skip recalculating any rays that don't intersect with it in its current position. However, there's no guarantee that any new ray calculated for the moved object will coincide with any previously calculated ray path, again making it difficult to reduce the total number of calculations needed.
There's an interesting quote from the above-linked thread by Jon Olick:

"LOL, funny how rasterization and ray-tracing aren't so different after all when you break it down like that."
Certainly he understands the similarities here, but I would even go so far as to say that the distinction between rasterization and raycasting isn't particularly meaningful anymore. Rasterization is a really fast way of building an acceleration structure for ray queries with a common origin (or arguably, anything "near" to this too). Of course there are a lot of other interesting ray queries which often demand different data structures to evaluate, but they're really all on the same continuum. So while I agree with what Jon's saying, I would phrase it a bit differently: "as we expand the capabilities of graphics hardware, we will be able to efficiently implement a larger range of rendering algorithms and pipelines than just traditional rasterization, including more generic ray queries than simple projection-based ones". My point is that it's not as if we're adding some hardware to make rasterization faster that we can hack into helping ray casting... no, it's really just that they're pretty similar at a fundamental level.

This is a particular point in my talk. I basically explain how it's kind of the destiny of graphics cards to have hardware that enhances raycasting, as it will likely be developed to enhance rasterization and then raycasting can piggyback on the technology.
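The "rasterization answers a ray query" point can be made concrete with a toy z-buffer (my own illustration, with made-up fragment data): keeping the closest fragment per pixel computes exactly the nearest-hit result that tracing a primary ray through that pixel would.

```python
# Sketch: a z-buffer resolves "nearest surface along the ray through
# each pixel" -- the same common-origin ray query a tracer answers by
# traversing a spatial structure. Fragment data here is hypothetical.

def zbuffer_nearest(fragments, width, height):
    # fragments: (x, y, depth, surface_id) tuples, as if produced by
    # rasterizing scene geometry into the framebuffer.
    depth = [[float('inf')] * width for _ in range(height)]
    hit = [[None] * width for _ in range(height)]
    for x, y, z, sid in fragments:
        if z < depth[y][x]:       # depth test keeps the closest fragment,
            depth[y][x] = z       # i.e. the first intersection a primary
            hit[y][x] = sid       # ray through this pixel would report
    return hit

frags = [(0, 0, 5.0, 'far_tri'), (0, 0, 2.0, 'near_tri')]
print(zbuffer_nearest(frags, 1, 1))   # [['near_tri']]
```

Note the inversion of loops: the tracer iterates over rays querying geometry, while the rasterizer iterates over geometry answering every pixel's query at once, which is why it only works when all the rays share an origin (or nearly so).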
It's not that simple... what sorts of rays do you want to be tracing? If it's primary rays (or arguably shadow rays), you have a pretty effective "ray tracer" already in your computer.

Curious question - would a more effective ray-tracer include something with more SIMDs, but less width, thus minimizing the traversal penalty, or would that destroy performance in other areas?
http://ompf.org/forum/viewtopic.php?f=3&p=8319
After reading the comments regarding the lack of a general cache in current GPUs and how it holds back the performance of sparse voxel octree ray casting, it crossed my mind: would eDRAM be a good solution here as a general cache? (Slapped together as a parent die a la R500/C1.)
If so, how much would be minimally useful? 2, 4, 6 MB?
Only if you admit that GPUs don't suck at raytracing... and you admit that there's nothing too special about it and it's unsurprising that AMD could be doing some raytracing on the GPU considering quite a few such demos have already been shown in the past. [Don't get me wrong, it's still cool, just not that surprising.] All of the discussion pertains to these points, so let me know if you're convinced.

Thread hijacked
Cool, thanks for the link!
Oh and a bit more:
http://www.youtube.com/watch?v=Bz7AukqqaDQ#
The Lightstage-captured woman is pretty impressive.
Jawed
That was more than just "impressive", it was downright astounding!
Jon Olick will talk about id's raycasting on sparse octree tech at siggraph.
That was more than just "impressive", it was downright astounding!
I was pretty stunned. When that section first came up, I thought it was supposed to be film of the actress doing the actual recording. When he said that this was the computer generated version, it was totally photorealistic (at least to the limits of the online movie).
raytracing just doesn't make much sense for primary rays or shadow rays
Nice summary! Reminds me of Sutherland et al.'s classic paper.

You can throw rays at the scene, or you can throw the scene at the rays... triangles, voxels... it doesn't matter.