Real-Time Ray Tracing: Holy Grail or Fools' Errand? *Partial Reconstruction*

I haven't heard about voxels in a while now.
I recall vaguely there were some stumbling blocks with those, including some wicked patents (edit: non-technical obstacles) on some pretty basic algorithms (marching cubes, I think, is or was one of them).
I think he uses the name to describe solid objects to be rendered, like you would want to build and animate objects from volumetric parts that have their own set of rules. Flex those muscles and shake those boobs!

It simply means volumetric texels, and he's talking about sub-pixel structures.
 
Carmack's ballpark figure on Larrabee is interesting, if very broad.
A factor of 2x performance advantage or disadvantage is a pretty big spread.
The lower bound would probably be unacceptable. I can't think of many benchmarks between AMD and Nvidia where being half as fast wasn't getting your ass handed to you.

He's positing a possible 4x advantage in raw performance and possibly 3-4x the clock speed.

The clock speed possibility doesn't seem likely, given the target clocks for Larrabee and the clocks of current GPUs (G92's shaders are a few hundred MHz shy of the lower bound of Larrabee's clock range, with two years to go).

The 4x advantage in raw performance seems like a stretch for an architecture that doesn't appear to place extreme ALU density as high on the priority list as more specialized GPUs do.

It would take more than just a single process shrink to get a margin that high.
 
I interpreted John to mean those would be the requirements for Larrabee to match the rasterizers of the day, rather than a prediction of Larrabee's actual relative performance in comparison to them.
 
In our current game title we are looking at shipping on two DVDs, and we are generating hundreds of gigs of data in our development before we work on compressing it down. It’s interesting that if you look at representing this data in this particular sparse voxel octree format it winds up even being a more efficient way to store the 2D data as well as the 3D geometry data, because you don’t have packing and bordering issues.

This might be a stretch of an interpretation, but it seems to me that he's implying here that they have to line the borders of their virtual texture pages with extra pixels, which suggests they are doing more of an 'SVT' (sparse virtual texturing) thing with the accompanying broken trilinear filtering.
 
I'm not convinced a compressed octree will give big gains over compressed geometry. Ignoring that, though, the representation in itself doesn't necessitate ray tracing; you can "rasterize" an octree too (i.e. splatting).
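For the curious, here's a minimal sketch of what splatting an octree could look like: depth-first traversal that stops and emits a single point once a node's projected footprint drops below a pixel. The node layout and the projection/framebuffer helpers are hypothetical stand-ins, not any particular engine's:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical node layout: 8 child indices (0 = empty) plus a color.
struct OctNode {
    uint32_t child[8]; // 0 means "no child"; a leaf has all zeros
    uint32_t color;
};

// Toy projection: screen-space size of a world-space extent at depth z.
static float projectedSize(float worldSize, float z) {
    const float focal = 500.0f; // pixels, arbitrary for the sketch
    return worldSize * focal / std::max(z, 0.001f);
}

// Stand-in framebuffer write.
static void drawPoint(float x, float y, float z, uint32_t c) {
    std::printf("splat (%.1f, %.1f) depth %.2f color %08x\n", x, y, z, c);
}

// Depth-first splatting: once a node's projected footprint shrinks
// below one pixel, emit a single point and stop recursing. A real
// splatter would also visit children front-to-back for the view.
void splat(const std::vector<OctNode>& pool, uint32_t idx,
           float cx, float cy, float cz, float half)
{
    const OctNode& n = pool[idx];
    bool leaf = true;
    for (int i = 0; i < 8; ++i)
        if (n.child[i]) { leaf = false; break; }

    if (leaf || projectedSize(2.0f * half, cz) < 1.0f) {
        drawPoint(cx, cy, cz, n.color); // one splat covers the node
        return;
    }
    for (int i = 0; i < 8; ++i) {
        if (!n.child[i]) continue;
        float q = half * 0.5f;
        splat(pool, n.child[i],
              cx + ((i & 1) ? q : -q),
              cy + ((i & 2) ? q : -q),
              cz + ((i & 4) ? q : -q), q);
    }
}
```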
 
And how many times do I still need to point out that rudimentary calculations based on 'die' shots of the raytracing FPGA clearly indicate that raytracing would also benefit from special-purpose hardware?

Jörg Schmittler's PhD thesis has some figures on the subject, though it's a bit old and doesn't consider some newer algorithms developed since then (BVHs instead of kd-trees). Does anyone know if any newer publications have been made?
 
I'd bet Carmack is really thinking about GPU raycasting into a virtual 3D texture, instead of using BVHs, kd-trees, or other acceleration structures.
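A minimal sketch of that kind of raycast, with a dense grid standing in for the virtual 3D texture (an Amanatides & Woo style 3D-DDA; all names here are illustrative):

```cpp
#include <cmath>
#include <cstdint>

// Dense placeholder grid standing in for a virtual/paged 3D texture.
const int N = 64;
uint8_t grid[N][N][N]; // 0 = empty, nonzero = occupied voxel

// Amanatides & Woo style 3D-DDA: step voxel by voxel along the ray
// until a filled cell is hit. Returns true and the voxel coordinate
// on a hit. Assumes the ray origin is already inside the grid.
bool raycast(float ox, float oy, float oz,
             float dx, float dy, float dz,
             int& hx, int& hy, int& hz)
{
    int ix = (int)ox, iy = (int)oy, iz = (int)oz;
    int stepX = dx > 0 ? 1 : -1;
    int stepY = dy > 0 ? 1 : -1;
    int stepZ = dz > 0 ? 1 : -1;
    // Ray parameter t at the next voxel boundary on each axis.
    float tx = dx != 0 ? (stepX > 0 ? ix + 1 - ox : ox - ix) / std::fabs(dx) : 1e30f;
    float ty = dy != 0 ? (stepY > 0 ? iy + 1 - oy : oy - iy) / std::fabs(dy) : 1e30f;
    float tz = dz != 0 ? (stepZ > 0 ? iz + 1 - oz : oz - iz) / std::fabs(dz) : 1e30f;
    // t advance per whole voxel on each axis.
    float dtx = dx != 0 ? 1.0f / std::fabs(dx) : 1e30f;
    float dty = dy != 0 ? 1.0f / std::fabs(dy) : 1e30f;
    float dtz = dz != 0 ? 1.0f / std::fabs(dz) : 1e30f;

    while (ix >= 0 && ix < N && iy >= 0 && iy < N && iz >= 0 && iz < N) {
        if (grid[ix][iy][iz]) { hx = ix; hy = iy; hz = iz; return true; }
        if (tx <= ty && tx <= tz) { ix += stepX; tx += dtx; }
        else if (ty <= tz)        { iy += stepY; ty += dty; }
        else                      { iz += stepZ; tz += dtz; }
    }
    return false; // ray left the grid without hitting anything
}
```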
 
I'm not convinced a compressed octree will give big gains over compressed geometry. Ignoring that, though, the representation in itself doesn't necessitate ray tracing; you can "rasterize" an octree too (i.e. splatting).
I think he's talking about storing bounding volumes in a tree, most likely using curved surfaces.
 
I dunno what drew me to this old thread, but here goes... :smile:

Well, most people who dismiss raytracing do so saying that games don't have a million shiny spheres, so the advantage that raytracing brings to the table with correct reflections/refractions is useless.

On the contrary, just take a look around, every single object (well, except maybe extremely matte surfaces) reflects to a certain degree, and that adds a good deal to the realism! Tables, bottles, spoons, doors, cell phones, computer cases, even that cooler on your slick new 4870! :D (On a side note, I remember the first few UE3 demos had shiny bricks even, but that may have been due to Epic's artists getting carried away with the all-new shader pipeline :p)

The point is, rasterization, even with DirectX 10.1's cube map arrays, would never be able to achieve these (usually subtle, but important) reflections on all objects, even if you used LoDs and instancing.
 
To be more clear, I'm not referring to "specular" highlights. I'm talking about actual reflections with Fresnel behavior.
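For concreteness, the Fresnel behavior in question is the rise in reflectance toward grazing angles; Schlick's approximation is the standard cheap model of it (a minimal sketch):

```cpp
#include <cmath>

// Schlick's approximation to Fresnel reflectance:
//   R(theta) = R0 + (1 - R0) * (1 - cos(theta))^5
// where R0 is the reflectance at normal incidence and theta is the
// angle between the view direction and the surface normal.
float fresnelSchlick(float cosTheta, float r0)
{
    float m = 1.0f - cosTheta;
    return r0 + (1.0f - r0) * m * m * m * m * m;
}

// Example: for glass (n ~ 1.5), R0 = ((n - 1) / (n + 1))^2 ~ 0.04,
// so a head-on view reflects ~4% while a grazing view approaches 100%.
// This is why even "barely shiny" tables and doors still read as
// reflective at shallow angles.
```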
 
Reflections are definitely important and useful and I think everyone agrees that we need them. But still, remember that planar reflections can be handled extremely efficiently and accurately with rasterization. That said, it'd be great to be able to shoot some secondary reflection rays!

Still, we don't need to throw out rasterization to shoot some secondary reflection rays ;) There's still no compelling reason IMHO to ray trace primary rays. I don't mean to beat a dead horse here, but it's important that people really treat this problem as one of data structure choice rather than "accuracy vs performance" or similar because it's quite possible to get 100% "accurate" results using any choice or mix of rasterization and ray tracing... certain sets of visibility queries are just more efficiently evaluated using one or the other, or even some point in the smooth and continuous domain of algorithms between the two.
 
I'm also not so sure that I'd be completely prepared to say that reflections alone are a good enough reason to go to pure raytracing. Yes, there are the arguments about consistency and simplicity and so on (Pete Shirley in particular seems to like those arguments), but to really make an argument for raytracing, you have to start getting into more complex uses of secondary rays, and that gets into fields where you can still fake it in a rasterized world, but not without severe limitations which may not be acceptable in some applications, or may end up pigeonholing the usages. Once you step into that realm, though, you're talking about things whose power demands say "just wait another 15 years or so, and raytracing will be there."

As for primary rays, about the only arguments I can see for that are per-pixel projection even with non-linear warp operators (something that rasterizers will probably never do at any point in time), and subpixel geometry sampling. Of course, that's really a question of how compelling a reason that is. Things like subpixel geometry are generally something you want to avoid rather than utilize -- not just because you can't rasterize them, but because they're just wasteful. Non-linear warp operators and per-pixel projection are a far more generic thing, but the cases where we're truly at a loss without them are open to several alternatives (though said alternatives are guaranteed to suck in other ways).
 
Reflections are definitely important and useful and I think everyone agrees that we need them. But still, remember that planar reflections can be handled extremely efficiently and accurately with rasterization. That said, it'd be great to be able to shoot some secondary reflection rays!

Still, we don't need to throw out rasterization to shoot some secondary reflection rays ;) There's still no compelling reason IMHO to ray trace primary rays. I don't mean to beat a dead horse here, but it's important that people really treat this problem as one of data structure choice rather than "accuracy vs performance" or similar because it's quite possible to get 100% "accurate" results using any choice or mix of rasterization and ray tracing... certain sets of visibility queries are just more efficiently evaluated using one or the other, or even some point in the smooth and continuous domain of algorithms between the two.

Planar reflections are easily done by rasterization when it's limited to a small number of objects. But for any real-world scene, it's not really that limited. For example, if we take a simple cubic object, depending upon the view direction, you'd usually need to do at least 3 planar reflection renderings. Not to mention blurring them to achieve the glossy effect instead of perfect specularity. Even a simple scene with a few dozen cubic shaped objects would need a whole lot of state/buffer changes and would bring things to a crawl, even if we could get away with low resolution render targets. Rasterization simply doesn't scale well when handling delta functions.
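To make that cost concrete: each planar reflector needs an extra scene pass with the geometry mirrored across its plane (plus flipped winding), which is where the per-face render count comes from. A minimal sketch of the mirror transform, column-vector convention, not tied to any API:

```cpp
// Reflection of world space across the plane n.p + d = 0 (n unit
// length): linear part I - 2*n*n^T, translation -2*d*n. Rendering
// the scene with view * R and reversed triangle winding produces
// the mirrored image that gets sampled as the planar reflection.
struct Mat4 { float m[4][4]; };

Mat4 reflectionMatrix(float nx, float ny, float nz, float d)
{
    Mat4 r = {}; // zero-initialized
    r.m[0][0] = 1 - 2*nx*nx; r.m[0][1] =    -2*nx*ny; r.m[0][2] =    -2*nx*nz; r.m[0][3] = -2*nx*d;
    r.m[1][0] =    -2*ny*nx; r.m[1][1] = 1 - 2*ny*ny; r.m[1][2] =    -2*ny*nz; r.m[1][3] = -2*ny*d;
    r.m[2][0] =    -2*nz*nx; r.m[2][1] =    -2*nz*ny; r.m[2][2] = 1 - 2*nz*nz; r.m[2][3] = -2*nz*d;
    r.m[3][3] = 1;
    return r;
}

// A dozen cubes at up to 3 visible faces each means dozens of these
// full extra passes per frame, which is exactly the scaling problem
// being described above.
```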

I'm currently investigating using rasterization instead of primary rays as a part of my summer project, so I'd say I'm not qualified to comment on how much faster it would be compared to, say, packet tracing, but I expect the latter would be on par if not faster when done on the CPU with all the optimizations thrown in (unfortunately I'm not allowed to use the GPU, so call it an apples-to-apples comparison if you will :smile:).
 
I'm also not so sure that I'd be completely prepared to say that reflections alone are a good enough reason to go to pure raytracing. Yes, there are the arguments about consistency and simplicity and so on (Pete Shirley in particular seems to like those arguments), but to really make an argument for raytracing, you have to start getting into more complex uses of secondary rays, and that gets into fields where you can still fake it in a rasterized world, but not without severe limitations which may not be acceptable in some applications, or may end up pigeonholing the usages. Once you step into that realm, though, you're talking about things whose power demands say "just wait another 15 years or so, and raytracing will be there."

I completely agree with you on that, I don't believe moving to Whitted raytracing is going to have any significant benefit, visually or otherwise. My only argument is with people saying the reflection/refraction that's handleable with rasterization today is good enough. But I'm not so sure that it would take 15 years to accelerate path tracing or equivalent algorithms and get them running in real-time. Some of the stuff I'm working on gives me a time frame a third of that ;) No, I'm not a raytracing nut or anything; just a hardcore D3D programmer here :LOL:
 
On the contrary, just take a look around, every single object (well, except maybe extremely matte surfaces) reflects to a certain degree, and that adds a good deal to the realism! Tables, bottles, spoons, doors, cell phones, computer cases, even that cooler on your slick new 4870!
The thing is that the eye is very easily fooled by similar but incorrect reflections on those types of objects. Moreover, reflections are far from the limiting factor in realism, as plenty of objects in the real world have almost no reflection but still don't look real in realtime rendered scenes.

However, I will admit that there are some scenes that really do need raytracing. The best argument for raytracing I've seen is that ATI demo of the city:
http://www.youtube.com/watch?v=BzquM5Td6bM
The shiny cars and buildings look great with accurate reflections.
 
Is there any info on how that demo was rendered, along with resolution and so forth? A PPT series a la the Toy Shop (Toy Store?) demo from a couple of years back would be nice.
 
Not to mention blurring them to achieve the glossy effect instead of perfect specularity.
Incidentally if you're talking about glossy reflections, rasterization tends to win by a *lot*. In those cases extreme accuracy per-ray isn't required and blurring is a hell of a lot cheaper than shooting the number of rays that are required to get any reasonable sampling of glossy reflections!
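A sketch of the blurring approach being contrasted here: prefilter the reflection texture into a mip chain and pick a mip level from surface roughness, so one filtered fetch replaces many rays (the mapping below is illustrative, not a standard formula):

```cpp
// Prefiltered-blur glossy reflections: rougher surface -> wider
// reflection cone -> blurrier (higher) mip of the prefiltered
// reflection/environment texture.
float glossyMipLevel(float roughness, int mipCount)
{
    return roughness * (mipCount - 1); // illustrative linear mapping
}

// For comparison, Monte Carlo glossy reflection costs N full ray
// traversals per shaded point; even N = 16 per pixel is vastly more
// work than a single filtered texture lookup.
```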

Even a simple scene with a few dozen cubic shaped objects would need a whole lot of state/buffer changes and would bring things to a crawl, even if we could get away with low resolution render targets. Rasterization simply doesn't scale well when handling delta functions.
Yeah, yeah, if you create a scene with arbitrarily many entirely uncorrelated and incoherent rays, saying rasterization is going to be slow there isn't exactly surprising ;) But it's not clear to me that such a scene is actually useful, or moreover that once you start to get a ton of rays firing around you can't find some rasterization-compatible coherence there. And to summarize what I think Mintmaster and I agree on here: the only case where throwing rasterization out the window entirely makes sense is if you indeed find a case where you need to shoot so many incoherent secondary rays that a bit less efficiency for the primary rays disappears in the noise.

I'm currently investigating using rasterization instead of primary rays as a part of my summer project, so I'd say I'm not qualified to comment on how much faster it would be compared to, say, packet tracing, but I expect the latter would be on par if not faster when done on the CPU with all the optimizations thrown in (unfortunately I'm not allowed to use the GPU, so call it an apples-to-apples comparison if you will :smile:).
In my experience for most scenes it still makes a huge difference... there are just a hell of a lot of primary rays compared to secondary rays still. And just to make your job more difficult, an "apples to apples" comparison technically requires recomputing your entire acceleration structure (kd-tree, BVH, BIH or similar) on the fly, as that's effectively what simple rasterization is doing :) Basically you either have to optimize neither, or optimize the hell out of both to be "fair"... and when you do that I think you'll see that they're actually pretty similar, if you haven't already. Once you start to think about tile-based rasterization, deferred rendering and similar things you'll be surprised how blurred the line becomes!

Anyways interesting stuff as always. Be sure to link the results of your study when it's done!
 