Real-Time Ray Tracing : Holy Grail or Fools’ Errand? *Partial Reconstruction*

Discussion in 'Rendering Technology and APIs' started by TheAlSpark, Oct 18, 2007.

  1. Frank

    Frank Certified not a majority Veteran

    Great interview!
     
  2. Frank

    Frank Certified not a majority Veteran

    I think he uses the name to describe solid objects to be rendered. Like you would want to build and animate objects from volumetric parts that have their own set of rules. Flex those muscles and shake those boobs!

    It simply means volumetric texels, and he's talking about sub-pixel structures.
     
  3. Frank

    Frank Certified not a majority Veteran

    Me and everyone else around here as well!
     
  4. 3dilettante

    3dilettante Legend Alpha

Carmack's ballpark figure on Larrabee is interesting, if rather broad.
    A factor of 2x performance advantage or disadvantage is a pretty big spread.
    The lower bound would probably be unacceptable. I can't think of many benchmarks between AMD and Nvidia where being half as fast wasn't getting your ass handed to you.

He's positing a possible 4x advantage in raw performance and possibly 3-4x the clock speed.

The clock speed possibility doesn't seem likely, given the target clocks for Larrabee and the clocks of current GPUs (G92's shaders are a few hundred MHz shy of the lower bound of Larrabee's clock range, with two years to go).

    The 4x advantage in raw performance seems like a stretch on an architecture that does not seem to have extreme ALU density as high up on the priority list as more specialized GPUs.

    It would take more than just a single process shrink to get a margin that high.
     
  5. scificube

    scificube Regular

I interpreted John to mean those would be requirements for Larrabee to match the rasterizers of the day, rather than a prediction of Larrabee's actual relative performance in comparison to them.
     
  6. Ilfirin

    Ilfirin Regular

This might be a stretch of an interpretation, but it seems to me that he's implying here that they have to line the borders of their virtual texture pages with extra padding pixels, which suggests they're doing more of an 'SVT' thing, with the accompanying broken trilinear filtering.
     
  7. MfA

    MfA Legend

I'm not convinced a compressed octree will give big gains over compressed geometry. Ignoring that, though: the representation in itself doesn't necessitate ray tracing; you can "rasterize" an octree too (i.e. splatting).
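The splatting idea can be sketched as a front-to-back recursive traversal that projects voxel centers into a z-buffered framebuffer. Here is a minimal CPU sketch in Python, assuming a hypothetical `Node` octree class and an orthographic camera looking down +z; all names are illustrative, not from any actual implementation:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    """Octree node: a leaf carries a color, an inner node carries 8 children."""
    color: Optional[Tuple[int, int, int]] = None
    children: Optional[List[Optional["Node"]]] = None

    @property
    def is_leaf(self) -> bool:
        return self.children is None

def splat_octree(node, cx, cy, cz, size, depth_buf, color_buf):
    """Recursively splat the node centered at (cx, cy, cz) with edge `size`.

    Orthographic projection: (x, y) map directly to pixel coordinates and
    z is the depth. Children are visited front-to-back, so the depth test
    behaves like an ordinary rasterizer's z-buffer. A real splatter would
    also stop descending once a node shrinks below one pixel.
    """
    if node is None:
        return
    if node.is_leaf:                          # emit a single pixel
        px, py = int(cx), int(cy)
        h, w = len(depth_buf), len(depth_buf[0])
        if 0 <= px < w and 0 <= py < h and cz < depth_buf[py][px]:
            depth_buf[py][px] = cz
            color_buf[py][px] = node.color
        return
    half, quarter = size / 2.0, size / 4.0
    # Child i's offset sign is encoded in bits 0 (x), 1 (y), 2 (z).
    order = sorted(range(8), key=lambda i: quarter if i & 4 else -quarter)
    for i in order:                           # nearest-z children first
        splat_octree(node.children[i],
                     cx + (quarter if i & 1 else -quarter),
                     cy + (quarter if i & 2 else -quarter),
                     cz + (quarter if i & 4 else -quarter),
                     half, depth_buf, color_buf)
```

A perspective camera would project (cx, cy, cz) through the view transform instead of dropping z, but the traversal itself is unchanged.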
     
  8. hoho

    hoho Veteran

Jörg Schmittler's PhD thesis has some figures on the subject, though it's a bit old and doesn't consider some newer algorithms that have been developed since then (BVH instead of kd-trees). Does anyone know if any newer publications have been made?
     
  9. TimothyFarrar

    TimothyFarrar Regular

    I'd bet Carmack is really thinking about GPU raycasting into a virtual 3D texture, instead of using BVH or KD-trees or other acceleration structures.
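Raycasting into a dense 3D texture doesn't need a tree at all; the classic approach is a 3D-DDA grid traversal in the style of Amanatides & Woo. A minimal sketch of that traversal (illustrative names, not Carmack's actual scheme):

```python
import math

def raycast_grid(origin, direction, grid):
    """Return the first occupied voxel hit by a ray, via 3D-DDA traversal.

    grid[z][y][x] is truthy for solid voxels. Returns the hit voxel's
    (x, y, z) index, or None if the ray exits the grid. The loop steps
    across whichever voxel boundary the ray reaches next.
    """
    dims = (len(grid[0][0]), len(grid[0]), len(grid))
    ijk = [int(math.floor(c)) for c in origin]      # current voxel index
    step, t_max, t_delta = [], [], []
    for o, d in zip(origin, direction):
        if d > 0:
            step.append(1)
            t_max.append((math.floor(o) + 1.0 - o) / d)  # t to next +face
            t_delta.append(1.0 / d)
        elif d < 0:
            step.append(-1)
            t_max.append((o - math.floor(o)) / -d)       # t to next -face
            t_delta.append(-1.0 / d)
        else:
            step.append(0)
            t_max.append(math.inf)                       # never crosses
            t_delta.append(math.inf)
    while all(0 <= ijk[a] < dims[a] for a in range(3)):
        if grid[ijk[2]][ijk[1]][ijk[0]]:
            return tuple(ijk)
        axis = min(range(3), key=lambda a: t_max[a])     # nearest boundary
        ijk[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None
```

On a GPU the same loop runs per-pixel in a shader, sampling the 3D texture instead of a nested list; a sparse/virtual texture just adds an indirection per page.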
     
  10. Frank

    Frank Certified not a majority Veteran

    I think he's talking about storing bounded volumes in a tree, most likely by using curved surfaces.
     
  11. Wysicon

    Wysicon Newcomer

    I dunno what drew me to this old thread, but here goes... :smile:

Well, most people who dismiss raytracing do so saying that games don't have a million shiny spheres, so the advantage that raytracing brings to the table with correct reflections/refractions is useless.

On the contrary, just take a look around: every single object (well, except maybe extremely matte surfaces) reflects to a certain degree, and that adds a good deal to the realism! Tables, bottles, spoons, doors, cell phones, computer cases, even that cooler on your slick new 4870! :grin: (On a side note, I remember the first few UE3 demos had shiny bricks even, but that may have been due to Epic's artists getting carried away with the all-new shader pipeline :razz:)

The point is, rasterization, even with DirectX 10.1's cube map arrays, would never be able to achieve (these usually subtle, but important) reflections on all objects, even if you used LoDs and instancing.
     
  12. Wysicon

    Wysicon Newcomer

    To be more clear, I'm not referring to "specular" highlights. I'm talking about actual reflections with Fresnel behavior.
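For reference, the Fresnel dependence being described is usually approximated in shaders with Schlick's formula; a tiny sketch:

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance.

    cos_theta: cosine of the angle between the view direction and the
               surface normal (1.0 = head-on, 0.0 = grazing).
    f0:        reflectance at normal incidence; roughly 0.04 for common
               dielectrics (glass, plastic), much higher for metals.
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

Head-on, a plastic-like surface reflects only about 4% of the light, but at grazing angles reflectance climbs toward 100%, which is exactly the subtle everywhere-reflection behavior being argued for above.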
     
  13. Andrew Lauritzen

Andrew Lauritzen Moderator Veteran

    Reflections are definitely important and useful and I think everyone agrees that we need them. But still, remember that planar reflections can be handled extremely efficiently and accurately with rasterization. That said, it'd be great to be able to shoot some secondary reflection rays!

    Still, we don't need to throw out rasterization to shoot some secondary reflection rays ;) There's still no compelling reason IMHO to ray trace primary rays. I don't mean to beat a dead horse here, but it's important that people really treat this problem as one of data structure choice rather than "accuracy vs performance" or similar because it's quite possible to get 100% "accurate" results using any choice or mix of rasterization and ray tracing... certain sets of visibility queries are just more efficiently evaluated using one or the other, or even some point in the smooth and continuous domain of algorithms between the two.
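For what it's worth, the planar case is cheap precisely because it's just one extra rasterization pass through a reflection transform about the mirror plane. A sketch of building that 4x4 matrix (standard Householder-style construction; function names are illustrative):

```python
def reflection_matrix(nx, ny, nz, d):
    """4x4 matrix reflecting points about the plane n·p + d = 0.

    (nx, ny, nz) must be a unit normal. Row-major, for column vectors
    (x, y, z, 1). Rendering the scene through view * reflection gives
    the mirrored image a rasterizer composites under the mirror surface.
    """
    return [
        [1 - 2*nx*nx,   -2*nx*ny,    -2*nx*nz,  -2*nx*d],
        [  -2*ny*nx, 1 - 2*ny*ny,    -2*ny*nz,  -2*ny*d],
        [  -2*nz*nx,   -2*nz*ny,  1 - 2*nz*nz,  -2*nz*d],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(m, p):
    """Apply a 4x4 matrix to a 3D point (implicit w = 1)."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))
```

One caveat that makes this a rasterization-side trick rather than a general solution: each mirror plane in view needs its own pass, which is the scaling problem discussed below.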
     
  14. ShootMyMonkey

    ShootMyMonkey Veteran

    I'm also not so sure that I'd be completely prepared to say that reflections alone are a good enough reason to go to pure raytracing. Yes, there are the arguments about consistency and simplicity and so on (Pete Shirley in particular seems to like those arguments), but to really make an argument for raytracing, you have to start getting into more complex uses of secondary rays and that gets into fields where you can still fake it in a rasterized world, but not without severe limitations which may not be acceptable in some applications, or may end up pigeonholing the usages. Once you step into that realm, though, you're talking about things that put power demands to say "just wait another 15 years or so, and raytracing will be there."

As for primary rays, about the only arguments I can see for that are per-pixel projection with non-linear warp operators (something that rasterizers will probably never do at any point in time), and subpixel geometry sampling. Of course, that's really a question of how compelling a reason that is. Things like subpixel geometry are generally something you want to avoid rather than utilize -- not just because you can't rasterize them, but because they're just wasteful. Non-linear warp operators and per-pixel projection are a far more generic thing, but the cases where we're truly at a loss without them are open to several alternatives (though said alternatives are guaranteed to suck in other ways).
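As an illustration of why per-pixel projection falls out of ray casting for free: an equidistant fisheye maps pixel radius linearly to view angle, which no single 4x4 projection matrix can express, yet generating the per-pixel rays is trivial. A sketch with illustrative names:

```python
import math

def fisheye_ray(px, py, width, height, fov_radians):
    """Per-pixel ray direction for an equidistant fisheye projection.

    Returns a unit direction for a camera looking down -z, or None for
    pixels outside the fisheye image circle. The angle from the optical
    axis is proportional to the pixel's radius from the image center, a
    mapping no linear rasterization projection can produce directly.
    """
    # Normalized device coordinates in [-1, 1], aspect-corrected.
    u = (2.0 * (px + 0.5) / width - 1.0) * (width / height)
    v = 1.0 - 2.0 * (py + 0.5) / height
    r = math.hypot(u, v)
    if r > 1.0:
        return None                     # outside the image circle
    theta = r * fov_radians / 2.0       # angle from axis, linear in radius
    phi = math.atan2(v, u)
    sin_t = math.sin(theta)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), -math.cos(theta))
```

The usual rasterizer workaround is rendering several linear views and resampling them into the warp, which is exactly the kind of alternative that "sucks in other ways".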
     
  15. Wysicon

    Wysicon Newcomer

Planar reflections are easily done by rasterization when they're limited to a small number of objects. But for any real-world scene, it's not really that limited. For example, if we take a simple cubic object, depending upon the view direction you'd usually need to do at least three planar reflection renderings. Not to mention blurring them to achieve a glossy effect instead of perfect specularity. Even a simple scene with a few dozen cubic objects would need a whole lot of state/buffer changes and would bring things to a crawl, even if we could get away with low-resolution render targets. Rasterization simply doesn't scale well when handling delta functions.

I'm currently investigating using rasterization in place of tracing primary rays as part of my summer project, so I'd say I'm not yet qualified to comment on how much faster it would be compared to, say, packet tracing. But I expect the latter would be on par if not faster when done on the CPU with all the optimizations thrown in (unfortunately I'm not allowed to use the GPU, so call it an apples-to-apples comparison if you will :smile:).
     
    Last edited by a moderator: Jul 13, 2008
  16. Wysicon

    Wysicon Newcomer

I completely agree with you on that; I don't believe moving to Whitted raytracing is going to have any significant benefit, visually or otherwise. My only argument is with people saying the reflection/refraction that's handleable with rasterization today is good enough. But I'm not so sure it would take 15 years to accelerate path tracing or equivalent algorithms and get them running in real time. Some of the stuff I'm working on gives me a time frame a third of that :wink: No, I'm not a raytracing nut or anything; hardcore D3D programmer here :lol:
     
  17. Mintmaster

    Mintmaster Veteran

    The thing is that the eye is very easily fooled by similar but incorrect reflections on those types of objects. Moreover, reflections are far from the limiting factor in realism, as plenty of objects in the real world have almost no reflection but still don't look real in realtime rendered scenes.

    However, I will admit that there are some scenes that really do need raytracing. The best argument for raytracing I've seen is that ATI demo of the city:
    http://www.youtube.com/watch?v=BzquM5Td6bM
    The shiny cars and buildings look great with accurate reflections.
     
  18. Acert93

    Acert93 Artist formerly known as Acert93 Legend

Is there any info on how that demo was rendered, along with resolution and so forth? A PPT series à la the Toy Shop (Toy Store?) demo from a couple of years back would be nice.
     
  19. Andrew Lauritzen

Andrew Lauritzen Moderator Veteran

    Incidentally if you're talking about glossy reflections, rasterization tends to win by a *lot*. In those cases extreme accuracy per-ray isn't required and blurring is a hell of a lot cheaper than shooting the number of rays that are required to get any reasonable sampling of glossy reflections!

Yeah, yeah: if you create a scene with arbitrarily many entirely uncorrelated and incoherent rays, saying rasterization is going to be slow there isn't exactly surprising ;) But it's not clear to me that such a scene is actually useful, or moreover that once you have a ton of rays firing around you can't find some rasterization-compatible coherence in there. And to summarize what I think Mintmaster and I agree on here: the only case where throwing rasterization out the window entirely makes sense is if you indeed find a case where you need to shoot so many incoherent secondary rays that a bit less efficiency for the primary rays disappears in the noise.

    In my experience for most scenes it still makes a huge difference... there are just a hell of a lot of primary rays compared to secondary rays still. And just to make your job more difficult, an "apples to apples" comparison technically requires recomputing your entire acceleration structure (kd-tree, bvh, bih or similar) on the fly, as that's effectively what simple rasterization is doing :) Basically you either have to optimize neither, or optimize the hell out of both to be "fair"... and when you do that I think you'll see that they're actually pretty similar if you haven't already. Once you start to think about tile-based rasterization, deferred rendering and similar things you'll be surprised how blurred the line becomes!
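On that rebuild point, the cost depends heavily on the build algorithm; a naive top-down median-split BVH is just recursive sorting over triangle centroids. A toy sketch of such a build (illustrative, not any shipping tracer's scheme):

```python
def build_bvh(tris, max_leaf=4):
    """Top-down median-split BVH over triangle centroids.

    tris: list of ((x,y,z), (x,y,z), (x,y,z)) triangles. Returns nested
    dicts: a leaf holds triangle indices, an inner node holds two children;
    every node carries an axis-aligned bounding box.
    """
    def bounds(idxs):
        pts = [p for i in idxs for p in tris[i]]
        lo = tuple(min(p[a] for p in pts) for a in range(3))
        hi = tuple(max(p[a] for p in pts) for a in range(3))
        return lo, hi

    def centroid(i):
        t = tris[i]
        return tuple(sum(v[a] for v in t) / 3.0 for a in range(3))

    def build(idxs):
        lo, hi = bounds(idxs)
        if len(idxs) <= max_leaf:
            return {"bbox": (lo, hi), "tris": idxs}
        # Split along the longest axis at the median centroid.
        axis = max(range(3), key=lambda a: hi[a] - lo[a])
        idxs = sorted(idxs, key=lambda i: centroid(i)[axis])
        mid = len(idxs) // 2
        return {"bbox": (lo, hi),
                "left": build(idxs[:mid]), "right": build(idxs[mid:])}

    return build(list(range(len(tris))))
```

Even this naive version is a per-frame-feasible amount of work for modest scenes, which is exactly why the "optimize both or optimize neither" comparison above matters.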

    Anyways interesting stuff as always. Be sure to link the results of your study when it's done!
     