Real-Time Ray Tracing: Holy Grail or Fools' Errand? *Partial Reconstruction*

Discussion in 'Rendering Technology and APIs' started by TheAlSpark, Oct 18, 2007.

  1. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    I think the ATI ping pong demo does something like this

    http://www.firingsquad.com/media/article_image.asp/2268/09

    filling the scene with a 3D array of cubemaps to get the required density of "GI" sampling.

    The paper I linked actually gave this specific example :smile: - the dataset for GI is slowly changing as a scene/camera animates and the eye finds it hard to see artefacts in GI.
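    The probe-grid idea reduces, at lookup time, to interpolating between the probes nearest the shading point. A minimal sketch in Python, assuming a regular grid of scalar irradiance values (all names are hypothetical, not from the demo or paper):

```python
# Hypothetical sketch: look up "GI" irradiance from a regular 3D grid of
# probes (e.g. cubemap-derived values) by trilinearly interpolating among
# the eight probes surrounding a shading point.

def sample_probe_grid(probes, spacing, point):
    """probes: dict mapping (i, j, k) grid coords -> scalar irradiance.
    spacing: world-space distance between adjacent probes.
    point: (x, y, z) world-space shading position."""
    fx, fy, fz = (c / spacing for c in point)
    i0, j0, k0 = int(fx), int(fy), int(fz)        # lower-corner probe
    tx, ty, tz = fx - i0, fy - j0, fz - k0        # fractional offsets
    result = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                # trilinear weight for this corner probe
                w = ((tx if di else 1 - tx) *
                     (ty if dj else 1 - ty) *
                     (tz if dk else 1 - tz))
                result += w * probes[(i0 + di, j0 + dj, k0 + dk)]
    return result
```

    The same weighting applies per colour channel (or per spherical-harmonic coefficient) in a real renderer; a scalar keeps the sketch short.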

    Jawed
     
  2. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    Cube maps are wholly inadequate for anything covering a decent portion of the scene. You can't even chop up large objects, because you get discontinuities. Moreover, if you have 1000 objects, are you going to render 1000 cube maps, each with 1000 objects in them?

    Like I was saying, reprojection only fixes the easy cases like static scenes with diffuse surfaces. GI has a domino effect. You move one thing and it changes the lighting everywhere. Framerate will be limited by things that reprojection can't address.
     
  3. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    You can certainly hope so, but you can't know that (if you knew exactly how wrong you were, you'd know exactly how large the bounding volume should be). I just don't see the point; doing it right is only a little more work.
     
  4. TimothyFarrar

    Regular

    Joined:
    Nov 7, 2007
    Messages:
    427
    Likes Received:
    0
    Location:
    Santa Clara, CA
    Jawed,

    Slightly OT, but have you tried the methods in "Accelerating Real-Time Shading with Reverse Reprojection Caching" to amortize the cost of doing relief mapping refinement over multiple frames?
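    For reference, the core lookup in a reverse-reprojection cache can be sketched roughly as follows: map the current pixel's world position back into the previous frame and reuse the cached result only if the depths agree. All names and the depth tolerance here are hypothetical illustrations, not the paper's actual implementation:

```python
# Hypothetical sketch of a reverse-reprojection cache lookup: for each
# current-frame pixel, reproject its world position into the previous
# frame and reuse the cached shading result if it lands on the same surface.

def reproject_lookup(cache, prev_view_proj, world_pos, depth_epsilon=1e-3):
    """cache: dict mapping (x, y) pixel -> (cached_depth, cached_value).
    prev_view_proj: function world_pos -> (x, y, depth) in the previous frame.
    Returns cached_value on a hit, or None (meaning: recompute) on a miss."""
    x, y, depth = prev_view_proj(world_pos)
    hit = cache.get((int(x), int(y)))
    if hit is None:
        return None                      # fell outside last frame's view
    cached_depth, cached_value = hit
    if abs(cached_depth - depth) > depth_epsilon:
        return None                      # depth mismatch: different surface
    return cached_value
```

    The depth comparison is what keeps disocclusions from reusing stale shading; everything that misses falls back to a full recompute.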

    Also,

    What I was referring to is different: it works in very fast moving scenes, and you only render one cubemap, at the same world position as the camera (instead of multiple cubemaps sampling illumination from various areas of the scene). Without a layered cubemap the concept will not work for 1st-person games (because large parts of the cube would render the player), but it will work well enough for a 3rd-person game (though it is obviously limited to eye-facing reflections).

    BTW, chapter 17 of GPU Gems 3, "Robust Multiple Specular Reflections and Refractions", goes over a technique of ray marching in a cubemap. They use a brute-force method of doing the entire ray march each frame, which limits framerate or quality. If you had a method to spread the ray march and refinement across multiple frames, then you are starting to get an idea of what I am talking about. Obviously there is a lot more involved in getting it to work well without visual problems...

    Mintmaster, frame rate is not limited by the number of pixels needing re-projection. Instead, re-projected pixels yield only an approximate solution, which is refined over the next few frames. Rather than thinking of re-projection as "caching" fully finished shader computations, in the context I was describing it is used to gather information from the previous frame to continue converging towards the correct result.
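    A minimal sketch of that converge-over-frames idea, assuming the reprojected value merely seeds a blend toward a fresh per-frame estimate (the blend factor is an arbitrary illustration, not a recommended value):

```python
# Hypothetical sketch of refinement over frames: a reprojected result is
# only a seed, and each frame blends it toward a freshly computed (cheap,
# partial) estimate, so the image converges over several frames instead
# of paying the full ray-march cost every frame.

def refine(reprojected, fresh_estimate, blend=0.25):
    """One frame of convergence toward the correct result."""
    return reprojected + blend * (fresh_estimate - reprojected)

value = 0.0           # first-frame seed (no history yet)
target = 1.0          # the fully converged ray-march result
for _ in range(16):   # a few frames of amortized refinement
    value = refine(value, target)
# value is now within about 1% of target
```

    The geometric convergence is exactly why the eye tolerates it: the error shrinks every frame, and a moving camera rarely gives you time to notice the residual.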

    Keep in mind this is rendering and the "correct solution" is something which looks good enough visually and meets the frame rate requirement, instead of 100% "correct" ray tracing.
     
  5. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    :oops: I'm just an armchair (well wicker chair) enthusiast who last did rasterised 3D graphics programming back in the early 80s and nowadays just reads stuff to get the high level concepts. Actually, theoretically, I'm a ray-tracing fanboy, having been bitten by the bug way back then...

    Jawed
     
  6. TimothyFarrar

    Regular

    Joined:
    Nov 7, 2007
    Messages:
    427
    Likes Received:
    0
    Location:
    Santa Clara, CA
    Ahh the 80's ... I'll never forget "The Last Starfighter" :wink:
     
  7. TimothyFarrar

    Regular

    Joined:
    Nov 7, 2007
    Messages:
    427
    Likes Received:
    0
    Location:
    Santa Clara, CA
    Sometimes it is faster to forgo exact occlusion computations and just ensure that you have enough overlap to cover the screen (and handle coverage over the predicted position of objects in the next frame), letting the Z buffer do its work. This relates back to the issue where ray tracing has to be exact in its search (expensive), whereas with rendering you simply throw enough fragments at a pixel to ensure it is covered at least once (easy).
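    That fragments-plus-Z-buffer resolution step can be sketched as a hypothetical software Z buffer that keeps the nearest of however many overlapping fragments were conservatively submitted:

```python
# Hypothetical sketch: conservatively submit overlapping fragments and let
# the Z buffer keep the nearest one per pixel, instead of computing exact
# occlusion up front.

def z_buffer_resolve(fragments, width, height):
    """fragments: iterable of (x, y, depth, color) tuples.
    Returns dict mapping pixel -> color of the nearest fragment there."""
    depth_buf = {}
    color_buf = {}
    for x, y, depth, color in fragments:
        if 0 <= x < width and 0 <= y < height:        # clip to screen
            if depth < depth_buf.get((x, y), float("inf")):
                depth_buf[(x, y)] = depth             # nearer fragment wins
                color_buf[(x, y)] = color
    return color_buf
```

    Overdraw is the only cost of submitting too much; submitting too little leaves holes, which is why the overlap is made generous.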
     
  8. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    You can't ensure your prediction won't be wrong enough to invalidate your insurance :p
     
  9. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    Another approach would be to store all (main) beams in a tree, including the objects/surfaces hit at each intersection, and only recalculate beams whose object/surface has been transformed (dirty flag). But you really need the whole scene, with objects consisting of curved surfaces, up front on the GPU for it to be practical.

    Although Cell sounds good for this as well, as long as you can partition your structures in a good way.
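    A minimal sketch of the dirty-flag idea, assuming each cached beam records the transform version of the object it hit (all names hypothetical):

```python
# Hypothetical sketch of a dirty-flag beam cache: each cached beam records
# which object it hit and that object's transform version at trace time.
# A beam is only re-traced when its object's transform has changed.

class BeamCache:
    def __init__(self, trace_fn):
        self.trace_fn = trace_fn   # beam -> (object_id, shading_result)
        self.cache = {}            # beam -> (object_id, version, result)

    def result(self, beam, versions):
        """versions: dict mapping object_id -> current transform version."""
        entry = self.cache.get(beam)
        if entry is not None:
            obj, version, result = entry
            if versions[obj] == version:   # object untouched: reuse
                return result
        obj, result = self.trace_fn(beam)  # dirty or new: re-trace
        self.cache[beam] = (obj, versions[obj], result)
        return result
```

    In a real tree the invalidation would also propagate to secondary beams spawned at the intersection, which is where the bookkeeping gets expensive.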
     
  10. INKster

    Veteran

    Joined:
    Apr 30, 2006
    Messages:
    2,110
    Likes Received:
    30
    Location:
    Io, lava pit number 12
    Just thought you might be interested to know David Kirk's thoughts on Real-Time Ray Tracing:

    http://www.pcper.com/article.php?aid=530


    In short, he advocates a "hybrid" solution midway between rasterization and ray tracing.
     
  11. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    Good common sense at work.
     
    #111 nAo, Mar 7, 2008
    Last edited: Mar 7, 2008
  12. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    Ryan on the other hand just doesn't get it though ...
    That's not what David Kirk said ... not in the fucking remotest sense. Getting to interview David Kirk and then spinning what he said into supporting your own bias is just disrespectful, your own bias being so hopelessly wrong makes it even worse :)
     
    #112 MfA, Mar 7, 2008
    Last edited by a moderator: Mar 7, 2008
  13. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    Truly, in so deep a doubt you should not
    decide, unless she tell it to you, she
    who shall be light between truth and intellect.
    (Dante, Paradiso, Canto IV)
     
  14. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    The most ironic thing that could happen after all this is if NV60 or R800 (DX11) was/were more viable for real-time raytracing than Larrabee...
     
  15. Rshrout

    Newcomer

    Joined:
    Mar 7, 2008
    Messages:
    2
    Likes Received:
    0
    Are you serious? MY bias? If anything, all of my previous articles on the topic have leaned towards ray tracing being the superior long term option.

    Regardless, my point is perfectly valid and correct. See these quotes from Dr. Kirk:

    This all points to work that hardware and software developers have done over the years to overcome the inherent drawbacks that rasterization has for some rendering. (To quote Dr. Kirk again: "Rasterization is blisteringly fast, but not well-suited to all visual effects.")
     
  16. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    Welcome to the board, Ryan! Anyway, I personally don't think what you said is really biased, but neither is it fair/accurate:

    First of all, I'd like to point out that raytracing also had plenty of research thrown its way in the last 10 years, and it'd also be much less viable if not for that research. The only real difference is that it was done not by game developers, but by academia. EDIT: What I'm implying here is that while there are 'clever tricks' for rasterization, such as complex visibility determination schemes, there also are 'clever tricks' for raytracing; what else are today's acceleration structures, after all? They're hardly obvious, and they certainly didn't exist a few decades ago!

    Secondly, there never was a single point in the history of computer graphics where raytracing made more sense than rasterization outside of specific effects and when performance didn't matter much. Smart techniques didn't make rasterization viable; they merely kept it better than the alternatives.

    As a side note, I'm honestly tired of Intel's claims about using raytracing for shadow rays. Unless you want hard shadows ala Doom3, their described approach doesn't really work - and the industry isn't moving towards that, but rather away from it (and very rapidly indeed). The only good reason (for shadowing) not to use fixed-function rasterization hardware is if you want to use logarithmic shadowmap algorithms, IMO, and you might be better off doing rasterization than raytracing even then...
     
  17. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    Point 1 is that rasterization uses hierarchical structures to cull and avoid visiting invisible triangles, but that is as much of a workaround as the ray tracer's reliance on its own form of acceleration structure. Dr. Kirk pointed out that neither method is truly ahead in this regard.

    Point 2 is an admission that rasterization can approximate various effects pretty well. All graphics is approximation, so it's not really boosting either method over the other.

    Point 3 is a sign of the work done for rasterization, but it is also a recognition of the market. It doesn't necessarily reflect an inherent valuation of one method as being superior or inferior.

    Point 4 is a special case of specialized hardware vs. generalized. One possible interpretation is that rasterization at present is very amenable to running on more power-efficient hardware.

    His overall argument seems to be that ray tracing is useful, but it doesn't work miracles. It has a number of advantages, but it has costs that can seriously impact its usefulness in dynamic scenes at real-time frame rates.

    If anything, I think the tenor of your statements insinuates a level of bias I didn't see in Kirk's statements.
    You go out of your way to say how Nvidia's monetary interest is in rasterization, both before and after your discussion with Kirk.

    What are we supposed to assume from that? That you aren't implying Kirk is a corporate shill and so his points must be suspect?

    What did he say that's so unheard of or unreasonable?
    Ray tracing for much of the rendering workloads we have is slower.
    Both methods use hierarchical methods for culling or acceleration.
    Both scale the depth of their approximations to achieve speed.
    Both wind up using a lot of common hardware.
    Both have different strengths and weaknesses.
    Both will likely be used in the future.
    GPU hardware is capable of ray tracing, and there are methods for increasing the applicability of raytracing using the same hardware as most of the GPU.
     
  18. Odin

    Newcomer

    Joined:
    Nov 17, 2007
    Messages:
    4
    Likes Received:
    0
    rshrout, I think you're fundamentally misunderstanding how CUDA would work. From the article:
    CUDA kernels can access D3D or OGL objects--at least textures, and maybe more. I assume this is how a hybrid rasterizer/raytracer (as Kirk is proposing) would be implemented.
     
  19. TimothyFarrar

    Regular

    Joined:
    Nov 7, 2007
    Messages:
    427
    Likes Received:
    0
    Location:
    Santa Clara, CA
    Arun, your point on shadows is exactly right.

    I personally think Intel is using all this ray tracing talk to divert attention from its real interest: pushing Larrabee as a more pliable renderer than NVIDIA's and AMD's offerings.

    As for ray tracing, it already has a future place in a renderer's fragment shaders, doing relief mapping or correct (i.e. non-infinite) environment mapping ... just waiting until the hardware gets a little faster (perhaps next console generation) before this becomes commonplace in games.
     
  20. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    3dilettante: Nice point-by-point reply! :)

    I'm still not sure why I should believe that will be the case, though. I agree it's likely, just not certain. Larrabee, because it's x86, has significant ISA overhead which can only be bypassed by creating a new vector-based API. Other architectures can be much more creative in terms of ISA to achieve other, potentially more appealing, trade-offs.

    That implies that (in theory) a from-the-ground-up architecture can be (slightly?) more flexible than Larrabee for a given level of performance, not less. So the correct question to ask is not whether GPUs can be more flexible; they obviously can. The question is whether they want to be. Certainly if I was starting the design of a next-generation GPU today with the threat of Larrabee in mind, flexibility would be pretty high on my to-do list... Same for one year ago. Two years ago? That becomes a much more difficult (and interesting) question.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.